Certified Tester Foundation Level (CTFL)
Lecture 3
Presented by:
Dr. Yasmine Afify
[email protected]
Reference
https://www.istqb.org/certifications/certified-tester-foundation-level
Software Systems Context
• An error (mistake) made by a human being produces a defect (bug/fault) in the code which, if executed, may lead to a failure.
• Defects occur due to:
• Human beings are imperfect
• Time pressure
• Complex code
• Infrastructure Complexity
• Software that does not work correctly can lead to:
• Loss of money
• Loss of time
• Loss of business reputation
• Injury or death
Why Testing is Necessary
Errors, Defects and Failures
Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may result in a failure.
Seven Testing Principles
• Principle 1 – Testing shows presence of defects
• We have to test an application with the intention of showing defects; for this, negative testing is the best approach. Testing can show that defects are present but cannot prove that there are no defects.
Seven Testing Principles
• Principle 2 – Exhaustive testing is impossible: why don’t we test everything?
System has 20 screens
Average 4 menus / screen
Average 3 options / menu
Average of 10 fields / screen
2 types of input per field
Around 100 possible values
Approximate total for exhaustive testing:
20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests
• At 1 sec per test: duration = 17.7 days
• At 10 sec per test: duration = 34 weeks
• At 1 min per test: duration = 4 years
• At 10 min per test: duration = 40 years!
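These figures can be reproduced with a few lines of Python; note that the slide’s durations appear to assume working time (roughly 8-hour days and 40-hour weeks) rather than calendar time, so the computed values land close to, but not exactly on, the slide’s numbers. A minimal sketch:

```python
# Reproduce the exhaustive-testing estimate from the slide above.
screens, menus, options, fields, input_types, values = 20, 4, 3, 10, 2, 100
total_tests = screens * menus * options * fields * input_types * values
print(f"{total_tests:,} tests")  # 480,000

# Duration at different per-test lengths, expressed in 8-hour working days.
for seconds_per_test in (1, 10, 60, 600):
    hours = total_tests * seconds_per_test / 3600
    print(f"{seconds_per_test:>4} s/test -> {hours / 8:,.1f} working days")
```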
Seven Testing Principles
• Principle 3 – Defect clustering
• Testing effort shall be focused proportionally to the expected and observed
defect density of modules.
• A small number of modules usually contains most of the defects discovered
during prerelease testing or is responsible for most of the operational
failures.
Seven Testing Principles
• Principle 5 – Pesticide paradox
• If the same tests are repeated over and over again, eventually the same set of test
cases will no longer find any new defects.
• To overcome this “pesticide paradox”, test cases need to be regularly reviewed and
revised. New and different tests need to be written to exercise different parts of the
software to find potentially more defects.
Key Performance Indicator (KPI)
• It is very useful if the test basis (for any level or type of testing that is being considered) has measurable coverage criteria defined.
• The measurable coverage criteria can act effectively as key performance indicators (KPIs) to drive the activities that demonstrate achievement of software test objectives. For example, if the criterion is statement coverage (statements executed by tests divided by total statements), a KPI might be “at least 80% statement coverage before release”.
Testing Throughout SDLC
• In the Waterfall model, the development activities are completed one after another. Test
activities only occur after all other development activities have been completed.
• Incremental development involves establishing requirements, designing, building, and
testing a system in pieces, which means that the software’s features grow incrementally.
• Iterative development occurs when groups of features are specified, designed, built, and
tested together in a series of cycles, often of a fixed duration. Iterations may involve
changes to features developed in earlier iterations, along with changes in project scope.
Each iteration delivers working software which is a growing subset of the overall set of
features until the final software is delivered or development is stopped. Examples:
Rational Unified Process RUP, Scrum, Kanban, spiral/prototyping models.
• Using these models often involves overlapping and iterating test levels throughout development. Ideally, each feature is tested at several test levels as it moves towards delivery.
V-Model
[Diagram: the V-model. The left, downward branch holds the specification work products, checked by static testing: user/business requirements, technical specification, and program specification, with coding at the point of the V. The right, upward branch holds the corresponding dynamic test levels: unit test, integration test, and acceptance test. Each test level has a test plan (unit test plan, integration test plan, acceptance test plan) derived from the specification at the same height on the left branch.]
Static Testing
• Static testing relies on the manual examination of work products or tool-driven evaluation of the code or other work products.
• Static testing assesses the code or other work product without actually executing it.
• Static analysis can even be applied with tools that evaluate work products
written in natural language such as requirements (e.g., checking for spelling,
grammar, and readability).
• Most types of maintainability defects can only be found by static testing (e.g.,
improper modularization, poor reusability of components, code that is difficult to
analyze and modify without introducing new defects).
• When applied early in SDLC, static testing enables the early detection of defects
before dynamic testing is performed.
Static Testing Reviews
• Reviews vary from informal to formal.
• Informal reviews are characterized by not following a
defined process & not having formal documented output.
• Formal reviews are characterized by team participation,
documented results of the review, and documented
procedures for conducting the review.
• The formality of a review process is related to factors
such as the software development lifecycle model, the
maturity of the development process, complexity of work
product to be reviewed, legal/regulatory requirements.
• The focus of a review depends on the agreed objectives of the review (e.g., finding defects, gaining understanding, educating participants such as testers and new team members, or discussing and deciding by consensus).
Static Testing Types
• Peer Review.
• Walkthrough.
• Technical review.
• Inspection.
Moderator/Facilitator: the person who leads, plans, and runs the review. May mediate between the various points of view and is often the person upon whom the success of the review rests.
Author: the writer or person with chief responsibility for the document(s) to be reviewed.
Scribe: documents all the issues, problems, and open points that were identified during the meeting. With the advent of tools to support the review process, especially logging of defects/open points/decisions, there is often no need for a scribe.
Activities of Formal Review/Inspection
• Planning: identify the scope, resources/timeframe, roles, and entry & exit criteria.
• Initiate review (kick-off): distribute documents; explain the objective and process.
• Individual preparation: take notes on defects, questions, and comments.
• Issue communication and analysis (meeting).
• Fixing and reporting.
• Follow-up.
Activities of Formal Review/Inspection
• Initiate review:
• Distributing the work product (physically or by electronic means) and other material, such as issue log forms, checklists, and related work products
• Explaining the scope, objectives, process, roles, and work products to the participants
• Answering any questions that participants may have about the review
Activities of Formal Review/Inspection
• Review meeting/Issue communication and analysis
• Communicating identified potential defects (e.g., in a review meeting)
• Analyzing potential defects, assigning ownership and status to them
• Evaluating and documenting quality characteristics
• Evaluating the review findings against the exit criteria to make a review decision (reject; major changes needed; accept, possibly with minor changes)
Sample scenario during inspection session
• Say: The author has created this product and asked us to help make it
better. Please focus your comments on improving the product.
• Say: To hunt out significant defects, look beneath the superficial minor
defects or style issues you see. If you aren’t sure if something is a defect,
point it out and we’ll decide as a team.
• Say: Our goal is to identify defects, not invent solutions. In general, I will
permit about 1 minute of discussion on an issue to see if it can be resolved
quickly. If not, I will ask that it be recorded and we’ll move on to try to find
additional defects.
• Say: If anyone spots a typo or small cosmetic problem, please record it on
the typo list, rather than bringing it up in the discussion.
• Say: Let’s have only one person speaking at a time, so there aren’t multiple meetings going on simultaneously.
Success Factors for Reviews
• Each review has clear, predefined objectives that are used as measurable exit criteria.
• The right people for the review objectives are involved
• Testers are valued reviewers who contribute to the review and also
learn about the product which enables them to prepare tests earlier
• Defects found are welcomed and expressed objectively
• People issues and psychological aspects are dealt with (make it a
positive experience for the author)
• The review is conducted in an atmosphere of trust; the outcome will not
be used for the evaluation of the participants
Success Factors for Reviews
• Review techniques are applied that are suitable to achieve the objectives and that suit the type and level of software work products and the reviewers.
• Any checklists used address the main risks and are up to date.
• Training is given, especially for formal techniques such as inspection.
• Management supports a good review process (e.g., by incorporating
adequate time for review activities in project schedules).
• Participants avoid body language and behaviours that might indicate
boredom, exasperation, or hostility to other participants.
• A culture of learning and process improvement is promoted.
Static Analysis by Tools
Typical defects found by static analysis tools include:
• Referencing a variable with an undefined value
• Security vulnerabilities
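As a concrete illustration of the first defect type, here is a hypothetical Python function in which one execution path references a variable that was never assigned; a static analysis tool (e.g., a linter such as pylint) flags this without executing the code:

```python
def apply_discount(price: float, is_member: bool) -> float:
    # Defect: 'discount' is assigned only on the 'if' branch, so the
    # return statement references a possibly-undefined variable.
    if is_member:
        discount = 0.10
    return price * (1 - discount)  # flagged by static analysis

# Dynamic testing only exposes the defect when is_member is False
# (UnboundLocalError at runtime); static analysis finds it earlier.
```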
Testing Methodologies
Black-Box Testing Method
• Testing without having any knowledge of the interior workings of
the application is called black-box testing.
• The tester is unaware of the system architecture and does not
have access to source code.
• User stories are used as a test basis.
• Deviations from the requirements are checked.
• Typically, while performing a black-box test, a tester interacts with the system’s user interface by providing inputs and examining outputs, without knowing how and where the inputs are processed.
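A minimal black-box sketch, assuming a hypothetical requirement “a withdrawal larger than the balance must be rejected”: the tests are derived from the requirement and exercise only inputs and outputs, never the implementation.

```python
import unittest

def withdraw(balance: float, amount: float) -> float:
    """Implementation under test (shown only so the example runs;
    a black-box tester would not read this code)."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawBlackBoxTest(unittest.TestCase):
    def test_withdrawal_larger_than_balance_is_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(balance=100, amount=150)

    def test_valid_withdrawal_returns_new_balance(self):
        self.assertEqual(withdraw(balance=100, amount=30), 70)

if __name__ == "__main__":
    unittest.main()
```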
White-Box Testing Method
• White-box testing is also called glass-box testing or open-box testing.
• White-box testing is the detailed investigation of internal logic and
structure of the code.
• In order to perform white-box testing on an application, a tester needs to know the internal workings of the code: looking inside the source code to find out which unit/piece of the code is behaving inappropriately.
• May be performed at ALL test levels.
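A minimal white-box sketch with a hypothetical fee function: the tester reads the code, sees a single decision with two outcomes, and chooses inputs so that both branches execute (decision coverage).

```python
def transfer_fee(transfers_this_month: int) -> float:
    # One decision, two outcomes -> at least two tests for decision coverage.
    if transfers_this_month > 10:
        return 5.0
    return 0.0

# Inputs chosen by inspecting the code structure above:
assert transfer_fee(11) == 5.0  # exercises the True branch
assert transfer_fee(10) == 0.0  # exercises the False branch (boundary value)
```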
Software Testing Types
▪ Functional Testing
▪ Non-functional Testing
▪ Structural/White-box Testing
▪ Change-related Testing (Re-testing and Regression)
1. Functional/Dynamic Testing
• Tests that evaluate functions that the system should perform.
• For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.
Component/unit/module testing
• Component testing is the process of testing individual components in
isolation.
• Carried out by the team developing the system.
• Components may be:
• Individual functions or methods within an object
• Classes
• Database modules
• Examples of typical defects and failures:
• Incorrect functionality (e.g., not as described in design specifications)
• Data flow problems
• Incorrect code and logic
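A minimal component-test sketch for a hypothetical interest function: the component is exercised on its own, typically by the developing team, and its behaviour is checked against the design specification.

```python
import unittest

def monthly_interest(balance: float, annual_rate: float) -> float:
    """Component under test (hypothetical design spec: balance * rate / 12)."""
    return balance * annual_rate / 12

class MonthlyInterestComponentTest(unittest.TestCase):
    def test_matches_design_specification(self):
        # Spec example: 1200 at 6% annual rate -> 6.00 per month.
        self.assertAlmostEqual(monthly_interest(1200, 0.06), 6.0)

    def test_zero_balance_earns_no_interest(self):
        self.assertEqual(monthly_interest(0, 0.06), 0.0)

if __name__ == "__main__":
    unittest.main()
```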
Integration testing
• Once all units are tested, programmers will integrate all units and
check interactions among the units.
Integration testing practices
• Driver: a driver calls the component to be tested (in the original diagram, component B is called by the driver).
• Stub: a stub is called from the software component to be tested (in the original diagram, the stub is called by component A).
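Since the diagram referred to above is not reproduced here, a minimal sketch of both practices with hypothetical components A and B: the driver is test code that calls B directly, while the stub stands in for a not-yet-available component that A depends on.

```python
# Component B (under test): called directly by the test driver.
def component_b(x: int) -> int:
    return x * 2

def driver_for_b():
    # Driver: calls the component to be tested and checks the result.
    assert component_b(21) == 42

# Component A (under test) depends on a lower-level component that is
# not ready yet, so the test passes in a stub instead.
def component_a(customer_id: int, score_lookup) -> str:
    return f"score={score_lookup(customer_id)}"

def credit_score_stub(customer_id: int) -> int:
    return 700  # Stub: fixed, canned answer instead of the real component.

def test_a_with_stub():
    assert component_a(123, score_lookup=credit_score_stub) == "score=700"

driver_for_b()
test_a_with_stub()
```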
System testing
• A level of testing that validates the complete and fully
integrated software product.
• The purpose of a system test is to evaluate the end-to-end
system tasks the system can perform and the non-
functional behaviours it exhibits while performing those
tasks.
• The test environment should ideally correspond to the
final target/production environment.
• Independent testers typically carry out system testing.
System testing
Typical test objects for system testing include:
• Applications
• Hardware/software systems
• Operating systems
• System under test (SUT)
• System configuration and configuration data
User acceptance testing (UAT)
• UAT is a stage in the testing process in which users provide
input and advice on system testing.
• Main objective is building confidence that users can use the
system to meet their needs, fulfil requirements, and perform
business processes with minimum difficulty, cost, and risk.
• UAT is essential, even when comprehensive system and
release testing have been carried out.
• It is important because users have a different perspective than the developers. Moreover, the influences from the user’s working environment have a major effect on the reliability, performance, usability and robustness of a system, and these cannot be fully replicated in a testing environment.
User acceptance testing (UAT)
• The quality team has a meeting with the client, with "UAT test cases": the basic scenarios the client should run themselves.
• The client will then give feedback: bugs or approval and a sign off
that "UAT has passed successfully".
• This is a crucial activity for software projects in IT companies, and the quality team is responsible for managing it.
Alpha and Beta Testing
• Both are used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential/existing users and customers before the software product is put on the market.
• Alpha testing
• Users of the software test the software in a lab environment at
the developer’s site.
• Beta testing
• After alpha testing is done, a beta version of the software is
made available to users to allow them to experiment and to
raise problems that they discover in their own environment.
Operational Acceptance Testing (OAT)
Acceptance testing of the system by operations/administration
staff is usually performed in a (simulated) production
environment.
The tests focus on operational aspects:
• Backup and restore
• Installing, uninstalling and upgrading
• Disaster recovery
• User management
• Maintenance tasks
Contractual and Regulatory Acceptance Testing
• Contractual acceptance testing is performed against a contract’s
acceptance criteria for producing custom-developed software.
Acceptance criteria should be defined when the parties agree to the
contract.
• Regulatory acceptance testing is performed against any regulations
that must be adhered to, such as government, legal, or safety
regulations.
• They are often performed by users or by independent testers.
2. Non-functional Testing Types
Non-functional testing is the testing of “how well” the system behaves. It involves testing the software for requirements which are non-functional in nature but important, such as performance, security, scalability, etc.
4. Change-related Testing
• Regression testing: It is possible that a change made in one part of the code may accidentally affect the behaviour of other parts of the code, whether within the same component or in other components. Such unintended side effects are called regressions, and regression testing consists of re-running tests to detect them.
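A minimal sketch of the idea, with hypothetical names: a test that pins down existing behaviour is kept in the suite and re-run after every change, so an unintended side effect is caught immediately.

```python
import unittest

def format_amount(value: float) -> str:
    # A later change here (e.g., editing the format string) could
    # accidentally break behaviour that the tests below pin down.
    return f"{value:,.2f} EUR"

class FormattingRegressionSuite(unittest.TestCase):
    def test_thousands_separator(self):
        # Added when a past defect (missing separator) was fixed;
        # re-run on every change so the defect cannot silently return.
        self.assertEqual(format_amount(1234.5), "1,234.50 EUR")

    def test_always_two_decimals(self):
        self.assertEqual(format_amount(7), "7.00 EUR")

if __name__ == "__main__":
    unittest.main()
```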
Debugging!!
• Debugging is a development activity, not a testing activity.
• Debugging identifies and fixes the source of defects (source of
failure).
• In some cases, testers are responsible for the initial test and the final
confirmation test, while developers do the debugging and associated
component testing.
• However, in Agile development and in some other lifecycles, testers
may be involved in debugging and component testing.
Maintenance Testing
• Once deployed to the production environment, the system needs to be maintained.
• Changes of various sorts are made to the delivered system: to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality.
• Maintenance testing focuses on testing the changes to the working system,
as well as testing unchanged parts that might be affected by the changes.
• Maintenance involves planned releases and unplanned releases (hot fixes).
• Triggers for maintenance: modification, migration and retirement.
• Impact analysis is useful for regression testing during maintenance testing.
Test Types and Test Levels
• It is possible to perform any of the test types at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests are given across all test levels, for a banking application.
Test Types and Test Levels
• Examples of non-functional tests:
• For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation (a timing sketch follows this list).
• For system testing, portability tests are designed to check whether
the presentation layer works on all supported browsers and mobile
devices.
• For system integration testing, reliability tests are designed to
evaluate system robustness if the credit score microservice fails to
respond.
• For acceptance testing, usability tests are designed to evaluate the
accessibility of the banker’s credit processing interface for people with
disabilities.
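A minimal sketch of the component-level performance test mentioned first above; the interest calculation and the 1 ms budget are hypothetical, and wall-clock time stands in for CPU cycles:

```python
import timeit

def total_interest(principal: float, annual_rate: float, months: int) -> float:
    # Hypothetical 'complex' calculation standing in for the real one.
    total = 0.0
    for _ in range(months):
        interest = principal * annual_rate / 12
        principal += interest
        total += interest
    return total

# Average execution time over many runs, compared against a budget.
runs = 1_000
elapsed = timeit.timeit(lambda: total_interest(10_000, 0.05, 360), number=runs)
average = elapsed / runs
assert average < 0.001, f"too slow: {average * 1e6:.1f} µs per call"
```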
Test Types and Test Levels
• Examples of structural tests:
• For component testing, tests are designed to achieve complete
statement and decision coverage for all components that perform financial
calculations.
• For component integration testing, tests are designed to exercise how
each screen in the browser interface passes data to the next screen and to
the business logic.
• For system testing, tests are designed to cover sequences of web pages
that can occur during a credit line application.
• For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score microservice (see the sketch after this list).
• For acceptance testing, tests are designed to cover all supported
financial data file structures and value ranges for bank-to-bank transfers.
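A minimal sketch of the system integration example above: iterating over an enumeration of inquiry types guarantees that every possible type is sent; the inquiry types and the service stand-in are hypothetical.

```python
from enum import Enum

class InquiryType(Enum):
    NEW_CREDIT_LINE = "new"
    LIMIT_INCREASE = "increase"
    RENEWAL = "renewal"

def credit_score_service(inquiry: InquiryType) -> int:
    # Stand-in for the real microservice call.
    return {"new": 650, "increase": 700, "renewal": 680}[inquiry.value]

# Iterating over the Enum exercises all possible inquiry types; adding a
# new member without extending the service makes this test fail.
for inquiry in InquiryType:
    score = credit_score_service(inquiry)
    assert 300 <= score <= 850, f"{inquiry.name}: out-of-range score {score}"
```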
Test Types and Test Levels
• Examples of change-related tests:
• For component testing, automated regression tests are built for each
component and included within the continuous integration framework.
• For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
• For system testing, all tests for a given workflow are re-executed if any
screen on that workflow changes.
• For system integration testing, tests of the application interacting with
the credit scoring microservice are re-executed daily as part of continuous
deployment of that microservice.
• For acceptance testing, all previously-failed tests are re-executed after
a defect found in acceptance testing is fixed.
Complete
• Static review objectives include ---, ---, and ---.
• It is very useful if the test basis has measurable coverage criteria to be used as ---.
• An element of human psychology called --- makes it difficult to accept information that
disagrees with currently held beliefs.
• Static testing types include ---, ---, ---, and ---.
• Acceptance testing of the system by administration staff is usually performed in a ---.
• Finding defects is not the main focus of --- testing; its goal is to build confidence in the software.
• Maintenance testing involves planned releases and unplanned releases called ---.
• Triggers for software maintenance include ---, --- and ---.
• --- testing is used by developers of commercial off-the-shelf (COTS) software who want
to get feedback from potential/existing users, customers before the software product is
put on the market.
Specify Formal Review/Inspection Activity
a. Evaluating the review findings against the exit criteria to make a
review decision
b. Explaining the scope, objectives, process, roles, and work products
to the participants
c. Noting potential defects, recommendations, and questions
d. Defining the entry and exit criteria
Answers:
a. Review meeting/issue communication and analysis
b. Initiate review
c. Individual preparation
d. Planning
Whose Responsibility in Review?
• Document all the issues, problems, and open points that were identified
during the meeting. With the advent of tools to support the review process,
especially logging of defects/open points/decisions, there is often no need for a
scribe
• Decide on the execution of reviews, allocates time in project schedules and
determines if the review objectives have been met
• Lead, plan and run the review. May mediate between the various points of
view and is often the person upon whom the success of the review rests
Answers:
• scribe
• manager
• moderator/facilitator
Specify Review Technique
a. Reviewers are provided with structured guidelines on how to read through the work
product based on its expected usage.
b. Reviewers detect issues based on set of questions based on potential defects, which
may be derived from experience.
c. Reviewers are provided with little or no guidance on how this task should be
performed. It needs little preparation and is highly dependent on reviewer skills.
d. Reviewers take on different stakeholder viewpoints in individual reviewing.
Answers:
a. Scenario-based/dry run.
b. Checklist-based.
c. Ad hoc.
d. Perspective-based reading.
In Which Testing Level Can Defect be Found?
a. Incorrect sequencing or timing of interface calls
b. Incorrect code and logic
c. Failure of the system to work properly in the production environment(s)
Answers:
a. Integration testing
b. Component testing
c. System testing
Exercises
• Explain the testing principles in detail.
• Identify three guidelines for the successful conduct of the review process.
• Clarify the tester’s role during each of the SDLC phases.
• What should a tester do if a component is not finished and component integration testing needs to be conducted?
• Compare regression testing and confirmation testing.
• Illustrate the steps of dynamic testing using a diagram.