
Software Quality Assurance

Lecture 3

Presented by:
Dr. Yasmine Afify
[email protected]
Reference
https://www.istqb.org/certifications/certified-tester-foundation-level

The ISTQB® Certified Tester Foundation Level


(CTFL) certification provides essential testing
knowledge that can be put to practical use
and, very importantly, explains the
terminology and concepts that are used
worldwide in the testing domain. CTFL is
relevant across software delivery approaches
and practices including Waterfall, Agile,
DevOps, and Continuous Delivery. CTFL
certification is recognized as a prerequisite to
all other ISTQB® certifications where
Foundation Level is required.

Software Systems Context
• An error (mistake) made by a human being can produce a defect (bug/fault) in the program which, if executed, may lead to a failure.
• Defects occur because of:
• Human beings are imperfect
• Time pressure
• Complex code
• Infrastructure complexity
• Software that does not work correctly can lead to:
• Loss of money
• Loss of time
• Loss of business reputation
• Causing injury or death
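To make the error → defect → failure chain concrete, here is a minimal Python sketch; the average() function and the off-by-one mistake in it are illustrative assumptions, not taken from the lecture.

```python
def average(values):
    # Error: the programmer mistakenly divides by (len(values) - 1),
    # which leaves a defect (bug/fault) in the code.
    return sum(values) / (len(values) - 1)

# The defect only leads to a failure when the faulty code is executed and
# the observed result deviates from the expected one:
print(average([2, 4, 6]))  # prints 6.0 instead of the expected 4.0 -> failure
```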
Why Testing is Necessary
Errors, Defects and Failures

Error a human action that


produces an incorrect result
Can manifest as

A flaw in a component or
Defect system that can cause the
component or system to fail to
perform its required function
May result in

Deviation of the component or


Failure
system from its expected
delivery, service or result
Testing Objectives
• To evaluate work products such as requirements, user stories, design,
and code
• To verify whether all specified requirements have been fulfilled
• To build confidence in the level of quality of the test object
• To prevent and find defects
• To provide sufficient information to stakeholders to allow them to make
informed decisions, especially regarding the level of quality of the test object
• To reduce the level of risk of inadequate software quality
• To comply with contractual, legal, or regulatory requirements or
standards, and/or to verify the test object’s compliance with such
requirements or standards

6
7
Seven Testing Principles
• Principle 1 – Testing shows presence of defects
• We have to test an application with the intention of showing defects; for this,
negative testing is a good approach. Testing can show that defects are
present but cannot prove that there are no defects.

• Principle 2 – Exhaustive testing is impossible


• Testing everything (all combinations of inputs and preconditions) is not
feasible except for trivial cases.
• Instead of exhaustive testing, risk analysis and priorities should be used to
focus testing efforts.

8
Why don’t we test everything?
System has 20 screens
Average 4 menus / screen
Average 3 options / menu
Average of 10 fields / screen
2 types of input per field
Around 100 possible values
Approximate total for exhaustive testing
20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests
Test length = 1 sec then test duration = 17.7 days
Test length = 10 sec then test duration = 34 weeks
Test length = 1 min then test duration = 4 years
Test length = 10 mins then test duration = 40 years!
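The figures above can be reproduced with a short calculation; the 7.5-hour working day used below to turn hours into "days" is an assumption made here to roughly match the slide's numbers.

```python
screens, menus, options, fields, input_types, values = 20, 4, 3, 10, 2, 100
total_tests = screens * menus * options * fields * input_types * values
print(total_tests)  # 480000

for seconds_per_test in (1, 10, 60, 600):
    hours = total_tests * seconds_per_test / 3600
    working_days = hours / 7.5  # assumed 7.5-hour working day
    print(f"{seconds_per_test:>4} s/test -> {hours:,.0f} h (~{working_days:,.0f} working days)")
```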
Seven Testing Principles
• Principle 3 – Defect clustering
• Testing effort shall be focused proportionally to the expected and observed
defect density of modules.
• A small number of modules usually contains most of the defects discovered
during prerelease testing or is responsible for most of the operational
failures.

• Principle 4 – Early testing


• To find defects early, testing activities shall be started as early as possible in
SDLC.

10
Seven Testing Principles
• Principle 5 – Pesticide paradox
• If the same tests are repeated over and over again, eventually the same set of test
cases will no longer find any new defects.
• To overcome this “pesticide paradox”, test cases need to be regularly reviewed and
revised. New and different tests need to be written to exercise different parts of the
software to find potentially more defects.

• Principle 6 – Testing is context dependent


• Testing is done differently in different contexts. We have to select an appropriate
testing approach based on the type of application we are testing. For example,
safety-critical software is tested differently from an e-commerce site.

• Principle 7 – Absence-of-errors fallacy


• Finding and fixing defects does not help if the system built is unusable and does not
fulfill the users’ needs and expectations.
11
Psychology of Testing
• Identifying defects or failures may be perceived as criticism of the
product and of its author. An element of human psychology called
confirmation bias can make it difficult to accept information that
disagrees with currently held beliefs.
• To try to reduce these perceptions, information about defects and
failures should be communicated in a constructive way. This way,
tension can be reduced during both static and dynamic testing.
• Testers and test managers need to have good interpersonal skills to
be able to communicate effectively about defects, failures, test
results, test progress, and risks, and to build positive relationships
with colleagues.
Testing’s Contributions to Success
Techniques should be applied with the appropriate level of test
expertise, in the appropriate test levels, and at the appropriate points
in the software development lifecycle
• Having testers involved in requirements reviews or user story refinement can detect defects in
these work products.
• Having testers work closely with system designers while the system is being designed
increases each party’s understanding of the design and how to test it.
• Having testers work closely with developers while the code is under development increases
each party’s understanding of the code and how to test it.
• Having testers verify and validate the software prior to release can detect failures that might
otherwise be missed, and supports the process of removing the defects that caused the failures.

13
Key Performance Indicator (KPI)
• It is very useful if the test basis (for any level or type of testing that
is being considered) has measurable coverage criteria defined.
• The measurable coverage criteria can act effectively as a key
performance indicator (KPI) to drive the activities that
demonstrate achievement of software test objectives.

14
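A minimal sketch of using a measurable coverage criterion as a KPI; the requirement IDs, the traceability mapping, and the 90% target are illustrative assumptions.

```python
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}

# Traceability: which requirements each test case exercises.
tests_to_requirements = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-2", "REQ-3"},
    "TC-03": {"REQ-3"},
}

covered = set().union(*tests_to_requirements.values())
coverage_kpi = 100 * len(covered & requirements) / len(requirements)
print(f"Requirements coverage KPI: {coverage_kpi:.0f}%")  # 60%

if coverage_kpi >= 90:  # assumed target / exit criterion
    print("Coverage target met")
else:
    print("Below target; uncovered:", sorted(requirements - covered))
```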
Testing Throughout SDLC
• In the Waterfall model, the development activities are completed one after another. Test
activities only occur after all other development activities have been completed.
• Incremental development involves establishing requirements, designing, building, and
testing a system in pieces, which means that the software’s features grow incrementally.
• Iterative development occurs when groups of features are specified, designed, built, and
tested together in a series of cycles, often of a fixed duration. Iterations may involve
changes to features developed in earlier iterations, along with changes in project scope.
Each iteration delivers working software which is a growing subset of the overall set of
features until the final software is delivered or development is stopped. Examples:
Rational Unified Process RUP, Scrum, Kanban, spiral/prototyping models.
• Using these models often involves overlapping and iterating test levels throughout
development. Ideally, each feature is tested at several test levels as it moves towards
delivery.

15
V-Model
• The left side of the V (static testing) holds the specification work products; the right side (dynamic testing) holds the corresponding test levels. Each specification drives a test plan that is executed at the matching level on the right:
• User/Business Requirements ↔ Acceptance Test (Acceptance Test Plan)
• System Requirements ↔ System Test (System Test Plan)
• Technical Specification ↔ Integration Test (Integration Test Plan)
• Program Specification ↔ Unit Test (Unit Test Plan)
• Coding sits at the base of the V.
Static Testing
• Static testing relies on the manual examination of work products or tool-driven
evaluation of the code or other work products.
• Static testing assesses the work product being tested without actually executing it.
• Static analysis can even be applied with tools that evaluate work products
written in natural language such as requirements (e.g., checking for spelling,
grammar, and readability).
• Most types of maintainability defects can only be found by static testing (e.g.,
improper modularization, poor reusability of components, code that is difficult to
analyze and modify without introducing new defects).
• When applied early in the SDLC, static testing enables the early detection of defects
before dynamic testing is performed.
17
Static Testing Reviews
• Reviews vary from informal to formal.
• Informal reviews are characterized by not following a
defined process & not having formal documented output.
• Formal reviews are characterized by team participation,
documented results of the review, and documented
procedures for conducting the review.
• The formality of a review process is related to factors
such as the software development lifecycle model, the
maturity of the development process, the complexity of the
work product to be reviewed, and legal/regulatory requirements.
• The focus of a review depends on the agreed objectives
of the review (e.g., finding defects, gaining
understanding, educating participants such as testers and
new team members, or discussing and deciding by consensus).
18
Static Testing Types
• Peer Review.
• Walkthrough.
• Technical review.
• Inspection.

A single work product may be the subject of more than one type of review.
For example, an informal review may be carried out before a technical review,
to ensure the work product is ready for a technical review.
19
Inspection

• Most formal review type
• Led by a trained moderator
• Pre-meeting preparation
• Documents are prepared and checked
thoroughly by the reviewers before
meeting
• Specified entry and exit criteria for
acceptance of the software product
• It involves peers to examine the product
• The defects found are documented in a
logging list or issue log
• A formal follow-up is carried out by the
moderator
Formal Review Roles and Responsibilities
• Manager: decides on the execution of reviews, allocates time in project
schedules and determines if the review objectives have been met.
• Moderator/Facilitator: the person who leads, plans and runs the review;
may mediate between the various points of view and is often the person
upon whom the success of the review rests.
• Author: the writer or person with chief responsibility for the document(s)
to be reviewed.
• Reviewers: individuals with a specific technical or business background;
they identify and describe findings (e.g., defects).
• Scribe: documents all the issues, problems, and open points that were
identified during the meeting. With the advent of tools to support the
review process, especially logging of defects/open points/decisions,
there is often no need for a scribe.
Activities of Formal Review/Inspection
• Planning: identify the scope, resources/timeframe, roles, and entry & exit criteria.
• Initiate review (kickoff): distribute documents; explain the objective and process.
• Individual preparation: take notes on defects, questions, and comments.
• Issue communication & analysis (meeting): discussing or logging, with documented results.
• Fixing & reporting: create defect reports; the author fixes the defects.
• Follow-up: checking that defects have been addressed, gathering metrics, and checking on exit criteria.
Activities of Formal Review/Inspection
The review process comprises the following main activities:
• Planning
• Defining the scope, which includes the purpose of the review, what
documents or parts of documents to review, and the quality
characteristics to be evaluated
• Estimating effort and timeframe
• Identifying review characteristics such as the review type with roles,
activities, and checklists
• Selecting the people to participate in the review and allocating roles
• Defining the entry and exit criteria for more formal review types (e.g.,
inspections)
• Checking that entry criteria are met (for more formal review types)

23
Activities of Formal Review/Inspection
• Initiate review
• Distributing the work product (physically or by electronic means) and other
material, such as issue log forms, checklists, and related work products
• Explaining the scope, objectives, process, roles, and work products to the
participants
• Answering any questions that participants may have about the review
• Individual review (i.e., individual preparation)
• Reviewing all or part of the work product
• Noting potential defects, recommendations, and questions
24
Activities of Formal Review/Inspection
• Review meeting/Issue communication and analysis
• Communicating identified potential defects (e.g., in a review meeting)
• Analyzing potential defects, assigning ownership and status to them
• Evaluating and documenting quality characteristics
• Evaluating the review findings against the exit criteria to make a review decision (reject;
major changes needed; accept, possibly with minor changes)
• Fixing and reporting
• Creating defect reports for those findings that require changes
• Fixing defects found (typically done by the author) in the work product reviewed
• Communicating defects to the appropriate person or team
• Recording the updated status of defects, which may include agreement from the comment originator
Review Techniques
• Ad hoc
Reviewers are provided with little or no guidance on how this task should be performed.
Reviewers often read the work product sequentially, identifying and documenting issues as
they encounter them. It needs little preparation and is highly dependent on reviewer skills.
• Checklist-based
A systematic technique, whereby the reviewers detect issues based on checklists that are
distributed at review initiation (e.g., by facilitator). A review checklist consists of a set of
questions based on potential defects, which may be derived from experience. Reviewers
should also look for defects outside the checklist.
• Scenarios and dry runs
Reviewers are provided with structured guidelines on how to read through the work product.
It supports reviewers in performing “dry runs” on the work product based on expected usage
of the work product. These scenarios provide reviewers with better guidelines on how to
identify specific defect types than simple checklist entries.
Review Techniques
• Perspective-based reading
Reviewers take on different stakeholder viewpoints in individual
reviewing. Typical stakeholder viewpoints include end user, marketing,
designer, tester, or operations.
Using different stakeholder viewpoints leads to more depth in individual
reviewing with less duplication of issues across reviewers.
Empirical studies have shown perspective-based reading to be the
most effective general technique for reviewing requirements and
technical work products.

27
Sample scenario during inspection session
• Say: The author has created this product and asked us to help make it
better. Please focus your comments on improving the product.
• Say: To hunt out significant defects, look beneath the superficial minor
defects or style issues you see. If you aren’t sure if something is a defect,
point it out and we’ll decide as a team.
• Say: Our goal is to identify defects, not invent solutions. In general, I will
permit about 1 minute of discussion on an issue to see if it can be resolved
quickly. If not, I will ask that it be recorded and we’ll move on to try to find
additional defects.
• Say: If anyone spots a typo or small cosmetic problem, please record it on
the typo list, rather than bringing it up in the discussion.
• Say: Let’s have only one person speaking at a time, so there aren’t multiple
meetings going on simultaneously.
Success Factors for Reviews
• Each review has clear predefined objectives that are used as measurable
exit criteria.
• The right people for the review objectives are involved
• Testers are valued reviewers who contribute to the review and also
learn about the product which enables them to prepare tests earlier
• Defects found are welcomed and expressed objectively
• People issues and psychological aspects are dealt with (make it a
positive experience for the author)
• The review is conducted in an atmosphere of trust; the outcome will not
be used for the evaluation of the participants
Success Factors for Reviews
• Review techniques are applied that are suitable to achieve the
objectives and to the type and level of software work products and
reviewers.
• Any checklists used address the main risks and are up to date.
• Training is given, especially for formal techniques such as inspection.
• Management supports a good review process (e.g., by incorporating
adequate time for review activities in project schedules).
• Participants avoid body language and behaviours that might indicate
boredom, exasperation, or hostility to other participants.
• A culture of learning and process improvement is promoted.
Static Analysis by Tools
Typical Defects
• Referencing a variable with an undefined value
• Inconsistent interfaces between modules and components
• Variables that are not used or are improperly declared
• Unreachable (dead) code
• Programming standards violations
• Security vulnerabilities
• Syntax violations of code and software models
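For illustration, the following hypothetical Python snippet contains three of the defect types listed above; a static analysis tool (e.g., a linter) would typically flag them without executing the code.

```python
def apply_discount(price, rate):
    discount = price * rate
    final_price = price - discount
    return final_price
    print("discount applied")  # unreachable (dead) code after the return

def total_with_tax(price):
    return price + tax  # referencing a variable with an undefined value

def shipping_cost(weight):
    handling_fee = 2.5  # variable that is declared but never used
    return weight * 1.2
```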


Limitations of Static Testing
• Static techniques can check conformance with a
specification but not conformance with the
customer’s real requirements.
• Static techniques cannot check non-functional
characteristics such as performance, usability, etc.

Testing Methodologies

36
Black-Box Testing Method
• Testing without having any knowledge of the interior workings of
the application is called black-box testing.
• The tester is unaware of the system architecture and does not
have access to source code.
• User stories and requirements are used as the test basis.
• Deviations from the requirements are checked.
• Typically, while performing a black-box test, a tester will interact
with the system's user interface by providing inputs and
examining outputs without knowing how and where the inputs
are worked upon.
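A minimal black-box sketch in pytest style; the login() function and its requirement are hypothetical. The tests use only inputs and observed outputs, with no knowledge of how login() is implemented.

```python
def login(username, password):
    # In a real black-box setting this implementation is hidden from the tester.
    return username == "alice" and password == "s3cret!"

# Assumed requirement: valid credentials return True, anything else returns False.
def test_valid_credentials_accepted():
    assert login("alice", "s3cret!") is True

def test_wrong_password_rejected():
    assert login("alice", "wrong") is False

def test_unknown_user_rejected():
    assert login("bob", "s3cret!") is False
```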

37
White-Box Testing Method
• White-box testing is also called glass-box or open-box testing.
• White-box testing is the detailed investigation of the internal logic and
structure of the code.
• In order to perform white-box testing on an application, a tester needs to
know the internal workings of the code: the tester looks inside the source
code to find out which unit/piece of the code is behaving inappropriately.
• May be performed at ALL test levels.

38
Software Testing Types
▪ Functional Testing
▪ Non-functional Testing
▪ Structural/White-box Testing
▪ Change-related Testing (Re-testing and Regression)
1. Functional/Dynamic Testing

40
1. Functional/Dynamic Testing
•Tests that evaluate functions
that the system should
perform.
•For every test level, a
suitable test environment is
required.
•In acceptance testing, for
example, a production-like
test environment is ideal,
while in component testing
the developers typically use
their own development
environment.
41
Component/unit/module testing
• Component testing is the process of testing individual components in
isolation.
• Carried out by the team developing the system.
• Components may be:
• Individual functions or methods within an object
• Classes
• Database modules
• Examples of typical defects and failures:
• Incorrect functionality (e.g., not as described in design specifications)
• Data flow problems
• Incorrect code and logic
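A minimal component (unit) test sketch in pytest style; the is_valid_pin() component and its validation rule are illustrative assumptions. The component is tested in isolation, without the rest of the system.

```python
def is_valid_pin(pin):
    """A PIN is valid if it is exactly four digits (illustrative rule)."""
    return isinstance(pin, str) and len(pin) == 4 and pin.isdigit()

def test_accepts_four_digit_pin():
    assert is_valid_pin("1234")

def test_rejects_short_pin():
    assert not is_valid_pin("123")

def test_rejects_non_digit_pin():
    assert not is_valid_pin("12a4")
```

Running `pytest` on such a file executes the tests; a failing assertion would point to incorrect code or logic inside the component itself.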
42
Integration testing
• Once all units are tested, programmers will integrate all units and
check interactions among the units.

• Two levels of integration testing:


• Component integration testing focuses on interactions and interfaces
between integrated components. It is generally automated. It is often the
responsibility of developers.
• System integration testing focuses on interactions and interfaces between
systems and packages. It is the responsibility of testers. It may be done
after/in parallel with system testing activities. It can also cover interactions
with, and interfaces provided by, external organizations (e.g., web services).
43
Integration testing
• Typical test objects for integration testing include: Subsystems,
Databases, Infrastructure, Interfaces, APIs.
• Examples of typical defects and failures for component integration
testing include:
• Incorrect/missing data
• Incorrect sequencing or timing of interface calls
• Interface mismatch
• Failures in communication between components
• Incorrect assumptions about meaning, units, or boundaries of
data being passed between components

44
Integration testing practices
• Driver: temporary test code that calls the component to be tested
(for example, a driver calls ‘component B’).
• Stub: a simplified stand-in that is called from the software component to be tested
(for example, ‘component A’ calls a stub). See the sketch below.
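A minimal sketch of a driver and a stub; the component names and the pricing rules are illustrative assumptions.

```python
# Component A depends on a pricing service that is not yet available.
def component_a(order_total, pricing_service):
    discount = pricing_service.get_discount(order_total)  # in tests, this call reaches the stub
    return order_total - discount

# Stub: a simplified stand-in that is CALLED BY the component under test.
class PricingServiceStub:
    def get_discount(self, order_total):
        return 10.0 if order_total >= 100 else 0.0  # canned answer, no real logic

# Component B exists, but nothing in the system calls it yet.
def component_b(items):
    return sum(price for _, price in items)

# Driver: throwaway test code that CALLS the component under test.
def driver_for_component_b():
    assert component_b([("book", 20.0), ("pen", 2.5)]) == 22.5

if __name__ == "__main__":
    driver_for_component_b()
    assert component_a(120.0, PricingServiceStub()) == 110.0
    print("driver and stub checks passed")
```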
System testing
• A level of testing that validates the complete and fully
integrated software product.
• The purpose of a system test is to evaluate the end-to-end
system tasks the system can perform and the non-
functional behaviours it exhibits while performing those
tasks.
• The test environment should ideally correspond to the
final target/production environment.
• Independent testers typically carry out system testing.

46
System testing
Typical test objects for system testing include:
• Applications
• Hardware/software systems
• Operating systems
• System under test (SUT)
• System configuration and configuration data

Examples of typical defects and failures for system testing include:


• Incorrect or unexpected system functional or non-functional behaviour
• Failure to properly and completely carry out end-to-end functional tasks
• Failure of the system to work properly in the production environment(s)
• Failure of the system to work as described in system and user manuals
47
Acceptance Testing
Common forms of acceptance testing include the following:
• User acceptance testing
• Operational acceptance testing
• Contractual and regulatory acceptance testing
• Alpha and beta testing

48
User acceptance testing (UAT)
• UAT is a stage in the testing process in which users provide
input and advice on system testing.
• Main objective is building confidence that users can use the
system to meet their needs, fulfil requirements, and perform
business processes with minimum difficulty, cost, and risk.
• UAT is essential, even when comprehensive system and
release testing have been carried out.
• It is important because users have a different perspective
than the developers. Moreover, the influences from the
user’s working environment have a major effect on the
reliability, performance, usability and robustness of a
system; these cannot be replicated in a testing environment.
49
User acceptance testing (UAT)

• The quality team holds a meeting with the client to walk through the "UAT test
cases", which are the basic scenarios the client should run themselves.
• The client will then give feedback: bugs, or approval and a sign-off
that "UAT has passed successfully".
• This is a crucial activity done for all projects in all IT
companies, and the quality team is responsible for managing it.

50
Alpha and Beta Testing
• Both used by developers of commercial off-the-shelf (COTS)
software who want to get feedback from potential/existing users,
customers before the software product is put on the market.
• Alpha testing
• Users of the software test the software in a lab environment at
the developer’s site.
• Beta testing
• After alpha testing is done, a beta version of the software is
made available to users to allow them to experiment and to
raise problems that they discover in their own environment.
51
Operational Acceptance Testing (OAT)
Acceptance testing of the system by operations/administration
staff is usually performed in a (simulated) production
environment.
The tests focus on operational aspects:
• Backup and restore
• Installing, uninstalling and upgrading
• Disaster recovery
• User management
• Maintenance tasks
52
Contractual and regulatory acceptance
testing
• Contractual acceptance testing is performed against a contract’s
acceptance criteria for producing custom-developed software.
Acceptance criteria should be defined when the parties agree to the
contract.
• Regulatory acceptance testing is performed against any regulations
that must be adhered to, such as government, legal, or safety
regulations.
• They are often performed by users or by independent testers.

53
2. Non-functional Testing Types
Non-functional testing is the testing of “how well” the system behaves.
It involves testing software against requirements that are non-functional in
nature but important, such as performance, security, scalability, etc.
Non-functional Testing Types
• Reliability testing
• Usability testing
• Efficiency testing
• Maintainability testing
• Portability testing
• Baseline testing
• Compliance testing
• Documentation testing
• Endurance testing
• Load testing
• Performance testing
• Compatibility testing
• Security testing
• Scalability testing
• Volume testing
• Stress testing
• Recovery testing
• Internationalization testing and Localization testing
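As an illustration of one of these types, here is a minimal load-testing sketch that fires concurrent requests at a service and measures response times; the URL and user count are assumptions, and a real load test would normally use a dedicated tool (e.g., JMeter or Locust).

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/health"  # hypothetical endpoint under test
CONCURRENT_USERS = 20

def one_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=5) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(one_request, range(CONCURRENT_USERS)))
    print(f"avg {sum(durations)/len(durations):.3f}s, max {max(durations):.3f}s")
```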
Load vs Stress vs Volume
• Load testing: behaviour under an expected, realistic workload (e.g., many concurrent users).
• Stress testing: behaviour at or beyond the anticipated limits of the workload.
• Volume testing: behaviour with a large amount of data.
Non-functional Testing Types
• Usability: Is the software product easy to use, learn and understand from
the user’s perspective?
• Maintainability: The effort needed to make specified modifications.
Is the software product easy to maintain?
• Efficiency: The relationship between the level of performance of the
software and the amount of resources used, under stated conditions.
Does the software product use the hardware, system software and other
resources efficiently?
• Baseline: It refers to the validation of the documents and specifications
on which test cases are designed.
Non-functional Testing Types
• Portability: The ability of software to be transferred from one environment
to another. Is the software product portable?
• Interoperability: The ability to interact with specified systems. Does the
software product work with other software applications, as required by
the users?
• Localization: Checking default languages, currency, date, and time format
if the software is designed for a particular region/locality.
• Recovery: Done in order to check how fast and how well the application
can recover after it has gone through any type of crash or hardware failure.
3. Structural Testing
• White-box testing derives tests based on the system’s internal
structure or implementation. Internal structure may include code,
architecture, workflows, and/or data flows within the system
• Interested in what is happening ‘inside the system/application’.
• May be performed at all test levels.
• Often referred to as ‘white box’ or ‘glass box’ or ‘clear-box’
testing.
• The testers require knowledge of:
• how the software is implemented.
• how it works.
• internal implementations of the code.
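A minimal white-box sketch: the tests are derived from the internal structure of apply_fee() (its single decision), aiming for full statement and decision coverage. The fee rules are illustrative assumptions.

```python
def apply_fee(amount, is_member):
    if is_member:               # the decision under test
        fee = 0.0               # True branch
    else:
        fee = amount * 0.05     # False branch
    return amount + fee

def test_member_branch():       # exercises the True outcome
    assert apply_fee(100.0, True) == 100.0

def test_non_member_branch():   # exercises the False outcome
    assert apply_fee(100.0, False) == 105.0

# Together the two tests execute every statement and both decision outcomes.
```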
4. Change-related Testing
• When changes are made to a system, either to correct a defect or because
of new/changing functionality, testing should be done to confirm that the
changes have corrected the defect or implemented the functionality
correctly and have not caused any unforeseen adverse consequences.

• Confirmation testing: a type of re-testing carried out by software testers as part of
defect fix verification. After a defect is detected and fixed, the software should be
re-tested to confirm that the original defect has been successfully removed.
• Regression testing: a change made in one part of the code may accidentally affect
the behaviour of other parts of the code, whether within the same component or in
other components. Such unintended side-effects are called regressions.
• Confirmation and regression testing are performed at ALL test levels.


61
Regression Testing
• All tests are re-run every time a change is made to the
program.
• Regression testing is testing the system to check that changes
have not ‘broken’ previously working code.
• In a manual testing process, regression testing is expensive
but, with automated testing, it is simple and straightforward.
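A minimal sketch of an automated regression suite in pytest style; the shipping_fee() module and its rules are hypothetical. The whole file is re-run after every change (e.g., in a CI pipeline) to check that previously working behaviour has not been broken.

```python
def shipping_fee(weight_kg, express=False):
    fee = 5.0 + 0.5 * weight_kg
    if express:
        fee *= 2
    return round(fee, 2)

# Existing behaviour guarded by the regression suite:
def test_standard_fee():
    assert shipping_fee(10) == 10.0

def test_express_fee():
    assert shipping_fee(10, express=True) == 20.0

def test_zero_weight_charges_base_fee_only():
    assert shipping_fee(0) == 5.0
```

Re-running the suite (e.g., `pytest -q`) after each change keeps regression testing cheap compared with repeating the same checks manually.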

62
Debugging!!
• Debugging is a development activity, not a testing activity.
• Debugging identifies and fixes the source of defects (source of
failure).
• In some cases, testers are responsible for the initial test and the final
confirmation test, while developers do the debugging and associated
component testing.
• However, in Agile development and in some other lifecycles, testers
may be involved in debugging and component testing.
Maintenance Testing
• Once deployed to the production environment, the system needs to be
maintained.
• Changes of various sorts are made to the delivered system: to fix defects
discovered in operational use, to add new functionality, or to delete or alter
already-delivered functionality.
• Maintenance testing focuses on testing the changes to the working system,
as well as testing unchanged parts that might be affected by the changes.
• Maintenance involves planned releases and unplanned releases (hot fixes).
• Triggers for maintenance: modification, migration and retirement.
• Impact analysis is useful for regression testing during maintenance testing.
64
Test Types and Test Levels
• It is possible to perform any of the test types at any test level. To
illustrate, examples of functional, non-functional, white-box, and
change-related tests are given below across all test levels, for a banking
application.

• Starting with functional tests:


• For component testing, tests are designed based on how a component
should calculate compound interest (see the sketch after this list).
• For component integration testing, tests are designed based on how
account information captured at user interface is passed to business logic.
• For system testing, tests are designed based on how account holders can
apply for a line of credit on their checking accounts.
• For system integration testing, tests are designed based on how the system
uses an external microservice to check an account holder’s credit score.
• For acceptance testing, tests are designed based on how the banker
handles approving or declining a credit application.
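A minimal sketch of the component-level functional test mentioned in the first bullet; the compound_interest() function and the expected figures are hypothetical, not taken from the lecture.

```python
def compound_interest(principal, annual_rate, years, periods_per_year=12):
    """Balance after compounding 'periods_per_year' times a year (illustrative)."""
    factor = (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
    return round(principal * factor, 2)

def test_monthly_compounding():
    # 1,000 at 6% for 1 year, compounded monthly -> 1,061.68
    assert compound_interest(1000, 0.06, 1) == 1061.68

def test_zero_rate_leaves_balance_unchanged():
    assert compound_interest(500, 0.0, 5) == 500.0
```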

65
Test Types and Test Levels
• Examples of non-functional tests:
• For component testing, performance tests are designed to evaluate
the number of CPU cycles required to perform a complex total interest
calculation.
• For system testing, portability tests are designed to check whether
the presentation layer works on all supported browsers and mobile
devices.
• For system integration testing, reliability tests are designed to
evaluate system robustness if the credit score microservice fails to
respond.
• For acceptance testing, usability tests are designed to evaluate the
accessibility of the banker’s credit processing interface for people with
disabilities.
66
Test Types and Test Levels
• Examples of structural tests:
• For component testing, tests are designed to achieve complete
statement and decision coverage for all components that perform financial
calculations.
• For component integration testing, tests are designed to exercise how
each screen in the browser interface passes data to the next screen and to
the business logic.
• For system testing, tests are designed to cover sequences of web pages
that can occur during a credit line application.
• For system integration testing, tests are designed to exercise all
possible inquiry types sent to the credit score microservice.
• For acceptance testing, tests are designed to cover all supported
financial data file structures and value ranges for bank-to-bank transfers.
67
Test Types and Test Levels
• Examples of change-related tests:
• For component testing, automated regression tests are built for each
component and included within the continuous integration framework.
• For component integration testing, tests are designed to confirm fixes
to interface-related defects as fixes are checked into code repository.
• For system testing, all tests for a given workflow are re-executed if any
screen on that workflow changes.
• For system integration testing, tests of the application interacting with
the credit scoring microservice are re-executed daily as part of continuous
deployment of that microservice.
• For acceptance testing, all previously-failed tests are re-executed after
a defect found in acceptance testing is fixed.

68
Complete
• Static review objectives include ---, ---, and ---.
• It is very useful that test basis has measurable coverage criteria to be used as ---.
• An element of human psychology called --- makes it difficult to accept information that
disagrees with currently held beliefs.
• Static testing types include ---, ---, ---, and ---.
• Acceptance testing of system by administration staff is usually performed in a ---.
• Finding defects is not the main focus of --- testing, its goal is to build confidence in the
software.
• Maintenance testing involves planned releases and unplanned releases called ---.
• Triggers for software maintenance include ---, --- and ---.
• --- testing is used by developers of commercial off-the-shelf (COTS) software who want
to get feedback from potential/existing users, customers before the software product is
put on the market.
74
Specify Formal Review/Inspection Activity
a. Evaluating the review findings against the exit criteria to make a
review decision
b. Explaining the scope, objectives, process, roles, and work products
to the participants
c. Noting potential defects, recommendations, and questions
d. Defining the entry and exit criteria

Answers:
a. Review meeting/issue communication and analysis
b. Initiate review
c. Individual preparation
d. Planning
75
Whose Responsibility in Review?
• Document all the issues, problems, and open points that were identified
during the meeting. With the advent of tools to support the review process,
especially logging of defects/open points/decisions, there is often no need for a
scribe
• Decide on the execution of reviews, allocates time in project schedules and
determines if the review objectives have been met
• Lead, plan and run the review. May mediate between the various points of
view and is often the person upon whom the success of the review rests

Answers:
• scribe
• manager
• moderator/facilitator
Specify Review Technique
a. Reviewers are provided with structured guidelines on how to read through the work
product based on its expected usage.
b. Reviewers detect issues based on set of questions based on potential defects, which
may be derived from experience.
c. Reviewers are provided with little or no guidance on how this task should be
performed. It needs little preparation and is highly dependent on reviewer skills.
d. Reviewers take on different stakeholder viewpoints in individual reviewing.

Answers:
a. Scenario-based/dry run.
b. Checklist-based.
c. Ad hoc.
d. Perspective-based reading.
77
In Which Testing Level Can Defect be Found?
a. Incorrect sequencing or timing of interface calls
b. Incorrect code and logic
c. Failure of the system to work properly in the production environment(s)

Answers:
a. Integration testing
b. Component testing
c. System testing
78
Exercises
• Explain the testing principles in details.
• Identify three guidelines to successful conduct of review process.
• Clarify the tester role during each of SDLC phases.
• What should a tester do if a component is not finished and
component integration testing needs to be conducted?
• Compare between regression and confirmation testing types.
• Illustrate steps of dynamic testing using a diagram.

79
