
ISTQB-Foundation-Agile-Syllabus - Chapter 2

The document discusses the key differences between testing in traditional vs. agile approaches. In agile, testing and development activities are highly integrated with short iterations. Testers, developers and stakeholders all test within each iteration. Documentation is minimized in favor of working software and automated tests. Test levels overlap as changes can occur throughout iterations. Unit, feature acceptance, and system testing may all take place within a single iteration.

2.1 The Differences between Testing in Traditional and Agile Approaches
• Testers must understand the differences between testing in traditional lifecycle models (e.g., sequential
such as the V-model or iterative such as RUP) and Agile lifecycles in order to work effectively and efficiently.
• The Agile models differ in terms of
 the way testing and development activities are integrated,
 the project work products,
 the names, entry criteria, and exit criteria used for various levels of testing,
 the use of tools, and
 how independent testing can be effectively utilized.

• Deviation from the ideals of Agile lifecycles (see Section 1.1) may represent intelligent customization and
adaptation of the practices.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.1 Testing and Development Activities)

• One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short
iterations, each iteration resulting in working software that delivers features of value to business
stakeholders.
Testers, developers, and business stakeholders all have a role in testing, as with traditional lifecycles:
 Developers perform unit tests as they develop features from the user stories.
 Testers then test those features.
 Business stakeholders also test the stories during implementation.

• Hardening or stabilization iterations may occur periodically to resolve any lingering defects and other forms of technical debt. However,
 the best practice is that no feature is considered done until it has been integrated and tested with
the system [Goucher09].
 Another good practice is to address defects remaining from the previous iteration at the
beginning of the next iteration, as part of the backlog for that iteration (referred to as “fix bugs
first”).

 When risk-based testing is used as one of the test strategies,


• A high-level risk analysis occurs during release planning, with testers often driving that analysis.
• However, the specific quality risks associated with each iteration are identified and assessed in
iteration planning.
• This risk analysis can influence the sequence of development as well as the priority and depth of
testing for the features. It also influences the estimation of the test effort required for each
feature.

 In some Agile practices (e.g., Extreme Programming), pairing is used.


 Pairing can involve testers working together in twos to test a feature.
 Pairing can also involve a tester working collaboratively with a developer to develop and test a
feature.
 Pairing can be difficult when the test team is distributed, but processes and tools can help enable
distributed pairing.

 Testers may also serve as testing and quality coaches within the team,
 sharing testing knowledge and supporting quality assurance work within the team.
 This promotes a sense of collective ownership of quality of the product.

 Test automation at all levels of testing occurs in many Agile teams.
 Because of this heavy use of test automation, a higher percentage of the manual testing on Agile projects tends to be done using
• experience-based and
• defect-based techniques such as software attacks, exploratory testing, and error guessing.
 While developers will focus on creating unit tests,
 testers should focus on creating automated integration, system, and system integration tests.

Change in Agile Projects:-


• One core Agile principle is that change may occur throughout the project.
 Therefore, lightweight work product documentation is favored in Agile projects.
 Changes to existing features have testing implications, especially regression testing implications.
 The use of automated testing is one way of managing the amount of test effort associated with change.
 However, it’s important that the rate of change not exceed the project team’s ability to deal with the
risks associated with those changes.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.2 Project Work Products)

• In a typical Agile project, it is a common practice to avoid producing vast amounts of documentation.
• Instead, focus is more on having working software, together with automated tests that demonstrate
conformance to requirements.
• This encouragement to reduce documentation applies only to documentation that does not deliver value
to the customer.
• In a successful Agile project, a balance is struck between increasing efficiency by reducing documentation
and providing sufficient documentation to support business, testing, development, and maintenance
activities.
• The team must make a decision during release planning about which work products are required and what
level of work product documentation is needed.

 Project work products of immediate interest to Agile testers typically fall into three categories:

1. Business-oriented work products


 that describe what is needed (e.g., requirements specifications) and how to use it (e.g., user
documentation)

2. Development work products


 that describe how the system is built (e.g., database entity relationship diagrams), that actually
implement the system (e.g., code), or that evaluate individual pieces of code (e.g., automated unit
tests)

3. Test work products


 that describe how the system is tested (e.g., test strategies and plans), that actually test the
system (e.g., manual and automated tests), or that present test results (e.g., test dashboards as
discussed in Section 2.2.1)
 Typical business-oriented work products on Agile projects include:-
 user stories and
 acceptance criteria.
• User stories are the Agile form of requirements specifications, and should explain how the system should behave with respect to a single, coherent feature or function.
• A user story should define a feature small enough to be completed in a single iteration.
• Larger collections of related features, or a collection of sub-features that make up a single complex feature, may be referred to as “epics”.
• Epics may include user stories for different development teams, e.g., API-level (middleware) and UI-level (application) stories.
• Each epic and its user stories should have associated acceptance criteria.
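As a hedged illustration of the structure described above, a user story and its acceptance criteria can be sketched as plain data. The story text, criteria, and field names below are invented for illustration, not taken from the syllabus:

```python
# Illustrative sketch of a user story with acceptance criteria.
# All content here is hypothetical example data.
story = {
    "id": "US-101",
    "as_a": "registered user",
    "i_want": "to reset my password by email",
    "so_that": "I can regain access to my account",
    "acceptance_criteria": [
        "a reset link is emailed within 5 minutes of the request",
        "the link expires after 24 hours",
        "an expired link shows a clear error message",
    ],
}

# Every story should carry at least one acceptance criterion.
assert story["acceptance_criteria"], "every story needs acceptance criteria"
print(f"{story['id']}: {len(story['acceptance_criteria'])} acceptance criteria")
```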

Typical developer work products on Agile projects include:-


 Code.
 Automated unit tests.
 These tests might be created after or before (TDD) the development of code.
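A minimal sketch of the test-first (TDD) idea mentioned above: the unit tests below would be written before the code they exercise. The `apply_discount` function, its rules, and the test cases are illustrative assumptions, not part of the syllabus:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Illustrative code under test: reduce price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """In TDD, these tests exist (and fail) before apply_discount is written."""

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically (avoids sys.exit, so it composes in scripts).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```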

Typical tester work products on Agile projects include:-


 Automated tests
 documents such as test plans, quality risk catalogs, manual tests, defect reports (test metrics),
and test results logs.
• These documents are captured in as lightweight a format as possible.

 In some Agile implementations, especially regulated, safety-critical, distributed, or highly complex projects and products, further formalization of these work products is required.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.3 Test Levels)

 In sequential lifecycle models,


• The test levels are often defined such that the exit criteria of one level are part of the entry criteria for
the next level.
 In some iterative models, this rule does not apply.
• Test levels overlap.
• Requirement specification, design specification, and development activities may overlap with test
levels.
• In Agile lifecycles, overlap occurs because changes to requirements, design, and code can happen at
any point in an iteration.
• While Scrum, in theory, does not allow changes to the user stories after iteration planning, in practice
such changes sometimes occur.
During an iteration, any given user story will typically progress sequentially through the following test
activities:
1. Unit testing, typically done by the developer
2. Feature acceptance testing, which is sometimes broken into two activities:
 Feature verification testing, which is often automated, may be done by developers or testers, and
involves testing against the user story’s acceptance criteria
 Feature validation testing, which is usually manual and can involve developers, testers, and business
stakeholders working collaboratively to determine whether the feature is fit for use, to improve
visibility of the progress made, and to receive real feedback from the business stakeholders
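The feature verification step described above can be sketched as a small automated check against a story's acceptance criteria. The password rules, the `password_is_acceptable` function, and the test cases below are all invented for illustration:

```python
def password_is_acceptable(password: str) -> bool:
    """Illustrative acceptance criterion: at least 8 characters and one digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

# Feature verification: exercise the implementation against criterion cases.
cases = {
    "s3curepass": True,            # meets both criteria
    "short1": False,               # too short
    "longenoughbutnodigit": False, # no digit
}
for pwd, expected in cases.items():
    assert password_is_acceptable(pwd) is expected, pwd
print("feature verification passed")
```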

 In some Agile projects,


there may be a system test level,
• which starts once the first user story is ready for such testing.
• This can involve executing functional tests, as well as non-functional tests for performance,
reliability, usability, and other relevant test types.
 Internal alpha tests and external beta tests may occur, either at the close of each iteration,
 after the completion of each iteration, or after a series of iterations.
 User acceptance tests may likewise occur at the close of each iteration, after the completion of each iteration, or after a series of iterations.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.4 Testing and Configuration Management)
• Agile projects often involve heavy use of automated tools to develop, test, and manage software
development.
• Developers use tools for static analysis, unit testing, and code coverage.
• Developers continuously check the code and unit tests into a configuration management system, using
automated build and test frameworks.
• These frameworks allow the continuous integration of new software with the system, with the static
analysis and unit tests run repeatedly as new software is checked in [Kubaczkowski].
• These automated tests can also include functional tests at the integration and system levels.
• Such functional automated tests may be created using functional testing harnesses, open-source user
interface functional test tools, or commercial tools, and can be integrated with the automated tests run
as part of the continuous integration framework.
• In some cases, due to the duration of the functional tests, the functional tests are separated from the unit
tests and run less frequently. For example, unit tests may be run each time new software is checked in,
while the longer functional tests are run only every few days.
• One goal of the automated tests is to confirm that the build is functioning and installable.
• If any automated test fails, the team should fix the underlying defect in time for the next code check-in.
• This requires an investment in real-time test reporting to provide good visibility into test results.
 This approach supports effective testing and configuration management.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.5 Organizational Options for Independent Testing)

• Independent testers are often more effective at finding defects.


 In some Agile teams,
• developers create many of the tests in the form of automated tests.
• One or more testers may be embedded within the team, performing many of the testing tasks.
• However, given those testers’ position within the team, there is a risk of loss of independence and
objective evaluation.
 Other Agile teams retain fully independent,
• separate test teams, and
• assign testers on-demand during the final days of each sprint.
• This can preserve independence, and these testers can provide an objective, unbiased evaluation of
the software.
• However, time pressures, lack of understanding of the new features in the product, and relationship
issues with business stakeholders and developers often lead to problems with this approach.
 A third option is to have an independent,
• separate test team
• where testers are assigned to Agile teams on a long-term basis, at the beginning of the project,
allowing them to maintain their independence while gaining a good understanding of the product
2.2 Status of Testing in Agile Projects

Change takes place rapidly in Agile projects.
• This rapid change means that test status, test progress, and product quality constantly evolve, and
• testers must devise ways to get that information to the team so that they can make decisions to stay on track for successful completion of each iteration.
• In addition, change can affect existing features from previous iterations, so
• manual and automated tests must be updated to deal effectively with regression risk.
2.2 Status of Testing in Agile Projects (2.2.1 Communicating Test Status,
Progress, and Product Quality)

Testers in Agile teams utilize various methods to record test progress and status, including test automation results and the progression of test tasks and stories on the:-
• Agile task board
• burndown charts
This status may then be communicated to the rest of the team using media such as:
• wiki dashboards and dashboard-style emails,
• stand-up meetings.

Burndown charts:
• Track progress across the entire release and within each iteration.
• A burndown chart [Crispin08] represents the amount of work left to be done against the time allocated to the release or iteration.
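The burndown idea can be sketched numerically: remaining work per day compared against an ideal straight-line burn. The point totals and daily snapshots below are made-up illustration data:

```python
def ideal_burndown(total_points: int, days: int) -> list[float]:
    """Ideal remaining-work line: burns total_points evenly over `days`."""
    return [total_points * (1 - d / days) for d in range(days + 1)]

# Illustrative daily snapshots of remaining story points for a 10-day iteration.
actual = [40, 38, 35, 35, 30, 24, 18, 12, 8, 3, 0]
ideal = ideal_burndown(40, 10)

for day, (a, i) in enumerate(zip(actual, ideal)):
    status = "behind" if a > i else "on/ahead of"
    print(f"day {day:2d}: actual {a:2d} vs ideal {i:4.1f} -> {status} plan")
```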

Task board.
• Provides an instant, detailed visual representation of the whole team’s current status, including the status of testing.
• The whole team reviews the status of the task board regularly.

The daily stand-up meeting:-


• includes all members of the Agile team, including testers.
• At this meeting, each member communicates their current status.
• The agenda for each member is:
• What have you completed since the last meeting?
• What do you plan to complete by the next meeting?
• What is getting in your way?
• Any issues that may block test progress are communicated during the daily stand-up meetings, so the whole team is aware of the issues and can resolve them accordingly.

 To improve the overall product quality, many Agile teams perform:-


• customer satisfaction surveys
 to receive feedback on whether the product meets customer expectations.
 Teams may use other metrics, similar to those used in traditional development methodologies, to improve the product quality, such as:
• test pass/fail rates,
• defect discovery rates,
• confirmation and regression test results,
• defect density,
• defects found and fixed,
• requirements coverage,
• risk coverage,
• code coverage,
• code churn.
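A hedged sketch of computing two of the metrics listed above from raw test-run records. The record format, field names, and counts are illustrative assumptions, not from any particular tool:

```python
# Hypothetical test-run records (field names are illustrative).
runs = [
    {"test": "login_ok", "result": "pass"},
    {"test": "login_bad_pw", "result": "pass"},
    {"test": "checkout", "result": "fail"},
    {"test": "search", "result": "pass"},
]

# Test pass rate: fraction of executed tests that passed.
passed = sum(1 for r in runs if r["result"] == "pass")
pass_rate = passed / len(runs)
print(f"pass rate: {pass_rate:.0%}")

# Defect density: defects found per thousand lines of code (made-up counts).
defects_found, kloc = 6, 12.5
print(f"defect density: {defects_found / kloc:.2f} defects/KLOC")
```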

 The metrics captured and reported:-


• should be relevant and aid decision-making.
• should not be used to reward, punish, or isolate any team members.
2.2.2 Managing Regression Risk with Evolving Manual and
Automated Test Cases.

• In an Agile project, as each iteration completes, the product grows.


• The scope of testing also increases.
• The risk of introducing regression in Agile development is high due to extensive code churn (lines of code
added, modified, or deleted from one version to another).
• Testers also need to verify no regression has been introduced on features that were developed and tested in
previous iterations.
 In order to maintain velocity without incurring a large amount of technical debt,
• it is critical that teams invest in test automation at all test levels as early as possible.
• It is also critical that all test assets such as automated tests, manual test cases, test data, and other
testing artifacts are kept up to-date with each iteration.
 It is highly recommended that all test assets be maintained in a configuration management tool.
Because complete repetition of all tests is seldom possible,
testers need to allocate time in each iteration to review manual and automated test cases from previous and current iterations, to select test cases that may be candidates for the regression test suite, and to retire test cases that are no longer relevant.
Tests written in earlier iterations to verify specific features may have little value in later iterations due to feature
changes or new features which alter the way those earlier features behave.
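The review-and-retire step above can be sketched as filtering a test inventory against the features still present in the product. The test names, feature names, and inventory format are invented for illustration:

```python
# Hypothetical test inventory; each entry records the feature a test verifies.
tests = [
    {"name": "test_login", "feature": "login", "automated": True},
    {"name": "test_old_export", "feature": "csv_export_v1", "automated": False},
    {"name": "test_search", "feature": "search", "automated": True},
]

# Features still current after this iteration (csv_export_v1 was replaced).
current_features = {"login", "search", "csv_export_v2"}

# Keep tests tied to current features; retire those for removed features.
regression_suite = [t for t in tests if t["feature"] in current_features]
retired = [t["name"] for t in tests if t["feature"] not in current_features]
print("keep:", [t["name"] for t in regression_suite], "retire:", retired)
```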

While reviewing test cases, testers should consider suitability for automation.
The team needs to automate as many tests as possible from previous and current iterations.
• This allows automated regression tests to reduce regression risk with less effort than manual regression
testing would require.
• This reduced regression test effort frees the testers to more thoroughly test new features and
functions in the current iteration.
It is critical that testers have the ability to quickly identify and update test cases from previous iterations and/or
releases that are affected by the changes made in the current iteration.
Defining how the team designs, writes, and stores test cases should occur during release planning.
Good practices for test design and implementation need to be adopted early and applied consistently.
The shorter timeframes for testing and the constant change in each iteration will increase the impact of poor
test design and implementation practices.
 Use of test automation, at all test levels,
• allows Agile teams to provide rapid feedback on product quality.
 Well-written automated tests
• provide a living document of system functionality [Crispin08].
 By checking the automated tests and their corresponding test results into the configuration management
system,
• aligned with the versioning of the product builds,
• Agile teams can review the functionality tested and the test results for any given build at any given
point in time.
 Automated unit tests are run before source code
• is checked into the mainline of the configuration management system to ensure the code changes do
not break the software build.
 To reduce build breaks, which can slow down the progress of the whole team,
• code should not be checked in unless all automated unit tests pass.
 Automated unit test results provide immediate feedback on code and build quality, but not on product
quality.
 Automated acceptance tests:-
• are run regularly as part of the continuous integration full system build.
• These tests are run against a complete system build at least daily, but are generally not run with each code
check-in as they take longer to run than automated unit tests and could slow down code check-ins.
 The test results from automated acceptance tests
• provide feedback on product quality with respect to regression since the last build, but they do not
provide status of overall product quality
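The check-in gate described above (code is not checked in unless all automated unit tests pass) can be sketched as a wrapper that runs the suite and reports pass/fail. The command and exit-code convention are assumptions, not part of the syllabus:

```python
import subprocess
import sys

def gate_check_in(test_command: list[str]) -> bool:
    """Return True only if the given test command exits cleanly (all tests pass)."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode != 0:
        print("Unit tests failed; do not check in.")
        return False
    return True

# Real usage might be, e.g.:
#   gate_check_in([sys.executable, "-m", "unittest", "discover", "-s", "tests"])
# Here we demonstrate with a trivially passing "suite".
print(gate_check_in([sys.executable, "-c", "pass"]))
```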
Automated tests contained in the regression test set are generally run as part of the daily main build in
the continuous integration environment, and again when a new build is deployed into the test
environment.
An initial subset of automated tests:-
• covering critical system functionality and integration points should be created immediately after a new
build is deployed into the test environment.
• These tests are commonly known as build verification tests. Results from the build verification tests
will provide instant feedback on the software after deployment, so teams don’t waste time testing an
unstable build.
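A build verification suite of the kind described above can be sketched as a handful of fast checks run right after deployment. The `deployed_build` structure and the individual checks are illustrative assumptions:

```python
# Hypothetical snapshot of a freshly deployed build (illustrative data).
deployed_build = {"version": "1.4.2", "services": {"web": "up", "db": "up"}}

def check_services_up(build: dict) -> bool:
    """Critical integration point: every service must report 'up'."""
    return all(state == "up" for state in build["services"].values())

def check_version_present(build: dict) -> bool:
    """Critical functionality: the deployed build must identify itself."""
    return bool(build.get("version"))

# Run all smoke checks and report whether the build is stable enough to test.
smoke_checks = [check_services_up, check_version_present]
results = {c.__name__: c(deployed_build) for c in smoke_checks}
print("build verified" if all(results.values()) else f"unstable build: {results}")
```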

In addition to test automation, the following testing tasks may also be automated:
• Test data generation
• Loading test data into systems
• Deployment of builds into the test environments
• Restoration of a test environment (e.g., the database or website data files) to a baseline
• Comparison of data outputs
 Automation of these tasks reduces the overhead and allows the team to spend time developing and testing the software.