ISTQB-Foundation-Agile-Syllabus - Chapter 2
• Deviation from the ideals of Agile lifecycles (see Section 1.1) may represent intelligent customization and
adaptation of the practices.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.1 Testing and Development Activities)
• One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short
iterations, each iteration resulting in working software that delivers features of value to business
stakeholders.
Testers, developers, and business stakeholders all have a role in testing, as with traditional lifecycles.
Developers perform unit tests as they develop features from the user stories.
Testers then test those features.
Business stakeholders also test the stories during implementation.
• In some cases, hardening or stabilization iterations occur periodically to resolve any lingering defects and
other forms of technical debt. However, the best practice is that no feature is considered done until it has
been integrated and tested with the system [Goucher09].
• Another good practice is to address defects remaining from the previous iteration at the beginning of the
next iteration, as part of the backlog for that iteration (referred to as “fix bugs first”).
Testers may also serve as testing and quality coaches within the team,
sharing testing knowledge and supporting quality assurance work within the team.
This promotes a sense of collective ownership of the quality of the product.
• In a typical Agile project, it is a common practice to avoid producing vast amounts of documentation.
• Instead, focus is more on having working software, together with automated tests that demonstrate
conformance to requirements.
• This encouragement to reduce documentation applies only to documentation that does not deliver value
to the customer.
• In a successful Agile project, a balance is struck between increasing efficiency by reducing documentation
and providing sufficient documentation to support business, testing, development, and maintenance
activities.
• The team must make a decision during release planning about which work products are required and what
level of work product documentation is needed.
2.1 The Differences between Testing in Traditional and
Agile Approaches (2.1.2 Project Work Products)
Project work products of immediate interest to Agile testers typically fall into three categories:
• Business-oriented work products that describe what is needed and how to use it (e.g., user stories and their
acceptance criteria)
• Development work products that describe how the system is built, that implement the system, or that
evaluate individual pieces of code (e.g., code and automated unit tests)
• Test work products that describe how the system is tested, that actually test the system, or that present
test results (e.g., test plans, manual and automated tests)
Testers in Agile teams utilize various methods to record test progress and status, including:
• test automation results
• progression of test tasks and stories on the Agile task board
• burndown charts
These can then be communicated to the rest of the team using media such as:
• wiki dashboards
• dashboard-style emails
• stand-up meetings
2.2 Status of Testing in Agile Projects (2.2.1 Communicating Test Status,
Progress, and Product Quality)
Burndown charts
• Track progress across the entire release and within each iteration.
• A burndown chart [Crispin08] represents the amount of work left to be done against time allocated to
the release or iteration.
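The data behind a burndown chart can be sketched in a few lines. The sketch below assumes work is measured in story points over a 10-day iteration; the numbers are illustrative, not taken from the syllabus.

```python
# A minimal sketch of burndown-chart data: work remaining per day versus
# the ideal linear burn. Story points and the 10-day iteration are
# illustrative assumptions.

def burndown(total_points, completed_per_day):
    """Return work remaining after each day, plus the ideal linear burn."""
    remaining = []
    left = total_points
    for done in completed_per_day:
        left -= done
        remaining.append(left)
    days = len(completed_per_day)
    ideal = [total_points * (days - d - 1) / days for d in range(days)]
    return remaining, ideal

# 40 points planned; points completed on each day of the iteration
remaining, ideal = burndown(40, [5, 3, 6, 4, 2, 5, 4, 5, 3, 3])
```

Plotting `remaining` against `ideal` day by day shows at a glance whether the team is ahead of or behind the plan.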
Task board
• Provides an instant, detailed visual representation of the whole team’s current status, including the
status of testing.
• The whole team reviews the status of the task board regularly.
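As a simple illustration, a task board can be modeled as columns of tasks plus a summary function. The column names (To Do / In Progress / Testing / Done) are a common convention, not prescribed by the syllabus.

```python
# A minimal sketch of a task board as a data structure. Column names and
# task labels are illustrative assumptions.

board = {
    "To Do":       ["story-7: write acceptance tests", "story-8: design review"],
    "In Progress": ["story-6: implement feature"],
    "Testing":     ["story-5: execute tests"],
    "Done":        ["story-3", "story-4"],
}

def status_summary(board):
    """Count tasks per column, giving the team an at-a-glance status."""
    return {column: len(tasks) for column, tasks in board.items()}

summary = status_summary(board)
```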
While reviewing test cases, testers should consider suitability for automation.
The team needs to automate as many tests as possible from previous and current iterations.
• This allows automated regression tests to reduce regression risk with less effort than manual regression
testing would require.
• This reduced regression test effort frees the testers to more thoroughly test new features and
functions in the current iteration.
It is critical that testers have the ability to quickly identify and update test cases from previous iterations and/or
releases that are affected by the changes made in the current iteration.
Defining how the team designs, writes, and stores test cases should occur during release planning.
Good practices for test design and implementation need to be adopted early and applied consistently.
The shorter timeframes for testing and the constant change in each iteration will increase the impact of poor
test design and implementation practices.
Use of test automation, at all test levels, allows Agile teams to provide rapid feedback on product quality.
Well-written automated tests provide a living document of system functionality [Crispin08].
By checking the automated tests and their corresponding test results into the configuration management
system, aligned with the versioning of the product builds, Agile teams can review the functionality tested
and the test results for any given build at any given point in time.
Automated unit tests are run before source code is checked into the mainline of the configuration
management system, to ensure the code changes do not break the software build.
To reduce build breaks, which can slow down the progress of the whole team, code should not be checked in
unless all automated unit tests pass.
Automated unit test results provide immediate feedback on code and build quality, but not on product
quality.
Automated acceptance tests:
• are run regularly as part of the continuous integration full system build.
• are run against a complete system build at least daily, but are generally not run with each code check-in,
as they take longer to run than automated unit tests and could slow down code check-ins.
• provide feedback on product quality with respect to regression since the last build, but do not provide
status of overall product quality.
Automated tests contained in the regression test set are generally run as part of the daily main build in
the continuous integration environment, and again when a new build is deployed into the test
environment.
An initial subset of automated tests to cover critical system functionality and integration points should be
created immediately after a new build is deployed into the test environment. These tests are commonly
known as build verification tests. Results from the build verification tests provide instant feedback on the
software after deployment, so teams don’t waste time testing an unstable build.
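A build verification test set of this kind might look like the following sketch. `DeployedBuild` is a stub standing in for the freshly deployed system; its interface (a health check and a login entry point) is an illustrative assumption.

```python
# Sketch of build verification tests: a few fast checks covering critical
# functionality and integration points, run right after deployment.
# DeployedBuild is a stand-in stub; its methods are assumptions.

class DeployedBuild:
    """Stand-in for a freshly deployed build of the system under test."""
    def health(self):
        return "ok"
    def login(self, user, password):
        return (user, password) == ("demo", "demo")

def build_verification_tests(build):
    """Run each check; any failure flags the build as too unstable to test."""
    checks = {
        "service responds":       lambda: build.health() == "ok",
        "valid login accepted":   lambda: build.login("demo", "demo"),
        "invalid login rejected": lambda: not build.login("demo", "wrong"),
    }
    failed = [name for name, check in checks.items() if not check()]
    return failed  # an empty list means the build is stable enough to test

failed = build_verification_tests(DeployedBuild())
```

If `failed` is non-empty, the team rejects the deployment instead of spending test effort on an unstable build.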
In addition to test automation, the following testing tasks may also be automated:
• Test data generation
• Loading test data into systems
• Deployment of builds into the test environments
• Restoration of a test environment (e.g., the database or website data files) to a baseline
• Comparison of data outputs
Automation of these tasks reduces the overhead and allows the team to spend time developing and testing.
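One of the listed tasks, comparison of data outputs, can be sketched as a small diff of actual records against a baseline. The record keys and values below are illustrative.

```python
# Sketch of automated data-output comparison: diff actual records against
# a baseline instead of checking them by hand. Keys/values are illustrative.

def compare_outputs(baseline, actual):
    """Report keys that are missing, unexpected, or changed in the output."""
    missing = sorted(set(baseline) - set(actual))
    unexpected = sorted(set(actual) - set(baseline))
    changed = sorted(key for key in set(baseline) & set(actual)
                     if baseline[key] != actual[key])
    return {"missing": missing, "unexpected": unexpected, "changed": changed}

baseline = {"order-1": 100, "order-2": 250, "order-3": 75}
actual   = {"order-1": 100, "order-2": 260, "order-4": 10}
diff = compare_outputs(baseline, actual)
```

Run after each test cycle, a comparison like this turns a tedious manual check into an instant pass/fail signal.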