Seminar Report '03: Extreme Programming
1. INTRODUCTION
slowing you down. Apply XP to that problem first. For example, if you find
that 25% of the way through your development process your requirements
specification becomes completely useless, get together with your customers
and write user stories instead.
2. THE XP PRACTICES
1. Planning Game
2. Small releases
3. Metaphor
4. Simple design
5. Test Driven Development
6. Refactoring
7. Pair programming
8. Collective ownership
9. Continuous integration
10. Sustainable pace
11. On-site customer
12. Coding standards
3. USER STORIES
User stories serve the same purpose as use cases but are not the same.
They are used to create time estimates for the release planning meeting.
They are also used in place of a large requirements document. User stories
are written by the customers as things that the system needs to do for them.
They are similar to usage scenarios, except that they are not limited to
describing a user interface. They are in the format of about three sentences
of text written by the customer, in the customer's terminology, without
techno-syntax.
User stories also drive the creation of the acceptance tests. One or
more automated acceptance tests must be created to verify the user story has
been correctly implemented.
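To make this concrete, a hypothetical story for a banking system might
read: "A customer can transfer money between her own accounts. The
transfer is refused if it would overdraw the source account." The matching
acceptance test would set up an account, attempt an overdrawing transfer,
and verify that the balance is unchanged.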
If a story can be implemented in less than a week, you are at too detailed
a level; combine some stories. About 80 user stories, plus or minus 20, is
a perfect number from which to create a release plan during release
planning.
4. ACCEPTANCE TESTS
Acceptance tests are created from user stories. During an iteration the
user stories selected during the iteration planning meeting will be translated
into acceptance tests. The customer specifies scenarios to test when a user
story has been correctly implemented. A story can have one or many
acceptance tests, whatever it takes to ensure the functionality works.
Acceptance tests are black box system tests. Each acceptance test
represents some expected result from the system. Customers are responsible
for verifying the correctness of the acceptance tests and reviewing test
scores to decide which failed tests are of highest priority. Acceptance tests
are also used as regression tests prior to a production release.
"Automated" means that the results are verified automatically. The tests
are usually written in the same computer language as the production code.
The test verification is therefore code, which compares actual results to the
expected results. If a result is unexpected, the test fails. This kind of testing
is in contrast to the usual print lines mixed into the production code, where
the programmer looks at the output on the command line and tries to figure
out whether the code behaves as expected.
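As an illustration, the following is a minimal sketch of such automated
verification in Java. The Account class and the withdrawal story are
invented for the example; the point is that the comparison of actual to
expected results is itself code.

    // A minimal sketch of automated verification for a hypothetical
    // "withdraw money" story. Account is invented for the example.
    public class WithdrawAcceptanceTest {

        static class Account {
            private int balance;
            Account(int opening) { balance = opening; }
            void withdraw(int amount) { balance -= amount; }
            int getBalance() { return balance; }
        }

        public static void main(String[] args) {
            Account account = new Account(100); // scenario from the story
            account.withdraw(30);
            int expected = 70;                  // the expected result
            int actual = account.getBalance();  // the actual result
            // The verification is code: compare actual to expected and
            // fail loudly on a mismatch, instead of printing the balance
            // for a programmer to inspect on the command line.
            if (actual != expected) {
                throw new AssertionError("expected " + expected
                        + ", got " + actual);
            }
            System.out.println("Withdraw story: PASSED");
        }
    }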
Unit tests are written in Java using JUnit. JUnit is a testing framework
written in Java. JUnit defines how to structure test cases and provides tools
to run them. The tests are usually executed in VisualAge, and failures are
reported immediately when the tests are run. Confidence in the code is
enhanced if all tests still pass after a day of coding. Each test should be
independent
of other tests. This reduces the complexity of the tests and it avoids false
alarms because of unexpected side effects. One test method tests one or
more methods of production code. The developer should write the unit tests
while (or before or after) developing the production code. Tests should be
run before and after refactoring a piece of code, before starting to implement
a new functionality in the system and during integration.
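A minimal sketch of such a JUnit test case is shown below, written against
the JUnit 3 API of the time (junit.framework.TestCase) and the same
invented Account class. setUp() builds a fresh fixture before every test
method, which is what keeps the tests independent of one another.

    import junit.framework.TestCase;

    public class AccountTest extends TestCase {

        // Hypothetical production class, inlined so the sketch compiles.
        static class Account {
            private int balance;
            Account(int opening) { balance = opening; }
            void deposit(int amount)  { balance += amount; }
            void withdraw(int amount) { balance -= amount; }
            int getBalance() { return balance; }
        }

        private Account account;

        // Called before every test method: each test gets a fresh
        // fixture and cannot be affected by another test's side effects.
        protected void setUp() {
            account = new Account(100);
        }

        public void testWithdrawReducesBalance() {
            account.withdraw(30);
            assertEquals(70, account.getBalance());
        }

        public void testDepositIncreasesBalance() {
            account.deposit(50);
            assertEquals(150, account.getBalance());
        }
    }

The whole test case can be run from the command line with
java junit.textui.TestRunner AccountTest, which reports any failures
immediately.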
5. RELEASE PLANNING
6. SYSTEM METAPHOR
7. SPIKE SOLUTION
8. SMALL RELEASES
9. ITERATION
It may seem silly to make a new plan when your iterations are only one
week long, but it pays off in the end. By planning out each iteration as if
it were your last, you will be setting yourself up for an on-time delivery
of your product.
9.1. BUGS
When a bug is found, tests are created to guard against it coming back. A
bug in production requires that an acceptance test be written to guard
against it. Creating the acceptance test before debugging helps customers
concisely define the problem and communicate that problem to the
programmers. Programmers have a failed test to focus their efforts and know
when the problem is fixed.
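A hedged sketch of such a test, again in JUnit 3 style and with an
invented bug: suppose withdrawing more than the balance silently produced
a negative balance. The test is written first, fails against the buggy
code, and then guards against the bug returning once the code is fixed.

    import junit.framework.TestCase;

    public class OverdraftBugTest extends TestCase {

        // Hypothetical production class after the fix: an overdraft
        // is now refused with an exception instead of going negative.
        static class Account {
            private int balance;
            Account(int opening) { balance = opening; }
            void withdraw(int amount) {
                if (amount > balance)
                    throw new IllegalArgumentException("overdraft refused");
                balance -= amount;
            }
            int getBalance() { return balance; }
        }

        public void testOverdraftIsRejected() {
            Account account = new Account(100);
            try {
                account.withdraw(200);  // more than the balance
                fail("overdraft should have been rejected");
            } catch (IllegalArgumentException expected) {
                // the fixed code refuses the withdrawal
            }
            assertEquals(100, account.getBalance());
        }
    }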
10. PROJECT VELOCITY
The project velocity (or just velocity) is a measure of how much work
is getting done on your project. To measure the project velocity you simply
add up the estimates of the user stories that were finished during the
iteration. It's just that simple. You also total up the estimates for the tasks
finished during the iteration. Both of these measurements are used for
iteration planning.
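For example, if the stories finished during an iteration were estimated at
3, 2, and 4 ideal weeks, the project velocity for that iteration is 9, and
the next iteration is planned to hold roughly 9 ideal weeks' worth of
stories.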
A few ups and downs in project velocity are expected. You should use
a release planning meeting to re-estimate and re-negotiate the release plan if
your project velocity changes dramatically for more than one iteration.
Expect the project velocity to change again when the system is put into
production due to maintenance tasks. Project velocity is about as detailed a
measure as you can make that will be accurate. Don't bother dividing the
project velocity by the length of the iteration or the number of people. This
number is no good for comparing two projects' productivity. Each project
team will have a different bias to estimating stories and tasks, some estimate
high, some estimate low. It doesn't matter in the long run. Tracking the total
amount of work done during each iteration is the key to keeping the project
moving at a steady predictable pace.
The problem with any project is the initial estimate. Collecting lots of
details does not make your initial estimate anything other than a guess.
Worry about estimating the overall scope of the project and get that right
instead of creating large documents. Consider spending the time you would
have invested in creating a detailed specification on actually doing a
couple of iterations of development. Measure the project velocity during
these
initial explorations and make a much better guess at the project's total size.
11. REFACTORING
12. DOCUMENTATION
The on-site customer can tell when she is communicating too much and
spending too much time in preparation. She can balance things between
writing, conversation, and question answering to suit her own needs.
Natural Communication of Requirements: the On-Site Customer practice
converts an inefficient paperwork exercise into a dynamic conversation
discussing requirements as the project goes along. Everything is
communicated -- must be communicated if the tests are to run -- and the
balance between paper, presentation, conversation, and question answering
can be adjusted to suit the participants.
Communicating Internal Design: All the members of the team need to
understand the system. They need to be able to advance the code, to make
it do whatever is needed. They need to make those changes in a way that is
consistent with the overall shape of the code and the direction it is
moving: we might say, with its design.
One way to be sure that everyone on the team understands and follows the
design is to create a design and document it with UML diagrams or words
or both. Everyone would agree that you shouldn't do too much of this up
front, but it's hard to say how much is too much. To communicate a design
in this way, you need documents that are both good-looking and
comprehensive.
Release Planning Communicates Design: The XP release planning
process includes a step where the programmers take the stories and discuss
them among themselves, in order to estimate them. To estimate them, they
figure out roughly how to implement each story. Guess what: the
programmers are communicating about design. And watch them carefully:
sometimes when they think they might forget, they write a note about how
to implement a story, right on the card. They're recording enough about
the design to remember it later. This is design, and communication of
design.
By relying on the natural willingness of programmers to talk about what
they are doing and how cool it is, and by using pair programming, XP teams
spread what needs to be known without the need for much formality.
Communicating Status: Some aspects of your status certainly need to be
communicated to people outside the project team. These communications will
often -- but not always -- lead to documents.
Standup Meeting: XP recommends a daily "standup" meeting, where the
team stands in a circle and quickly raises issues that people need to
understand. This will include design issues and the like, but the main thing
is a quick review of status, requests for help, problems encountered, and
discoveries made. The standup meeting makes sure that all the attendees
know about everything that's going on. Naturally, customers and managers
are invited, and should attend.
Big Visible Chart: The important status information that needs to be fresh
in people's minds all the time should be represented on a Big Visible
Chart.
Here are some examples of useful charts:
• Story Chart. You need to know how you're doing on getting stories
done. Consider a simple bar chart, the height showing the total number of
stories, with the number currently completed filled in. This chart shows
any growth in the requirements, and shows progress toward completion, at
a glance.
• Story Wall. Some teams keep all the story cards up on a wall, arranged
in iteration order: the planned order of implementation. When the customer
changes her mind about the order, or about a story, the wall is rearranged.
Stories that are done are marked, perhaps with a bright green diagonal line
through the card.
• Iteration Wall. The plan and status for the current iteration should be
visible to all, all the time. Some teams write the iteration plan on the
whiteboard. Typically when a task or story is done, the team will mark it
completed. A glance at the wall tells everyone what's up. Other teams fill
the wall with the story and task cards, marking them complete as they get
finished.
Communicating Externally: You may need external documentation to
support the product, either at the design level or for user documentation.
The natural flow of the process supports getting the work done.
Status Reports: Send management copies of your Big Visible Charts,
tied together with a few paragraphs.
Requirements Tracking: If you need requirements tracking, try
associating stories with acceptance tests, and maybe include a little
version information extracted from your code management software.
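A hypothetical example of such a trace, with the last column extracted
from the version control tool (the story names, test classes, and build
numbers are all invented for illustration):

    Story                       Acceptance test          First passed in
    Transfer between accounts   TransferAcceptanceTest   build 1.3
    Overdraft is refused        OverdraftBugTest         build 1.5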
The details of what to document and how to produce the information
are always unique to the project and its people. But the focus that XP
would recommend is always this:
1. Communicate using the hottest medium you possibly can: from face-to-
face conversation at a whiteboard down through phone conversation,
through email, video or audiotape, down to paper or its online equivalent.
2. Find ways to have people's everyday activities embody communication.
Find ways to have their everyday activities automatically generate the
information that needs to be published. Find ways to extract information
from development artifacts like source code.
3. Treat periodic or specialized documents as stories: use your existing
planning mechanisms to get them done.
14. CONCLUSION
15. REFERENCES