Capability Assessment
Teamwork
How is morale on the team?
Morale is low. The team lacks trust and fears conflict.
Morale is not good. People don’t feel comfortable speaking openly, trust is low, and the team avoids conflict rather than confronting difficult issues.
We are doing OK. We don’t always agree, but we try to act like a team. The team is building trust, and open conversations are becoming more frequent.
The team is generally happy, engaged, and productive, and team members genuinely enjoy working together. Trust is strong and team members can speak openly.
Most team members feel like this is one of the best teams they have ever worked on. They are excited to come in to work and are looking forward to the next day when they leave.
Does the team have working agreements?
No.
Generally the team recognizes some de facto norms, but they haven't written them down or agreed upon them.
The team endorses a written working agreement, which is kept up to date and clearly visible in a public area such as the team room or online.
The team endorses and follows a written working agreement that includes statements of team values and how members work together.
The team has legacy working agreements, but no longer needs to reference them, since they form the team’s culture. When exceptions arise, they quickly identify and address them.
Is the team self-organized?
Team leads or managers determine the tasks and estimates for work and assign items.
The team estimates as a group, and then team members create tasks for themselves. Leads outside the team are not really involved.
The team works together to organize tasks, and the Scrum Master is less critical to the day-to-day work of the team.
The team self-organizes (i.e., manages its own workload – each member deciding what to work on) with minimal external leadership.
The team is self-managing (i.e., considers the goal of management, decides how it will reach the goal, and works together to do so).
What stage of development applies to your team?
We are a team in name only. Team members work on multiple efforts with many different groups of people.
It is a new team.
-or-
Several team members just left the team and/or new ones were added.
The team is just starting to figure out how to work together, and there is a considerable amount of conflict.
Mostly, the team knows how to work together and is well on its way to high performance.
The team is motivated and autonomous, often reaching an unexpectedly high level of success.
How effectively has the team adopted the Agile roles?
Team leads and resource managers direct the team.
Team members take on Agile roles and responsibilities, but the team does not always honor them.
The team understands Agile roles and responsibilities. Team members are growing comfortable with their roles and feel supported in learning them.
The team is self-organizing. Team members are comfortable in the process and are able to cover for the SM/PO if they are absent.
Team members coach each other, and roles are blended. The lines between roles have become blurred to the extent that they don’t apply.
What is the continuity/stability of the team?
There is a constant churn of people moving on and off the team.
-and/or-
The team was formed for a single release (or a single major initiative) and will be disbanded after shipping.
Core team members are dedicated to the team but may be pulled and replaced frequently.
Team members are consistent and dedicated.
Most of the team has been constant over the past 12 months. The team has completed several production releases/major initiatives with little membership change or turnover.
The team has bonded and become established in the organization. They receive work on a consistent basis, which they deliver. Turnover is low.
Is the team set up as a dedicated team?
[Definition: Team members are not assigned to any other projects.]
Most team members are on multiple teams or working on multiple projects.
Most team members are at least 50% allocated to the team.
No less than 70% of team members are allocated 100% to the team.
No less than 90% of team members are allocated 100% to the team.
Everyone on the team is 100% allocated to the team.
Continuous Improvement
Does the team use Work-In-Process (WIP) limits?
As long as they complete everything by the appropriate deadline, the team doesn’t care how much is in process at one time.
Team members are trying to work on as few stories at a time as possible, but the team doesn’t really follow the WIP limit.
The team actively pursues one-piece flow and sets WIP limits. Most of the time team members work on (at most) two stories and usually only one. Sometimes, multiple members work on the same story.
The team sets and respects WIP limits. Each team member works on one story at a time, and often more than one team member will work on the same story.
The team swarms around the work to minimize cycle time (i.e., as team members finish their work, they join forces [swarm] to help other team members finish work before moving on).
What is the team’s tolerance for change?
The team does not expect changes to their plans. When changes occur, the team uses a change-control process.
The team reevaluates plans on a quarterly basis to minimize impacts.
The team accepts change at the start of each iteration.
The team embraces change on demand. It makes designs, documentation, and key decisions just in time rather than upfront.
Change is welcome at all phases and levels.
How does the team approach continuous improvement?
The process is fixed. There is no interest in changing or improving.
The team is still learning the basics and wants to get it "just right" before tweaking.
The team understands one way of doing things but there is interest in trying new ideas.
The team understands the principles behind the practices and is trying new ideas presented by the Scrum Master, RTE, or coach.
The team self-identifies Agile goals and proactively works to achieve them by adding and prioritizing improvement items in the backlog.
What level of specialization exists within the team?
Team members are specialists, with little to no overlap in skillsets.
Developers help other developers, testers help other testers, etc.
Developers assist with testing tasks once all dev tasks are complete.
Team members pair with other team members on tasks to cross-train.
Anyone on the team can pick up any task regardless of title or technology. There are no bottlenecks for a specific type of work, and the team actively seeks ways to improve the skills on the team.
Does the team practice swarming?
[Popup Def - Swarm]: As team members finish their work, they join forces [swarm] to help other team members finish work before moving on.
Work is assigned to one person at a time.
Developers and Quality Assurance can work on unrelated user story tasks at the same time, but they don’t really work together.
At the end of the sprint, the team pulls together to complete commitments regardless of pending task assignments.
The team coalesces around work as needed, but individuals continue to focus solely on their area of specialization.
The team coalesces around work as needed.
Which of the following best describes team retrospectives?
We don’t have retrospectives.
We have retrospectives, but not regularly or frequently.
We have retrospectives at the end of each iteration, and they usually produce action items.
Our retrospectives are useful and enjoyable. We use the actions to continuously improve.
Our retrospectives are creative and forward looking. We often have breakthrough ideas that we act on and that produce results.
Continuous Delivery
What is the team’s approach to architecture?
Architecture is ad hoc and not strategic.
Designated architects do this work, primarily up-front prior to implementation.
The team is starting to work with architects, who are starting to delegate more decisions to the team.
The team makes architectural decisions, on a just-in-time basis.
Architectural excellence is one of the internal competencies of the team. This work occurs on a just-in-time basis (last responsible moment).
What is the team’s approach to version control?
We pass files around via e-mail.
We use a network drive to store our files, code, and other artifacts.
We use version control for source code.
We use version control for source code and other artifacts such as documentation.
The build/release/deployment pipeline is version controlled in step with the application source code and configuration.
What is the team’s approach to builds?
We run periodic builds manually on developer workstations.
We manually trigger the Continuous Integration (CI) server to build periodically.
We trigger the CI server to build daily.
We automate builds so that they trigger after every code commit.
We automate and test ephemeral feature branches.
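The higher maturity levels above describe builds that trigger on every commit. As a minimal, language-agnostic sketch (not a real CI product's API), a poller can compare the repository head against the last commit it built; `get_head` and `run_build` are hypothetical callables standing in for the version control query and the build job:

```python
# Sketch of build-per-commit triggering. `get_head` returns the latest
# commit hash; `run_build` starts a build for it. Both names are
# illustrative placeholders, not a specific CI server's API.

def poll_and_build(get_head, run_build, last_seen=None):
    """Trigger run_build whenever the head commit changes."""
    head = get_head()
    if head != last_seen:
        run_build(head)   # every new commit produces a build
    return head           # caller stores this for the next poll
```

In practice the same effect usually comes from a repository webhook or hook script rather than polling, but the contract is identical: a new commit always yields a build.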
Which best represents your infrastructure?
We don’t know anything about hardware details.
We use physical hardware (“bare metal").
We deploy to long-lived virtual machines provisioned by others.
We deploy to our own virtual machines using infrastructure as code tools.
We deploy to automated, scalable, and containerized infrastructure.
How does the team handle deployment?
Deploying software is a multi-day event and feels painful.
Deploying software is manual, infrequent, intensive, and takes multiple hours.
Deploying software is automated, but fragile and still requires a lot of manual intervention.
Deploying software is automated, relatively low risk, and occurs at will (push-button deployments).
Deployments are so trivial in scope and risk, they happen automatically multiple times every day, with no human intervention or initiation.
How does the team deploy databases?
We minimize database deployments since they require vast amounts of work and approvals.
We request database changes from a separate team, and do not own the database deployment process.
We own our database changes, as well as the database deployment process, and make changes manually.
We automate database deployments to all environments, using the same scripts for all deployments. We have no provisions for rollback.
Our database deployments are automated and have near-zero downtime deployments with provision for rollback.
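Automated database deployment with rollback, as described in the top answer, typically means ordered migrations with paired up/down scripts, applied identically in every environment. A minimal sketch, assuming migrations are `(version, up_sql, down_sql)` tuples and `execute` applies SQL to the target (all names hypothetical):

```python
# Sketch of a migration runner with rollback. `applied` is the set of
# versions already recorded in the target database; `execute` is a
# stand-in for running SQL against it.

def migrate(migrations, applied, execute):
    """Apply every migration not yet recorded, in version order."""
    for version, up_sql, _down in sorted(migrations):
        if version not in applied:
            execute(up_sql)
            applied.add(version)
    return applied

def rollback(migrations, applied, execute, to_version):
    """Undo migrations newer than `to_version` using their down scripts."""
    for version, _up, down_sql in sorted(migrations, reverse=True):
        if version in applied and version > to_version:
            execute(down_sql)
            applied.discard(version)
    return applied
```

Using the same scripts for every environment (the point of the fourth answer) falls out naturally: only `execute`'s connection target changes.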
How does the team handle built binaries?
We have no defined process or schedule.
We compile binaries each time we deploy to a location (new build event for each deployment).
We compile binaries once. We do not store them on an enterprise repository or consistently version them.
We compile binaries once. If they pass acceptance tests, we archive them to be deployed anywhere (using tools such as Artifactory).
Every code change results in a deployable artifact, which we test and promote from lower environments through production. We can follow a complete chain of custody for each artifact, from early tests to production.
How does the team respond to failed builds?
We are not responsible for a centralized build management system.
We automate our builds, but rarely pay attention to the build status. We are aware of unsuccessful builds, but have higher priorities than to fix them.
The build pipeline rejects broken builds, and the team works to resolve each breakage immediately.
The Continuous Delivery (CD) pipeline rejects builds that fail release criteria. The team swarms to resolve each breakage immediately and to restore the pipeline to a successful state.
We use pull requests, which merge into the mainline only when builds and tests succeed.
How does the team handle static code analysis?
We are not aware of static code analysis.
We don't do static code analysis.
We trigger static code analysis and code quality scans manually.
We trigger static code analysis and code quality scans automatically as part of the build pipeline.
We trigger static code analysis and quality scans as part of the build pipeline. We have build breakers in place if they do not pass quality gates.
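A "build breaker" at the top level means the pipeline fails when a quality gate is not met. One common shape, sketched here with an assumed issue-count summary and illustrative thresholds (the summary format and gate values are hypothetical, not a particular tool's output):

```python
# Sketch of a quality-gate build breaker. `summary` is assumed to be a
# dict of issue counts per severity produced by the analysis step.

import sys

GATES = {"blocker": 0, "critical": 0, "major": 10}  # example thresholds

def passes_quality_gate(summary, gates=GATES):
    """True only if every severity is within its allowed count."""
    return all(summary.get(sev, 0) <= limit for sev, limit in gates.items())

def main(summary):
    if not passes_quality_gate(summary):
        sys.exit(1)  # non-zero exit status fails the pipeline stage
```

The key design choice is the non-zero exit code: any CI system treats it as a failed stage, so the gate needs no tool-specific integration.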
How does the team obtain test data?
We request test data from another group, which requires a long lead time.
We request test data from another group, once for system testing and again for integration testing. Each request requires a short lead time.
We regularly reuse a set of test data created for us.
We can regenerate our test data automatically at will.
We use mocking to provide test data.
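"Regenerate test data automatically at will" usually comes down to deterministic synthetic data: seed a random generator so every run (and every environment) produces the identical dataset. A minimal sketch with an invented record shape:

```python
# Sketch of regenerable test data. The customer fields are illustrative;
# the point is the fixed seed, which makes the data reproducible on demand.

import random

def generate_customers(n, seed=42):
    """Regenerate the same synthetic customers on every run."""
    rng = random.Random(seed)  # seeded generator => deterministic output
    return [
        {"id": i, "name": f"customer-{i}", "balance": rng.randint(0, 10_000)}
        for i in range(n)
    ]
```

Because the data is a pure function of the seed, there is nothing to request from another group and nothing to refresh: any test can rebuild its inputs in milliseconds.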
How does the team handle performance testing?
We do not run performance tests.
We delegate performance testing to another group.
We trigger manual performance tests irregularly.
We trigger manual performance tests on a regular cadence.
We continuously integrate performance testing into the build pipeline.
How does the team monitor the application and infrastructure?
We don't monitor our application or infrastructure.
We passively monitor our application and infrastructure, but false positives become overwhelming.
We passively monitor our application and infrastructure. The number of false positives is manageable.
We continuously monitor our application, and make manual infrastructure repairs when notified of failures.
We continuously monitor our application and employ self-healing/resilient infrastructure. The system delivers automated notices to correct individuals, with minimal false positives.
How does the team handle logging?
We don’t do logging.
We don't centralize our logs.
We have centralized logs and execute manual queries to comb data.
We have centralized logs, which we can view from a dashboard.
We employ event-driven metrics (collected from centralized logs), which we can view on dashboards.
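The top answer's "event-driven metrics collected from centralized logs" can be as simple as counting structured event fields out of aggregated log lines. A sketch, assuming lines carry `event=NAME` key-value tokens (a common structured-logging convention, assumed here rather than taken from a specific tool):

```python
# Sketch of deriving event metrics from centralized logs. Assumes each
# line may contain an 'event=NAME' token (illustrative log format).

from collections import Counter

def event_counts(log_lines):
    """Count occurrences of each event across all log lines."""
    counts = Counter()
    for line in log_lines:
        for token in line.split():
            if token.startswith("event="):
                counts[token[len("event="):]] += 1
    return counts
```

Dashboards then chart these counters over time; a production setup would stream rather than batch, but the extraction logic is the same.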
How does the team treat security?
We don’t design with security in mind. We rely on testing and code fixes to address security issues after the release.
We include some security constraints in our design. Testing (static code analysis, Sonar) occurs before the release.
We build security constraints into our designs, but we don’t have security reviews or test (deeper static code analysis, Fortify) until the very end.
We build security constraints into our designs and perform manual security tests (all static code analysis plus penetration testing) on at least a portion of our code.
We automate security testing (static code testing, penetration testing, plus container scanning) as part of the build delivery pipeline. Developers get meaningful security feedback within minutes of their commit.
How does the team handle branching and merging?
We have many long-lived branches that are cumbersome to merge into master.
We continuously develop on multiple active branches and merge to master as needed.
We have many short-lived branches. We merge to trunk after we complete each feature.
We do trunk-based development. We commit code to the head of master multiple times a day.
We do trunk-based development. We commit code to the head of master once we review and approve pull requests. We use feature toggles to enable/disable features at will.
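Feature toggles, as in the top answer, let unfinished work merge to trunk while staying dark in production. A minimal sketch (the toggle registry and feature names are illustrative, not a specific toggle library):

```python
# Sketch of feature toggles for trunk-based development. The registry
# would normally live in config or a toggle service; this dict is a
# stand-in for illustration.

TOGGLES = {"new-checkout": False, "dark-mode": True}

def is_enabled(feature, toggles=TOGGLES):
    return toggles.get(feature, False)  # unknown features default to off

def checkout(cart_total, toggles=TOGGLES):
    if is_enabled("new-checkout", toggles):
        return round(cart_total * 0.9, 2)  # new flow, dark until toggled on
    return cart_total                      # current production behavior
```

Flipping the toggle in configuration enables or disables the feature at will, with no branch merge and no redeploy of different code.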
How frequently does the team do code reviews?
We don’t do code reviews or paired programming.
We recognize that code reviews are a good thing and are taking steps toward doing it.
We do code reviews and test reviews on less than 50% of our user stories.
On 50% to 90% of our user stories, we have code/test pairs do tool-assisted peer code reviews and peer test reviews.
On more than 90% of our user stories, we have code/test pairs do tool-assisted peer code reviews and peer test reviews.
Does the team do holistic testing?
We employ different kinds of testing (unit, functional, integration, etc.), all without coordination.
We recognize that holistic testing is a good thing and are taking steps toward doing it.
Developers and testers coordinate their testing efforts on less than 50% of our user stories.
Developers and testers coordinate their testing efforts on more than 50% of our user stories.
We coordinate all testing ahead of coding, based around user stories.
Does the team automate functional tests?
We do not automate tests.
We are just starting to automate tests, and plans are in place to increase levels.
We build automated tests for new user stories.
We build automated tests for new user stories and are also extending them to existing functionality.
Automation is our common practice, and we’ve got great coverage.
Does the team unit test?
We do not unit test.
Some coding involves unit testing. We understand that unit testing produces better code and reduces the overall effort.
All new user stories involve some unit testing.
All new user stories involve the responsible amount of unit testing. We include unit testing of stories in the definition of done.
It’s hard to imagine a shop that is better at unit testing. We have deep knowledge of the latest unit testing techniques, using mock objects, etc.
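Mock objects, mentioned in the top answer, isolate the unit under test from its external dependencies. A small self-contained example using Python's standard `unittest.mock`; the `OrderService` and its payment gateway are invented for illustration:

```python
# Sketch of unit testing with a mock object. OrderService and its
# gateway dependency are hypothetical code under test.

import unittest
from unittest import mock

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place(self, amount):
        # The gateway (an external system) is what we mock in tests.
        return "ok" if self.gateway.charge(amount) else "declined"

class OrderServiceTest(unittest.TestCase):
    def test_declined_charge(self):
        gateway = mock.Mock()
        gateway.charge.return_value = False  # simulate a declined card
        self.assertEqual(OrderService(gateway).place(10), "declined")
        gateway.charge.assert_called_once_with(10)
```

The mock both stubs the dependency's behavior and verifies the interaction, so the test runs fast and never touches a real payment system.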
How does the team feel about refactoring?
We don’t do or understand refactoring.
We do some refactoring, as needed when implementing stories. We have some understanding of the Single Responsibility Principle (SRP) and the Open/Closed (O/C) principle.
We do the appropriate amount of refactoring with most user stories, around SRP and the O/C principles.
We have a deep understanding of refactoring, and it is a cultural norm on our team.
It’s hard to imagine a shop that is better at refactoring. We have deep knowledge of the latest refactoring techniques and refactor to patterns.
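One way to see SRP and Open/Closed in practice: separate a policy (pricing rules) from the code that applies it, so new rules are added by registration rather than by editing existing functions. A sketch with invented rule names:

```python
# Sketch of refactoring toward SRP and Open/Closed: each pricing rule has
# a single responsibility, and new rules extend the system without
# modifying unit_price. All names are illustrative.

DISCOUNT_RULES = []

def rule(fn):
    """Register a pricing rule (open for extension, closed for change)."""
    DISCOUNT_RULES.append(fn)
    return fn

@rule
def bulk_discount(qty, price):
    return price * 0.9 if qty >= 10 else price

def unit_price(qty, price):
    for apply_rule in DISCOUNT_RULES:
        price = apply_rule(qty, price)
    return round(price, 2)
```

The refactoring payoff is that a new `@rule` function changes behavior without touching `unit_price`, which is the Open/Closed principle in miniature.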
Execution
Does the team specify a Definition of Done (DoD)?
We do not specify a definition of done.
A user story is done when it meets the acceptance criteria.
We have a DoD that includes meeting acceptance criteria and a few other items. We use it like a check list.
We have a DoD that evolves as team members customize it to match their experience.
Quality is second nature to our team. The DoD includes everything necessary to push a story to production, including organizational standards like compliance and security.
How does the team approach working in Agile?
We were told to use Agile, but we don’t really know what that means.
We try to follow the rules of a specific framework, but they don’t always fit with how we work.
We follow an Agile framework and adhere to its practices. The team makes improvements on a regular basis.
We work in an Agile manner and understand why. We borrow Agile practices/tools from various Agile frameworks as needed.
We actively pursue new ways of being more Agile. We follow Agile principles more than a specific framework and can adapt to any situation.
Does the team work at a sustainable pace?
People are tired, irritable, and burnt out working overtime on a regular basis.
We recognize that the current pace is not sustainable, and we are taking steps to improve the situation, but people continue to work regular overtime.
The team works at a pace that is mostly sustainable, though the workload can be inconsistent with bursts of heavy amounts of work.
The team has support from the organization to work at a sustainable pace. Most of the time we work consistently, with the elements of sustainable pace in our team agreement.
Our pace is sustainable, driven by team members’ intrinsic desire to achieve something special.
Is the team the right size?
[Definition: Here the term "team" = development team, not including Scrum Master/Product Owner.]
The team is loosely defined and team members don't really interact with each other.
The team is very large. We have more than 12 people.
We have 10-12 people on the team.
We have 5-9 people on the team.
We have 3-5 people on the team.
How does the team handle impediments?
[Definition: Any obstacle that keeps a person or team from completing a task or project (impromptu meetings, technical issues, lack of knowledge, etc.).]
We accept impediments as norms and do the best we can with what we have.
Sometimes we raise impediments, but rarely resolve them.
We raise impediments frequently and feel encouraged to do so. We resolve some impediments, and others we cannot. The team is comfortable raising impediments and is starting to see the benefits of doing so.
Raising impediments is becoming routine, and the team feels comfortable doing so. We usually resolve any impediments. Sometimes we perform root cause analysis, and we increasingly see the value of raising impediments.
Raising and resolving impediments is part of our team culture. We address all individual and team impediments that can be addressed at those levels. We perform root cause analysis frequently and act on the results.
When is functional testing complete?
We complete testing in future sprints.
We complete testing in the following sprint.
We try to complete testing in the current sprint, but usually it extends into the following sprint.
Often we complete testing within two weeks and mostly before we start the next story.
For software projects, we employ Test-Driven Development (TDD), completing UI-based testing immediately after coding the story.
Does the team get together for daily standups?
[Definition: A meeting (short enough for people to remain "standing up") held each day in the same place to communicate current work status and issues.]
We do not get together as a team on a daily basis.
We get together as a team, but not everyone participates every day. Sometimes the meetings last a long time.
We meet on a daily basis, usually for less than 15 minutes. The entire team participates and understands this is not a status meeting.
Our daily standup is short and effective. It runs well with or without someone officially responsible for the meeting. We adjust plans as needed.
Daily standups are part of the team culture. No one needs to facilitate them, and we positively adapt them to the needs of the team.
How does the team track progress?
[Definition: Agile Method - Burnup, Burndown, Cumulative Flow Diagram, or a similar method.]
I'm not really sure how (or if) we track progress.
We track progress using Agile methods. Sometimes this information influences the behavior of the team.
We track progress using Agile methods. Frequently this information influences the behavior of the team.
We track progress using Agile methods. Usually this information influences the behavior of the team.
We track progress using Agile methods. The team proactively uses this information to head off potential problems.
Does the team review completed work with stakeholders?
[Definition: A non-member of the team who has meaningful interest and valuable input (for instance, project sponsors, subject matter experts, etc.).]
We do not demonstrate work to stakeholders.
We demonstrate to stakeholders when we have something worth showing. The reviews don’t go very well because of bugs, incomplete work, or lack of preparation.
We hold stakeholder reviews on a regular schedule. The entire team attends. We are usually prepared, and we get useful feedback. Mostly we focus on progress.
Stakeholder reviews are part of our team culture. We review every story, and the team is very well prepared. The reviews look at progress, but focus on value. The team encourages active feedback, and stakeholders perceive the reviews as valuable.
We proactively involve stakeholders on a regular basis, not just in reviews. We often identify more value as a result of this interaction. When we hold reviews, they focus on the value we deliver.
How does the team approach innovation?
We do not set aside time for innovation.
We try to set aside time for innovation, but we usually use that time for additional development or extra testing.
We set aside time for innovation, and we try to plan at least one innovative activity.
We dedicate time/capacity to innovation, and we incorporate innovation into our sprints.
We innovate as we go. We are constantly experimenting for validated learning.
Does the team operate within the timebox cadence?
[Definition: An assigned period of time during which a team works toward an established goal. The team stops work when the time period concludes.]
The length of each sprint changes to fit the work we need to do.
Our sprints are all the same length, but we usually carry over some work from sprint to sprint.
We focus on completing everything by the end of the sprint, and usually we succeed.
We complete stories early in the sprint, and we rarely carry work over to the next sprint.
Our team works in a continuous flow.
How does the team measure progress?
We don’t pay attention to velocity or throughput on our team.
We measure velocity but it fluctuates greatly.
We use velocity to determine sprint commitments. Our velocity does not have extreme highs or lows.
Our velocity is predictable and stable. We measure throughput.
We measure progress and actively pursue ways to improve, with a goal to achieve better business value outcomes.
[Definition: Velocity - The speed at which a team completes relative-sized functionality, measured over the course of time.]
[Definition: Throughput - An average (over time) of how many user stories a team can complete within a given time frame (i.e., 4 user stories per month).]
Does the team visualize the workflow?
[Definition: Makes a visual map of the workflow, using visual tools such as a Kanban board.]
We don’t know what our workflow looks like.
We have some understanding of our key steps, but don’t visualize them on a Kanban board. We use default states in our tool.
We have a Kanban board and the columns roughly match our workflow.
We understand our workflow and visualize it accurately.
We deliberately change our workflow to be ever more efficient and effective.
Do you use Work-In-Process (WIP) limits?
We do not have WIP limits.
We have WIP limits, but we do not respect them.
We have WIP limits and we follow them strictly.
We have WIP limits and we begin to adjust them to maximize our throughput.
We actively explore ways to improve our process (using metrics) to reduce WIP limits.
[Definition: Throughput - An average (over time) of how many user stories a team can complete within a given time frame (i.e., 4 user stories per month).]
Does the team use cycle time and throughput?
[Definition: Cycle time - The amount of time a process takes from the time work begins until it stops, including delays.]
[Definition: Throughput - An average (over time) of how many user stories a team can complete within a given time frame (i.e., 4 user stories per month).]
We do not measure cycle time or throughput.
We measure it, but no one pays attention to it.
We measure it and use it.
We measure it and actively use it for process improvement.
We use it to measure performance changes resulting from system improvement efforts.
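The two metrics defined above are simple to compute from work-item timestamps. A sketch, assuming each story records a start date and a finish date (the function names are illustrative):

```python
# Sketch of cycle time and throughput from story dates.

from datetime import date

def cycle_time_days(started, finished):
    """Elapsed days from when work begins until it stops, delays included."""
    return (finished - started).days

def throughput(finish_dates, window_days=30):
    """Average stories completed per window_days over the whole sample."""
    if not finish_dates:
        return 0.0
    span = max((max(finish_dates) - min(finish_dates)).days, 1)
    return round(len(finish_dates) / span * window_days, 1)
```

Teams at the higher levels trend these numbers over time: a process change that genuinely helps should show up as shorter cycle times or higher throughput.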
Does the team use metrics to inspect and adapt?
We follow the process we were given and we don’t change it.
We make changes to improve our process, but we struggle to determine if those changes help.
We make changes to improve our process and use metrics to help determine their impact.
We make changes to improve our process and actively use metrics to monitor their impact.
We identify new and valuable metrics to help us improve our process.
Does the team have exit agreements for each state?
No, we don’t have exit agreements.
Generally we recognize some de facto team norms for some states. However, they are not in writing, nor has the team agreed to them.
There are written exit agreements for each state. The team agrees with them and they are clearly visible to the team.
We regularly review exit agreements to help optimize the flow.
The team actively pursues adjustments to exit agreements to better optimize the flow.
Backlog Refinement
How much work comes from the team backlog?
Individual managers assign tasks, and there is no clear backlog.
We understand user stories, but we get work assignments that are not in the backlog.
Our backlog has user stories for most of the product/program work, but we still use other artifacts for some work (e.g., tech specs or requirements documents).
Our backlog has user stories for all of the product/program work, but we still use other artifacts for non-product work.
All the work we do exists in the backlog (including innovation, improvement, etc.).
How is product management handled for the team?
There is no product management.
It involves multiple stakeholders, with no clear authoritative voice.
There is a single Product Owner who does all of the following: makes sure sufficient user stories are ready at all times, accepts user stories, attends all team-oriented planning meetings, and prioritizes the backlog.
The Product Owner is empowered to answer questions and knows enough to answer most questions immediately.
The Team feels a sense of ownership for the product that stems from the vision and leadership of the Product Owner.
How well does the team understand the product vision?
Our product does not have a product vision.
-or-
A written product vision exists somewhere, but it is vague and no one refers to it.
There is a written definition, which is clear and concise, but few members of the team know about it or follow it.
There is a compelling product vision, which only the Product Owner can clearly articulate.
There is a clear and compelling product vision, which some team members can articulate well.
Our product vision is simple, clear, concise, and everyone involved can articulate it well.
Does the team follow the INVEST model with user stories?
[Definition: Independent, Negotiable, Valuable, Estimable, Small, Testable]
We don’t follow INVEST with our stories, or we don’t know what it is.
We understand INVEST and are starting to follow parts of it on some stories.
We mostly follow INVEST on many stories.
We follow INVEST for most stories.
INVEST is part of our team culture.
How big are your user stories?
Our stories are all different sizes.
We are starting to see the relationship between small stories and success.
We have a rule of thumb encouraging small stories.
We can accept most stories in a week or less.
We can accept most stories in 1-3 days.
At what point do you have end-to-end functionality?
Our software does not work end-to-end until we near the end of the project.
We integrate our software end-to-end after a few sprints.
We plan our stories so that we can get to end-to-end functionality as soon as possible.
We design and build our stories to work end-to-end, but we accomplish this only some of the time.
Each of our user stories delivers end-to-end functionality.
How does the team estimate?
People other than those doing the work do the estimating.
-or-
Estimation is based on the work of each function aggregated together.
A few team members provide work estimates for the team.
-or-
We estimate work in terms of time (rather than points).
The whole team participates in estimation. Estimates are a single measurement of the work for the whole team (e.g., story points, t-shirt sizes). Most team members no longer think of estimates in terms of time.
We use relative sizing to estimate, and everyone on the team estimates the entire story.
We use relative sizing to estimate. Our estimates are quick, efficient, and primarily identify stories of unusual size/complexity.
How far ahead does the team groom its backlog?
User stories are rarely ready before the team starts working on them.
We understand that consistent and frequent grooming is important. We are taking steps to get there.
Sometimes stories are ready prior to planning, but we still do a lot of grooming before we can plan.
Usually there are just enough stories ready for planning.
We always have stories ready 2-4 weeks (or 1-2 sprints) ahead.
Planning
How does the team manage cross-team dependencies?
We identify dependencies at the onset of the project. Someone outside the team manages the dependencies.
We identify dependencies, but we are not able to adjust our plan to accommodate dependencies.
We factor cross-team dependencies into the PI plan and schedule around them.
We identify cross-team dependencies and make them visible. We re-plan as needed.
We resolve dependencies through constant collaboration with other teams. This allows us to adapt as we learn more.
What happens in sprint planning?
We don’t do sprint planning. Others tell us what we must work on to meet the schedule.
It takes several days of planning to solidify our work assignments for the sprint. However, this doesn’t result in a realistic or achievable plan.
A Scrum Master facilitates our sprint planning. Activities include detailed tasking and deliberate capacity assessment. This ensures our plan is realistic and achievable.
The team takes ownership of the sprint plan, and looks ahead to prepare for the planning sessions. The plan is realistic and achievable.
Sprint planning is intuitive. The team actively pursues ways to make planning more efficient.
Does the planning result in a shared commitment by the team?
People outside of the team make the commitments.
The team has some ownership of the plan, resulting in some ownership of the commitment. The team occasionally completes commitments within the sprint.
Team members own the plan, break down work into tasks (without external assistance), and volunteer for task assignments. We meet 80% of our commitments, and Product Owners accept some work before the last day of the sprint.
The team has full ownership of plans and commitments. We regularly deliver on commitments and rarely carry over work to the next sprint. Product Owners frequently accept user stories before the last day of the sprint.
The team commits in a continuous flow, and Product Owners consistently accept user stories throughout the sprint.
Does the team base planning on capacity or velocity?
Capacity. We determine sprint capacity using task hours. Every resource should be at or near 100% capacity, even if that means pulling in more stories than the team can complete.
Capacity. We determine the sprint capacity using task hours. We expect assigned task hours to fill most of the capacity for each resource, but we give a little wiggle room for collaboration and shared tasks.
Both. We determine the sprint capacity using average velocity. We look at individual task loads to see if any person is over-allocated.
We follow velocity-based planning. The team starts tracking burned points (accepted stories) in addition to burned hours. Updating hours becomes optional. We don’t review individual work load explicitly.
We follow velocity-based planning. We measure progress through completed stories. We no longer need task estimates.
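Velocity-based planning, as in the last two answers, amounts to using recent average velocity as the sprint's point budget and pulling backlog stories until it is reached. A sketch with illustrative numbers (the lookback window and story sizes are assumptions, not a prescribed method):

```python
# Sketch of velocity-based sprint planning. completed_points holds point
# totals from past sprints; backlog_points is story sizes in priority order.

def average_velocity(completed_points, lookback=3):
    recent = completed_points[-lookback:]
    return sum(recent) / len(recent)

def plan_sprint(backlog_points, completed_points, lookback=3):
    """Pull stories, in priority order, until average velocity is reached."""
    capacity = average_velocity(completed_points, lookback)
    planned, total = [], 0
    for points in backlog_points:
        if total + points > capacity:
            break  # next story would exceed the velocity-based budget
        planned.append(points)
        total += points
    return planned
```

Note how individual task-hour tracking never appears: the team-level point budget replaces per-person capacity math, which is exactly the shift the higher answers describe.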