
What is a Performance Task? (Part 1)

Defined Learning
Apr 11, 2015 · 7 min read

A performance task is any learning activity or assessment that asks students to perform to demonstrate their knowledge, understanding, and proficiency. Performance tasks yield a tangible product and/or performance that serves as evidence of learning. Unlike a selected-response item (e.g., multiple-choice or matching) that asks students to select from given alternatives, a performance task presents a situation that calls for learners to apply their learning in context.

Performance tasks are routinely used in certain disciplines, such as visual and performing arts, physical education, and career-technology, where performance is the natural focus of instruction. However, such tasks can (and should) be used in every subject area and at all grade levels.

Characteristics of Performance Tasks

While any performance by a learner might be considered a performance task (e.g., tying a shoe or drawing a picture), it is useful to distinguish the application of specific and discrete skills (e.g., dribbling a basketball) from genuine performance in context (e.g., playing the game of basketball, in which dribbling is one of many applied skills). Thus, when I use the term performance tasks, I am referring to more complex and authentic performances.

Here are seven general characteristics of performance tasks:

1. Performance tasks call for the application of knowledge and skills, not just recall or recognition.

In other words, the learner must actually use their learning to perform. These tasks typically yield a tangible product (e.g., graphic display, blog post) or performance (e.g., oral presentation, debate) that serves as evidence of their understanding and proficiency.

2. Performance tasks are open-ended and typically do not yield a single, correct answer.

Unlike selected-response or brief constructed-response items that seek a “right” answer, performance tasks are open-ended. Thus, there can be different responses to the task that still meet success criteria. These tasks are also open in terms of process; i.e., there is typically not a single way of accomplishing the task.
3. Performance tasks establish novel and authentic contexts for performance.

These tasks present realistic conditions and constraints for students to navigate. For example, a mathematics task would present students with a never-before-seen problem that cannot be solved by simply “plugging in” numbers into a memorized algorithm. In an authentic task, students need to consider goals, audience, obstacles, and options to achieve a successful product or performance. Authentic tasks have a side benefit — they convey purpose and relevance to students, helping learners see a reason for putting forth effort in preparing for them.

4. Performance tasks provide evidence of understanding via transfer.

Understanding is revealed when students can transfer their learning to new and “messy” situations. Note that not all performances require transfer. For example, playing a musical instrument by following the notes or conducting a step-by-step science lab requires minimal transfer. In contrast, rich performance tasks are open-ended and call for “higher-order thinking” and the thoughtful application of knowledge and skills in context, rather than a scripted or formulaic performance.

5. Performance tasks are multi-faceted.

Unlike traditional test “items” that typically assess a single skill or fact, performance tasks are more complex. They involve multiple steps and thus can be used to assess several standards or outcomes.

6. Performance tasks can integrate two or more subjects as well as 21st century skills.

In the wider world beyond the school, most issues and problems do not present themselves neatly within subject area “silos.” While performance tasks can certainly be content-specific (e.g., mathematics, science, social studies), they also provide a vehicle for integrating two or more subjects and/or weaving in 21st century skills and Habits of Mind. One natural way of integrating subjects is to include a reading, research, and/or communication component (e.g., writing, graphics, oral or technology presentation) in tasks in content areas like social studies, science, business, and health/physical education. Such tasks encourage students to see meaningful learning as integrated, rather than something that occurs in isolated subjects and segments.

7. Performances on open-ended tasks are evaluated with established criteria and rubrics.

Since these tasks do not yield a single answer, student products and performances should be judged against appropriate criteria aligned to the goals being assessed. Clearly defined and aligned criteria enable defensible, judgment-based evaluation. More detailed scoring rubrics, based on criteria, are used to profile varying levels of understanding and proficiency.

Let’s look at a few examples of performance tasks that reflect these characteristics:

Botanical Design (upper elementary)

Your landscape architectural firm is competing for a grant to redesign a public space in your community and to improve its appearance and utility. The goal of the grant is to create a community area where people can gather to enjoy themselves and the native plants of the region. The grant also aspires to educate people as to the types of trees, shrubs, and flowers that are native to the region.
Your team will be responsible for selecting a public place in your
area that you can improve for visitors and members of the
community. You will have to research the area selected, create a
scale drawing of the layout of the area you plan to redesign,
propose a new design to include native plants of your region, and
prepare educational materials that you will incorporate into the
design.

Check out the full performance task from Defined STEM here: Botanical Design Performance Task. Defined STEM is an online resource where you can find hundreds of K-12 standards-aligned, project-based performance tasks.

Evaluate the Claim (upper elementary/middle school)

The Pooper Scooper Kitty Litter Company claims that their litter is
40% more absorbent than other brands. You are a Consumer
Advocates researcher who has been asked to evaluate their claim.
Develop a plan for conducting the investigation. Your plan should
be specific enough so that the lab investigators could follow it to
evaluate the claim.

Moving to South America (middle school)

Since they know that you have just completed a unit on South
America, your aunt and uncle have asked you to help them decide
where they should live when your aunt starts her new job as a
consultant to a computer company operating throughout the
region. They can choose to live anywhere in the continent.

Your task is to research potential home locations by examining relevant geographic, climatic, political, economic, historic, and cultural considerations. Then, write a letter to your aunt and uncle with your recommendation about a place for them to move. Be sure to explain your decision with reasons and evidence from your research.

Accident Scene Investigation (high school)

You are a law enforcement officer who has been hired by the District Attorney’s Office to set up an accident scene investigation unit. Your first assignment is to work with a reporter from the local newspaper to develop a series of information pieces to inform the community about the role and benefits of applying forensic science to accident investigations.

Your team will share this information with the public through the
various media resources owned and operated by the newspaper.

Check out the full performance task from Defined STEM here: Accident Scene Investigation Performance Task.

In sum, performance tasks like these can be used to engage students in meaningful learning. Since rich performance tasks establish authentic contexts that reflect genuine applications of knowledge, students are often motivated and engaged by such “real world” challenges.

When used as assessments, performance tasks enable teachers to gauge student understanding and proficiency with complex processes (e.g., research, problem solving, and writing), not just measure discrete knowledge. They are well suited to integrating subject areas and linking content knowledge with 21st century skills such as critical thinking, creativity, collaboration, communication, and technology use. Moreover, performance-based assessment can also elicit Habits of Mind, such as precision and perseverance.
Take a free trial of Defined STEM for access to hundreds of real world performance tasks

For a collection of authentic performance tasks and associated rubrics, see Defined STEM: https://www.definedstem.com

For a complete professional development course on performance tasks for your school or district, see Performance Task PD with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance tasks, see Core Learning: Assessing What Matters Most by Jay McTighe: http://www.schoolimprovement.com

Article originally posted:
URL: http://performancetask.com/what-is-a-performance-task | Article Title: What is a Performance Task? | Website Title: PerformanceTask.com | Publication date: 2015-04-12

Why Should We Use Performance Tasks? (Part 2)

Defined Learning
May 4, 2015 · 9 min read

The case for the increased use of performance tasks rests on two foundational
ideas: 1) Authentic tasks are needed to both develop and assess many of the
most significant outcomes identified in the current sets of academic Standards
as well as trans-disciplinary 21st Century Skills; and 2) Research on effective
learning from cognitive psychology and neuroscience underscores the
importance of providing students with multiple opportunities to apply their
learning to relevant, real-world situations. In this blog post, I will explore the
first foundational idea. In blog post #3, I will examine ways in which the use of
authentic performance tasks contributes to deeper learning.

The New Standards Demand Performance


While any performance by a learner might be considered a performance task (e.g., tying a shoe or drawing a picture), it is useful to distinguish the application of specific and discrete skills (e.g., dribbling a basketball) from genuine performance in context (e.g., playing the game of basketball, in which dribbling is one of many applied skills). Thus, when I use the term performance tasks, I am referring to more complex and authentic performances.

The most recent sets of academic standards in the U.S. — the Common Core State Standards (CCSS) in English Language Arts and Mathematics, the Next Generation Science Standards (NGSS), the College, Career, and Civic Life (C3) Framework for Social Studies, and the National Core Arts Standards (NCAS) — call for educational outcomes that demand more than multiple-choice and short-answer assessments as evidence of their attainment. Rather than simply specifying a “scope and sequence” of knowledge and skills, these new standards focus on the performances expected of students who are prepared for higher education and careers. For example, the CCSS in English Language Arts have been framed around a set of Anchor Standards that define the long-term proficiencies that students will need to be considered “college and career ready.” The writers of the E/LA Standards make this point unequivocally in their characterization of the performance capacities of the literate individual:
“They demonstrate independence. Students can, without significant
scaffolding, comprehend and evaluate complex texts across a range of types
and disciplines, and they can construct effective arguments and convey
intricate or multifaceted information… Students adapt their communication
in relation to audience, task, purpose, and discipline. Likewise, students are
able independently to discern a speaker’s key points, request clarification,
and ask relevant questions… Without prompting, they demonstrate
command of standard English and acquire and use a wide-ranging
vocabulary. More broadly, they become self-directed learners, effectively
seeking out and using resources to assist them, including teachers, peers, and
print and digital reference materials.” (CCSS for E/LA, p. 7)

The authors of the CCSS in Mathematics declare a shift away from a “mile wide, inch deep” listing of discrete skills and concepts toward a greater emphasis on developing the mathematical Practices of Problem Solving, Reasoning, and Modeling, along with the mental habit of Perseverance. Similarly, the Next Generation Science Standards (NGSS) have highlighted eight Practices, including Asking Questions and Defining Problems and Analyzing and Interpreting Data. As noted in the opening pages, these Practices are intended to engage learners actively in “doing” science, not just memorizing facts:

“As in all inquiry-based approaches to science teaching, our expectation is that students will themselves engage in the practices and not merely learn about them secondhand. Students cannot comprehend scientific practices, nor fully appreciate the nature of scientific knowledge itself, without directly experiencing those practices for themselves.”
A graphic from the National Science Teachers Association depicts the
commonalities among the practices in Science, Mathematics and English
Language Arts. Note that all of these reflect genuine performances valued in
the wider world.

Available at http://nstahosted.org/pdfs/ngss/PracticesVennDiagram.pdf

In the same vein, the recently released College, Career, and Civic Life (C3) Framework for Social Studies highlights a set of fundamental performances that are central to an “arc of inquiry.” These include Developing Questions and Planning Inquiries, Gathering and Evaluating Sources, and Taking Informed Action.

The pattern is clear: the current crop of academic Standards focuses on developing transferable processes (e.g., problem solving, argumentation, research, and critical thinking), not simply presenting a body of factual knowledge for students to remember. A fundamental goal reflected in these Standards is the preparation of learners who can perform with their knowledge.
Needed Shifts in Assessment

The new emphases of the Common Core and Next Generation Standards call for a concomitant shift in assessments — at both the large-scale and classroom levels. The widespread use of multiple-choice tests as predominant measures of learning in many subject areas must give way to an expanded use of performance assessment tasks that engage students in applying their learning in genuine contexts. McTighe and Wiggins (2013) echo this point in a recent article, “From Common Core Standards to Curriculum: Five Big Ideas” (available at http://jaymctighe.com/resources/articles/):

“This performance-based conception of Standards lies at the heart of what is needed to translate the Common Core into a robust curriculum and
assessment system. The curriculum and related instruction must be designed
backward from an analysis of standards-based assessments; i.e., worthy
performance tasks anchored by rigorous rubrics and annotated work
samples. We predict that the alternative — a curriculum mapped in a typical
scope and sequence based on grade-level content specifications — will
encourage a curriculum of disconnected “coverage” and make it more likely
that people will simply retrofit the new language to the old way of doing
business. Thus, our proposal reflects the essence of backward design:
Conceptualize and construct the curriculum back from sophisticated tasks,
reflecting the performances that the Common Core Standards demand of
graduates. Indeed, the whole point of Anchor Standards in ELA and the
Practices in Mathematics is to establish the genres of performance (e.g.,
argumentation in writing and speaking, and solving problems set in real-
world contexts) that must recur across the grades in order to develop the
capacities needed for success in higher education and the workplace.”

In recognition of these points, the two national assessment consortia, Smarter Balanced (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have declared their intent to expand their repertoire to include performance tasks on the next generation of standardized tests. While it is encouraging to see changes in external testing, my contention
is that the most natural home for the increased use of performance
assessments is in the classroom. Since teachers do not face the same
constraints as large-scale testing groups (e.g., standardized implementation,
limited time, scoring costs, etc.), they can more readily employ performance
tasks along with traditional assessment formats. Performance assessments
such as writing an essay, solving a multi-step problem, debating an issue, and
conducting research and creating an informative website ask students to
demonstrate their learning through actual performance, not by simply
selecting an answer from given alternatives.

By recommending an increased use of performance tasks in the classroom, I certainly do not mean to suggest that this is the only form of assessment that
teachers should employ. Of course, teachers can and should also use traditional
measures such as selected-response and short-answer quizzes and tests, skill
checks, observations, and portfolios of student work when assessing their
students. Here’s a useful analogy: Think of classroom assessment as
photography. Any single assessment is like a snapshot in that it provides a
picture of student learning at a moment in time. However, it would be
inappropriate to use one picture (a single assessment) as the sole basis for
drawing conclusions about how well a student has achieved desired learning
outcomes. Instead, think of classroom assessment as akin to the assembly of a
photo album containing a variety of pictures taken at different times with
different lenses, backgrounds, and compositions. Such an album offers a
richer, fairer and more complete picture of student achievement than any
single snapshot can provide. My point is that our assessment photo album
needs to include performance tasks that provide evidence of students’ ability to
apply their learning in authentic contexts.
21st Century Skills

In an era in which students can “google” much of the world’s knowledge on a smartphone, an argument can be made that the outcomes of modern schooling should place a greater emphasis on trans-disciplinary skills, such as critical thinking, collaboration, communicating using various technologies, and learning to learn. In the paper “21st Century Skills Assessment,” the Partnership for 21st Century Skills (2007) describes this need and the implication for assessments of students:

“While the current assessment landscape is replete with assessments that measure knowledge of core content areas such as language arts, mathematics, science and social studies, there is a comparative lack of assessments and analyses focused on 21st century skills. Current tests fall short in several key ways:

- The tests are not designed to gauge how well students apply what they know to new situations or evaluate how students might use technologies to solve problems or communicate ideas.

- While teachers and schools are being asked to modify their practice based on standardized test data, the tests are not designed to help teachers make decisions about how to target their daily instruction.”

The Partnership proposes that needed assessments should “be largely performance-based and authentic, calling upon students to use 21st century skills” (Partnership for 21st Century Skills, 2007, p. 6). I agree!

The Current Assessment Landscape


Many current classroom- and school-level assessments focus on the most
easily measured objectives. The pressures of high-stakes accountability tests
have exacerbated this tendency as teachers devote valuable class time to “test
prep” (at least in the tested subject areas) involving practice with multiple-
choice and brief constructed-response items that mimic the format of
standardized tests. While selected-response and short-answer assessments are
fine for assessing discrete knowledge and skills, they are incapable of providing
evidence of the skills deemed most critical for the 21st century.

Dr. Linda Darling-Hammond, a professor at Stanford University and an authority on international education and assessment practices, elaborates on this point (2013):

As educators, we know that today’s students will enter a workforce in which they will have to not only acquire information, but also analyze, synthesize, and apply it to address new problems, design solutions, collaborate effectively, and communicate persuasively. Few, if any, previous generations have been asked
to become such nimble thinkers. Educators accept the responsibility to prepare
our students for this new and complex world. We also know that in our current
high-stakes context, what is tested increasingly defines what gets taught.
Unfortunately, in the United States, the 21st century skills our students need
have gotten short shrift because our current multiple-choice tests do not test or
encourage students’ use of these skills.
Ironically, the widespread use of narrow, inauthentic assessments and test
prep practices at the classroom level can unwittingly undermine the very
competencies called for by the next generation academic Standards and 21st
Century Skills. To be blunt, students will not be equipped to handle the
sophisticated work expected in colleges and much of the workforce if teachers
simply march through “coverage” of discrete knowledge and skills in grade-
level standards and assess learning primarily through multiple-choice tests of
de-contextualized items. Moreover, such teaching and assessment practices are unlikely to develop the transferable “big ideas” and fundamental processes of the disciplines. They also deprive students of relevant and engaging learning experiences.

To counter these trends, we need to significantly increase the use of authentic performance tasks that require students to apply their learning in genuine contexts. We need to assess the performance outcomes that matter
most, not simply those objectives that are easiest to test and grade. Indeed,
meaningful and lasting learning will be enhanced when school curricula are
constructed “backward” from a series of rich performance tasks that reflect the
“end-in-mind” performances demanded for college and career readiness.
Take a free trial of Defined STEM for access to hundreds of real world performance tasks

For a collection of authentic performance tasks and associated rubrics, see Defined STEM: http://www.definedstem.com

For a complete professional development course on performance tasks for your school or district, see Performance Task PD with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance tasks,
see Core Learning: Assessing What Matters Most by Jay
McTighe: http://www.schoolimprovement.com
Article originally posted:
URL: http://www.performancetask.com/why-should-we-use-performance-tasks/ | Article Title: Why Should We Use Performance Tasks? | Website Title: PerformanceTask.com | Publication date: 2015-05-04

How Can Educators Design Authentic Performance Tasks? (Part 3)

Defined Learning
Jul 1, 2015 · 11 min read

In this blog, we will explore ideas and processes for designing authentic performance tasks to be used as rich learning activities and/or for purposes of assessment. In the spirit of “backward design,” let’s begin at the end by considering the qualities of a rich performance task, summarized in Figure 1. Since the criteria listed here define the features that we should see in an authentic task, they serve as targets for constructing tasks as well as the basis for reviewing draft tasks.

Figure 1 — Performance Task Review Criteria

Key: 3 = extensively; 2 = to some degree; 1 = not yet

The task addresses/assesses targeted standard(s)/outcome(s). (3 2 1)

The task calls for understanding and transfer, not simply recall or a formulaic response. (3 2 1)

The task requires extended thinking — not just an answer. (3 2 1)

The task establishes a meaningful, real-world (i.e., “authentic”) context for application of knowledge and skills; i.e., includes a realistic purpose, a target audience, and genuine constraints. (3 2 1)

The task includes criteria/rubric(s) targeting distinct traits of understanding and successful performance; i.e., criteria do not simply focus on surface features of a product or performance. (3 2 1)

The task directions for students are clear. (3 2 1)

The task allows students to demonstrate their understanding/proficiency with some appropriate choice/variety (e.g., of products or performances). (3 2 1)

The task effectively integrates two or more subject areas. (3 2 1)

The task incorporates appropriate use of technology. (3 2 1)

Source: McTighe and Wiggins (2004)
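To make the checklist concrete, here is a minimal sketch (my own illustration, not a tool from the article) that encodes the Figure 1 criteria as data and flags any criterion a reviewer rates 1 (“not yet”). The abbreviated criterion names and the `review_task` helper are hypothetical.

```python
# Figure 1 criteria, abbreviated; the 3/2/1 key follows the figure:
# 3 = extensively, 2 = to some degree, 1 = not yet.
CRITERIA = [
    "addresses targeted standards/outcomes",
    "calls for understanding and transfer",
    "requires extended thinking",
    "establishes an authentic context",
    "includes criteria/rubrics for distinct traits",
    "has clear directions for students",
    "allows appropriate student choice/variety",
    "integrates two or more subject areas",
    "incorporates appropriate technology use",
]

def review_task(ratings):
    """Pair each criterion with its 1-3 rating and flag 'not yet' items."""
    if len(ratings) != len(CRITERIA):
        raise ValueError("one rating per criterion is required")
    report = dict(zip(CRITERIA, ratings))
    needs_work = [c for c, r in report.items() if r == 1]  # rated "not yet"
    return report, needs_work

# A hypothetical team review of one draft task:
report, needs_work = review_task([3, 3, 2, 3, 1, 2, 2, 1, 2])
```

A team could run this after independently rating a draft task and then discuss only the flagged criteria, mirroring how the figure is meant to guide task revision.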

Let’s examine these task characteristics as they apply to designing authentic performance tasks:

The task addresses/assesses targeted standard(s)/outcome(s).

As noted in previous blogs in this series, performance tasks ask students to perform with their knowledge. Accordingly, they are well suited to those educational goals that call for application of learning, including the Common Core State Standards (CCSS) English/Language Arts Anchor Standards for listening, speaking, reading, and writing; the CCSS Standards of Mathematical Practice; the Next Generation Science Standards’ eight Practices; the four dimensions of informed inquiry in the College, Career, and Civic Life (C3) Framework for Social Studies; and many of the National Coalition for Core Arts Standards (NCCAS). Also, performance tasks are naturally aligned with trans-disciplinary outcomes, such as the 21st century skills of Critical Thinking, Collaboration, Communication, and Creativity (the 4Cs).

Here is a quick check to see if a performance task is well aligned to targeted standard(s)/outcome(s): show your task to another teacher or a team and ask them to tell you which standards/outcomes are being addressed. If they can determine all of your targeted standards/outcomes, then the alignment is sound. If they can infer one, but not all, of your targeted standards/outcomes, then you will likely need to modify the task (or eliminate one or more of the outcomes, since they are not being addressed).

The task calls for understanding and transfer, not simply recall or a formulaic response.

Students show evidence of their understanding when they can effectively do two things:

1. apply their learning to new or unfamiliar contexts; i.e., they can transfer their learning;
2. explain their process as well as their answer(s).

Therefore, when designing a performance task, educators should make sure that it requires application, not simply information. The task must also call for learners to present the why, not just the what; to explain a concept in their own words; to use new examples to illustrate a theory; and/or to defend their position against critique.

A teacher I met once offered a wise aphorism: with performance tasks, “the juice must be worth the squeeze.” In other words, the time and energy needed to design, implement, and score a performance task must be worth the effort because it will promote meaningful learning and show that learners can use their learning in authentic and meaningful ways.

The task requires extended thinking — not just an answer.

Authentic performance tasks engage students in the thoughtful application of knowledge and skills. To ensure that our tasks involve “higher-order” thinking, I suggest using the Depth of Knowledge (DOK) framework developed by Dr. Norman Webb as a reference. DOK describes four levels of rigor or cognitive demand in assessment tasks and learning assignments. Figure 2 presents a brief summary of the four levels of the DOK framework with associated performance verbs. My general recommendation is that authentic performance tasks should target DOK Level 3. Longer-term projects for older students (such as those featured in project-based learning) would exhibit the characteristics of Level 4, while simpler performance tasks can still be appropriately challenging for children in the primary grades.
Figure 2 — The Depth of Knowledge (DOK) Framework

Tasks at Level 1:
- Require students to recite or recall information including facts, formulae, or simple procedures.
- Require students to demonstrate a rote response, use a well-known formula, follow a set procedure (like a recipe), or perform a clearly defined series of steps.
- Typically expect a “correct” answer.

Performance verbs associated with Level 1: Arrange, Calculate, Cite, Define, Describe, Draw, Explain, Give examples, Identify, Illustrate, Label, Locate, List, Match, Measure, Name, Perform, Quote, Recall, Recite, Recognize, Record, Repeat, Report, Select, State, Summarize (factual info.), Tabulate

Tasks at Level 2:
- Focus on application of basic skills and concepts.
- Involve some reasoning beyond recall.
- Require students to perform two or more steps and make some decisions on how to approach the task or problem.

Performance verbs associated with Level 2: Apply, Calculate, Categorize, Classify, Compare, Compute, Construct, Convert, Describe, Determine, Distinguish, Estimate, Explain, Extend, Extrapolate, Find, Formulate, Generalize, Graph, Identify patterns, Infer, Interpolate, Interpret, Modify, Observe, Organize, Predict, Relate, Represent, Show, Simplify, Solve, Sort, Summarize (conceptual ideas), Use

Tasks at Level 3:
- Require strategic thinking and reasoning applied to situations that generally do not have a single “right” answer.
- Require students to go beyond the information given to generalize, connect ideas, evaluate, and problem solve.
- Often have more than one possible answer.
- Expect students to support their answers, interpretations, and conclusions by explaining their reasoning and citing relevant evidence.

Performance verbs associated with Level 3: Appraise, Assess, Cite evidence, Check, Compare, Compile, Conclude, Contrast, Critique, Decide, Defend, Describe, Develop, Differentiate, Distinguish, Examine, Explain, Formulate, Hypothesize, Identify, Infer, Interpret, Investigate, Judge, Justify, Reorganize, Solve, Support

Tasks at Level 4:
- Require extended thinking and complex reasoning over an extended period of time.
- Expect students to transfer their learning to novel, complex, and “messy” situations.
- Require students to devise an approach among many alternatives for how to approach the task or problem.
- May require students to develop a hypothesis and perform complex analysis.

Performance verbs associated with Level 4: Appraise, Connect, Create, Critique, Design, Evaluate, Judge, Justify, Prove, Report, Transfer, Synthesize

Source: McTighe and Wiggins (2004)

The task establishes a meaningful, real-world (i.e., “authentic”) context.

If you have ever watched a house or apartment being constructed,
you know that carpenters frame out the individual rooms to
outline the walls, doors, windows, closets and ceiling based on the
dimensions specified in a blueprint. This framing guides the
installation of sheetrock (drywall) on the walls and ceiling, etc.
Then, the windows and doors are installed and the finishing
touches (e.g., painting, carpeting) applied. The idea of framing
applies to the construction of performance tasks as well!

Grant Wiggins and I created a task design frame based on the
acronym G.R.A.S.P.S. Here are the G.R.A.S.P.S. elements that
are used to frame a performance task: (1) a real-world Goal; (2) a
meaningful Role for the student; (3) authentic (or
simulated) Audience(s); (4) a contextualized Situation that
involves real-world application; (5) student-
generated Products and Performances; and (6) performance
Standards (criteria) by which successful performance would be
judged. Figure 3 presents this practical task design tool containing
associated prompts for each of the G.R.A.S.P.S. elements.

Figure 3 — G.R.A.S.P.S. Design Tool

Directions: Use the following prompts to brainstorm ways of
establishing an authentic context for performance tasks if needed.
(Note: The goal of this tool is not to fill in all of the blanks. Rather,
use whatever prompts apply to help you generate ideas to
embellish a task.)
Goal
Your task is __________
The goal is to __________
The problem/challenge is __________
The obstacle(s) to overcome is (are) __________

Role
You are __________
You have been asked to __________
Your job is __________

Audience
Your client(s) is (are) __________
The target audience is __________
You need to convince __________

Situation
The context you find yourself in is __________
The challenge involves dealing with __________

Product/Performance and Purpose
You will create a __________ in order to __________
You need to develop __________ so that __________

Standards & Criteria for Success
Your performance needs to __________
Your work will be judged by __________
Your product must meet the following standards: __________
A successful result will __________

Source: McTighe and Wiggins (2004)

Here is a performance task that was created using the G.R.A.S.P.S.
elements.

State Tour

The state Tourism Office has hired you to plan a tour of your state
for a group of six foreign exchange students (who speak English)
to help them understand the state’s history, geography, economy
and culture. Plan your tour so that the visitors are shown sites that
will teach them about the state and show the ways that it has
influenced the nation’s development. You should prepare a written
tour itinerary, including an explanation of why each site was
selected. Include a map tracing the route for the four-day tour and
a budget for the trip.

The task includes criteria/rubric(s) targeting distinct traits.

Since authentic tasks do not typically result in a single, correct
answer, student products and performances need to be judged
against appropriate criteria aligned to the goals being assessed.
Clearly defined and aligned criteria enable defensible, judgment-
based evaluation by teachers and self-assessment by learners. I
will devote a future blog post to the topic of criteria and rubrics.

The task directions for students are clear.

A key feature of authentic performance tasks is their “open-ended”
nature. However, this feature can also inject ambiguity.
Sometimes students will interpret the task differently than the
teacher intended and go off on unproductive tangents. Here are
three practical ways of checking task clarity and getting feedback
for improving the directions if needed:

• Show your draft task to a teacher from a different
subject or grade level and ask them to tell you what they
think the outcomes or standards are; what students
would need to do to successfully complete the task; and
what the key evaluative criteria should be. If they have
difficulty with any of these questions, you probably need
to refine/sharpen the task directions.

• Conduct a “pilot test” of a draft task to see if and when
students become confused or go off on unproductive
tangents. Revise the directions based on this feedback.

• Following their work on a task, ask your students to
offer edits to the task directions to make them clearer
for next year’s students.

The task allows students some appropriate choice/variety.

The open-ended nature of performance tasks allows teachers to
offer their students options. Students may be given choice(s) about:

1. Task Topic — For example, if the outcome involves
research, then students might be allowed to pick the
topic or question for their investigation.

2. Product/Performance — For example, if a performance
task focuses on a concept in social studies or science,
learners may be given some options regarding how they
demonstrate their thinking and learning, such as a
poster, blog, or an oral presentation.

3. Audience — For some tasks, it may be appropriate to
allow the students to identify a target audience (e.g.,
readers of a community newspaper, younger students,
viewers of a website) for their product or performance.

Ultimately, the purpose of the task will
determine if and when students should be given choices, and if so,
which are the appropriate options.

The task effectively integrates two or more subject areas.


In the wider world beyond the school, most issues and problems
do not present themselves neatly within subject area “silos.” While
performance tasks can certainly be content-specific (e.g.,
mathematics, science, social studies), they also provide a vehicle
for integrating two or more subjects and/or weaving in 21st
century skills (4Cs). Indeed, the more “authentic” the context, the
more likely it will be to involve more than a single subject.

One natural way of integrating subjects is to add
English/Language Arts processes — reading, research, and/or
communication (e.g., writing, graphics, oral or technology
presentation) to tasks in content areas like science, social studies,
business, and health/physical education. Such tasks encourage
students to see meaningful learning as integrated, rather than
something that occurs in isolated subjects and segments.

The task incorporates appropriate use of technology.

Authentic performance tasks offer many opportunities for
involving students in the purposeful and productive use of
technology — for finding information, processing it, interacting
with others and communicating. Of course, today’s students are
truly digital natives and it makes sense to let them play in the
digital sandbox. Increasingly, teachers are finding that the
incorporation of digital tools can transform a mundane task and
engage more learners. I will devote a future blog post to ideas for
“upgrading” performance tasks through technology.

Conclusion

The design of authentic performance tasks, like any writing or
composing process, is iterative in nature. It is very common for
task developers to revise task directions, add options for students
or modify the evaluative criteria as the task design evolves.
Additionally, feedback from self-assessment, peer review and
classroom implementation invariably suggests further refinements
to the task and associated rubric(s).

Remember to always keep the “end in mind” when designing
performance tasks. The goal of the task is to address and assess
targeted learning outcomes, not to simply offer “cool” products,
entertaining technology or interesting scenarios. The main goal is
to design rich tasks that will promote meaningful learning while
gathering evidence of students’ abilities to apply their learning in
authentic contexts.

Here are examples of performance tasks from an
online resource called Defined
STEM (www.DefinedSTEM.com) where you can find
hundreds of standards-aligned K-12 performance
tasks:
Ancient Engineer: Roman Roads (gr. 3)

Baseball Bat Analyst (gr.7)

Mars Rover (gr. 10)


Sources: McTighe, J. (2013). Core learning: Assessing what
matters most. Midvale, UT: School Improvement Network.
McTighe, J. and Wiggins, G. (2004). The Understanding by design
professional development workbook. Alexandria, VA: ASCD.

For a collection of authentic performance tasks and associated
rubrics, see Defined STEM: http://www.definedstem.com
For a complete professional development course on performance
tasks for your school or district, see Performance Task PD
with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance
tasks, see Core Learning: Assessing What Matters
Most by Jay
McTighe: http://www.schoolimprovement.com

How Can We Differentiate Performance Tasks? (Part 4)
Defined Learning
Follow
Aug 26, 2015 · 6 min read

In this blog, we will explore ways to responsibly differentiate
performance tasks so as to address the targeted learning goals and
obtain needed evidence of their attainment.

When educators are asked to reflect on and describe their most
effective and engaging learning experiences, they frequently cite
the “opportunity for some personal choice within assignments and
assessment tasks.” The frequency of this comment should be no
surprise since we know that learners differ not only in how they
prefer to take in and process information but also in how they best
demonstrate their learning. Some students thrive on oral
explanations; others need to “do.” Some students excel at creating
visual representations; others are adept at writing. Allowing
students some choice within open-ended performance tasks
provides a practical way to personalize learning while letting them
work to their strengths and interests. A standardized, one-size-
fits-all approach to instruction and assessment may be efficient,
but it is rarely optimal for all learners.

One practical way of differentiating performance tasks is to use the
G.R.A.S.P.S. format (presented in Blog #3) to offer students
appropriate choices. In other words, learners could be given
options regarding the audience, product/ performance, context,
topic, and/or process for working on the task. Here is one
example:

Consider a health standard that calls for a basic understanding of
“balanced diet.” Evidence of this understanding could be obtained
by having students explain the concept, present examples of
balanced and unbalanced meals, and list health problems that
might result from a nutritionally imbalanced diet. Such evidence
could be collected in writing, but this requirement would be
inappropriate for a learner with dysgraphia or an ESL student with
limited skills in written English. Indeed, some students’ difficulty
with writing could cause the teacher to incorrectly infer that they
do not understand the concept of balanced diet. However, if
students are offered varied manners of response (such as creating
a picture book to show a balanced vs. imbalanced diet or
explaining the concept orally), the teacher can obtain a more valid
measure of their understanding.

Another idea for differentiating performance tasks is to use an
adaptation of the game Tic-Tac-Toe to offer students choices of
products and performances. Figure 1.0 offers one example in
which the teacher structures product and performance options of
various genres through which students could display their content
understanding and skill proficiency.

The product and performance options are flexible. For example, if
we want students to write, then all learners would be asked to
choose one option from the first column, along with one other
product/performance from the second or third columns. Figure
2.0 shows a Tic-Tac-Toe chart with greater openness. By including
a FREE block, teachers could allow students to propose an
alternative source of evidence that suits their strengths. For a major
performance task, we might allow students to produce more than a
single product (e.g., pick one from each column).

Here are several examples of performance tasks offering
product choices…
Weather Reporter (gr. 3)

Paralympics Equipment (gr. 12)

Environmental Scientist (gr. 7)

Regardless of how open-ended the task and how many
product/performance options are provided, it is important to
identify a common set of evaluative criteria for assessing what the
students produce. This might seem counter-intuitive; i.e., how can
we have the same criteria if we give students different product
options? The answer goes back to the learning goals and purpose
for the tasks. Consider the unit on nutrition again: IF we want
students to show their understanding of a “balanced diet,” AND
students have some choices for audience (e.g., younger students,
peers, adults) and products (e.g., a picture book, an information
flier, a website), THEN student work on these various versions of
the task would be judged by a rubric containing the following key
criteria connected to the content: a clear, accurate and complete
explanation of “balanced diet,” with appropriate examples that
illustrate the concept. In other words, the evaluative criteria are
derived primarily from the learning goal(s) rather than from the
particular product a student chose.

Of course, a teacher may wish to add product-specific criteria. For
example, if a student prepares a poster to illustrate a balanced
diet, we could look for neatness, composition and effective use of
visual elements. Likewise, if a student made an oral presentation,
we could judge their pronunciation, delivery rate, and eye contact
with the audience. However, in this example we consider these to
be secondary criteria linked to specific products/ performances,
rather than the key criteria determined by the learning goal.

While I encourage teachers to differentiate their performance
tasks whenever possible and appropriate, I offer three cautions.
First, we must always keep in mind that our aim is to engage
learners in authentic and meaningful learning and to collect
appropriate evidence of that learning — not to simply offer a “cool”
menu of product and performance possibilities. If a standard calls
for proficiency in writing or oral presentation, it would be
inappropriate to provide alternative performance options other
than writing or speaking. However, it might be suitable to offer the
students some choice regarding the topic, audience, and form of
the written product to obtain the evidence we seek. Second, the
options we provide must be worth the time and energy required.
Since tasks typically require time to plan, implement and score, we
should reserve them for the most valued learning goals. It would
be inefficient and unnecessary to have students develop an
animated PowerPoint presentation or an elaborate 3-dimensional
display for content that could be memorized and efficiently and
appropriately assessed with a multiple-choice quiz. In the folksy
words of a teacher friend, with performance tasks, “the juice must
be worth the squeeze.”

Third, feasibility must be considered. Ideally, we might wish to
individualize all major assignments and performance tasks, but
realistically we only have so much time and energy. Therefore,
educators must be judicious in determining when it is important
to offer product and performance options — striking a balance
between a single path and a maze of options that would be
impossible to manage.

Despite the challenges, I believe that efforts to provide options
within performance tasks are well worth it. When students are
given appropriate choices on worthy tasks, they are more likely to
put forth effort and experience a genuine sense of accomplishment
for a job well done.
Sources

• McTighe, J. (2013). Core learning: Assessing what
matters most. Midvale, UT: School Improvement
Network.

• Tomlinson, C. and McTighe, J. (2006). Differentiated
instruction and Understanding by design: Connecting
content and kids. Alexandria, VA: ASCD.
For a collection of authentic performance tasks and associated
rubrics, see Defined STEM: http://www.definedstem.com

For a complete professional development course on performance
tasks for your school or district, see Performance Task PD
with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance
tasks, see Core Learning: Assessing What Matters
Most by Jay
McTighe: http://www.schoolimprovement.com

How can we upgrade performance tasks with technology? (Part 5)
Defined Learning
Follow
Sep 19, 2015 · 6 min read

Today’s students are truly digital natives and it makes sense to let
them play in the digital sandbox. Accordingly, an increasing
number of schools provide students with technology (laptops and
tablets) and/or allow their learners to BYOD (bring your own
device) to the classroom. Authentic performance tasks offer many
opportunities for involving students in the purposeful and
productive use of technology for finding and processing
information, interacting with others, and communicating. In
addition to the increasing availability of digital devices, a growing
number of free or very inexpensive applications (apps) are
available to transform a mundane task or assignment. Most of
these apps are built for Web 2.0, and many can be used on a
variety of digital devices including cell phones and tablets.

In this blog, we will explore ways in which teachers and students
can make use of various digital technologies and apps to enhance
performance tasks.

Research-based Tasks

The ability to locate relevant information for a purpose is
recognized as a particularly valuable skill in an information-rich
world where knowledge continually expands. One type of
performance task involves students in gathering information to
explore a topic, answer a question, investigate an issue, or solve a
problem. Such research-based tasks can clearly benefit from the
application of digital tools and apps.

Flashback

“When I began teaching in the early 1970s, students’ access to
information in school was generally limited to the knowledge of
teachers, textbooks (often outdated), encyclopedias and other
reference sources available in libraries. How times have
changed!” — Jay McTighe

Today’s students can access much of the world’s knowledge on
their smartphones on a 24/7 basis, and there are innumerable
websites offering up-to-date, well-organized and curated sources
of information. Many of these excellent reference sites include
primary source materials to enable students to conduct truly
authentic research. Here are a few of my favorites:

Cautionary Notes

Despite the enormous potential offered by such Internet-based
information sources to enhance meaningful research, the potential
for superficial information gathering and uncritical analysis
abounds.

Flashback

“As a student in the ’60s, the ‘research’ I did consisted mostly of
locating and copying information gleaned from an encyclopedia
or a reference book. As a teacher in the ’70s and ’80s, I tried to
teach students how to synthesize information from more than a
single source and to communicate what they found in their own
words, rather than simply copying verbatim from a source. It
was always a struggle and the results were mixed.” — Jay McTighe

Today, we face the prospect that web-based research projects can
too easily be accomplished by a speedy Google search, a “cut and
paste” from a Wikipedia entry, or the appropriation of a previously
published student research paper. To avoid these pitfalls, I
recommend framing research-based performance tasks around
authentic issues, problems and essential questions for which “the
answer” cannot be Googled.

As an example, Benjamin Yeo, a high school World History
teacher at an international school, frames his courses around
open-ended, issues-based questions that require research,
thoughtful analysis, discussion and debate, and communication
with support for the position taken or solution proposed. This
year, one of his questions is, Who is responsible for the plight of
the world’s migrants? He also encourages student-generated
questions that form the basis for research projects. His
methodology engages students in “doing history” — not just
learning facts about historical periods.

Here are examples of performance tasks built around information
gathering and application to address authentic problems and
issues.
Astronomer-Locating A Telescope (gr. 5)

Environmental Scientist: Fracking (gr. 7)

Marketing Segment Analyst (gr. 9)


The plethora of knowledge websites presents another challenge for
today’s digital learners and their teachers. While volumes of
information are immediately accessible, the credibility of that
information is not guaranteed. Indeed, the instant availability of
“stuff” on-line demands a commitment to developing the critical
thinking capacities needed to enable students to know how to
gauge the extent to which the information they find online is
accurate, complete and unbiased.

I endorse the systematic teaching of critical thinking skills and
associated dispositions, including comparing and evaluating
sources, distinguishing fact from opinion, identifying potential
bias, and a willingness to change one’s mind if the evidence is
compelling. Such instruction can be guided by an overarching
essential question that can be posed to students from the
elementary grades through college: How do I know what to believe
in what I see, read and hear? For research-based performance
tasks, I also recommend that their associated rubrics include a
trait for critical analysis of information to make it clear that merely
locating information is insufficient.

Idea

Use the following website in a lesson to help cultivate a more
skeptical attitude toward on-line information!

• http://www.ibtimes.com/fake-tree-octopus-exposes-risks-internet-reliance-among-students-263707

Similarly, introducing a verification website like Snopes
(http://www.snopes.com) gives learners a tool to debunk the
many “urban legends” and bogus claims circulating on the
Internet.

Tasks involving Authentic Audiences and Products

In Blog #3 in this series, I presented the G.R.A.S.P.S. format as a
way of creating a more authentic context for performance tasks, by
establishing: a real-world Goal; a meaningful Role for the student;
an authentic (or simulated) Audience(s); a contextualized
Situation that involves real-world application; student-generated
Products and Performances; and performance Standards (criteria)
by which successful performance would be judged.
The “A” and “P” categories within G.R.A.S.P.S. offer natural
opportunities for upgrading tasks with digital tools.

Rather than simply completing a task on paper to turn in to their
teacher or share with their class, students can target a world-wide
audience and publish their work using the many available apps.
For instance, instead of an oral presentation to one’s classmates,
students can record a “Ted Talk” and upload it as a Podcast or
Vodcast. As they develop skills of narrative writing, young
students can use cartoon creation apps such as Strip Generator or
Toondoo to practice plot/character development and story
sequencing. Instead of composing an editorial to the school paper,
students can share their opinions via a blog post through
WordPress or EduBlog.

Figure 1 presents a before-and-after example of a performance
task for which Role, Audience, and Product have been upgraded.
Figure 2 offers a chart suggesting ways in which traditional
products and performances could be upgraded through the use of
free apps.

Note: When appropriate, students may be given some choice
in how they show their learning, and the many available apps
offer practical ways for personalizing performance tasks.
Experience shows that when students are given appropriate
choices on worthy tasks, they are more likely to put forth effort
and experience a genuine sense of accomplishment for a job well
done. See my Blog #4 in this series in which the topic of
differentiation in performance tasks is discussed.

Conclusion

While I encourage teachers to incorporate digital tools as part of
their performance tasks whenever possible and appropriate,
another cautionary note is in order. We must always keep in mind
that our aim in using performance tasks is to engage students in
authentic and meaningful learning and to collect appropriate
evidence of that learning — not to simply jump on the latest “cool”
app or tech tool. Students can easily become absorbed in the bells
and whistles of the technology or get caught up in product
creation and lose sight of the overall purpose of the task. If and
when we incorporate digital tools as part of performance tasks, we
want to ensure that “the juice is worth the squeeze.”
Sources

• McTighe, J. (2013). Core learning: Assessing what
matters most. Midvale, UT: School Improvement
Network.

• McTighe, J. and March, T. (2015). “Choosing apps by
design.” Educational Leadership, May 2015.
Alexandria, VA: ASCD.
For a collection of authentic performance tasks and associated
rubrics, see Defined STEM: http://www.definedstem.com

For a complete professional development course on performance
tasks for your school or district, see Performance Task PD
with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance
tasks, see Core Learning: Assessing What Matters
Most by Jay
McTighe: http://www.schoolimprovement.com

How will we evaluate student performance on tasks? (Part 6)
Defined Learning
Follow
Mar 2, 2016 · 13 min read

Student responses to assignments and assessment items that have
a single, correct answer can be scored using an answer key or a
scanning machine. In contrast, performance tasks are typically
open-ended and therefore, teachers must use their judgment when
evaluating the resulting products and performances. By using a set
of established criteria aligned with targeted standards/outcomes,
it is possible to fairly, consistently, and defensibly make a
judgment-based evaluation of students’ products and
performances. In this blog, we’ll explore: 1) four types of criteria
for evaluating student performance on open-ended tasks; 2) four
kinds of criterion-based evaluation tools; 3) practical processes for
designing effective rubrics, and, 4) benefits to teachers and
students.

Types of Evaluative Criteria

Criteria are guidelines or rules for judging student responses,
products or performances. In essence, they describe what is most
important in student work in relation to identified learning goals.
Criteria serve as the foundation for the development of a rubric, a
tool for evaluating student work according to a performance scale.

I propose four general categories of criteria that can be used to
evaluate student work depending on the targeted standards or
outcomes and the purpose of the performance task. The four
criterion types focus on
evaluating content, process, quality, and impact. Let’s consider
each type.

1. Content criteria are used to evaluate the degree of a
student’s knowledge and understanding of facts, concepts and
principles.

2. Process criteria are used to evaluate the proficiency level of
performance of a skill or process, as well as the effectiveness of
the methods and procedures used in a task.

3. Quality criteria are used to evaluate the overall quality and
craftsmanship of a product or performance.

4. Impact criteria are used to evaluate the overall results or
effects of a product or performance given its purpose and
audience.

Figure 6.1 presents some descriptive terms associated with each of
the four criterion types.

Figure 6.1 — Descriptive Terms for Criterion Types

Criterion Type: Descriptive Terms (examples)

Content: accurate, clearly explained, complete, expert, knowledgeable
Process: collaborative, coordinated, efficient, methodical, precise
Quality: creative, organized, polished, effectively designed, well crafted
Impact: entertaining, informative, persuasive, satisfying, successful

Here is an example in which all four types of criteria are used to
evaluate the dining experience in a restaurant:

• Content — the server accurately describes the appetizers,
main courses, side items, desserts and drinks; all meals and
drinks are correctly delivered as ordered

• Process — the kitchen staff collaborates well and coordinates
with the server; the server checks on diners regularly
• Quality — all the dishes are cooked to taste, presented in an
aesthetically pleasing manner, and served in a timely fashion

• Impact — the meal is tasty and satisfying to all diners

It is important to note that in this example, the four criteria are
relatively independent of one another. For example, the server
may accurately describe the content of the menu items, but the
food may arrive late and be overcooked. When different traits or
criteria are important in a performance, they should be evaluated
on their own. This analytic approach allows for more specific
feedback to be provided to the learner (as well as to the teacher)
than does an overall, holistic rating.

While these four categories reflect possible types of criteria, I
certainly do not mean to suggest that a teacher should use all four
types for each and every performance task. Rather, teachers
should select only the criterion types that are appropriate for the
targeted standards or outcomes, as well as the specific qualities for
which they want to provide feedback to learners. Having said this, I
want to make a case for the value of including Impact Criteria in
conjunction with authentic performance tasks. The more a task is
set in an authentic context, the more important it is to consider
the overall impact of the resulting performance. Indeed, we want
students to move beyond “compliance” thinking (e.g., How many
words does it have to be? Is this what you want? How many
points is this worth?) to consider the overall effectiveness of their
work given the intended purpose and target audience. Impact
criteria suggest important questions that students should ask
themselves. For example:

• Did my story entertain my readers?
• Was my diagram informative?
• Could the visitor find their way using my map?
• Did I find answers to my research questions?
• Was my argument persuasive?
• Was the problem satisfactorily solved?

Educators can help their students see purpose and relevance by
including Impact criteria as they work on authentic performance
tasks.

So… given these four types of criteria, how should a teacher decide
which criteria should be used to evaluate student performance on
a specific task? The answer may surprise you. In a standards-
based system, criteria are derived primarily from the targeted
standards or outcomes being assessed, rather than from the
particulars of the performance task. For example, if a teacher is
focusing on the CCSS E/LA Standard of Informative Writing, then
the criteria for any associated performance task will likely require
students to be: accurate (i.e., the information presented is
correct), complete (i.e., all relevant aspects of the topic are
addressed), clear (i.e., the reader can easily understand the
information presented; appropriate descriptive vocabulary is
used), organized (i.e., the information is logically framed and
sequenced), and conventional (i.e., proper punctuation,
capitalization, spelling, and sentence formation/transitions are
used so that the reader can follow the writing effortlessly).

This point may seem counter-intuitive: How can you determine
the evaluative criteria until you know the task? What if one version
of a task required students to produce a visual product (e.g., a
poster or graphic organizer) while another version of the same
task asked students to give a verbal explanation? Certainly, there
are different criteria involved in evaluating such different products
and performances!

Indeed, there may be different secondary criteria related to a
particular product or performance. For example, if students were
to create a visual product to show their understanding of a concept
in history, then we could include quality criteria (e.g., the visual
should be neat and graphically appealing). However,
the primary criteria in this example should focus on the content
associated with the history standard instead of simply the qualities
of the product (in this case, a visual display).

This point can be lost on students, who tend to fixate on the surface
features of whatever performance or product they are to develop at
the expense of the content being assessed. For example, think of
science fair projects where the backboard display is a work of art
while the science content and the project's conclusions are
superficial.

Criterion-Based Evaluation Tools

Once the key criteria have been identified for a given performance
(based on the targeted standards/outcomes), we can use them to
develop more specific evaluation tools. Let's now examine four
types of criterion-based scoring tools used to evaluate student
performance: the criterion list, holistic rubric, analytic rubric, and
developmental rubric.

Criterion List

A basic and practical tool for evaluating student performance
consists of a listing of key criteria, sometimes referred to as a
performance list. For example, my wife was a high school art
teacher and department chair. She and her department colleagues
identified the following four key criteria that they used in
evaluating student art portfolios.

 Composition — Effective use of elements of art and principles of
design in organizing space.

 Originality — Evidence of development of unique ideas.

 Visual Impact — Sensitivity in use of line, color and form to
effectively convey ideas and mood.

 Craftsmanship — Skill in use of media tools and technique.
Attention to detail and care for results.

Here is another example of a criterion list for composing a fairy
tale (Figure 6.2):

Figure 6.2 — Criterion List for a Fairy Tale

Key Criteria

1. Plot — The plot has a clear beginning, middle, and end that is
carried throughout the tale.

2. Setting — The setting is described with details and shown
through the events in the story.

3. Characterization — The characters are interesting and fit the
story.

4. Details — The story contains descriptive details that help
explain the plot, setting, and characters.

5. Fairy Tale Elements — The story contains the elements of a fairy
tale (e.g., appropriate characters, settings of the past, events that
can't really happen, magical events).

6. Pictures — Detailed pictures are effectively used to help tell the
story.

7. Mechanics — The fairy tale contains correct spelling,
capitalization, and punctuation. There are no errors in mechanics.

Well-developed criterion lists identify the key elements that define
success on a performance task. They communicate to students
how their products or performances will be judged and which
elements are most important. Despite these benefits, criterion lists
do not provide detailed descriptions of performance levels. In
other words, there are no qualitative descriptions of the difference
between a “15” and a “9” rating for a given element (or a full smile
versus partial smile on the pumpkins). Thus, different teachers
using the same performance list may rate the same student's work
quite differently.
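This limitation can be made concrete with a small sketch. The snippet below is a hypothetical representation of a criterion list as a simple data structure (the criterion names follow the fairy-tale example; the point scale and the rater values are invented for illustration). Because the list carries no descriptions of what each score level looks like, two raters can honestly map the same piece of work to quite different totals.

```python
# Hypothetical sketch: a criterion list names the elements to judge,
# but says nothing about what each score level looks like.
CRITERIA = ["plot", "setting", "characterization", "details",
            "fairy_tale_elements", "pictures", "mechanics"]

def total_score(ratings, max_points=15):
    """Sum per-criterion ratings (each assumed to be on a 0..max_points scale)."""
    assert set(ratings) == set(CRITERIA), "every criterion must be rated"
    assert all(0 <= r <= max_points for r in ratings.values())
    return sum(ratings.values())

# With no qualitative level descriptions, two raters can legitimately
# assign different numbers to the same fairy tale:
rater_1 = {c: 12 for c in CRITERIA}   # "this is strong work"
rater_2 = {c: 8 for c in CRITERIA}    # "this is adequate work"
print(total_score(rater_1), total_score(rater_2))  # 84 56
```

The point of the sketch is simply that the criterion list constrains *what* is rated, not *how* a rater translates quality into numbers; rubrics add that missing layer.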

Well-crafted rubrics can address this limitation. A rubric is based
on a set of criteria and includes a description of performance levels
according to a fixed scale (e.g., 4 points). Let's examine three types
of rubrics.

Holistic Rubric

A holistic rubric provides an overall rating of a student's
performance, typically yielding a single score. Here is an example
of a holistic rubric for a scientific investigation task.

Holistic Rubric for a Scientific Investigation

4
The student’s investigation includes a stated hypothesis, follows a
logical and detailed procedure, collects relevant and sufficient
data, thoroughly analyzes the results, and reaches a conclusion
that is fully supported by the data. The investigative process and
conclusion are clearly and accurately communicated in writing so
that others could replicate the investigation.

3
The student’s investigation includes a hypothesis, follows a step-
by-step procedure, collects data, analyzes the results, and reaches
a conclusion that is generally supported by the data. The process
and findings are communicated in writing with some omissions or
minor inaccuracies. Others could most likely replicate the
investigation.

2
The student’s stated hypothesis is unclear. The procedure is
somewhat random and sloppy. Some relevant data is collected but
not accurately recorded. The analysis of results is superficial and
incomplete and the conclusion is not fully supported. The findings
are communicated so poorly that it would be difficult for others to
replicate the investigation.

1
The student’s investigation lacks a stated hypothesis and does not
follow a logical procedure. The data collected is insufficient or
irrelevant. Results are not analyzed, and the conclusion is missing
or vague and not supported by data. The communication is weak
or non-existent.

Since they yield an overall rating, holistic rubrics are well suited
for summative evaluation and grading. However, they typically do
not offer a detailed analysis of the strengths and weaknesses of a
student’s work, and are thus less effective tools at providing
specific feedback to learners.

Holistic rubrics can also present a challenge for teachers when
they are evaluating a complex performance with multiple
dimensions. For example, consider two different students who
have completed a graphic design project. One student uses visual
symbols to clearly communicate an abstract idea. However, her
design involves clip art that is sloppily pasted onto the graphic. A
second student creates a beautiful and technically sophisticated
design, yet his main idea is trivial. How would those respective
pieces be scored using a holistic rubric? Often, the compromise
involves averaging, whereby both students might receive the same
score or grade, yet for substantially different reasons. Averaging
obscures the important distinctions in the students' performances
and doesn't provide detailed feedback. If all a student receives is a
score or rating, it is difficult for them to know exactly what the
grade means or what refinements are needed in the future.
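The averaging problem can be illustrated with a short sketch (the trait names and scores are hypothetical, not from any particular rubric): a holistic score collapses two very different performance profiles into the same number, while analytic trait scores keep the distinction visible.

```python
# Hypothetical illustration of why averaging traits into one holistic
# score obscures important distinctions between performances.

def holistic_score(traits):
    """Collapse all trait ratings into a single averaged score."""
    return sum(traits.values()) / len(traits)

# Student A: clear idea, sloppy execution; Student B: the reverse.
student_a = {"communication_of_idea": 4, "craftsmanship": 2}
student_b = {"communication_of_idea": 2, "craftsmanship": 4}

print(holistic_score(student_a))  # 3.0
print(holistic_score(student_b))  # 3.0 -- same score, different reasons

# Analytic reporting preserves the profile each student needs to see:
for label, traits in (("A", student_a), ("B", student_b)):
    print(label, traits)
```

Neither student can tell from a "3" which dimension to work on; the per-trait breakdown carries that information.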

Analytic Rubric
An analytic rubric divides a product or performance into distinct
elements or traits and judges each independently. Analytic rubrics
are well suited to judging complex performances (e.g., multi-
faceted problem solving or a research project) involving several
significant dimensions. As evaluation tools, they provide more
specific information (feedback) to students, parents and teachers
about the strengths of a performance and the areas needing
improvement.

Here is an example of an analytic rubric for mathematical problem
solving (Figure 6.6).

Figure 6.6 — Analytic Rubric for Mathematical Problem Solving

4
Reasoning: An efficient and effective strategy is used and progress
towards a solution is evaluated. Adjustments in strategy, if needed,
are made, and/or alternative strategies are considered. There is
sound mathematical reasoning throughout.
Computation: All computations are performed accurately and
completely. There is evidence that computations are checked. A
correct answer is obtained.
Representation: Abstract or symbolic mathematical
representations are constructed and refined to analyze
relationships, clarify or interpret the problem elements, and guide
solutions.
Communication: Communication is clear, complete and
appropriate to the audience and purpose. Precise mathematical
terminology and symbolic notation are used to communicate ideas
and mathematical reasoning.

3
Reasoning: An effective strategy is used and mathematical
reasoning is sound.
Computation: Computations are generally accurate. Minor errors
do not detract from the overall approach. A correct answer is
obtained once minor errors are corrected.
Representation: Appropriate and accurate mathematical
representations are used to interpret and solve problems.
Communication: Communication is generally clear. A sense of
audience and purpose is evident. Some mathematical terminology
is used to communicate ideas and mathematical reasoning.

2
Reasoning: A partially correct strategy is used, or a correct strategy
for solving only part of the task is applied. There is some attempt
at mathematical reasoning, but flaws in reasoning are evident.
Computation: Some errors in computation prevent a correct
answer from being obtained.
Representation: An attempt is made to construct mathematical
representations, but some are incomplete or inappropriate.
Communication: Communication is uneven. There is only a vague
sense of audience or purpose. Everyday language is used or
mathematical terminology is not always used correctly.

1
Reasoning: No strategy is used, or a flawed strategy is tried that
will not lead to a correct solution. There is little or no evidence of
sound mathematical reasoning.
Computation: Multiple errors in computation are evident. A
correct solution is not obtained.
Representation: No attempt is made to construct mathematical
representations, or the representations are seriously flawed.
Communication: Communication is unclear and incomplete. There
is no awareness of audience or purpose. The language is imprecise
and does not make use of mathematical terminology.

Analytic rubrics help students understand the nature of quality
work since these evaluation tools identify the important
dimensions of a product or performance. Moreover, teachers can
use the information provided by an analytic evaluation to target
instruction to particular areas of need (e.g., the students are
generally accurate in their computations, but less effective at
describing their mathematical reasoning).

Since there are several traits to be considered, the use of an
analytic scoring rubric may take a bit more time than assigning a
single score. However, I believe that the more specific feedback
that results from this additional time is well worth the effort,
especially given the ultimate goal of improving learning and
performance.

Developmental Rubric

A third type of rubric — developmental — describes growth along a
proficiency continuum, ranging from novice to expert. As
examples, think of the colored belts that designate proficiency
levels in karate or the swimming categories from the American
Red Cross.

Developmental rubrics are well suited to subjects that emphasize
skill performance. Hence, they are a natural fit for
English/language arts, physical education, the arts, and language
acquisition. The American Council on the Teaching of Foreign
Languages (ACTFL) has developed sets of longitudinal proficiency
rubrics for listening, speaking, reading and writing that can be
used in conjunction with assessment for world languages. View
these at:

 http://www.sil.org/lingualinks/LANGUAGELEARNING/Oth
erResources/ACTFLProficiencyGuidelines/contents.htm

Similar developmental rubrics exist for English/language arts.
Bonnie Campbell-Hill has created a set of proficiency continuums
for literacy, available at:

 http://www.bonniecampbellhill.com/support.php

Developmental rubrics are generic in that they are not tied to any
particular performance task nor age/grade level. Thus, teachers
across the grades can profile student proficiency levels on the
same rubric. Furthermore, an agreed-upon longitudinal scale
enables learners, teachers, and parents to collectively chart progress
toward desired accomplishments.

Yes, but… One often hears concerns about subjectivity when
judging performance, whether during an Olympic ice skating
event, at a juried art exhibit, or when teachers evaluate students’
products and performances for a task. Admittedly, all performance
evaluation can be considered subjective in that human judgment is
required. However, that does not mean that such judgments are
destined to be biased or arbitrary. Student performance can be
reliably judged as has been demonstrated by years of experience in
statewide writing assessments, music adjudications, and AP art
portfolio reviews. The reliability of evaluation increases with: 1)
clear criteria embedded in well-developed rubrics; 2) models or
anchors of performance coupled with the rubrics; and, 3) training
and practice in scoring student work.

Conclusion

Over the years, I have observed five benefits resulting from the use
of well-developed rubrics — two for teachers and three for
students:

Benefits for Teachers

1. Scoring Reliability — A rubric constructed around clearly
defined performance criteria assists teachers in reducing
subjective judgments when they evaluate student work. The
resulting performance evaluations, including grades, are thus
more defensible to students and parents. When a common
rubric is used throughout a department or grade-level team,
school or district (with accompanying anchor examples), the
consistency of judgments (i.e., scoring reliability) by teachers
across classrooms and schools increases.

2. Focused Instruction — Clearly developed rubrics help clarify
the meaning of standards and serve as targets for teaching.
Indeed, teachers often observe that the process of evaluating
student work against established criteria makes them more
attentive to addressing those qualities in their teaching.

Benefits for Students

1. Clear Targets — When well-developed rubrics are presented
to students at the beginning, they are not left to guess about
what is most important or how their work will be judged.

2. Feedback — Educational research conclusively shows that
formative assessment and feedback can significantly enhance
student performance. Clear performance criteria embedded in
analytic rubrics enable teachers to provide the detailed
feedback that learners need to improve their performance.

3. Guides for Self-Assessment — When teachers share
performance criteria and rubrics with students, learners can
use these tools for self-assessment and goal setting.

Through the use of rubrics in these ways, educators can enhance
the quality of student learning and performance, not simply
evaluate it.

For a collection of authentic performance tasks and associated
rubrics, see Defined STEM: http://www.definedstem.com

For a complete professional development course on performance
tasks for your school or district, see Performance Task PD
with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance
tasks, see Core Learning: Assessing What Matters Most by Jay
McTighe: http://www.schoolimprovement.com

Article originally posted:
URL: http://blog.performancetask.com/how-will-we-evaluate-
student-performance-on-tasks-part-6/ | Article Title: How Will
We Evaluate Student Performance On Tasks? | Website Title:
PerformanceTask.com | Publication date: 2016–03–02

How Should We Teach Toward Success with Performance Tasks?
(Part 7)
Defined Learning
Follow
Mar 4, 2016 · 12 min read

In this blog we'll examine five recommended practices for
instructional planning and teaching in order to prepare students
to tackle authentic performance tasks.

Practice #1 — Plan Your Teaching Backward from Authentic
Performance Tasks
In our work on Understanding by Design ® (2012, 2011, 2005),
Grant Wiggins and I proposed that the most effective teaching is
planned “backward” from desired learning outcomes (e.g.,
academic standards, 21st century skills) and from the assessments
that will show evidence of their attainment. Backward design of
instruction is the norm in performance-based disciplines (e.g.,
visual and performing arts, career and technology education), as it
is in extra-curricular activities (e.g., athletics, yearbook, theater).
This is likely due to the fact that these areas are naturally directed
toward authentic performance (e.g., the game in athletics, the
concert in band, the public display in visual art, the production
deadline for yearbook). Teaching, learning and practice are thus
orchestrated to prepare learners for a desired performance.

Planning our teaching “backward” from desired performances on
rich, authentic tasks helps teachers focus on what matters most.
With this performance orientation, teachers are less likely to
simply march through lists of content objectives or pages in a
textbook, or to have their students complete worksheets on
discrete skills. When genuine performance is the goal, we can
emulate the practices of effective coaches and sponsors of extra-
curricular activities by following a general instructional process
like the following:

1. Once the performance task has been identified, deconstruct
the task to identify the concepts, knowledge and skills needed
by learners for a successful performance.

2. Use pre-assessments to find out the current knowledge and
skill levels of the learners.

3. Plan targeted lessons to develop the knowledge, skills and
confidence needed to tackle the summative task. Differentiate
this instruction as needed to address the learning variability
among students. Use on-going formative assessments to check
on the development of requisite knowledge, skills and
understandings.

4. Engage learners with formative “mini tasks” — simplified or
scaffolded versions of the summative task — and provide
feedback to students as they work on the mini tasks.

5. Allow time for them to practice and/or make revisions based
on the feedback.

It has been my observation that these last two steps (#4 and #5) are
often skipped. This is understandable given the pressures that
teachers face to “cover” a large volume of material. Indeed, it is
tempting to think that if I cover all the factual information and
component skills, then I have prepared students for performance.
But ask yourself, How do successful coaches and band directors
prepare their charges for the game or the concert? They do more
than simply have their players or performers learn the rules/notes
and practice the necessary skills. They recognize the need for
players to be able to “put it all together.” Consequently, they
include scrimmages and concert rehearsals that simulate
game/opening night conditions, and they offer feedback in the
context of authentic performance.

Classroom teachers in all subjects can emulate the practices of
effective coaches, for example, by including the equivalent of
“scrimmages” as formative assessments. To be blunt: It is
important for teachers to realize that just covering a body of
discrete knowledge and skills, and assessing their mastery in
isolation, will not prepare learners to apply their learning in an
authentic context. Basic knowledge and skills are necessary, but
insufficient, in the quest for genuine performance.

A wonderful resource in support of Practice #1 has been developed
by The Literacy Design Collaborative (LDC). Click on this link to
see their description of “mini tasks”: https://ldc.org/how-ldc-
works/mini-tasks. In addition, you can view examples of units for
English/Language Arts, Science and Technical Subjects, and
History/Social Studies that have been planned backward from rich
performance tasks and follow the teaching sequence I have
proposed. Click
here: http://www.literacydesigncollaborative.org/resources/sample-
modules/

Yes, but… When performance tasks are being used as assessments,
some educators may object that the practice I described equates to
“teaching to the test” and is thus not desirable. I agree that the
backward design approach does teach with the task (or the test) in
mind, but that’s not a bad thing if the test or task reflects what
matters most — authentic performance reflecting core standards
and 21st century skills. Have you ever heard coaches apologize for
coaching to/for the next game, or theater directors say they are
sorry when their actors rehearse for the play?

Practice #2 — Present Authentic Performance Tasks as the
Learning Targets

Some schools require teachers to list their daily objectives on the
board. While it certainly makes sense to have clear lesson goals,
my contention is that students need to know not only what they
will be learning today, but also why they are learning it and how
this learning will prepare them for something worthwhile in the
future. One way to help students see the larger goal for their
learning is for teachers to frame their learning outcomes not
simply as lists of knowledge and skill objectives (or grade-level
standards) but rather in terms of the authentic performances that
learning will enable. The message to students is, “We are learning
this so that you will be able to….”

The practice of working toward known tasks is certainly not a new
idea. There are multiple examples both within and outside of
school, such as the merit badge system for Boy and Girl Scouts,
colored belts for the proficiency levels in karate, or completing the
annual yearbook on deadline. In all such cases the performance
tasks are known (i.e., what you need to accomplish) along with the
evaluative criteria (i.e., how your performance will be judged).

When the performance tasks are set in an authentic context that
reflects “real world” application of knowledge and skills, learners
are more likely to see the purpose and relevance of what they are
being asked to learn. Like the game in athletics and the play in
theater, having a clear and authentic performance goal (solid
performance on a known task) focuses both teaching and learning.
Here are three examples of performance tasks that can serve as
learning targets.

 Ancient Engineers — Roman Roads (Gr. 3)

 Community Advocate — Fracking (Gr. 7)

 Automotive Materials Engineer — Fuel Efficiency (Gr. 11)

Practice #3 — Present the Evaluative Criteria, Rubrics and Models
at the Start

In Blog #6 in this series, I discussed the benefits for teachers of
having clearly articulated criteria embedded in rubrics. Well-
developed rubrics can benefit learners as well. In order to enhance
learning and the quality of student performance, teachers can (and
should) present evaluative criteria and rubrics to students early in
the instructional process in order to help their students focus on
the purpose and important dimensions of authentic performance.
When students know the criteria in advance, they don't have to
guess about what is most important or how their work will be
judged. There is no “mystery” as to the elements of quality for a
targeted product/performance or the basis for its evaluation (and
grading). In addition, when we share criteria and rubrics with
students, we offer them the opportunity to self-assess as they
work.

This recommended practice of sharing criteria/rubrics with
students does not mean that this process has to be completely
teacher directed. In fact, involving students in helping to create a
rubric can engage them in thinking carefully about the goals of the
task and help them better understand the salient qualities needed
for successful performance.

Presenting a well-designed rubric to students in conjunction with
the performance task does not guarantee that the benefits will be
fully realized, especially if/when students do not understand the
language of the rubric. Phrases such as “logically organized,”
“insightful interpretation,” and “sufficient evidence” may have
little meaning for inexperienced students. For a rubric to be useful,
students need to be able to comprehend what its language means.
One strategy toward this end is to couple the rubric with tangible
examples that illustrate its key traits and the different performance
levels. By showing examples that display excellent, good and
novice-level work, teachers can make the abstract language in a
rubric more specific, relevant, and understandable. This practice is
grounded in a basic principle: If we expect learners to produce
high-quality work, they need to know what that looks like, and
how it differs from work of lesser quality.

Yes, but… Some teachers express concern that students will simply
copy or imitate an example. A related worry is that showing an
excellent model (sometimes known as an exemplar) will stultify
student creativity. I have found that providing multiple models
helps avoid these potential problems. When students see several
exemplars showing how different students achieved high-level
performance in unique ways, they are less likely to follow a cookie-
cutter approach. In addition, when students study and compare
examples ranging in quality — from very strong to very weak —
they are better able to internalize the differences. The models
enable students to more accurately self-assess and improve their
work before turning it in to the teacher.

Practice #4 — Assess Before and While You Teach

Like effective coaches and sponsors of extra-curriculars, successful
teachers don't begin a new unit before they have assessed their
learners. Indeed, diagnostic (or pre-) assessment is as important to
teaching as a physical exam is to prescribing an appropriate
medical regimen.

Thankfully, a variety of practical and efficient pre-assessment
techniques are available (e.g., Skill Checks, Pre-Tests, K-W-L,
Concept Mapping) to enable teachers to determine students’ prior
knowledge and skill levels and reveal potential misconceptions
that can influence their performances. By gathering such
information in the beginning, a teacher can determine the best
starting place for instruction and decide what differentiation may
be needed to best equip students with varied knowledge and skill
levels for the desired performance.

Pre-assessments are not solely for the teacher. They can also serve
as advanced organizers by previewing forthcoming learning and
activating prior knowledge that learners may have about the
concepts and skills that will support their forthcoming
performance on known tasks.

In addition to pre-assessment, the use of on-going, formative
assessments is an essential practice for optimizing student
performance on authentic tasks. The purpose of formative
assessments is to inform both teachers and learners by providing
feedback about what is working and what adjustments are needed.
Indeed, learning of all kinds — whether in the dance studio, on the
practice field, or in the classroom — requires substantive feedback.

Not surprisingly, the best examples of formative assessment and
feedback are often observed in the performance-based subjects,
such as the visual and performing arts, physical education and
athletics, and vocational-technical courses. Indeed, the essence of
coaching involves ongoing assessment and feedback. However,
what is common practice in these areas is less widespread in the
mainstream academic subjects. In their seminal research on
classroom assessment, British researchers Paul Black and Dylan
Wiliam (Black and Wiliam, 1998) noted that formative assessment
and feedback are lacking in many classrooms.

To serve learning, feedback must meet four criteria: It must be
timely, specific, understandable to the receiver, and allow for
self-adjustment by the learner. Feedback on strengths and weaknesses
needs to be prompt for the learner to improve. Waiting two weeks
to find out how you did on a test will not help your learning. In
addition, specificity is key to helping students understand both
their progress and the areas in which they can improve. Too many
educators consider grades and scores as feedback when, in fact,
they fail the specificity test. Pinning a letter (B-) or a number
(82%) on a student’s work is no more helpful than such comments
as “Nice job” or “You can do better.” Although good grades and
positive remarks may feel good, they do not advance learning.
Specific feedback sounds different, as in this example: “The
website you designed is generally well organized, visually
appealing and contains a great deal of information on your topic.
You used multiple sources and documented them correctly.
However, your conclusion is not clear, nor are the actions you
expect viewers to take based on the information the website
provides.”

Finally, learners need opportunities to act on the feedback — to
refine, revise, practice, and retry — and teachers need to build in
time for these. Writers rarely compose a perfect manuscript on the
first try, which is why the writing process stresses cycles of
drafting, feedback, and revision as the route to excellence.

The teacher should be a main feedback provider, but students are
encouraged to seek feedback from peers, parents, and others as
they work on performance tasks. Regardless of the source, here’s a
straightforward test for a feedback system: Can learners tell
specifically from the given feedback what they have done well and
what they could do next time to improve? If not, then the feedback
is not sufficiently specific or understandable enough for the
learner.

Note: It is critical that students understand the purpose of
formative assessments and know that their results will not be used
as part of a summative evaluation. Accordingly, I advise teachers
not to average formative assessment results into the calculation of
a final grade.

Practice #5 — Expect Students to Self-Assess Their Learning and
Performance and Set Goals Based on Assessment Results

The most effective learners are metacognitive; i.e., they self-assess
their performance, seek and use feedback, see mistakes as learning
opportunities, set goals to improve their performance, and reflect
on their learning. Teachers can cultivate these productive
dispositions by modeling the processes of self-assessment,
reflection and goal setting for students who have never been asked
to do so. They should also expect students to apply them regularly
and structure opportunities for them to do so. For example, ask
students to self-assess (and/or peer assess) their work against a
rubric before it is submitted. Teachers are often pleasantly
surprised at how honest students can be with the assessment of
their own work and that of their peers.

Here are a few examples of prompting questions to encourage
students to self-assess their performance, set goals for
improvement and reflect on their learning:

 What aspect of your work do you think was most effective?
Why? How so?

 What aspect of your work do you think was least effective?
Why? How so?

 What specific action(s) would improve your performance,
based on the feedback you received?

 What advice would you offer to next year's students to help
their performance on this task?

 What did you learn from working on this task — about the
content, topic, process and/or yourself?

Self-assessment requires a small investment of time for an
impactful return. This practice signals that self-assessment and
goal setting are expected as part of a learner’s job.

Educators who provide regular opportunities for learners to self-
assess and set goals often report a change in the classroom culture.
As one teacher put it, “My students have shifted from asking,
‘What did I get?’ or ‘What are you going to give me?’ to becoming
increasingly capable of knowing how they are doing and what they
need to do to improve.” Over time, we should expect students to
become increasingly capable of honest self-assessment and
adjustment, without the teacher having to tell them how they did
or what they need to do to improve.

A related practice to encourage self-assessment and goal setting is
to include students in parent–teacher conferences. In “student
involved” or “student run” conferencing, the learner takes an
active role in reviewing his or her work, and with a teacher’s
guidance, sets specific goals to improve his or her future
performance. Parents are more likely to be able to support their
child’s academic growth if they are aware of these agreed-upon
goals.

Conclusion
Teaching toward authentic performance calls for teachers to
employ an array of instructional practices, including direct
instruction and modeling, facilitative teaching and ongoing
assessments. When preparing students to apply their learning in
realistic situations, teachers function like coaches, providing
feedback as students develop the skills and work on “scrimmages.”
For a collection of authentic performance tasks and associated
rubrics, see Defined STEM: https://www.definedstem.com

For a complete professional development course on performance
tasks for your school or district, see Performance Task PD
with Jay McTighe: http://www.performancetask.com

For more information about the design and use of performance
tasks, see Core Learning: Assessing What Matters Most by Jay
McTighe: http://www.schoolimprovement.com

Article originally posted:
URL: http://blog.performancetask.com/how-should-we-teach-toward-success-with-performance-tasks-part-7/ |
Article Title: How Should We Teach Toward Success with Performance
Tasks? (Part 7) | Website Title: PerformanceTask.com |
Publication date: 2016-03-03
