Standards-Based Learning in Action CH 9

The document discusses the importance of proficiency scales and rubrics in education, emphasizing their role in making performance criteria transparent and accessible for students. It outlines the differences between scales and rubrics, their application in assessments, and the need for clear, descriptive language to effectively communicate success criteria. Additionally, it highlights the significance of consistency in teacher evaluations and the balance between general and task-specific criteria in assessment practices.

Proficiency Scales and Rubrics in Action
The genius of rubrics is that they are descriptive and not evaluative. Of course, rubrics can
be used to evaluate, but the operating principle is you match the performance to the
description rather than “judge” it.
—Susan M. Brookhart

In general, success criteria describe qualities of exemplary work; the more
direct expression of criteria comes through rubrics and scales, which solidify
criteria as a natural progression of sophistication (Andrade, 2013). While
teachers can develop scales and rubrics in a variety of formats, the
fundamental purpose is to make performance criteria transparent and
accessible. Scales and rubrics work in tandem, with the rubric providing a
narrower, more detailed view of success with a particular standard or skill,
and the proficiency scale providing a more holistic, overarching view.

Moving From Rationale to Action


Scales and rubrics are similar in that both attempt to create a continuum
that articulates distinct levels of knowledge and skill relative to a specific
topic (Marzano, 2010). Operationally, however, differences emerge: scales
are often number based (from 0 to 4, for example), whereas rubrics tend to
use a descriptive scale (from novice to exemplary, for example). Both scales
and rubrics create a progression of quality, from the simplest to the most
sophisticated.
The advantage of both scales and rubrics is that the energy and focus of
instruction are on the intended learning rather than the specific task at hand.
This may be the most important aspect of scales and rubrics: they create a
cohesive pathway that transforms a series of (what can appear to be)
random assignments into a purposeful progression of learning.

The Research
The research we explore in the following sections focuses on making
criteria transparent, interpreting accurately, using student-friendly language,
moving from simple to sophisticated, choosing a format, and deciding
between general and task-specific criteria.

Making Criteria Transparent


When teachers make learning goals and success criteria transparent in
an organized, clear, and cohesive way, it is far more possible that students
will fully invest in the assessment process (Andrade, 2013; Vagle, 2014).
Transparent success criteria in the form of a scale or rubric make it easier
for students to see where they are headed and are essential to maximizing
the self- and peer assessment processes we discussed in chapter 6 (page
109). Provided the success criteria are not trivial or tangential, transparency
pulls back the curtain not only on what teachers expect of students in terms
of their performances but also on how, specifically, teachers will judge those
performances.
Scales and rubrics are technically the same thing in that both attempt to
create a continuum that articulates distinct levels of knowledge and skill
relative to a specific topic (Marzano, 2010). David Balch, Robert Blanck,
and David Howard Balch (2016) describe a rubric as a “visual narrative of
the criteria that defines and describes the important components of an
assignment. The criteria are stated in several levels of competence; from not
meeting the requirement to mastering it” (p. 20). Operationally, however,
differences emerge: scales (also called holistic rubrics) are often number
based (such as 0 to 4), whereas rubrics (also called analytic rubrics) tend to
use a descriptive scale (such as novice to exemplary). Both scales and
rubrics create a progression of quality, from the simplest to the most
sophisticated, so students and parents can clearly see what it takes to reach
the next level.

Interpreting Accurately
Clearly articulated criteria are also essential for teachers. Within a
criterion-referenced, standards-based learning environment, it is necessary
that teachers have (or develop) the confidence that the judgments they’re
making about students’ performances are similar to the judgments their
colleagues would make (Guskey & Jung, 2016). While this consistency in
applying success criteria (inter-rater reliability) takes time to develop, it is
important because so many standards require teachers to infer quality. There
is, for example, no completely objective way to assess argumentative
writing. There are aspects of quality, but success is a matter of interpretation
in which the teacher has to match the quality of the writing to the specific
level of quality the criteria outline.
The research is mixed on whether teachers in general are skilled at
accurately summarizing student achievement. Some claim that teachers are
the best sources for judging student performances because they have more
experience with their students (Meisels, Bickel, Nicholson, Xue, & Atkins-
Burnett, 2001); however, others claim that an inability to distinguish
between student achievement and student traits can cloud teacher judgments
(Moss, 2013), especially when teachers are assessing students with diverse
backgrounds (Martínez & Mastergeorge, 2002; Tiedemann, 2002). If the
former is true, rubrics and scales will only strengthen that skill; if the latter
is true, then rubrics and scales are necessary to ensure achievement and
traits do not intermingle in the assessment process. According to Catherine
Welch (2006), reliable scoring rubrics have the following five
characteristics.
1. Be consistent with the decisions and inferences teachers make with
the results.
2. Define the characteristics of the response that teachers will evaluate
along a continuum.
3. Convey performance criteria in an understandable way.
4. Use items that elicit a range of performance.
5. Align with the content standards that teachers are assessing.
While rubrics and scales appear in a variety of structural formats, the
reliability of the scoring inferences derived from the rubric or scale is a
non-negotiable feature (Parkes, 2013).

Using Student-Friendly Language


Rubrics and scales provide learners with a natural progression of quality
that runs from the simplest to the most sophisticated. Teachers can create as
many levels as they choose, provided they can describe the differences in
quality between the levels. Labeling the levels is the easy part; describing
them is much tougher. For example, it’s easy to label twelve levels on a
scale: 4+, 4, 4–, 3+, 3, 3–, and so on. It’s much more challenging to
describe the differences between the levels. One would have to make
transparent the specific differences between a 3+ and 4– and between a 2
and 2+, and so on.
This is why effective scales and rubrics tend to have a more reasonable
number of levels (Balch et al., 2016). We most often see four. When there is
a fifth level it is often the not yet or insufficient category, which allows the
four-point scale to become a five-point scale that aligns with an A–F
grading construct. Each point should represent a level of proficiency.
Again, that runs from the simplest to the most sophisticated, so each of the
descriptions should reflect that.
Exceeds is one descriptor to avoid (Guskey, 2015). There is a big
difference between a demonstration of learning at the most sophisticated
level and one that exceeds the standards. The implication with the term
exceeds is that students must now perform beyond their grade level to reach
the highest level on the criteria. It also makes the highest level
exponentially more difficult to achieve and could lead to a curving mindset
that inadvertently (or intentionally) reserves the top level for only those
truly special demonstrations.
When teachers use meeting as a level (often the third of four levels),
they create a definitive destination that makes it difficult to distinguish
between low and high quality. Again, scales and rubrics should describe
levels of quality, so in that sense, all levels along a scale or a rubric are
meeting to one degree or another; the continuum of simple to sophisticated
is lost when there is a finite destination. There are certainly a number of
schools using meeting as the third level, but ideally, the top level is not
exponentially more difficult to achieve. The issue is resolved when meeting
is the fourth level, but as the third, schools will most often designate the
fourth level as something beyond or exceeding standards. Schools should
not expect students to exceed the standard to reach the top level, so while
they can implement meeting as the third level successfully, it does require
some extra thought and finesse.
A finite level may be appropriate for finite standards that have one
correct answer (often at grades K–3), but most standards are at a depth of
knowledge (DOK) that reaches beyond a binary choice. Those higher DOK
levels require teachers to examine a performance’s quality or consistency.
That means a singular level is, at best, incomplete. Practically speaking,
with so many other word labels to choose from that are equally appropriate
and applicable, it’s unnecessary to arbitrarily add confusion.

Moving From Simple to Sophisticated


Once teachers choose the word labels (assuming the scale is not simply
0–4), the descriptors should be clear and based on the same aspect. If a
portion of proficiency asks students to, for example, “support claim(s) with
clear reasons and relevant evidence, using credible sources and
demonstrating an understanding of the topic or text” (W.6.1.B; NGA &
CCSSO, 2010a), then each description should describe the quality with
which students are able to do that. Novice writers will support claims at the
very basic level, while exemplary writers will do so at the most
sophisticated level. Susan M. Brookhart (2013c), educational consultant,
professor of education, and author, advises that level descriptions should
follow these criteria.
• Be descriptive.
• Be clear.
• Cover the whole range of performance.
• Make relatively easy distinctions between each level.
• Center the target performance at the appropriate level.
• Feature parallel descriptions from level to level.
As well, Brookhart (2013c) submits that effective rubrics and scales
avoid listing both requirements and quantities because requirements only
produce a yes-or-no distinction, while quantities are about counting, not
quality. She goes on to add:
The rubric description is the bridge between what you see (the student work, the
evidence) and the judgment of learning. If you’re not going to take advantage of that
main function, you might as well just go back to the old-fashioned way of marking a
grade on a paper without explanation. (p. 22)

Choosing a Format
While it is possible to use any rubric or scale format for either formative
or summative assessment, an assessment’s function determines which
format is most favorable, aligning the assessment with its subsequent
action. Analytic rubrics provide unique, separate descriptions of multiple
aspects of quality for any given performance (Balch et al., 2016; Brookhart,
2013c).
This is advantageous for formative assessment because it asks for a more
granular analysis. Describing each criterion separately makes it easier for
teachers and students to recognize both strengths and areas in need of
strengthening. This can allow teachers to make instructional decisions that
help them differentiate what’s next.
Holistic rubrics, on the other hand, ask the teacher or student to make a
single overall judgment of quality along the performance levels (Balch et
al., 2016; Brookhart, 2013c). Rather than describing separately the aspects
of quality along each level, teachers would holistically describe each level
in its totality. This is advantageous for summative assessment because it
often requires a single determination despite some specific deficiencies.
Teachers do not ignore specific criteria; rather, they synthesize all the
criteria into a singular description that outlines what a novice through
exemplary demonstration might look like. While most teachers will likely
spend the majority of time using analytic rubrics, they optimize the
relationship between analytic and holistic rubrics when they use an analytic
rubric for instruction, and then synthesize it into a holistic rubric for
grading.

Deciding Between General and Task-Specific Criteria


The other decision educators face is whether to develop general or task-
specific scales or rubrics. The advantage of general rubrics is that criteria
are longitudinally transferable throughout multiple demonstrations of the
same outcomes or standards; teachers can use the same general rubric to
assess multiple samples of, for example, argumentative writing. The
disadvantage of general rubrics is that they inherently require greater skill at
inferring quality because the rubric itself does not address the specific task
at hand.
Task-specific rubrics offer the opposite. They do not transfer as readily
from one assignment to the next, which is a disadvantage (Balch et al.,
2016; Brookhart, 2013c). However, they are low-inference tools that often
result in greater consistency in application. Teachers have to weigh the
short-term advantages of task-specific rubrics with the long-term
advantages of general rubrics, and while general rubrics can take more time
to develop, apply, and calibrate, they are worth the effort in the long run.
Despite their inherent differences, general and task-specific rubrics can
work in harmony. Suzanne Lane (2013), professor of the research
methodology program at the University of Pittsburgh, says a “general rubric
may be designed that reflects the skills and knowledge underlying the
defined domain. The general rubrics can then guide the design of specific
rubrics for a particular task” (p. 316). This allows teachers to take
advantage of each type of rubric and leads to greater consistency across
task-specific rubrics, because each would derive from the rubric the teacher
had originally developed (Lane, 2013).
Understanding the strengths and limitations of the different variations of
scales and rubrics allows for a more transparent assessment process that
anchors instruction (and learning) on the knowledge, skills, and
understandings that teachers expect students to attain.

In Action
Teachers employ both proficiency scales and rubrics in standards-based
learning environments, but with slightly different definitions and uses.
Again, proficiency scales in this book are the holistically described levels
that teachers use to communicate proficiency across grade levels and
content areas. Proficiency scales tend to be applied school- or districtwide
to provide consistent language with the descriptors. Descriptor labels are
usually one or a few words. Rubrics, on the other hand, have more robust
descriptions. Rubrics are for particular assignments, assessments, or
specific standards to give feedback about the proficiency level. If a
descriptor for a level 3 is proficient, then a rubric for that standard or
standards would describe what proficiency looks like. This chapter helps
teachers understand and develop quality proficiency scales and rubrics.
Additional sections are devoted to inter-rater reliability and student
involvement.

Defining Success
No matter how many levels a scale or rubric includes, each must have a
clear and concise description. These descriptions guide not only teachers’
instruction but also students’ practice and assessment. They also prompt
teachers to reflect, learn, and improve.
The description for a scale or rubric should be succinct. Lengthy text is
more difficult for students to consume or comprehend. In addition, long and
very detailed descriptors can close the door to the variety of ways a student
can show his or her mastery of the standards. When a description gives too
many specific details, students will only rise to that description. Clarity
versus specificity is an important distinction when writing success criteria.
Creating quality success criteria should move beyond quantifiable
requirements and guidelines of compliance to a true description of learning.
Requiring a specific number of words or pages in a paper does not ensure
proficiency because quantity and quality can be quite different; simply
writing seven pages does not mean the student has met the standard. Only a
minimal quantity of evidence may be necessary to reach proficiency and
receive feedback.

Creating Quality Scales and Rubrics


The following sections include examples of more- and less-effective
proficiency scales and rubrics. Quality scales and rubrics provide students
with criteria in a language of learning. They describe the characteristics
of a successful product in relation to the standards. Quality scales and
rubrics leave the door open to multiple ways to show proficiency. They
build student self-efficacy so students can adeptly identify where their
learning is and how to move forward.

Proficiency Scales
Consider the scale in figure 9.1, which takes the levels and pairs them
with percentage-based increments.

Figure 9.1: Low-quality proficiency scale.

This example scale does nothing to communicate about student learning
or academic proficiency. Equating the levels to percentages places the focus
squarely on grades and feeds into the notion that grades are more important
than learning. Using language to describe the learning provides meaning,
where a number or letter cannot. That language helps students strive for
learning. The purpose of a proficiency scale is to support learning, and the
connection to a percentage or letter grade works against that mission.
Now compare it to the proficiency scale in figure 9.2. It also has four
levels.

Figure 9.2: High-quality proficiency scale.


Visit go.SolutionTree.com/assessment for a free reproducible version of
this figure.

Figure 9.2 uses a language of learning to describe the four levels.
Although short, these descriptors communicate proficiency and function as
a starting point for creating rubrics with more substantial language.

Rubrics
Examine the rubric in figure 9.3, which lists requirements in checklist
style.
Figure 9.3: Low-quality rubric.

This rubric has specific requirements but is severely lacking in other
areas. The focus is on the inclusion of elements rather than the writing’s
substance. The teacher can check off the items with a 3, 2, 1, or Not
Included. There is no language to describe the difference between those
levels and no guidance to help learners know what success looks like.
Now compare it to the rubric in figure 9.4 (page 174). It too has four
levels. Which rubric tells a story of learning over a story of compliance?
This rubric provides both learners and teachers with a great deal more
information. The different assessed standards appear, as do specific success
criteria for each. Language for each level helps students understand the
skills and knowledge teachers expect them to have.

Determining the Number of Levels


The number of levels on a scale or rubric depends on the number of
accurate and definable success criteria for the learning. Can teachers
accurately describe the learning in three or four different levels? Could they
break it down further, into six? A range between two and seven levels for
scales and rubrics is advisable. Beyond that, it becomes challenging to
describe the differences. Remember, teachers must be able to describe (not
just label) the differences between each level, which is why seven pushes
the boundaries of clear distinctions of quality. A four-level scale is most
common in a standards-based environment, giving some separate
distinction in levels without so many that they become difficult to define.
Regardless, they must tie to language that describes a natural progression of
quality from the simplest to the most sophisticated.

Figure 9.4: High-quality rubric.


Visit go.SolutionTree.com/instruction for a free reproducible version of
this figure.

If grading on a percentage scale is a requirement, teachers can skip to
chapter 10 (page 187) to read how to convert to one from a proficiency
scale.

Writing Proficiency Scale Descriptors
Teachers typically generalize the language when writing proficiency
scales. The overarching nature of proficiency level descriptors ties together
the wide variety of subjects or classes with a common thread. A descriptor
such as proficient works in first or fifth grade or in a Spanish or physical
education class. What proficient means for each of those depends on the
standard and learning targets specific to each subject. It is up to the teacher
to contextualize the scale with more specific language, but the scale itself is
universally applicable.
Establishing a common understanding of the terms is crucial; optimally,
teachers will collaborate among colleagues to do so. Figure 9.5 includes a
bank of ideas for proficiency scale descriptors. Note that the example
descriptors for a level 4 do not include exceeds, but words that demand a
deep understanding without going beyond the grade-level standard.

Figure 9.5: Language example for proficiency scale descriptors.


Visit go.SolutionTree.com/assessment for a free reproducible version of
this figure.

Figure 9.5 does not include descriptors for a 0 level, which many
schools and districts will include as a fifth level. This fifth level typically
indicates lack of evidence or insufficient evidence. Both communicate that
the teacher (and even the student via self-assessment) does not have enough
evidence to make a decision about proficiency. Insufficient evidence could
also be indicated as a 1 on a four-level scale. These descriptors can stand
alone for reporting or be paired with a number.
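To make the pairing of numbers and descriptors concrete, a five-level scale can be modeled as a simple mapping. This is a minimal sketch, and the descriptor wording here is hypothetical rather than the book's figure 9.5; note that level 0 signals insufficient evidence and level 4 avoids "exceeds" in favor of depth-of-understanding language.

```python
# Hypothetical five-level proficiency scale pairing each reporting
# number with a stand-alone descriptor (labels are illustrative,
# not the book's own descriptor bank).
PROFICIENCY_SCALE = {
    4: "In-depth: applies and extends the standard in new situations",
    3: "Proficient: demonstrates the grade-level standard",
    2: "Developing: demonstrates partial understanding of the standard",
    1: "Beginning: demonstrates initial understanding with support",
    0: "Insufficient evidence: not enough evidence to determine proficiency",
}

def report(level: int) -> str:
    """Pair the number with its descriptor for reporting."""
    return f"{level} - {PROFICIENCY_SCALE[level]}"

print(report(3))  # -> 3 - Proficient: demonstrates the grade-level standard
```

Either half of the pairing (the number or the descriptor) could stand alone on a report; keeping them together in one structure helps a school apply the language consistently.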
Regardless of whether the words or numbers will be used for reporting,
clear meaning and consistent application are paramount. Once developed
among teachers, a common understanding can be transferred to students and
parents. This proficiency scale language also guides rubric development,
with scale descriptors providing the overarching ideas for each proficiency
level.

Crafting Rubrics
For consistency, rubrics should contain the same number of levels as the
proficiency scale. Common sense dictates that having a different number of
levels between the scale and the rubric increases the potential for confusion
among students and parents.
Well-crafted rubrics, no matter the content area or skill, have some
common characteristics. They use language that is positive (stating what the
evidence should look like instead of focusing on missing elements or
characteristics) as well as student friendly. Some explanation may be
necessary, with examples of student work, but a rubric’s language should
not be a barrier to student understanding. When using the rubric, students
should be able to identify their current proficiency level and see what it
takes to move to the next level.
Teachers can generate two types of rubrics to use in the classroom.
Skill-specific rubrics can span multiple assignments, assessments, and units.
Teachers can most easily use them with standards that do not have specific
content tied to them. A language arts standard that requires students to cite
textual evidence can go with a rubric that the teacher uses each time he or
she addresses that standard. Other rubrics are specific to a particular content
area or unit of study. These provide a more detailed description of the
evidence that students produce with the content as a guide for the skill. An
example is a rubric that a teacher develops for a mathematics standard, such
as knowing the formulas for circumference and area of a circle and using
them to solve problems.
When creating a rubric, begin with the proficient level. Considering the
standard or standards and their essential verbs, describe what evidence
would look like to meet them. For some teachers, a rubric’s proficient level
is very similar to the standard itself. The next task is to work on the
surrounding levels (levels 4, 2, and 1 on a four-level scale). What would
evidence look like on a more complex level or if a student delved more
deeply? What would the evidence look like when a student is working
toward proficiency but has not yet achieved it? What would evidence look
like for a student in the beginning stages of learning with the standard?
The most difficult level for many teachers to describe on a rubric is the
most sophisticated level of proficiency (a level 4, for example). The
question “What is a 4?” looms large for some. There are a variety of ways
to write the description of a level 4 on a rubric. For example, examine student work on
a standard that is very high quality. Consider what characteristics of the
work make this so. Determine the level of sophistication given the cognitive
complexity and the students’ developmental stage. Another approach is to
illustrate how a student could transfer the knowledge and skill to a new
situation. An important consideration when writing the highest-level
descriptions on a rubric is that a larger quantity of work does not mean that
the student has shown a more advanced proficiency. Rather than upping the
number of requirements, change the demands or sophistication.
Figure 9.6 offers a rubric from a physical education class; it has four
proficiency levels and four standards. It is for the entire unit of study related
to dance that students will complete over multiple weeks.
Figure 9.6: Four-level rubric.
Visit go.SolutionTree.com/instruction for a free reproducible version of
this figure.

Figure 9.7 is from an ecosystems unit with one standard that the teacher
broke down into two skills. There are only three proficiency levels, and the
teacher has left the Excels level open for the first skill; the students are to
prove how they have created a model that moves beyond the proficient
level.
Source: © 2016 by K. Budrow. Source for standard: NGSS Lead States,
2013.
Figure 9.7: Three proficiency levels plus student participation.
Visit go.SolutionTree.com/assessment for a free reproducible version of
this figure.

One additional rubric example appears in figure 9.8. This skill-specific
rubric is for multiple assignments or assessments. It is not as detailed, but
when teachers use it multiple times throughout a school year, students will
become more familiar with the demands.

Figure 9.8: Skill-specific rubric.

Some digital rubric creation tools, such as ThemeSpark.net, and portals
such as the Literacy Design Collaborative (https://ldc.org), can be great
resources. Teachers can search for standards and automatically generate
rubrics. These rubrics are a great starting place with standards for which
teachers are struggling to develop language or for teachers who are newer
to the process. However, rubric generators are only a starting point. There
will likely be some language that makes sense to a teacher or team of
teachers and other language that does not align with the instruction.
Teachers must use their professional judgment to examine and alter these
rubrics if necessary to best meet the needs of assignments, assessments, and
standards.

Deciding When to Use Rubrics


As with anything, there is a time and place for rubrics in the classroom.
A rubric is not necessary for every assignment and assessment in a
standards-based classroom. Rubrics are most desirable when student
responses are scalable, meaning responses run from simple to sophisticated;
not every assessment lends itself to scalable responses. Teachers can easily
give feedback on student work in the formative process without a rubric.
Rubrics cannot replace a teacher’s personalized feedback, but they can
gauge progress with a standard.
When using a rubric with summative assessment, teachers should
inform students of it and grant them access to it from the unit’s beginning.
Success criteria should never be unclear to students, regardless of whether
there is a rubric for that particular assignment or assessment.

Supporting Inter-Rater Reliability


Both proficiency scales and rubrics, when teachers create them with
clarity and use them with fidelity, work to increase inter-rater reliability
within groups of teachers. A standards-based learning environment
inherently supports inter-rater reliability because of its focus on common
standards and targets. This can be enhanced by moving beyond the shared
understanding of standards to creating success criteria within scales and
rubrics. They hold students to the same challenging demands no matter the
class. Common grading scales can give a false impression of consistency,
because they can be used differently with no consideration of learning
evidence. It is the determination of common learning expectations and the
consistent interpretation of elicited evidence that significantly impact and
improve consistency among colleagues.
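One way to make consistent interpretation concrete is to quantify agreement after a calibration session. The following sketch (the ratings are hypothetical, not data from the book) computes percent agreement and Cohen's kappa for two teachers who scored the same ten work samples on a four-level scale; kappa adjusts raw agreement for the agreement expected by chance.

```python
# Quantifying inter-rater reliability for two raters on a shared
# set of work samples (hypothetical scores on a 1-4 scale).
from collections import Counter

teacher_a = [4, 3, 3, 2, 4, 1, 3, 2, 3, 4]
teacher_b = [4, 3, 2, 2, 4, 1, 3, 3, 3, 4]
n = len(teacher_a)

# Percent agreement: how often the two scores match exactly.
agreement = sum(a == b for a, b in zip(teacher_a, teacher_b)) / n

# Cohen's kappa: subtract the agreement expected by chance, given
# how often each rater used each level.
counts_a = Counter(teacher_a)
counts_b = Counter(teacher_b)
expected = sum(counts_a[lvl] * counts_b.get(lvl, 0) for lvl in counts_a) / n**2
kappa = (agreement - expected) / (1 - expected)

print(f"Agreement: {agreement:.0%}, kappa: {kappa:.2f}")
# -> Agreement: 80%, kappa: 0.71
```

A team might run a calculation like this after each calibration conversation and watch whether kappa rises as shared understanding of the rubric develops.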
Conversing with colleagues about evidence and where it falls on a
rubric or scale is one of the most important discussions teachers can have,
although making the time to do so is often easier said than done. Meeting
agendas fill with other items, and if teachers use the same rubric, they may
assume it will be automatically reliable. Criteria always hold an element of
interpretation, so the more teachers converse about their interpretations, the
more consistent their usage will be. When having this discussion, teachers
bring examples of student work and have everyone in the group use the
rubric to assess the work. The debriefing that follows builds consistency
with collective decisions about which evidence corresponds to which level
on a rubric or scale.
This dialogue serves multiple purposes. It creates a common
understanding of the rubric or scale and builds teachers’ confidence in their
ratings. One reason teachers reduce rubrics to quantifiable and compliance
requirements is a lack of confidence. It is much easier to count the number of words
or pages or make sure students choose the right formula than it is to make
and use a rubric that describes the language students are using in their
words or how students can effectively apply a formula to solve problems.
By working together, teachers can build this confidence by using rubrics
and scales based on quality.

Involving Students
Teachers should actively involve students with both proficiency scales
and rubrics. Both are created for students and teachers, so if a disconnect
exists, the scale or rubric has lost value. The rubric opens the discussion of
what is possible with the standard and makes plain the expectations,
increasing the opportunity to involve students from the outset.
Students can also help create rubrics. Doing so can increase their
understanding of the standards and help them consider what quality
evidence looks like. Teachers will still need to remain involved in the
process, as there will be times that the student knowledge base and
understanding are not developed enough to consider the variances in work
levels. Working together, students and the teacher can start the rubric and
then revise as student knowledge and understanding grow. Having the
students actively involved in writing rubrics will make self-assessment and
peer review opportunities more effective and efficient.

PERSONAL NARRATIVE

Katie Budrow
Sixth-Grade Science Teacher, Charles J. Caruso Middle School, Deerfield, Illinois
In my second year of teaching, one of my amazing mentors suggested that I look into
student-created assessment tools. Obviously, I had more than a few concerns as the
process got started. As it turns out, my fears were completely unfounded.
We started as a whole group, putting the standard at the top of a page and creating
boxes underneath for each level. Students developed the information in small groups,
reporting back to the whole class so we could capture it together. They debated over
details and fiddled with the wording. They argued over how to format the rubric,
suggesting that bullet points would be easier to read but sentences might be clearer.
They even asked if we could delete the lowest level, arguing that because they had the
opportunity to revise, nobody would end up at the bottom level anyway. They
questioned all kinds of things, like whether they should include neatness or if coloring
mattered. I would gently step in with a question and coach them back to the verbiage
of the standard when necessary, but for the most part, they ran the show. Eventually,
they would guide each other to reference the standard, and the healthy debate
continued.
After a lengthy conversation, we all came to a consensus. We had a good, solid rubric.
Not perfect, but good. We then repeated the process with the other three classes, and I
merged all those rubrics together to create a final one. To my surprise, all the classes
had a similar process and came up with similar results. I made a few small changes as
their four rubrics were bundled into one, but nothing major had to change. Their work
didn’t need it. When I presented it the next day, the rubric had more clarity and more
detail than anything I could have created alone.
However, what we didn’t know at the time was that the first rubric would
really just end up being a rough draft. That was arguably the best part—the rubric wasn’t something
static. It was a living document, and we could change it at any time. It was ours. Not
my rubric full of my expectations, not the students’ rubric filled with whatever they
wanted, but our rubric that we owned.

Talking to Learners
Students may think that both scales and rubrics are for teacher use only,
which means they may not pay attention to their impact on their own
learning. Teachers need to help learners change their perception by
presenting scales and rubrics as shared tools that support learning. Often,
how teachers grade or what teachers are looking for is opaque at best,
which leaves students guessing. It is important for students to know that
there is no gotcha with the success criteria—what is outlined is what is
expected. Teachers reassure students that transparency leading to their
success is the goal, and that this is a we venture rather than a teacher versus
learner one.
Learners will ask questions about how the levels and rubrics translate to
grades. Share that while proficiency levels function as a kind of grade,
their purpose is not to equate levels to a percentage-based score or to
contribute to accumulating points. Students must come to know
that a proficiency level is the description of evidence of learning in relation
to the standard. Although teachers may use them to determine a final grade,
students must know that their purpose is to describe what success with the
standards looks like.
Students need to know that scales and rubrics provide guidance
throughout the learning progression. If success criteria are unclear to
students, teachers must provide a means to better communicate them.
Whether it is showing quality work exemplars or changing the scale or
rubric to more student-friendly language, clarity is the priority. The path
through the different levels of understanding should be straightforward so
students can take ownership of their learning. Knowing
where their proficiencies lie and what the next level looks like motivates
learners to continue toward the goal.
Teachers who maintain a positive attitude when speaking about
proficiency scales and rubrics will transfer that to their students. Talk with
learners about the value of knowing success criteria from the beginning to
the end of a unit. When students see that a rubric guides them through
the complex demands of the standards, it builds trust, and they learn to
appreciate its value.
Figure 9.9 offers questions students might ask when it comes to
proficiency scales and rubrics and possible teacher responses.
Figure 9.9: Student questions and teacher responses about proficiency
scales and rubrics.

Talking to Parents
Once they have developed rubrics and scales, teachers share that
common understanding with parents. When talking to parents, it is helpful
to bring or show examples. A hands-on example is an effective way to
introduce what parents will potentially view as a new use for these tools.
Examples such as movie ratings and business ratings on websites like Yelp
(www.yelp.com) can show how scales are already present in their daily
lives.
Let parents know how they will find the scales and rubrics that teachers
will use—online, in a portfolio, or on paper. Sharing scales and rubrics
plays a huge role in facilitating effective parent involvement. Provide
parents with questions they can ask their children about their work and
where it currently falls. Sample questions follow.
• Can you show me what you are working on right now or tell me
about it?
• Tell me about the assignment’s requirements. What are the
assessment’s demands?
• Where would you place your work on the scale or rubric right now?
• What can you do to improve your proficiency and show it?
Once the teacher scores the evidence of learning, he or she can share it
with parents. This sharing can be facilitated by the student or teacher. For
example, a teacher can have students go home and talk with their parents
about their learning and proficiency levels, and to hold them accountable,
the parents can email the teacher with a quick summary of the discussion.
Not everyone will participate, but teachers can still set the expectation
that students go home and talk about their learning.
If teachers will report rubric and scale levels as grades, communicate
that process as well. Parents need to know that if their child claims not to
know why his or her score is at a certain level, they can counter that claim
and facilitate a productive conversation.

Getting Started With Proficiency Scales and Rubrics
While some teachers may have been using scales and rubrics for quite
some time, they may be newer tools for others. No matter where teachers
are beginning, following these steps will support them as they create or
revise their scales and rubrics.
Proficiency scales:
• Decide how many proficiency levels you will use on the scale.
• Write short descriptors for each level.
• Develop a common understanding of those descriptors with
colleagues.
• If you will use the scale for reporting to students and parents,
communicate the levels, their descriptors, and how you will use
them to all involved.
Rubrics:
• Determine which standard or standards you will assess; identify the
specific aspects of quality you will assess, especially when
developing rubrics for formative assessment.
• Decide how many levels you will use on the rubric.
• Craft rubric language that describes the evidence expected at the
different levels of proficiency (teacher created or teacher and
student created).
• Collaborate with colleagues to score student work on the rubric and
calibrate and increase inter-rater reliability.
• Use, reflect, and revise!

Questions for Learning Teams
Pose these questions during teacher team meetings, planning meetings,
or book study.
• What quote or passage encapsulates your biggest takeaway from this
chapter? What immediate action (large or small) will you take as a
result of this takeaway? Explain both to your team.
• What aspects of your current rubrics and scales do you see as areas
of strength? Which aspects might you need to strengthen?
• How do you clearly communicate your scales and levels to students
and parents? Is there anything you could or would do to enhance the
effectiveness of your process?
• Do you create and use your rubrics as individual teachers or as a
collective grade-level or content-area group? What led to that
decision? Is that an approach you should continue?
• How do you envision involving students in the creation and use of
rubrics and scales?
• How do scales and rubrics impact self-assessment and peer review
activities?
