
Teachers’ Use of Assessment Data to Inform Instruction: Lessons From the Past and Prospects for the Future
Amanda Datnow
University of California, San Diego

Lea Hubbard
University of California, San Diego

Background: Data use has been promoted as a panacea for instructional improvement.
However, the field lacks a detailed understanding of how teachers actually use assessment
data to inform instruction and the factors that shape this process.
Purpose: This article provides a review of literature on teachers’ use of assessment data to
inform instruction. We draw primarily on empirical studies of data use that have been pub-
lished in the past decade, most of which have been conducted as data-driven decision making
came into more widespread use. The article reviews research on the types of assessment data
teachers use to inform instruction, how teachers analyze data, and how their instruction is
impacted.
Research Design: Review of research.
Findings: In the current accountability context, benchmark assessment data predominate in
teachers’ work with data. Although teachers are often asked to analyze data in a consistent
way, agendas for data use, the nature of the assessments, and teacher beliefs all come into
play, leading to variability in how they use data. Instructional changes on the basis of data
often focus on struggling students, raising some equity concerns. The general absence of pro-
fessional development has hampered teachers’ efforts to use data, as well as their confidence
in doing so.
Conclusions: Given that interim benchmark assessment data predominate in teachers’ work
with data, we need to think more deeply about the content of those assessments, as well as
how we can create conditions for teachers to use assessment to inform instruction. This review
of research underscores the need for further research in this area, as well as teacher professional
development on how to translate assessment data into information that can inform instruc-
tional planning.

Teachers College Record Volume 117, 040302, April 2015, 26 pages


Copyright © by Teachers College, Columbia University
0161-4681

With the focus on high-stakes accountability, the last decade of educational reform has seen a rise in the promotion and use of data for in-
structional decision making. Data use has been seen as a panacea for
school improvement, and activities ranging from the examination of
results from state tests to formative assessment in classrooms have all
been put under the umbrella of data use (Kennedy, 2011). The data that
educators are drawing on are wide ranging as well, including data on
student achievement, student attendance and behavior, course enroll-
ment patterns, postsecondary success rates, and school climate, among
others (Bernhardt, 1998; Data Quality Campaign, 2011). Thus, when we
talk about or study data use, it is important to be clear about what data
are used, for what purposes, and by whom.
Our focus is on how teachers use assessment data to inform instruc-
tional decision making. Although we restrict our discussion to teach-
ers’ use of data from assessments, we acknowledge that assessment data
are only one form of data that teachers use to inform their instruction
(Mandinach & Gummer, 2013). Assessment data have the potential to
inform how teachers plan lessons, identify concepts for reteaching, and
differentiate instruction (Hamilton et al., 2009; Kerr, Marsh, Ikemoto,
Darilek, & Barney, 2006; Supovitz & Klein, 2003). Yet how teachers actu-
ally use assessment data to inform instructional practice and the factors
that shape their decision making remain puzzling (Coburn & Turner,
2011; Little, 2012), in part because there is relatively little research on
this topic. As a recent analysis of a collective body of research on data-
driven decision making noted, we are faced with “blunt understandings
of data use” (Moss, 2012, p. 224).
This article provides a review of literature on teachers’ use of data.
Most of the research we review was conducted in the United States dur-
ing the period of No Child Left Behind (NCLB) and, to a more limited
degree, in other countries. As such, the analysis provides an opportunity
to learn from what we know from recent history and to derive lessons for
the future. These lessons will be important as we move into the era of
the Common Core Standards in the United States and movements to
shift conceptions of teaching and learning across the globe. We draw pri-
marily on empirical studies of data use that have been published in the
past decade, most of which have been conducted as data-driven decision
making came into more widespread use. The majority of sources we cite
are refereed journal articles; we include a smaller number of empirical
research reports, books, and book chapters. Although the list of sources
we include is not exhaustive, we believe the included works highlight the
important themes in this literature.
The article reviews research on the kinds of assessment data teachers use to inform instruction, how teachers analyze data, and how their in-
struction is impacted. In our analysis, we identified the following three
patterns. First, in the current accountability context, benchmark assess-
ment data predominate in teachers’ work with data. Because teachers
have been asked to administer and analyze benchmark assessments, it
is not surprising that benchmark assessments are most associated with
data use. Second, although teachers are often asked to analyze data in a
consistent way, agendas for data use, the nature of the assessments, and
teacher beliefs all come into play, leading to variability in how they use
data. Third, instructional changes on the basis of data often focus on
struggling students, raising some equity concerns. We review some of
the factors that influence teachers’ use of data, arguing that leadership,
structural, and cultural supports are important. We also find that the
general absence of professional development has hampered teachers’
efforts to use data, as well as their confidence in doing so. We close with
a discussion of gaps in the literature, provide suggestions for further re-
search, and raise questions for the future of policy and practice.

The Predominance of Benchmark Assessment Data

Teachers have always relied on numerous forms of assessment data to guide their instruction (Young & Kim, 2010). Within the data use move-
ment, much of the focus has been on teachers’ use of benchmark assess-
ment data. Most districts engaged in data use have adopted or developed
these assessments in recent years and have asked teachers to analyze and
act upon the data from them (Datnow & Park, 2014; Hamilton et al.,
2009; U.S. Department of Education, 2010). However, the data that arise
from these assessments are limited, and thus teachers draw upon other
forms of assessment data as well when they plan for instruction, including
data from curriculum embedded assessments, teacher-generated assess-
ments, and other forms of assessment (Hamilton et al., 2009; Supovitz &
Klein, 2003; Wayman & Stringfield, 2006).
Interim benchmark assessments are defined as those that evaluate
student knowledge and skills in a limited time frame and can be eas-
ily aggregated across schools and classrooms (Perie, Marion, Gong,
& Wurtzel, 2007). The frequency of interim assessments, which are
typically administered three times a year or more, is intended to en-
able teachers to track current students’ progress toward standards
(Hamilton et al., 2009). Such assessments are formalized and designed
to give information to educators and policy makers (Andrade, Huff,
& Brooke, 2012). In the United States, many districts adopted these
assessments in the past decade to help track students’ achievement toward standards measured on state high-stakes accountability tests.
The results of interim assessments are typically made available in elec-
tronic formats, and score reports can often be generated that pres-
ent data in ways that are intended for classroom teachers’ use. For
example, reports often identify students who are meeting particular
standards, students who are on the cusp of doing so, and those who fall
significantly below the cut points for proficiency.
Interim assessments are distinctive from summative assessments,
such as the end-of-year state assessments that have accompanied NCLB
(Andrade et al., 2012). Interim assessments are also distinctive from
the ongoing minute-by-minute, day-by-day classroom assessments ad-
ministered by teachers in the course of teaching and learning activities,
which are considered to be formative assessments (Andrade et al., 2012;
Bulkley, Nabors Oláh, & Blanc, 2010). However, there is an assumption
that benchmark assessments will be used in a formative way (Young &
Kim, 2010). It is important to point out that there is a current debate
involving the definition of formative assessment as to whether it is an in-
strument or a process (Bennett, 2011). Bennett (2011) argued that this
is an oversimplification, and in fact formative assessment “might be best
conceived as neither a test nor a process, but some thoughtful integra-
tion of process and purposefully designed methodology or instrumen-
tation” (p. 7). He explained that the use of assessments in support of
learning is not limited to a certain kind of assessment (e.g., summative
or formative) but that more than one type of assessment could contrib-
ute to teachers’ judgments of student achievement. These distinctions
are useful because they help us consider what data teachers are using and
for what purposes.
Bulkley and colleagues (2010, p. 117) described interim assess-
ments as occupying an “unchartered middle ground” between forma-
tive assessments and summative assessments. They explained that the
multiple-choice or short-answer formats of many interim assessments
resemble state tests, but they have different goals. They are intend-
ed to help predict performance on end-of-year state tests and also to
provide information on students’ strengths and weaknesses. They are
used to examine how well students have mastered curriculum content
by a particular point in the year so that teachers can adjust instruc-
tion accordingly. For example, a teacher in Christman and colleagues’
(2009) study discussed how the benchmarks acted as “checkpoints”
that helped ascertain his progress with the district curriculum and
the students’ level of understanding (p. 23). In some cases, teachers
reported that the use of benchmark assessments in their districts led
them to also use the data from curriculum embedded assessments and teacher-generated assessments more frequently to inform their instruction (Datnow & Park, 2014).
The wide-scale adoption of interim assessments is supported by the
belief that they can contribute to the process of continual school im-
provement (Bulkley et al., 2010). As Bulkley et al. noted, whether this oc-
curs depends a great deal on how these data are actually used to inform
decision making at the classroom, school, and district levels. For ex-
ample, teachers in Nabors Oláh, Lawrence, and Riggan’s (2010) study
universally analyzed benchmark assessment data in math. The teachers
in the study were working in the School District of Philadelphia, which
had instituted interim assessments in Grades K–8 in reading and math.
Christman and colleagues’ (2009) survey of teachers in Philadelphia
also found widespread and frequent use of benchmark assessment data
by teachers, and the majority found them to be a useful source of infor-
mation on students’ strengths and weaknesses. There were some limita-
tions as well, however, as we will discuss.
Interim assessments are not necessarily used for formative purposes,
nor do they have the same evidence base as formative assessments in spite
of claims made by their developers (Bulkley et al., 2010; Perie, Marion,
& Gong, 2009; Shepard, 2010). Interim assessments do not occur within
the context of instruction as a short quiz might, for example. Because
they are provided in standardized formats, they also may not provide
sufficient information on “how students understand” (Christman et al.,
2009, p. 2). When assessments have not been specifically mapped to the
content, standards, or skills being taught in the classroom, teachers find
it difficult to use them to inform instruction (Cosner, 2011).
Shepard’s (2009) literature review also noted that teachers have strug-
gled to use interim assessment data to inform instruction. Drawing on
Perie and colleagues’ (2009) study, Shepard (2009) suggested that the
usefulness of interim assessments seems to “erode in practice” (p. 35).
If teachers do not know how to address students’ conceptual problems
identified by the data, and if they are unable to adjust their practice ac-
cordingly, then the information derived from the assessments is of little
use (Heritage, Kim, Vendlinski, & Herman, 2009). If we are to truly understand teachers’ use of data, Shepard (2009) argued, we need validity frameworks to evaluate assessment applications.
Although benchmark assessments are a feature of many teachers’ work
in the current accountability context, it is clear that teachers are ad-
ministering a wide array of assessments (Datnow & Park, 2014; Hoover
& Abrams, 2013). What is less clear is how their instructional planning
is informed by these sources of assessment data. Hoover and Abrams
(2013) conducted a large survey of teachers in Virginia to ascertain which kinds of assessment data they used and how they analyzed the
data. Teachers used data from a wide variety of assessments, including
teacher-generated assessments, departmental common assessments,
benchmark assessments, and norm-referenced assessments. Although
they administered assessments frequently, teachers did not analyze
data with nearly the same frequency. They also did not analyze the data
with much depth, focusing mainly on measures of central tendency.
Teachers seldom disaggregated data, a step that might have yielded a more fine-grained and useful analysis. The authors concluded, “Much of the
information that could be used to support learning and instructional
practice is left untapped” (p. 227).
Apart from the assessments themselves, the literature also reveals
that if teachers are going to use assessments in meaningful ways to im-
prove instruction, they will need new knowledge and skills (Mandinach
& Gummer, 2013). Teachers must have “the skills to analyze classroom
questions, test items, and performance assessment tasks to ascertain
the specific knowledge and thinking skills required for students to do
them,” among other things (Brookhart, 2011, p. 7). Teachers also need
to understand the purposes and uses of the range of available assess-
ment options and must be skilled in translating them into improved in-
structional strategies. These skills are particularly important as we move
toward new goals for teaching and learning. With the implementation
of the Common Core Standards in most U.S. states, some districts have
abandoned their former interim assessments and are now looking to
new ways to assess students. Organizations such as the Smarter Balanced
Consortium have designed new interim assessments linked to the stan-
dards. At the same time that assessments continue to be generated by
external organizations, Andrade et al. (2012) argued that interim assess-
ments that are created by teams of teachers and aligned to curricular
content at the school level could be effective in helping teachers share
practices. They noted that the utility of such assessments would be fur-
ther improved if students could be involved in analyzing the results and
making plans to address their own learning needs. Similarly, Schnellert,
Butler, and Higginson (2008) found that when teachers co-constructed as-
sessments, set context-specific goals themselves, and were engaged as
partners in the accountability system, more meaningful instructional
changes occurred.
Informal formative assessments in which students’ thinking is made
explicit in the course of instructional dialogue are another promising
avenue (Ruiz-Primo, 2011). Ruiz-Primo (2011) explained how students’
questions and responses become material for stimulating “assessment
conversations” in which teachers are able to collect information (orally or from written work) on students’ thinking and make instructional decisions accordingly. Based on their inferences of students’ understand-
ing, teachers are able to move students toward specific learning goals.
This embedded process of assessment allows teachers to give the kind of
feedback that addresses not only students’ cognitive demands but also
their motivational and affective needs.

Consistencies and Variations in Teachers’ Analysis and Use of Data

The process by which assessment data are used to inform instructional improvement is often described as involving numerous steps. These
steps include: (1) accessing and organizing data, (2) making sense of
data to identify problems and solutions, (3) trying the interventions, and
(4) assessing and modifying the interventions (Blanc et al., 2010). These
steps are often depicted in a circular fashion to represent this process
as an ongoing cycle. For example, the teachers in Nabors Oláh and col-
leagues’ (2010) study were all comfortable logging into the district’s data
management system to access results from benchmark assessment in
math. They engaged in a process of identifying weak points for the class
as a whole or for individual students. The teachers then went through a
process of validating the data to ensure that they accurately reflected stu-
dents’ understanding. They also considered the instructional context,
such as whether they had covered particular topics in depth and the dis-
trict’s pacing plan. In other words, the majority of teachers moved from analyzing the data to linking this analysis to curricular content, which
was the district’s expectation. This pattern is common in other studies as
well (e.g., Datnow & Park, 2014; Hamilton et al., 2009).
In spite of these consistent patterns, Nabors Oláh and colleagues’
(2010) study also highlighted the importance of delving deeper into
how teacher judgment and intuition play into the analysis of data to
understand the variations that exist among teachers. Few studies explore
how teachers analyze the data, much less how they enact instructional
changes on the basis of them. Nabors Oláh and colleagues explained
how teachers’ interpretations of data were influenced by their personal
“thresholds” for mastery of the subject matter. These thresholds were
influenced by their knowledge of the students and their beliefs about
teaching mathematics. As the authors noted, these thresholds served as
“a critical link between interpretation and action in the formative as-
sessment cycle” (p. 235). For the teachers in this study, the demonstra-
tion of mastery could fall anywhere between 60% and 80%, and it var-
ied depending on the student, the class composition, and the district’s curriculum and pacing schedule. That teachers took all these issues into
account, even in the presence of defined cut scores for mastery from the
district, suggests a rather complex process of data analysis that involved
much more than the assessment data on the page. Part of this process
was a diagnosis of student mistakes (i.e., was the mistake procedural, conceptual, or reflective of other cognitive weaknesses, and did external influences play a role?). By and large, the diagnosis focused on pro-
cedural mistakes because the items on the assessments did not allow for
much diagnosis of conceptual misunderstandings.
As we alluded to earlier, teachers’ use of data is clearly influenced by
the nature of the assessment. Davidson and Frohbieter (2011) found
that multiple-choice assessments motivated actions that often resulted in
class placement decisions. In contrast, assessments that required more
constructivist responses generated dialogue and collaboration among
teachers and created opportunities for shared understandings of assess-
ment purposes and use. As Christman et al. (2009) explained, because
questions on the Philadelphia district’s interim assessments were not
open ended, they failed to provide information on why student confu-
sion might exist or reveal the students’ missteps in solving a problem.
As they argued, “If the items operate only at the lower levels of cogni-
tion (e.g., knowledge and comprehension), and do not tap into analyti-
cal thinking, they are not good tests of conceptual proficiency” (p. 29).
Teachers then struggled to make use of the data that emerged as they
planned for instruction.
Cosner’s (2011) qualitative study of teacher teams’ use of literacy as-
sessment data also “suggests that assessments that have not been specifi-
cally mapped to content-knowledge, skills or standards may prove more
challenging for the generation of student learning knowledge by teach-
ers” (p. 582). In the first year of a data use effort, teachers in Cosner’s
study focused on identifying broad patterns of achievement in their
classes rather than student learning needs. By the third year of their
work with data, several teacher teams made connections between assess-
ment results and the skills and content they had taught; however, consid-
erations of past instructional efficacy were slow to develop.
How closely data are connected to the classroom affects teachers’
ability to make sense of them in ways that are useful for their practice.
Teachers in Kerr and colleagues’ (2006) study found reviewing student
work to be more useful in guiding instruction than the results from
benchmark assessments or state tests, though they drew on all of them
to some extent. Classroom assessments and reviews of student work were
deemed more meaningful and valid. Teachers viewed state tests as not
very useful because they came too late in the year and were judged to be less relevant to classroom practice. This is not surprising, given findings from other studies showing that the data from large-scale assessments
may be useful for school and system planning, but they are less useful at
the teacher or student level (Rogosa, 2005; Supovitz, 2009). Similarly, in
Schildkamp and Kuiper’s (2010) qualitative study of Dutch teachers and
school leaders, teachers focused on data that would help them meet the
needs of learners (e.g., data from examinations they administered) rath-
er than the broader school-level assessment data that preoccupied
the leaders. These data yielded more helpful information for making
instructional decisions. Data that compared schools with national results
were of less interest to teachers.
Explaining variations in teachers’ data use and the types of data they
draw on calls for a focus on the interactions among educators and their
responses to the situations they are in. These processes are driven by
how teachers and districts frame the purpose of data, as well as the lim-
its of the assessments themselves. Shepard, Davidson, and Bowman’s
(2011) study of middle school teachers in six districts found that teach-
ers believed districts implemented benchmark assessments for two pri-
mary reasons: to provide data on mastery toward the standards and to
prepare for state tests. It is therefore not surprising that teachers’ ex-
amination of benchmark assessment data focused primarily on mastery
of the standards and on test-taking and procedural insights, and much less
on substantive insights. In a study of high school teachers in two schools,
Park (2008) found that teachers’ perceptions and data use were tightly connected. Although most teachers perceived
data use as an important and necessary tool for improving classroom
practice, the majority of teachers either used data to address account-
ability demands or viewed data-driven decision making as a bureaucratic
task to be completed.
In a recent cross-case comparison study of two elementary schools,
Moriarty (2013) found that “teachers who had more autonomy . . .
felt more ownership in their work and had more capacity to engage
in data-driven practices” (p. 161). Although both schools were located
in the same district, principal leadership accounted for some teachers
having less power in making instructional decisions and using the type
of data they felt would inform their work. The result was that teachers
felt less ownership and were less likely to view data as valuable in in-
forming their practice.
The broader policy environment also influences teachers’ sense mak-
ing about data use (Spillane, 2012). Teachers in Blanc and colleagues’
(2010) study often centered on what was needed for the school to meet
Adequate Yearly Progress (AYP) as part of No Child Left Behind. The belief was that the benchmark assessment data were predictive of stu-
dent performance on the annual high-stakes assessment. Teachers fo-
cused data use discussions on test-taking strategies, underscoring the
link between the interim assessments and the state assessments. However,
as Christman and colleagues’ (2009) study noted, the benchmarks in
Philadelphia were not intended to be predictive, and that practitioners
erroneously viewed them this way proved to be a distraction from their
intended use for strengthening instructional capacity.
Challenges also arose for some teachers in Heppen and colleagues’
(2012) study designed to examine teachers’ use of interim assessment
data. Some teachers complained that there was a misalignment among
district pacing guides, curriculum and state assessments, and their in-
structional programs. Teachers were also concerned about the validity
of interim assessment data and were suspicious about the intent and ex-
pectations of the district in using interim assessment data, particularly
as it related to accountability. They preferred instead to rely on quizzes
and unit tests. A lack of communication about district goals for the use
of interim assessments exacerbated the problems, causing some teach-
ers to adopt practices that the district did not support. For example, cut
scores were used for high-stakes promotion and placement decisions,
which was not intended.
Jennings’s (2012) review of literature also draws our attention to the
ways in which accountability models may shape the use of data to inform
instruction. In a status or proficiency model, there is an incentive for
teachers to “move as many students over the cut score as possible but
need not attend to the average growth in their class” (Jennings, 2012,
p. 11). In the case of a growth model, there is an incentive for teachers
to focus on students who they believe have the greatest potential for
growth. This strategy obviously presents consequences for students who
are not the recipients of the interventions.
In sum, there are both consistencies and variations in how teachers
use data. Teachers have been asked to engage in a fairly common pro-
cess of analyzing data to inform instruction. It seems that many teach-
ers are familiar with and have engaged in this process designed to in-
form continuous improvement. However, when we dig deeper, we see
some difficulties along the way. A consistency we find in the literature
is that benchmark assessments that include only closed-ended response
items, such as multiple-choice questions, are limited in informing teach-
ers about students’ conceptual understanding. There is also significant
variability in how teachers engage in the process of data use, depend-
ing on the context for data use, teachers’ views of the purpose of the
assessment, and their beliefs about its utility. In some cases, the results from benchmark assessments are being used in unintended ways. These dynamics, not surprisingly, have consequences for the instructional
changes we see.

Instructional Changes Resulting from Teachers’ Use of Data

In spite of the enthusiasm in the policy world for data-driven instruction, only a small number of studies speak to the instructional changes that
teachers make as a result of their analysis of assessment data (e.g., Blanc
et al., 2010; Christman et al., 2009; Cosner, 2011; Davidson & Frohbieter,
2011; Datnow & Park, 2014; Hoover & Abrams, 2013; Nabors Oláh et al.,
2010; Pierce & Chick, 2011). These studies provide some information
on the types of instructional adjustments teachers make, their teaching
strategies, and when such activities were implemented. These studies pri-
marily report on what teachers report doing, rather than documenting
what teachers actually do by examining classroom practice in depth.
Hoover and Abrams (2013) argued that although the teachers they
surveyed did not analyze data frequently or with the depth required to
obtain the full benefits, most of the teachers in the study reported mak-
ing instructional changes on the basis of assessment data. A total of 96%
of teachers reported differentiating instruction for remediation, 94%
reported reteaching, and 92% changed the pace of future instruction.
At the same time, 64% of teachers said that the pacing prevented re-
teaching, which appears inconsistent with the fact that most of them re-
ported reteaching. More information is needed on the degree to which
instructional planning is informed by assessment data and what kinds of
instructional changes result.
Numerous qualitative studies give some insight into teachers’ in-
structional decision making on the basis of data. In theory, teachers’
instructional planning would be aimed at improving student learning.
Some studies have found that teachers’ joint instructional planning
centered less on how to address students’ conceptual understanding
and more on how to motivate students to improve testing performance
(Blanc et al., 2010; Nabors Oláh et al., 2010). A common pattern is to
devote instructional planning time to identifying students of concern
and planning remediation or review (Blanc et al., 2010; Cosner, 2011;
Christman et al., 2009; Nabors Oláh et al., 2010; Shepard et al., 2011).
For example, teachers in Blanc et al.’s (2010) study spent some of their
benchmark assessment data analysis time focused on what they called
“strategic sensemaking,” which involved focusing on students who
were on the bubble or just below the target of performance (p. 212). Christman and colleagues’ (2009) study corroborated this finding, noting that identifying students on the bubble was a uniform practice in
data discussions, and interventions were planned accordingly. As one
teacher noted, “The teachers put stars next to those kids they were
going to target. And we made sure those kids had interventions, from
Saturday school to extended day, to Read 180. And then we followed
their benchmark data” (p. 50).
Christman et al. (2009) found that the designated reteaching weeks in
Philadelphia were helpful in providing time set aside to address issues
arising from the benchmark assessment data. During the designated re-
teaching weeks, teachers in Nabors Oláh and colleagues’ (2010) study
targeted instructional changes in two main areas: students who were low-
est performing and content areas that proved challenging for a large
number of students. Reteaching happened either in small- or whole-
group instructional settings, rarely 1:1, and usually involved presenting
material in a different way, such as through visual aids or manipulatives.
High-scoring students received less direct instruction during instruc-
tional time allocated for reteaching. Some teachers also admitted that
they did not actually use the reteaching time for addressing instructional
needs arising from the assessments. Rather, they used the time to catch
up with the district pacing guide.
Administrators in Philadelphia imagined that after the reteaching
week, teachers would again check for mastery using their own assess-
ments (Christman et al., 2009). Although “assessing the results of re-
teaching is an essential part of determining whether interventions have
been successful,” this seldom occurred, representing a fundamental dis-
juncture in the interim assessment process (Christman et al., 2009, p.
29). If teachers did not assess whether the instructional changes were
positively influencing student achievement, then the process of continu-
ous improvement was compromised.
Pierce and Chick’s (2011) study also reported limited instructional
changes on the basis of data. The authors examined teachers’ use of
data to inform instruction in Australia, which administers national tests
in numeracy and literacy and whose government also promotes data use.
Pierce and Chick’s survey found that 61% of teachers reported that they
had not made changes to their teaching plans as a result of using data.
Thirty-nine percent of teachers had made changes, mostly modifications
for weaker students but also for some stronger students. For example,
in addressing the needs of weaker students, one teacher described the
instructional change he/she made as a result of examining the data:
“Realized remedial Year 7 group was weaker than originally thought.
Brought in much more concrete tasks” (p. 445). The survey allowed for only brief details of instructional changes; thus, more information is needed to gain a more comprehensive understanding of changes in
teacher practice.
A small body of research speaks to the intersection of data use and
grouping students for instruction. Although no studies were found to
have a specific focus on this, some findings have emerged in broad-
er studies. Teachers in Shepard et al.’s (2011) study frequently used
grouping to differentiate instruction. Teachers in Datnow and Park’s
(2014) qualitative study used weekly assessments to group “students of
concern” for reteaching in math and English during a set-aside time.
These groupings were flexible and changed weekly. Teachers in Nabors
Oláh and colleagues’ study (2010) also used pull-out, small-group in-
struction in math during a designated reteaching week and before
school. In other cases, teachers used data to arrange heterogeneous
grouping and peer teaching.
Hoover and Abrams’s (2013) study also reported that teachers re-
grouped for instruction on the basis of data: “Elementary (97%) teach-
ers were more likely to report using data to make changes to student
groups than were middle (81%) or high school (66%) teachers, albeit
large percentages of teachers in all three groups did indicate using
assessment data for this purpose” (p. 226). However, because this was
a self-report survey of teachers, we lack information on how teachers
used data to make these decisions and on whether these groupings
were fixed or dynamic.
As noted earlier, Davidson and Frohbieter (2011) found that when
districts administered interim assessments that included only multiple-choice data, this had the unintended consequence of leading to data use for tracking decisions, grouping, or placing stu-
dents into various classes. Shepard et al.’s (2011) related study found
similar patterns of the use of test scores to determine placements, as
did Heppen et al.’s (2012). These instructional trends about the over-
lap between assessment use and ability grouping are noteworthy in
light of a recent report that found that ability grouping in the upper
elementary grades in reading and math grew considerably in the past
20 years (Loveless, 2013). The percentage of fourth-grade teachers re-
porting ability grouping in reading grew from 28% in 1998 to 71%
in 2009, and ability grouping in math grew from 40% to 61% during
a similar period. Loveless (2013) posited that this significant rise in
ability grouping may be a consequence of NCLB and the emphasis on
data because they “focus educators’ attention on students below the
threshold for ‘proficiency’ on state tests” and provide justification for
grouping struggling students (p. 20). It is unclear what exactly teachers mean by ability grouping, how it is arranged in the classroom, for how long, and for what purposes. It is also unclear how exactly accountabil-
ity data, in the form of state, district, and schoolwide assessments, inform
and shape this instructional decision-making process.

Factors that Influence Teachers’ Use of Data

As the preceding discussion suggests, teachers are embedded in multiple organizational contexts that influence, direct, and guide how they con-
ceptualize and implement data use for instructional decision making.
Compared with the areas already discussed, a great deal has been written
about the factors that influence teachers’ data use, including the role of
leadership and the structures and cultures that support the use of data
in schools and districts. Thus, we restrict our discussion here to the most
critical topics that arose in the studies we reviewed. The studies report
on both facilitators of teachers’ use of data as well as inhibitors.

School Leadership

School principals and teacher leaders are key players in facilitating data
use among teachers (Blanc et al., 2010; Cosner, 2011; Earl & Katz, 2006;
Halverson, Grigg, Pritchett, & Thomas, 2007; Ikemoto & Marsh, 2007;
Mandinach & Honey, 2008; Marsh, 2012; Park, Daly, & Guerra, 2012) be-
cause they help to set the tone for data use among teachers and provide
support at the school level. For example, principals in Halverson and
colleagues’ (2007) study adapted policies and practices to structure so-
cial interaction and professional discourse on data use in their schools.
District leaders also play a critical role in framing the purpose of data
use and setting the direction for data use practices (Park et al., 2012).
At both levels, a productive role for the leader is to guide staff in using
data in thoughtful ways that inform action rather than promoting the
idea that data in and of themselves drive action (Datnow & Park, 2014;
Knapp, Copland, & Swinnerton, 2007).
In actuality, there is a range of ways in which leaders scaffold data
use, some more generative of continuous improvement than others.
Some leaders promote an accountability-focused culture in which
data are used in a short time frame to identify problems and moni-
tor compliance (Firestone & González, 2007). In such environments,
sanctions and remediation are chief tactics, and increased test scores
are the primary outcome of improvement efforts. In contrast, when
the culture of a district supports organizational learning, data use is
more conducive to educational improvement (Firestone & González,
2007; Wayman & Cho, 2008). Without a fear of punishment, educators working in cultures focused on organizational learning can go beyond simply identifying a problem and work to understand the nature of the
problem.
Diamond and Cooper’s (2007) study of Chicago principals and teach-
ers also found that data use among educators depended on a school’s
relationship with the “accountability regime” (p. 242). Elementary
schools placed on probation and trying to avoid sanctions were less in-
tent on transforming the whole school. Under the pressure to perform,
principals in Diamond and Cooper’s study used standardized test score
data to identify individuals who were most likely to show the quick-
est gains. This leadership focus on students on the bubble (Booher-
Jennings, 2005) is consistent with the teachers’ focus we discussed
earlier. Whereas low-performing schools adopted this approach, high-
achieving schools adopted an orientation to data use that focused on
improving instruction for all students by changing pedagogy and in-
creasing instructional rigor. These schools took a more holistic look
at what the data told them, recognizing the importance of address-
ing academic weaknesses schoolwide. The majority of the students in
schools under sanction were low-income and African American stu-
dents, whereas students in the high-achieving schools were predomi-
nantly White. Profiles of data use across schools with different account-
ability cultures illustrate the kind of consequences that can influence
students. Depending on the purpose and the context, data use strategies can promote improved achievement or disadvantage groups of students and perpetuate inequality. Leadership is key in orienting data
use in ways that disrupt rather than reinforce patterns of inequity.

Organizational Contexts

Research also highlights how classroom, grade-level, and school organizational contexts shape data use in important ways. Providing struc-
tured time for collaboration is one of the most common ways that dis-
tricts and schools attempt to support teachers’ use of data (Honig &
Venkateswaran, 2012; Lachat & Smith, 2007; Marsh, 2012; Means, Padilla,
& Gallagher, 2010). Strong instructional communities in which to ana-
lyze data can assist teachers in using data in productive ways (Blanc et
al., 2010; Cosner, 2011; Datnow, Park, & Kennedy-Lewis, 2013; White &
Anderson, 2011). For example, White and Anderson’s (2011) study of
Australian math teachers found that when professional learning oppor-
tunities were arranged so that teachers could dialogue around data and strategize pedagogy, teachers’ instruction improved, and student achieve-
ment improved as well.
However, studies also have found that grade-level agendas, norms, and the level of expertise in the group all play into teacher collabora-
tion around data use (Horn & Little, 2010; Young, 2006). Consequently,
teacher teams with limited expertise can misinterpret or misuse data,
or they can work together to perpetuate poor classroom practice (Daly,
2012). Various tools for supporting teachers’ use of data, including dis-
trict protocols for analyzing data and reflecting on data, also assist in the
process of data use in some cases (Christman et al., 2009) but prove con-
straining in others (Datnow et al., 2013). The variance in the quality of
conversations can impact student learning, as Timperley (2009) found.
In her study of schools in New Zealand, conversations in which there was
a sense of urgency to address student progress and in which multiple
sources of data were brought to bear were more generative than those in
which the purpose of data use was not clearly defined (Timperley, 2009).
A school or district’s context for reform also shapes teachers’ use of
data. Hubbard, Datnow, and Pruyn (2013) found that requirements to
implement multiple other educational initiatives at the school created
challenges that hampered teachers’ ability and motivation to fully in-
tegrate data use into their daily practice. How and when teachers used
data were determined by the interaction of multiple factors, including a
broad set of policies and structures in place at the federal, district, and
school levels, as well as the capacity of the teachers to engage in data-
driven practice.

Teacher Capacity for Data Use

Several important factors related to data use exist at the level of teacher capacity. As we discussed briefly, teachers need a range of
knowledge to make sense of and use assessment data effectively. A na-
tional survey found that 43% of teachers surveyed received some train-
ing on how to analyze data from state and benchmark assessments,
though they did not find it adequate (Means, Padilla, DeBarger, &
Bakia, 2009). Most studies have found that teachers have had little pro-
fessional development to aid in their understanding of data or in their
instructional planning on the basis of data (Davidson & Frohbieter,
2011; Dunn, Airola, Lo, & Garrison, 2012; Kerr et al., 2006; Mandinach
& Gummer, 2013; Shepard et al., 2011; U.S. Department of Education,
2010; Wayman & Cho, 2008). Most teachers also have had little training
in assessment in general, in either their preservice or in-service years
(Young & Kim, 2010).
The lack of training limits teachers’ capacity to use data effectively.
For example, Davidson and Frohbieter (2011) found that a lack of teacher professional development and support contributed to a failed data use initiative even though district administrators pinned the fail-
ure on the teachers’ lack of interest in change. In their study of three
urban school districts, Kerr et al. (2006) reported that training for
teachers with regard to data analysis and interpretation was an impor-
tant factor in teachers’ ability to use data because capacity gaps were
highly visible. The more successful districts in Kerr and colleagues’
study were the ones that offered professional development to teachers
in data analysis.
Absent sufficient training, teachers lack confidence in their ability
to use data to improve instruction. In Pierce and Chick’s (2011) study,
teachers felt handicapped in making sense of the data they were pro-
vided. Teachers found the reports difficult to understand, and they
lacked confidence in dealing with statistical data. Dunn and colleagues’
(2012) survey of over 1,700 teachers found that teachers’ lack of effi-
cacy in using data and their anxiety about the process limited their
ability to use data effectively. An important finding that emerged in
this study was that teachers viewed the ability to analyze and interpret
data as distinct from the ability to use the data to inform instructional
practice. The authors noted that, just as we have found, there is scant
research information on the latter.

Teacher Beliefs

Data use is also shaped by teachers’ beliefs about assessment, teaching, and learning. Jimerson’s (2014) study of a school district in central Texas
revealed that teachers’ understanding of data and data use was specifi-
cally tied to their mental models, which vary along a number of dimen-
sions. Brown, Lake, and Matters’ (2011) multigroup analysis of teachers
in Australia found that although conceptions about assessment among
primary and lower secondary teachers were similar in that they were not
anti-assessment, their beliefs and uses of assessment were statistically dif-
ferent. “Primary teachers agreed more than secondary teachers that ‘as-
sessment improves teaching and learning,’ while the latter agreed more
that it ‘makes students accountable’” (p. 210). Differences were related
to beliefs that mediated policy and outcomes. Teachers also were will-
ing to take professional responsibility for improving school outcomes
“while rejecting the notion that assessment should focus on students” (p.
218). This rejection stemmed from concerns regarding “the quality and
usefulness of the assessment resources being used to make students and
schools accountable” (p. 218). Assessment was viewed as irrelevant if the
data were punitive for children or if the validity of the data was called into question. Teachers viewed poor-quality assessments as having unjust consequences for learners.
Along similar lines, Remesal’s (2011) qualitative study of 50 primary
and secondary mathematics teachers in Spain found that four different
belief categories built teachers’ conceptions about assessment: the influence of assessment on teaching, as separate from its influence on learning, and accountability, as separate from accreditation of achievement. These beliefs caused
teachers to identify assessment as either a positive change or a disruptive
measure. Teachers who held a pedagogical orientation toward assessment
(typically primary teachers) viewed assessment as a valuable tool to mon-
itor and support student learning, as an instrument for quality control,
and as a way to know if students had indeed learned and reflected on
their lessons. Conversely, high school teachers were more likely to hold a
societal conception, viewing assessment as a “benchmark, a reference point
for the establishment of minimum levels of expected performance” (p.
477). The majority of the teachers, however, reported mixed concep-
tions of assessment. According to the author, this variation in beliefs
indicates that the nature of assessment is complex: “all purposes of as-
sessment (for/of learning, accreditation/accountability) [are] part of
the whole system affecting the daily classroom practices [and they are in
a] continuous and inevitable tension” (p. 479).
Importantly, Remesal (2011) reminded us of the importance of con-
text in understanding teachers’ beliefs about assessment. She argued
that the higher incidence of societal conceptions among high school teachers was likely related to Spain’s new educational system: the external assessment policy demands that it placed on teachers (the move from a basic plan of education to compulsory secondary education) meant that students’ promotion decisions and career paths (college or vocational education) were heavily reliant on assessment data.
Increased pressure to report quantitative data caused teachers to view
assessment as an instrument of one-way communication with families
and students—a way to report on students’ progress but also a process
that was viewed as “disassociated from their teaching duties” (p. 477).
Teachers’ beliefs play out in other ways that affect data use. The ab-
sence of teacher buy-in was found to limit Dutch teachers’ use of data
in Schildkamp and Kuiper’s (2010) study. Lack of buy-in was associated
with teachers’ belief in an “external locus of control.” In other words,
according to one teacher, students’ achievement could be understood
by whether you had a year of “good students or not so good students”
(p. 488). Having data would thus not be helpful. So, too, some teachers
in Pierce and Chick’s (2011) study felt that the data reported to them
merely indicated what they already knew about students.
The myriad factors influencing teachers’ use of data remind us of the interdependency among the classroom, school, and district levels
(Hamilton et al., 2009). The general lack of capacity many teachers feel
regarding data use, as well as their beliefs about the utility of assessments,
figures strongly into their efforts to use data to inform instruction.

Conclusion

This review of research reveals numerous lessons from the past and for
the future. Given that interim benchmark assessment data predominate
in teachers’ work with data, we will need to think more deeply about the
content of those assessments as well as how we can create conditions for
teachers to use assessment to inform instruction. This is especially im-
portant given that we know that the assessments themselves, combined
with teachers’ beliefs and their levels of support, influence their deci-
sions about instruction. Up to now, instructional changes on the basis
of data use have focused primarily on struggling students, which raises con-
cerns about whether instructional differentiation is meeting the needs
of all students. Although we have a great deal of knowledge of the struc-
tural supports for data use, one striking pattern is that teachers
feel underprepared to use data effectively, which has undermined their
confidence and their efforts.
This review of research reveals gaps in significant areas. First, there is a
need for much more research on how teachers’ instruction is informed
by the use of data. Few studies have focused in depth on classroom in-
struction, and thus we know little about what data-informed instruc-
tion looks like, particularly with respect to instructional differentiation,
which is often a goal of data use efforts. Second, there is a need for
further research on the activity settings in which teachers use data to
inform instruction. Such research could focus on both the individual
ways in which teachers use data and how they work together in groups to
analyze and act on data.
Understanding data use for instructional decision making requires
close investigation into how educators engage with data so that we can
find out why it fosters positive outcomes in some places and not others
(Coburn & Turner, 2012). For example, studies have found that the use
of data from interim assessments has a positive impact on achievement in
some grades but not others (Konstantopoulos, Miller, & van der Ploeg,
2013) and in some content areas and not others (Carlson, Borman, &
Robinson, 2011). To understand the source of these variations, we need
to take a deep research dive into teacher practice. As Little (2012) ar-
gued, we need more studies that either “zoom in” on teachers’ daily and weekly activities around data or “zoom out” and focus on how data use fits within a larger context of teachers’ work.
As schools in most states shift to the Common Core Standards and with
the push for 21st-century learning in many countries, lessons from the
recent decade of data use prompt us to ask several important questions
and emphasize significant areas for additional study. Common Core
Standards demand that teachers engage in a new kind of teaching—one
that is focused on supporting students to analyze intricate arguments,
compare texts, synthesize complex information, explain problem solv-
ing, and dig more deeply than ever into the nature of evidence. What
counts as “data” will be more wide ranging.
We are now asking teachers not only to use data to inform decision
making but also to use more complex forms of data and to implement
new instructional strategies to respond to students’ needs. These ini-
tiatives will likely involve activities such as inquiry-based learning, col-
laboration, and discussion, which are not easily measured by traditional
assessments (Levin, Datnow, & Carrier, 2012). There are some promising
signs given that the new assessments are intended to provide a fuller
picture of student understanding than the prior ones that have been in
use in the NCLB era. As teachers turn to data to provide instructional
guidance, it will be important to investigate how these new assessments
facilitate teaching and learning.
Given the findings from the research reviewed here, it is essential that
teachers receive professional development on assessment and how to
translate assessment data into information that can inform instructional
planning. Teachers need to be provided with a rich understanding of
assessment. They also need support in accessing data, translating them,
and disaggregating them in ways that will support goals of equity and
excellence for all students (Hoover & Abrams, 2013; Marsh, 2012; Young
& Kim, 2010). Finally, teachers need support in planning instructional changes on the basis of data, particularly with respect to differentiation.
There have been calls for reform in teacher education to place a much
higher priority on developing these skills in teachers’ preservice years
(Mandinach & Gummer, 2013; Mandinach, Friedman, & Gummer, 2015,
this issue; Young & Kim, 2010). With respect to in-service training, we
need to know how to support teachers’ varied needs with respect to data
literacy preparation (Mandinach & Gummer, 2013). District and school
administrators are challenged to configure new ways of improving
communication with teachers and providing them with the skills and
knowledge to take advantage of the growing availability of technologies and
cutting-edge pedagogies that best support teaching and learning.
As students are asked to monitor their own progress and to design
learning strategies to boost their individual achievement, they too will
need to learn how to become reflective learners and gain the capacity to
examine data. A closer examination of how this process unfolds will be
critical. And finally, as past experience with data use has also shown, a
school’s accountability context has shaped what data are used, how they
are used, and for whom. When stakes are high, perverse incentives come
into play that may work against improving teaching and learning for all
students. There is no evidence that high-stakes accountability will
diminish, causing us to ask: Will new assessments generate more substantive
information about students’ needs and lead teachers to give greater
attention to scaffolding learning for all students? Enabling teachers to use
data in ways that improve instruction will require systemic support.

References
Andrade, H., Huff, K., & Brooke, G. (2012). Assessing learning: The students at the center series.
Retrieved from http://studentsatthecenter.org/topics/assessing-learning
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education:
Principles, Policy & Practice, 18(1), 5–25.
Bernhardt, V. (1998). Multiple measures. Invited Monograph No. 4. Oroville: California
Association for Supervision and Curriculum Development.
Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010). Learning
to learn from data: Benchmarks and instructional communities. Peabody Journal of
Education, 85(2), 205–225.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas
accountability system. American Educational Research Journal, 42(2), 231–268.
Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers.
Educational Measurement: Issues and Practice, 30(1), 3–12.
Brown, G. T., Lake, R., & Matters, G. (2011). Queensland teachers’ conceptions of assessment:
The impact of policy priorities on teacher attitudes. Teaching and Teacher Education, 27(1),
210–220.
Bulkley, K. E., Nabors Oláh, L. N., & Blanc, S. (2010). Introduction to the special issue on
benchmarks for success? Interim assessments as a strategy for educational improvement.
Peabody Journal of Education, 85(2), 115–124.
Carlson, D., Borman, G. D., & Robinson, M. (2011). A multi-state district-level cluster
randomized trial of the impact of data-driven reform on reading and mathematics
achievement. Educational Evaluation and Policy Analysis, 33, 378–398.
Christman, J. B., Neild, R. C., Bulkley, K., Blanc, S., Liu, R., Mitchell, C., & Travers, E. (2009).
Making the most of interim assessment data. Lessons from Philadelphia. Retrieved from http://
www.researchforaction.org/wp-content/uploads/publication-photos/41/Christman_J_
Making_the_Most_of_Interim_Assessment_Data.pdf
Coburn, C. E., & Turner, E. O. (2012). The practice of data use: An introduction. American
Journal of Education, 118(2), 99–111.
Cosner, S. (2011). Teacher learning, instructional considerations and principal
communication: Lessons from a longitudinal study of collaborative data use by teachers.
Educational Management Administration & Leadership, 39(5), 568–589.
Daly, A. J. (2012). Data, dyads, and dynamics: Exploring data use and social networks in
educational improvement. Teachers College Record, 114(11), 1–38.
Data Quality Campaign. (2011). Data: The missing piece to improving student achievement.
Washington, DC: Author. Retrieved from http://www.dataqualitycampaign.org/files/
dqc_ipdf.pdf
Datnow, A., & Park, V. (2014). Data-driven leadership. San Francisco, CA: Jossey-Bass.
Datnow, A., Park, V., & Kennedy-Lewis, B. (2013). Affordances and constraints in the context
of teacher collaboration for the purpose of data use. Journal of Educational Administration,
51(3), 341–362.
Davidson, K. L., & Frohbieter, G. (2011). District adoption and implementation of interim and
benchmark assessments (Report No. 806). Los Angeles, CA: National Center for Research
on Evaluation, Standards, and Student Testing (CRESST).
Diamond, J. B., & Cooper, K. (2007). The uses of testing data in urban elementary schools:
Some lessons from Chicago. National Society for the Study of Education Yearbook, 106(1),
241–263.
Dunn, K. E., Airola, D. T., Lo, W., & Garrison, M. (2012). What teachers think about what
they can do with data: Development and validation of the data-driven decision making
efficacy and anxiety inventory. Contemporary Educational Psychology, 38, 87–98.
Earl, L., & Katz, S. (2006). Leading schools in a data-rich world. Thousand Oaks, CA: Corwin
Press.
Firestone, W. A., & González, R. A. (2007). Culture and processes affecting data use in school
districts. In P. A. Moss (Ed.), Evidence and decision making: Yearbook of the National Society for
the Study of Education (pp. 132–154). Malden, MA: Blackwell.
Halverson, R., Grigg, J., Pritchett, R., & Thomas, C. (2007). The new instructional leadership:
Creating data-driven instructional systems in schools. Journal of School Leadership, 17(2),
159–193.
Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J., & Wayman, J. (2009).
IES Practice Guide: Using student achievement data to support instructional decision making
(NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and
Regional Assistance.
Heppen, J., Jones, W., Faria, A., Sawyer, K., Lewis, S., Horwitz, A., & Casserly, M. (2012). Using
data to improve instruction in the Great City Schools: Documenting current practice. Washington,
DC: American Institutes for Research and The Council of Great City Schools.
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A
seamless process in formative assessment? Educational Measurement: Issues and Practice,
28(3), 24–31.
Honig, M. I., & Venkateswaran, N. (2012). School–central office relationships in evidence
use: Understanding evidence use as a systems problem. American Journal of Education,
118(2), 199–222.
Hoover, N. R., & Abrams, L. M. (2013). Teachers’ instructional use of summative student
assessment data. Applied Measurement in Education, 26, 219–231.
Horn, I. S., & Little, J. W. (2010). Attending to problems of practice: Routines and resources
for professional learning in teachers’ workplace interactions. American Educational
Research Journal, 47(1), 181–217.
Hubbard, L., Datnow, A., & Pruyn, L. (2013). Multiple initiatives, multiple challenges: The
promise and pitfalls of implementing data. Studies in Educational Evaluation.
doi:10.1016/j.stueduc.2013.10.003
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the “data driven” mantra: Different
conceptions of data-driven decision making. National Society for the Study of Education
Yearbook, 106(1), 105–131.
Jennings, J. L. (2012). The effects of accountability system design on teachers’ use of test
score data. Teachers College Record, 114(11), 1–23.
Jimerson, J. B. (2014). Thinking about data: Exploring the development of mental models
for “data use” among teachers and school leaders. Studies in Educational Evaluation, 42,
5–14.
Kennedy, M. M. (2011). Data use by teachers: Productive improvement or panacea? (Working Paper
No. 19). East Lansing: Education Policy Center at Michigan State University.
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to
promote data use for instructional improvement: Actions, outcomes, and lessons from
three urban districts. American Journal of Education, 112(3), 496–520.
Knapp, M. S., Copland, M. A., & Swinnerton, J. A. (2007). Understanding the promise
and dynamics of data-informed leadership. Yearbook of the National Society for the Study of
Education, 106(1), 74–104.
Konstantopoulos, S., Miller, S. R., & van der Ploeg, A. (2013). The impact of Indiana’s system
of interim assessments on mathematics and reading achievement. Educational Evaluation
and Policy Analysis, 35(4), 481–499.
Lachat, M. A., & Smith, S. (2005). Practices that support data use in urban high schools
[Special issue on transforming data into knowledge: Applications of data-based decision
making to improve instructional practice]. Journal of Education for Students Placed at
Risk, 10(3), 333–339.
Levin, B., Datnow, A., & Carrier, N. (2012). Changing school district practices. Boston, MA:
Jobs for the Future. Retrieved from http://www.studentsatthecenter.org/papers/
changing-school-district-practices
Little, J. W. (2012). Understanding data use practices among teachers: The contribution of
micro-process studies. American Journal of Education, 118(2), 143–166.
Loveless, T. (2013). The 2013 Brown Center report on American education: How well are American
students learning? (Vol. 3, No. 2). Washington, DC: Brookings. Retrieved from http://
www.brookings.edu/2013-brown-center-report
Mandinach, E. B., Friedman, J. M., & Gummer, E. S. (2015). How can schools of education
help to build educators’ capacity to use data? A systemic view of the issue. Teachers College
Record, 117(4).
Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in
educator preparation. Educational Researcher, 42(1), 30–37.
Mandinach, E. B., & Honey, M. (Eds.). (2008). Data-driven school improvement: Linking data and
learning. New York, NY: Teachers College Press.
Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and
gaps. Teachers College Record, 114(11), 1–48.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision
making in schools—teacher access, supports and use. Washington, DC: U.S. Department of
Education, Office of Planning, Evaluation, and Policy Development.
Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level: From
accountability to instructional improvement. Washington, DC: U.S. Department of Education,
Office of Planning, Evaluation, and Policy Development.
Moriarty, T. (2013). Data-driven decision-making: Teachers’ use of data in the classroom (Doctoral
dissertation). Available from ProQuest Dissertations and Theses database. (UMI No.
3571708)
Moss, P. A. (2012). Exploring the macro-micro dynamic in data use practice. American Journal
of Education, 118(2), 223–232. doi:10.1086/663274
Nabors Oláh, L., Lawrence, N. R., & Riggan, M. (2010). Learning to learn from benchmark
assessment data: How teachers analyze results. Peabody Journal of Education, 85, 226–245.
Park, V. (2008). Beyond the numbers chase: How urban high school teachers make sense of data use
(Unpublished doctoral dissertation). University of Southern California, Los Angeles.
Park, V., Daly, A. J., & Guerra, A. W. (2012). Strategic framing: How leaders craft the
meaning of data use for equity and learning. Educational Policy, 27(4), 645–675.
doi:10.1177/0895904811429295
Perie, M., Marion, S., & Gong, B. (2009). Moving toward a comprehensive assessment system:
A framework for considering interim assessments. Educational Measurement: Issues and
Practice, 28(3), 5–13.
Perie, M., Marion, S., Gong, B., & Wurtzel, J. (2007). The role of interim assessments in a
comprehensive assessment system: A policy brief. Washington, DC: Aspen Institute.
Pierce, R., & Chick, H. (2011). Teachers’ intentions to use national literacy and numeracy
assessment data: A pilot study. The Australian Educational Researcher, 38(3), 433–477.
Remesal, A. (2011). Primary and secondary teachers’ conceptions of assessment: A qualitative
study. Teaching and Teacher Education, 27(2), 472–482.
Rogosa, D. (2005). Statistical misunderstandings of the properties of school scores and
school accountability. In J. L. Herman & E. H. Haertel (Eds.), Uses and misuses of data for
educational accountability and improvement. The 104th Yearbook of the National Society for the
Study of Education (pp. 147–174). Malden, MA: Blackwell.
Ruiz-Primo, M. A. (2011). Informal formative assessment: The role of instructional dialogues
in assessing students’ learning. Studies in Educational Evaluation, 37(1), 15–24.
Schildkamp, K., & Kuiper, W. (2010). Data-informed curriculum reform: Which data, what
purposes, and promoting and hindering factors. Teaching and Teacher Education, 26(3),
482–496.
Schnellert, L. M., Butler, D. L., & Higginson, S. K. (2008). Co-constructors of data, co-
constructors of meaning: Teacher professional development in an age of accountability.
Teaching and Teacher Education, 24(3), 725–750.
Shepard, L. A. (2009). Commentary: Evaluating the validity of formative and interim
assessment. Educational Measurement: Issues and Practice, 28(3), 32–37.
Shepard, L. A. (2010). What the marketplace has brought us: Item-by-item teaching with
little instructional insight. Peabody Journal of Education, 85(2), 246–257.
Shepard, L., Davidson, K., & Bowman, R. (2011). How middle school mathematics teachers use
interim and benchmark assessment data (CSE Technical Report). Los Angeles: University of
California, National Center for Research on Evaluation, Standards, and Student Testing
(CRESST).
Spillane, J. (2012). Data in practice: Conceptualizing the data-based decision-making
phenomena. American Journal of Education, 118, 113–141.
Supovitz, J. (2009). Can high stakes testing leverage educational improvement? Prospects
from the last decade of testing and accountability reform. Journal of Educational Change,
10(2&3), 211–227.
Supovitz, J. A., & Klein, V. (2003). Mapping a course for improved student learning: How innovative
schools systematically use student performance data to guide improvement. Philadelphia, PA:
Consortium for Policy Research in Education.
Timperley, H. (2009). Evidence-informed conversations making a difference to student
achievement. In L. Earl & H. Timperley (Eds.), Professional learning conversations:
Challenges in using evidence for improvement (pp. 69–79). New York, NY: Springer.
U.S. Department of Education, Office of Planning, Evaluation and Policy Development.
(2010). Teachers’ ability to use data to inform instruction: Challenges and supports. Washington,
DC: Author.
Wayman, J. C., & Cho, V. (2008). Preparing educators to effectively use student data systems.
In T. J. Kowalski & T. J. Lasley (Eds.), Handbook on data-based decision-making in education
(pp. 89–104). New York, NY: Routledge.
Wayman, J. C., & Stringfield, S. (2006). Data use for school improvement: School practices
and research perspectives. American Journal of Education, 112(4), 463–468.
White, P., & Anderson, J. (2011). Teachers’ use of national test data to focus numeracy
instruction. In J. Clark, B. Kissane, J. Mousley, T. Spencer, & S. Thornton (Eds.),
Mathematics: Traditions and [new] practices (pp. 777–785). Adelaide, Australia: AAMT and
MERGA.
Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms.
American Journal of Education, 112(4), 521–548.
Young, V. M., & Kim, D. H. (2010). Using assessments for instructional improvement: A
literature review. Education Policy Analysis Archives, 18(19). Retrieved from http://epaa.
asu.edu/ojs/article/view/809/852

AMANDA DATNOW is a professor in the Department of Education Studies
and associate dean of the Division of Social Sciences at the University of
California, San Diego. Her research focuses on educational reform,
particularly with regard to issues of equity and the professional lives of
educators. She is the author of Data-Driven Leadership (Jossey-Bass, 2014).
She is currently conducting a study of how teachers use data to inform
differentiated instruction.

LEA HUBBARD is a professor in the School of Leadership and Education
Sciences at the University of San Diego. Her work focuses on educational
reform and district leadership as well as educational inequities as they
exist across ethnicity, class, and gender. Working nationally and
internationally, she has coauthored several books and written articles on
data-driven decision making, the academic achievement of minority
students, gender and education, educational leadership, and school reform.