Teachers' Use of Assessment Data to Inform Instruction: Lessons From the Past and Prospects for the Future
Amanda Datnow
University of California, San Diego

Lea Hubbard
University of San Diego
Background: Data use has been promoted as a panacea for instructional improvement.
However, the field lacks a detailed understanding of how teachers actually use assessment
data to inform instruction and the factors that shape this process.
Purpose: This article provides a review of literature on teachers’ use of assessment data to
inform instruction. We draw primarily on empirical studies of data use that have been pub-
lished in the past decade, most of which have been conducted as data-driven decision making
came into more widespread use. The article reviews research on the types of assessment data
teachers use to inform instruction, how teachers analyze data, and how their instruction is
impacted.
Research Design: Review of research.
Findings: In the current accountability context, benchmark assessment data predominate in
teachers’ work with data. Although teachers are often asked to analyze data in a consistent
way, agendas for data use, the nature of the assessments, and teacher beliefs all come into
play, leading to variability in how they use data. Instructional changes on the basis of data
often focus on struggling students, raising some equity concerns. The general absence of pro-
fessional development has hampered teachers’ efforts to use data, as well as their confidence
in doing so.
Conclusions: Given that interim benchmark assessment data predominate in teachers’ work
with data, we need to think more deeply about the content of those assessments, as well as
how we can create conditions for teachers to use assessment to inform instruction. This review
of research underscores the need for further research in this area, as well as teacher professional development on how to translate assessment data into information that can inform instructional planning.
This article reviews research on the types of assessment data teachers use to inform instruction, how teachers analyze data, and how their instruction is impacted. In our analysis, we identified the following three
patterns. First, in the current accountability context, benchmark assess-
ment data predominate in teachers’ work with data. Because teachers
have been asked to administer and analyze benchmark assessments, it
is not surprising that benchmark assessments are most associated with
data use. Second, although teachers are often asked to analyze data in a
consistent way, agendas for data use, the nature of the assessments, and
teacher beliefs all come into play, leading to variability in how they use
data. Third, instructional changes on the basis of data often focus on
struggling students, raising some equity concerns. We review some of
the factors that influence teachers’ use of data, arguing that leadership,
structural, and cultural supports are important. We also find that the
general absence of professional development has hampered teachers’
efforts to use data, as well as their confidence in doing so. We close with a discussion of gaps in the literature, suggestions for further research, and questions for the future of policy and practice.
which kinds of assessment data they used and how they analyzed the
data. Teachers used data from a wide variety of assessments, including
teacher-generated assessments, departmental common assessments,
benchmark assessments, and norm-referenced assessments. Although
they administered assessments frequently, teachers did not analyze
data with nearly the same frequency. They also did not analyze the data
with much depth, focusing mainly on measures of central tendency.
Teachers seldom disaggregated data, a practice that might have produced a more fine-grained and useful analysis. The authors concluded, “Much of the
information that could be used to support learning and instructional
practice is left untapped” (p. 227).
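The analytic gap described here can be made concrete with a minimal sketch. The Python (pandas) example below contrasts a central-tendency summary with a disaggregated one; the students, subgroups, and scores are hypothetical, invented purely for illustration and not drawn from any study reviewed here.

```python
import pandas as pd

# Hypothetical benchmark results; all names and numbers are invented
# for illustration only.
scores = pd.DataFrame({
    "student": ["A", "B", "C", "D", "E", "F"],
    "subgroup": ["ELL", "ELL", "ELL", "non-ELL", "non-ELL", "non-ELL"],
    "score": [48, 55, 51, 82, 78, 90],
})

# A measure of central tendency: one number for the whole class.
print("Class mean:", round(scores["score"].mean(), 1))  # 67.3

# Disaggregating by subgroup surfaces a gap the class mean conceals.
print(scores.groupby("subgroup")["score"].agg(["mean", "min", "max"]))
```

On these invented scores, the single class mean of 67.3 conceals a roughly 32-point difference between subgroup means, precisely the kind of fine-grained information that remains untapped when analysis stops at central tendency.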
Apart from the assessments themselves, the literature also reveals
that if teachers are going to use assessments in meaningful ways to im-
prove instruction, they will need new knowledge and skills (Mandinach
& Gummer, 2013). Teachers must have “the skills to analyze classroom
questions, test items, and performance assessment tasks to ascertain
the specific knowledge and thinking skills required for students to do
them,” among other things (Brookhart, 2011, p. 7). Teachers also need
to understand the purposes and uses of the range of available assess-
ment options and must be skilled in translating them into improved in-
structional strategies. These skills are particularly important as we move
toward new goals for teaching and learning. With the implementation
of the Common Core Standards in most U.S. states, some districts have
abandoned their former interim assessments and are now looking to
new ways to assess students. Organizations such as the Smarter Balanced
Consortium have designed new interim assessments linked to the stan-
dards. At the same time that assessments continue to be generated by
external organizations, Andrade et al. (2012) argued that interim assess-
ments that are created by teams of teachers and aligned to curricular
content at the school level could be effective in helping teachers share
practices. They noted that the utility of such assessments would be fur-
ther improved if students could be involved in analyzing the results and
making plans to address their own learning needs. Similarly, Schnellert,
Butler, and Higginson (2008) found that when teachers co-constructed as-
sessments, set context-specific goals themselves, and were engaged as
partners in the accountability system, more meaningful instructional
changes occurred.
Informal formative assessments in which students’ thinking is made
explicit in the course of instructional dialogue are another promising
avenue (Ruiz-Primo, 2011). Ruiz-Primo (2011) explained how students’
questions and responses become material for stimulating “assessment
conversations” in which teachers are able to collect information (orally
curriculum and pacing schedule. That teachers took all these issues into
account, even in the presence of defined cut scores for mastery from the
district, suggests a rather complex process of data analysis that involved
much more than the assessment data on the page. Part of this process
was a diagnosis of student mistakes (i.e., was the mistake procedural, conceptual, or reflective of other cognitive weaknesses, and did it stem from external influences?). By and large, the diagnosis focused on pro-
cedural mistakes because the items on the assessments did not allow for
much diagnosis of conceptual misunderstandings.
As we alluded to earlier, teachers’ use of data is clearly influenced by
the nature of the assessment. Davidson and Frohbieter (2011) found
that multiple-choice assessments motivated actions that often resulted in
class placement decisions. In contrast, assessments that required more
constructivist responses generated dialogue and collaboration among
teachers and created opportunities for shared understandings of assess-
ment purposes and use. As Christman et al. (2009) explained, because
questions on the Philadelphia district’s interim assessments were not
open ended, they failed to provide information on why student confu-
sion might exist or reveal the students’ missteps in solving a problem.
As they argued, “If the items operate only at the lower levels of cogni-
tion (e.g., knowledge and comprehension), and do not tap into analyti-
cal thinking, they are not good tests of conceptual proficiency” (p. 29).
Teachers then struggled to make use of the data that emerged as they
planned for instruction.
Cosner’s (2011) qualitative study of teacher teams’ use of literacy as-
sessment data also “suggests that assessments that have not been specifi-
cally mapped to content-knowledge, skills or standards may prove more
challenging for the generation of student learning knowledge by teach-
ers” (p. 582). In the first year of a data use effort, teachers in Cosner’s
study focused on identifying broad patterns of achievement in their
classes rather than student learning needs. By the third year of their
work with data, several teacher teams made connections between assess-
ment results and the skills and content they had taught; however, consid-
erations of past instructional efficacy were slow to develop.
How closely data are connected to the classroom affects teachers’
ability to make sense of them in ways that are useful for their practice.
Teachers in Kerr and colleagues’ (2006) study found reviewing student
work to be more useful in guiding instruction than the results from
benchmark assessments or state tests, though they drew on all of them
to some extent. Classroom assessments and reviews of student work were
deemed more meaningful and valid. Teachers viewed state tests as not
very useful because they came too late in the year and were judged to be
belief was that the benchmark assessment data were predictive of stu-
dent performance on the annual high-stakes assessment. Teachers fo-
cused data use discussions on test-taking strategies, underscoring the
link between the interim assessments and the state assessments. However,
as Christman and colleagues’ (2009) study noted, the benchmarks in Philadelphia were not intended to be predictive, and practitioners’ erroneous view of them as predictive proved to be a distraction from their intended use for strengthening instructional capacity.
Challenges also arose for some teachers in Heppen and colleagues’
(2012) study designed to examine teachers’ use of interim assessment
data. Some teachers complained that there was a misalignment among
district pacing guides, curricula, state assessments, and their instructional programs. Teachers were also concerned about the validity
of interim assessment data and were suspicious about the intent and ex-
pectations of the district in using interim assessment data, particularly
as it related to accountability. They preferred instead to rely on quizzes
and unit tests. A lack of communication about district goals for the use
of interim assessments exacerbated the problems, causing some teach-
ers to adopt practices that the district did not support. For example, cut scores were used for high-stakes promotion and placement decisions, a use the district had not intended.
Jennings’s (2012) review of literature also draws our attention to the
ways in which accountability models may shape the use of data to inform
instruction. In a status or proficiency model, there is an incentive for
teachers to “move as many students over the cut score as possible but
need not attend to the average growth in their class” (Jennings, 2012,
p. 11). In the case of a growth model, there is an incentive for teachers
to focus on students who they believe have the greatest potential for
growth. This strategy obviously presents consequences for students who
are not the recipients of the interventions.
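The incentive difference between the two accountability models can be illustrated with a small numerical sketch, again in Python; the scores, class size, and cut score of 70 are hypothetical and chosen only to make the contrast visible.

```python
# Hypothetical pre/post scores for six students; the cut score is 70.
pre = [55, 62, 68, 69, 75, 90]
post = [56, 64, 71, 72, 76, 91]
CUT = 70

# Status/proficiency model: the share of students at or above the cut.
status_before = sum(s >= CUT for s in pre) / len(pre)
status_after = sum(s >= CUT for s in post) / len(post)

# Growth model: the average gain across all students.
avg_growth = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Proficiency rate: {status_before:.0%} -> {status_after:.0%}")  # 33% -> 67%
print(f"Average growth: {avg_growth:.1f} points")  # 1.8 points
```

In this contrived case, nudging the two students just below the cut score over it doubles the proficiency rate even though the average gain across the class is under two points, the “bubble student” incentive Jennings describes; a growth model instead rewards gains wherever they occur, shifting attention toward students believed to have the most room to grow.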
In sum, there are both consistencies and variations in how teachers
use data. Teachers have been asked to engage in a fairly common pro-
cess of analyzing data to inform instruction. It seems that many teach-
ers are familiar with and have engaged in this process designed to in-
form continuous improvement. However, when we dig deeper, we see
some difficulties along the way. A consistency we find in the literature
is that benchmark assessments that include only closed-ended response
items, such as multiple-choice questions, are limited in informing teach-
ers about students’ conceptual understanding. There is also significant
variability in how teachers engage in the process of data use, depend-
ing on the context for data use, teachers’ views of the purpose of the
assessment, and their beliefs about its utility. In some cases, the results
School Leadership
School principals and teacher leaders are key players in facilitating data
use among teachers (Blanc et al., 2010; Cosner, 2011; Earl & Katz, 2006;
Halverson, Grigg, Pritchett, & Thomas, 2007; Ikemoto & Marsh, 2007;
Mandinach & Honey, 2008; Marsh, 2012; Park, Daly, & Guerra, 2012) be-
cause they help to set the tone for data use among teachers and provide
support at the school level. For example, principals in Halverson and
colleagues’ (2007) study adapted policies and practices to structure so-
cial interaction and professional discourse on data use in their schools.
District leaders also play a critical role in framing the purpose of data
use and setting the direction for data use practices (Park et al., 2012).
At both levels, a productive role for the leader is to guide staff in using
data in thoughtful ways that inform action rather than promoting the
idea that data in and of themselves drive action (Datnow & Park, 2014;
Knapp, Copland, & Swinnerton, 2007).
In actuality, there is a range of ways in which leaders scaffold data
use, some more generative of continuous improvement than others.
Some leaders promote an accountability-focused culture in which
data are used in a short time frame to identify problems and moni-
tor compliance (Firestone & González, 2007). In such environments,
sanctions and remediation are chief tactics, and increased test scores
are the primary outcome of improvement efforts. In contrast, when
the culture of a district supports organizational learning, data use is
more conducive to educational improvement (Firestone & González,
2007; Wayman & Cho, 2008). Without a fear of punishment, educators
Organizational Contexts
Teacher Capacity
Several important factors related to data use exist at the level of teacher capacity. As we discussed briefly, teachers need a range of
knowledge to make sense of and use assessment data effectively. A na-
tional survey found that 43% of teachers surveyed received some train-
ing on how to analyze data from state and benchmark assessments,
though they did not find it adequate (Means, Padilla, DeBarger, &
Bakia, 2009). Most studies have found that teachers have had little pro-
fessional development to aid in their understanding of data or in their
instructional planning on the basis of data (Davidson & Frohbieter,
2011; Dunn, Airola, Lo, & Garrison, 2012; Kerr et al., 2006; Mandinach
& Gummer, 2013; Shepard et al., 2011; U.S. Department of Education,
2010; Wayman & Cho, 2008). Most teachers also have had little training
in assessment in general, in either their preservice or in-service years
(Young & Kim, 2010).
The lack of training limits teachers’ capacity to use data effectively.
For example, Davidson and Frohbieter (2011) found that a lack of
Teacher Beliefs
Conclusion
This review of research reveals numerous lessons from the past and for
the future. Given that interim benchmark assessment data predominate
in teachers’ work with data, we will need to think more deeply about the
content of those assessments as well as how we can create conditions for
teachers to use assessment to inform instruction. This is especially im-
portant given that we know that the assessments themselves, combined
with teachers’ beliefs and their levels of support, influence their deci-
sions about instruction. Up to now, instructional changes on the basis
of data use have focused primarily on struggling students, which raises con-
cerns about whether instructional differentiation is meeting the needs
of all students. Although we have a great deal of knowledge of the struc-
tural supports for data use, one striking pattern is that teachers
feel underprepared to use data effectively, which has undermined their
confidence and their efforts.
This review of research reveals gaps in significant areas. First, there is a
need for much more research on how teachers’ instruction is informed
by the use of data. Few studies have focused in depth on classroom in-
struction, and thus we know little about what data-informed instruc-
tion looks like, particularly with respect to instructional differentiation,
which is often a goal of data use efforts. Second, there is a need for
further research on the activity settings in which teachers use data to
inform instruction. Such research could focus on both the individual
ways in which teachers use data and how they work together in groups to
analyze and act on data.
Understanding data use for instructional decision making requires
close investigation into how educators engage with data so that we can
find out why data use fosters positive outcomes in some places and not others
(Coburn & Turner, 2012). For example, studies have found that the use
of data from interim assessments has a positive impact on achievement in
some grades but not others (Konstantopoulos, Miller, & van der Ploeg,
2013) and in some content areas and not others (Carlson, Borman, &
Robinson, 2011). To understand the source of these variations, we need
to take a deep research dive into teacher practice. As Little (2012) ar-
gued, we need more studies that either “zoom in” on teachers’ daily and
weekly activities around data or “zoom out” and focus on how data use fits within the larger context of teachers’ work.
As schools in most states shift to the Common Core Standards and with
the push for 21st-century learning in many countries, lessons from the
recent decade of data use prompt us to ask several important questions
and emphasize significant areas for additional study. Common Core
Standards demand that teachers engage in a new kind of teaching—one
that is focused on supporting students to analyze intricate arguments,
compare texts, synthesize complex information, explain problem solv-
ing, and dig more deeply than ever into the nature of evidence. What
counts as “data” will be more wide ranging.
We are now asking teachers not only to use data to inform decision
making but also to use more complex forms of data and to implement
new instructional strategies to respond to students’ needs. These ini-
tiatives will likely involve activities such as inquiry-based learning, col-
laboration, and discussion, which are not easily measured by traditional
assessments (Levin, Datnow, & Carrier, 2012). There are some promising
signs, given that the new assessments are intended to provide a fuller picture of student understanding than those used during the NCLB era. As teachers turn to data to provide instructional
guidance, it will be important to investigate how these new assessments
facilitate teaching and learning.
Given the findings from the research reviewed here, it is essential that
teachers receive professional development on assessment and how to
translate assessment data into information that can inform instructional
planning. Teachers need to be provided with a rich understanding of
assessment. They also need support in accessing data, translating them,
and disaggregating them in ways that will support goals of equity and
excellence for all students (Hoover & Abrams, 2013; Marsh, 2012; Young
& Kim, 2010). Finally, teachers need support in planning instructional changes on the basis of data, particularly with respect to differentiation.
There have been calls for reform in teacher education to place a much
higher priority on developing these skills in teachers’ preservice years
(Mandinach & Gummer, 2013; Mandinach, Friedman, & Gummer, 2015,
this issue; Young & Kim, 2010). With respect to in-service training, we
need to know how to support teachers’ varied needs with respect to data
literacy preparation (Mandinach & Gummer, 2013). District and school
administrators are challenged to configure new ways of improving com-
munication with teachers and providing them with the skills and knowl-
edge to take advantage of the growing availability of technologies and
cutting-edge pedagogies that best support teaching and learning.
As students are asked to monitor their own progress and to design
References
Andrade, H., Huff, K., & Brooke, G. (2012). Assessing learning: The students at the center series.
Retrieved from http://studentsatthecenter.org/topics/assessing-learning
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy, and Practice, 18(1), 5–25.
Bernhardt, V. (1998). Multiple measures (Invited Monograph No. 4). Oroville, CA: California Association for Supervision and Curriculum Development.
Blanc, S., Christman, J. B., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010). Learning
to learn from data: Benchmarks and instructional communities. Peabody Journal of
Education, 85(2), 205–225.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas
accountability system. American Educational Research Journal, 42(2), 231–268.
Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers.
Educational Measurement: Issues and Practice, 30(1), 3–12.
Brown, G. T., Lake, R., & Matters, G. (2011). Queensland teachers’ conceptions of assessment:
The impact of policy priorities on teacher attitudes. Teaching and Teacher Education, 27(1),
210–220.
Bulkley, K. E., Nabors Oláh, L. N., & Blanc, S. (2010). Introduction to the special issue on
benchmarks for success? Interim assessments as a strategy for educational improvement.
Peabody Journal of Education, 85(2), 115–124.
Carlson, D., Borman, G. D., & Robinson, M. (2011). A multi-state district-level cluster randomized trial of the impact of data-driven reform on reading and mathematics achievement. Educational Evaluation and Policy Analysis, 33, 378–398.
Christman, J. B., Neild, R. C., Bulkley, K., Blanc, S., Liu, R., Mitchell, C., & Travers, E. (2009). Making the most of interim assessment data: Lessons from Philadelphia. Retrieved from http://www.researchforaction.org/wp-content/uploads/publication-photos/41/Christman_J_Making_the_Most_of_Interim_Assessment_Data.pdf
Coburn, C. E., & Turner, E. O. (2012). The practice of data use: An introduction. American
Journal of Education, 118(2), 99–111.
Cosner, S. (2011). Teacher learning, instructional considerations and principal
communication: Lessons from a longitudinal study of collaborative data use by teachers.
Educational Management Administration & Leadership, 39(5), 568–589.
Daly, A. J. (2012). Data, dyads, and dynamics: Exploring data use and social networks in
educational improvement. Teachers College Record, 114(11), 1–38.
Data Quality Campaign. (2011). Data: The missing piece to improving student achievement. Washington, DC: Author. Retrieved from http://www.dataqualitycampaign.org/files/dqc_ipdf.pdf
Datnow, A., & Park, V. (2014). Data-driven leadership. San Francisco, CA: Jossey-Bass.
Datnow, A., Park, V., & Kennedy-Lewis, B. (2013). Affordances and constraints in the context
of teacher collaboration for the purpose of data use. Journal of Educational Administration,
51(3), 341–362.
Davidson, K. L., & Frohbieter, G. (2011). District adoption and implementation of interim and
benchmark assessments (Report No. 806). Los Angeles, CA: National Center for Research
on Evaluation, Standards, and Student Testing (CRESST).
Diamond, J. B., & Cooper, K. (2007). The uses of testing data in urban elementary schools:
Some lessons from Chicago. National Society for the Study of Education Yearbook, 106(1),
241–263.
Dunn, K. E., Airola, D. T., Lo, W., & Garrison, M. (2012). What teachers think about what
they can do with data: Development and validation of the data-driven decision making
efficacy and anxiety inventory. Contemporary Educational Psychology, 38, 87–98.
Earl, L., & Katz, S. (2006). Leading schools in a data-rich world. Thousand Oaks, CA: Corwin Press.
Firestone, W. A., & González, R. A. (2007). Culture and processes affecting data use in school
districts. In P. A. Moss (Ed.), Evidence and decision making: Yearbook of the National Society for
the Study of Education (pp. 132–154). Malden, MA: Blackwell.
Halverson, R., Grigg, J., Pritchett, R., & Thomas, C. (2007). The new instructional leadership:
Creating data-driven instructional systems in schools. Journal of School Leadership, 17(2),
159–193.
Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J., & Wayman, J. (2009).
IES Practice Guide: Using student achievement data to support instructional decision making
(NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and
Regional Assistance.
Heppen, J., Jones, W., Faria, A., Sawyer, K., Lewis, S., Horwitz, A., & Casserly, M. (2012). Using data to improve instruction in the Great City Schools: Documenting current practice. Washington, DC: American Institutes for Research and the Council of the Great City Schools.
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A
seamless process in formative assessment? Educational Measurement: Issues and Practice,
28(3), 24–31.
Honig, M. I., & Venkateswaran, N. (2012). School–central office relationships in evidence use: Understanding evidence use as a systems problem. American Journal of Education, 118(2), 199–222.
Hoover, N. R., & Abrams, L. M. (2013). Teachers’ instructional use of summative student
assessment data. Applied Measurement in Education, 26, 219–231.
Horn, I. S., & Little, J. W. (2010). Attending to problems of practice: Routines and resources
for professional learning in teachers’ workplace interactions. American Educational
Research Journal, 47(1), 181–217.
Hubbard, L., Datnow, A., & Pruyn, L. (2013). Multiple initiatives, multiple challenges: The promise and pitfalls of implementing data. Studies in Educational Evaluation. doi:10.1016/j.stueduc.2013.10.003
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the “data driven” mantra: Different
conceptions of data-driven decision making. National Society for the Study of Education
Yearbook, 106(1), 105–131.
Jennings, J. L. (2012). The effects of accountability system design on teachers’ use of test score data. Teachers College Record, 114(11), 1–23.
Jimerson, J. B. (2014). Thinking about data: Exploring the development of mental models
for “data use” among teachers and school leaders. Studies in Educational Evaluation, 42,
5–14.
Kennedy, M. M. (2011). Data use by teachers: Productive improvement or panacea? (Working Paper
No. 19). East Lansing: Education Policy Center at Michigan State University.
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to
promote data use for instructional improvement: Actions, outcomes, and lessons from
three urban districts. American Journal of Education, 112(3), 496–520.
Knapp, M. S., Copland, M. A., & Swinnerton, J. A. (2007). Understanding the promise and dynamics of data-informed leadership. Yearbook of the National Society for the Study of Education, 106(1), 74–104.
Konstantopoulos, S., Miller, S. R., & van der Ploeg, A. (2013). The impact of Indiana’s system
of interim assessments on mathematics and reading achievement. Educational Evaluation
and Policy Analysis, 35(4), 481–499.
Lachat, M. A., & Smith, S. (2005). Practices that support data use in urban high schools. Journal of Education for Students Placed at Risk, 10(3), 333–339. (Special issue on transforming data into knowledge: Applications of data-based decision making to improve instructional practice.)
Levin, B., Datnow, A., & Carrier, N. (2012). Changing school district practices. Boston, MA:
Jobs for the Future. Retrieved from http://www.studentsatthecenter.org/papers/
changing-school-district-practices
Little, J. W. (2012). Understanding data use practices among teachers: The contribution of
micro-process studies. American Journal of Education, 118(2), 143–166.
Loveless, T. (2013). The 2013 Brown Center report on American education: How well are American
students learning? (Vol. 3, No. 2). Washington, DC: Brookings. Retrieved from http://
www.brookings.edu/2013-brown-center-report
Mandinach, E. B., Friedman, J. M., & Gummer, E. S. (2015). How can schools of education
help to build educators’ capacity to use data? A systemic view of the issue. Teachers College
Record, 117(4).
Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in
educator preparation. Educational Researcher, 42(1), 30–37.
Mandinach, E. B., & Honey, M. (Eds.). (2008). Data driven school improvement: Linking data and
learning. New York, NY: Teachers College Press.
Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and
gaps. Teachers College Record, 114(11), 1–48.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision making in schools: Teacher access, supports and use. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.
Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level: From accountability to instructional improvement. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.
Moriarty, T. (2013). Data-driven decision-making: Teachers’ use of data in the classroom (Doctoral
dissertation). Available from ProQuest Dissertations and Theses database. (UMI No.
3571708)
Moss, P. A. (2012). Exploring the macro-micro dynamic in data use practice. American Journal
of Education, 118(2), 223–232. doi:10.1086/663274
Nabors Oláh, L., Lawrence, N. R., & Riggan, M. (2010). Learning to learn from benchmark assessment data: How teachers analyze results. Peabody Journal of Education, 85(2), 226–245.
Park, V. (2008). Beyond the numbers chase: How urban high school teachers make sense of data use
(Unpublished doctoral dissertation). University of Southern California, Los Angeles.
Park, V., Daly, A. J., & Guerra, A. W. (2012). Strategic framing: How leaders craft the
meaning of data use for equity and learning. Educational Policy, 27(4), 645–675.
doi:10.1177/0895904811429295
Perie, M., Marion, S., & Gong, B. (2009). Moving toward a comprehensive assessment system:
A framework for considering interim assessments. Educational Measurement: Issues and
Practice, 28(3), 5–13.
Perie, M., Marion, S., Gong, B., & Wurtzel, J. (2007). The role of interim assessments in a
comprehensive assessment system: A policy brief. Washington, DC: Aspen Institute.
Pierce, R., & Chick, H. (2011). Teachers’ intentions to use national literacy and numeracy assessment data: A pilot study. Australian Educational Researcher, 38(3), 433–477.