Rasch Model
Trevor G Bond, James Cook University
January 2010
For all items and persons in a data matrix, the Rasch model estimates how much of the
underlying latent trait is expressed in each person's ability and each item's difficulty,
locating both along a single logit (log odds unit) scale common to items and persons. The
total score (number of correct responses for person ability; number of persons responding
correctly for item difficulty) is the sufficient statistic for estimating Rasch measures;
that is, the total score contains all of the information in the data about that ability or
difficulty. Additionally, fit statistics serve as quality control mechanisms for determining
which test items may legitimately be added together to produce total scores; misfitting
items and persons should be put to one side for later consideration. Indeed, the
requirements of the Rasch model are often seen as the explicit statement of the conditions
implied by any technique that uses the total number correct as a summary statistic: a
statement of which performances should (and should not) be counted, and of how those counts
are transformed onto an interval measurement scale.
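A minimal sketch in Python of this count-to-logit idea: the toy data matrix is invented, and the simple log-odds conversion shown yields only provisional estimates of the kind used to seed iterative Rasch estimation, not a full calibration.

```python
import numpy as np

# Toy dichotomous data matrix: rows are persons, columns are items (1 = correct).
data = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 0, 0],
])
n_persons, n_items = data.shape

# Total scores: the sufficient statistics for ability and difficulty.
person_scores = data.sum(axis=1)  # N correct responses per person
item_scores = data.sum(axis=0)    # N persons correct per item

# Provisional logits via the log odds of the raw proportions; persons or items
# with perfect or zero scores have no finite estimate and would be excluded
# (the toy data contain none).
person_logits = np.log(person_scores / (n_items - person_scores))
item_logits = np.log((n_persons - item_scores) / item_scores)

print("person logits:", np.round(person_logits, 2))
print("item logits:  ", np.round(item_logits, 2))
```

Note that equal total scores map to equal logits regardless of which particular items were answered correctly: exactly the sense in which the total score is sufficient.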
The RM feature of parameter separation supports the model's goal of specific objectivity:
item estimates are calculated independently of the distribution of abilities in the person
sample, and person estimates independently of the distribution of difficulties in the item
sample. The consequence of specific objectivity is the requirement for invariant measures:
item difficulties should remain the same (within error) across all appropriate samples, and
person estimates should not vary according to the choice of items in a test. A lack of
invariance, revealed, say, as differential item functioning (DIF), should prompt diagnostic
consideration of the item and person performances involved.
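A minimal sketch of that invariance check, assuming a hypothetical split of the respondents into two groups; the random data, the provisional log-odds difficulties, and the 0.5-logit screening threshold (a common rule of thumb, ordinarily paired with a statistical test) are all illustrative.

```python
import numpy as np

def provisional_item_logits(data):
    """Provisional item difficulties (log odds of failure) for a 0/1 matrix,
    centred so that the two groups share a common origin."""
    n_persons = data.shape[0]
    successes = data.sum(axis=0)
    logits = np.log((n_persons - successes) / successes)
    return logits - logits.mean()

# Hypothetical data: two samples of 200 persons answering the same 5 items.
group_a = np.random.default_rng(0).integers(0, 2, size=(200, 5))
group_b = np.random.default_rng(1).integers(0, 2, size=(200, 5))

# Invariance requires each item's difficulty to agree (within error) across
# groups; large shifts are flagged for diagnostic consideration as DIF.
differences = provisional_item_logits(group_a) - provisional_item_logits(group_b)
for i, d in enumerate(differences):
    flag = "FLAG for review" if abs(d) > 0.5 else "ok"
    print(f"item {i}: difference = {d:+.2f} logits [{flag}]")
```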
In the RM for dichotomous data, the probability of a correct response is modeled as a
logistic function of the difference between person ability and item difficulty, each
expressed in logits (log odds units): higher-ability persons are more likely to succeed on
every item; every person is more likely to succeed on less difficult items; and the order of
the item difficulties remains the same for all persons. For polytomous data (Wright &
Masters, 1982), the Rating Scale Model (RSM) predominates in the analysis of Likert-style
data, while the Partial Credit Model (PCM) allows response options to vary across items. The
many-facets Rasch model (MFRM) provides for the estimation of additional facet(s), such as
rater severity, when judges score persons on items according to graded criteria (e.g., essay
marking, performance certification).
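Stated formally (following Rasch, 1960/1980), the dichotomous model gives the probability that person n, of ability \beta_n, succeeds on item i, of difficulty \delta_i, both in logits, as

$$ P(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}, $$

so that success is an even bet (probability .5) when ability matches difficulty, and each additional logit of ability multiplies the odds of success by e (about 2.72). In one common notation, the PCM generalizes this to m_i + 1 ordered categories through item-step difficulties \delta_{ik}:

$$ P(X_{ni} = x) = \frac{\exp \sum_{k=0}^{x} (\beta_n - \delta_{ik})}{\sum_{j=0}^{m_i} \exp \sum_{k=0}^{j} (\beta_n - \delta_{ik})}, \qquad \delta_{i0} \equiv 0, $$

with the RSM as the special case in which a common step structure \delta_{ik} = \delta_i + \tau_k is shared by all items, and the MFRM adding further linear terms on the logit scale (e.g., a rater severity \rho_j, so that \ln[P_{nij}/(1 - P_{nij})] = \beta_n - \delta_i - \rho_j for dichotomously scored judgments).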
References
Andrich, D. (1988). Rasch models for measurement. Newbury Park, CA: Sage.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests
(Expanded ed.). Chicago: University of Chicago Press.
Wright, B. D., & Masters, G. N. (1982). Rating scale analysis: Rasch measurement. Chicago:
MESA Press.
Key Words: Rasch model, measurement, invariance, probability, test, DIF, parameter, scores,
item response theory (IRT), scaling, unidimensional, rating scale, Likert scale, judge.