CAEP Rubric Assessment Response Form and Content Validity Protocol
INSTRUCTIONS: This measure is designed to evaluate the content validity of (insert title of assessment). Please rate each item as follows:
Rate the representativeness of each item in measuring the aligned overarching construct on a scale of 1-4, with 4 being the most representative.
Rate the importance of each item in measuring the aligned overarching construct on a scale of 1-4, with 4 being the most essential.
Rate the clarity of each item on a scale of 1-4, with 4 being the clearest.
Space is provided for you to comment on each item or suggest revisions.
Response form columns (this header row does not change; it is the same for all rubrics):

Overarching construct (i.e., "big idea to measure")

Operational Definition

Item measuring overarching construct (uses the exact wording as appears on the assessment rubric)

Representativeness of item in measuring the overarching construct:
1 = item is not representative
2 = item needs major revisions to be representative
3 = item needs minor revisions to be representative
4 = item is representative

Importance of item in measuring the construct:
1 = item is not necessary to measure the construct
2 = item provides some information but is not essential to measure the construct
3 = item is useful but not essential to measure the construct
4 = item is essential to measure the construct

Clarity of item:
1 = item is not clear
2 = item needs major revisions to be clear
3 = item needs minor revisions to be clear
4 = item is clear

Comments
Construct 1: (fill in the blank) – the construct "Content Knowledge" is used for this example.

(The following rows will change depending on your program rubric; these rows will be different for each rubric used.)

Overarching construct: Content Knowledge
Operational Definition: Knowledge about actual subject matter that is to be learned or taught

K2a: Demonstrates knowledge of content
Representativeness: 1 2 3 4   Importance: 1 2 3 4   Clarity: 1 2 3 4

K2b: Implements interdisciplinary approaches and multiple perspectives for teaching content
Representativeness: 1 2 3 4   Importance: 1 2 3 4   Clarity: 1 2 3 4

K2c: Demonstrates awareness of literacy instruction across all content areas
Representativeness: 1 2 3 4   Importance: 1 2 3 4   Clarity: 1 2 3 4

K2d: Makes content relevant for all learners
Representativeness: 1 2 3 4   Importance: 1 2 3 4   Clarity: 1 2 3 4
Three (3) open-ended response rows are inserted after each group of items aligned with an identified overarching construct:

1. To the reviewer: What additional items would you recommend including to measure the construct? If you have no suggestions, please enter "none."

2. To the reviewer: What items would you recommend deleting? If you have no suggestions, please enter "none."

3. To the reviewer: Please provide any additional information you believe may be useful in assessing the identified construct with this instrument. If you have no suggestions, please enter "none."

Construct 2: (fill in the blank) – the construct "Learning Environments" is used for this example.
Content Validity determines the extent to which an assessment represents all facets of a given construct. The assessment instrument should answer the following questions:
Does the indicator measure what it was designed to measure?
Do the constructs include the concept, attribute, or variable that is the target of measurement?
Does the instrument estimate how well a measure represents every element of the construct?
Does the instrument assess the intended constructs or domains?
Does the instrument assess the body of knowledge surveyed?
To what degree does the content of the indicator reflect the content domain of interest?
The process of determining if an assessment is valid begins with gathering evidence to determine how accurately the assessment addresses each aspect of the specific construct in question and how adequately it represents a defined domain of knowledge or performance. In other words, do the questions assess the constructs, or are the responses of the person answering the questions influenced by other factors? The purpose of this content validity protocol is to guide the collection of evidence documenting the technical quality of the rubrics, surveys, and other instruments used to evaluate Program Learning Outcomes in the College of Education at Fort Hays State University.
How does a committee establish Content Validity for an initial EPP-created assessment?
To establish Content Validity for EPP-created assessments/rubrics, a panel of experts identifies the essential constructs for the assessment/rubric. Although there are other methods for establishing content validity, the College of Education will use the seminal Lawshe method, as approved by CAEP. The Lawshe method requires a Content Evaluation Panel (e.g., subject-matter experts) to provide feedback on how well each question measures the construct in question. The Content Evaluation Panel will identify the overlap between the construct and the performance domain. Their feedback will be analyzed, and informed decisions will be made about the effectiveness of each question.
Protocol
The EPP will determine content validity using the Lawshe method. Content validity refers to the appropriateness of the content of an instrument. In other words, do the measures (questions, observation logs, etc.) accurately assess what we want to know? Expert judgment (not statistics) is the primary method used to determine content validity; it is established through a process of review and rating by subject matter experts or stakeholders. Lawshe proposed that each of the subject matter expert (SME) raters on the judging panel respond to the following questions for each item:
Step 1:
1. Determine the body of knowledge for each construct measured. Complete the initial
assessment/rubric review form for each assessment/rubric used to evaluate candidate
performance in the program. Make sure that all constructs measured in the identified
assessment/rubric are accounted for.
2. Identify a Content Evaluation Panel and the credentials required for selection. The Content
Evaluation Panel should be a combination of all stakeholders, including College of
Education faculty (i.e., content experts) and P-12 school or community practitioners (lay
experts). Each panel expert should have minimum credentials established by program
faculty.
a. At least one content expert from the program/department in the College of
Education;
b. At least one external content expert from outside the program/department. This
person can be from FHSU or another college or university as long as the requisite
content expertise is established; and
c. At least one practitioner expert from the field.
Total number of subject-matter experts on the panel: a minimum of three (3).
3. Create the response form: For each EPP-created assessment, there should be an
accompanying response form that Content Evaluation Panel members use to rate the items
that appear on the rubric/instrument. Program faculty work collaboratively to develop the
response form needed for each rubric/instrument used in the program to evaluate candidate
performance.
Per the Lawshe method, each member of the Content Evaluation Panel (subject-matter experts) rates each item as follows:
3 = Essential
2 = Useful but not essential
1 = Not essential

For example (total members who responded to each construct: 10):

Constructs                                              Essential   Useful but not essential   Not essential
Ability to create lessons                                   2                  1                     7
Accepts Criticism (Merged into Response to Feedback)       10                  0                     0
Assessment Skills                                           1                  2                     7
Reflective Educator                                        10                  0                     0
Response to Feedback                                       10                  0                     0
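
To illustrate how these tallies feed the content validity statistics computed in steps 7 and 8 below (a worked sketch, assuming ne, the number of panelists rating a construct "Essential," is taken from the Essential column above, with N = 10 panelists per construct):

Reflective Educator: CVR = (10 − 10/2) / (10/2) = 5/5 = +1.00
Ability to create lessons: CVR = (2 − 10/2) / (10/2) = −3/5 = −0.60

A CVR approaching +1 indicates broad panel agreement that the construct is essential; a negative CVR indicates that fewer than half of the panelists rated it essential.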
4. Create an assessment packet for each Content Evaluation Panel member. The packet
should include the following:
a. A letter explaining the purpose of the assessment/rubric, the reason the experts were
selected, a description of the measure and its scoring, and an explanation of the
response form.
b. A copy of the assessment/rubric instructions
c. A copy of the form used to evaluate the assessment/rubric
d. The response form aligned with the assessment/rubric for the panel member to rate
each item.
7. Content Validity results submitted: The COE assessment contact person will generate a
Content Validity Index (CVI) and Content Validity Ratio (CVR). The index will be
calculated based on recommendations by Rubio et al. (2003), Davis (1992), and Lynn
(1986).
8. The assessment contact person will use the Lawshe formula CVR = (ne − N/2) / (N/2), where ne is the number of panel members rating the item "Essential" and N is the total number of panel members.
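
As a minimal computational sketch of steps 7 and 8, assuming panel ratings are recorded as integers (3 = Essential, 2 = Useful but not essential, 1 = Not essential for the CVR; the 1-4 scales from the response form for the item-level CVI, following the common Davis/Lynn convention). The function names and sample ratings below are illustrative, not part of the protocol:

    # Illustrative sketch; names and sample data are hypothetical.

    def cvr(ratings):
        """Lawshe CVR = (ne - N/2) / (N/2), where ne is the number of
        panelists rating the item Essential (3) and N is the panel size."""
        n = len(ratings)
        ne = sum(1 for r in ratings if r == 3)
        return (ne - n / 2) / (n / 2)

    def item_cvi(ratings):
        """Item-level CVI (Davis/Lynn convention): proportion of panelists
        rating the item 3 or 4 on a 4-point scale."""
        return sum(1 for r in ratings if r >= 3) / len(ratings)

    # "Ability to create lessons" tallies from the example table:
    # 2 Essential, 1 Useful but not essential, 7 Not essential.
    lawshe_ratings = [3, 3, 2, 1, 1, 1, 1, 1, 1, 1]
    print(f"CVR = {cvr(lawshe_ratings):+.2f}")        # CVR = -0.60

    # Hypothetical representativeness ratings (1-4) from ten panelists:
    rep_ratings = [4, 4, 3, 4, 3, 3, 4, 2, 4, 3]
    print(f"Item CVI = {item_cvi(rep_ratings):.2f}")  # Item CVI = 0.90

Under the Lawshe method, items whose CVR falls below the critical value for the panel size are candidates for revision or deletion.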