“CHARACTERISTICS OF GOOD MEASURE”

Terminologies:

• The measure must be truthful - It must accurately reflect the construct.
• Validity - Validity is about truthfulness. A measure shows validity if it actually measures what it claims (or is intended) to measure.
• Content Validity - Refers to the extent to which the items or behaviors fully represent the concept being measured.
• Construct Validity - Refers to the extent to which the measure is on target to measure the construct being studied. Note: There are two ways to determine construct validity:
  1. Convergent Validity - The extent to which other measures of the same behavior are similar to your measure.
  2. Discriminant Validity - Achieved when the instrument being examined is uncorrelated with another measure that is presumably unrelated.
• Predictive Validity - Refers to the extent to which a measure is related to some other measure that you would be interested in predicting.
• Reliability - The extent to which a measure yields the same scores across different times, groups of people, or versions of the instrument.
• Cronbach’s Alpha - The most common way to assess the reliability of self-report items. Cronbach’s Alpha measures the degree to which the items in an instrument are related. (A short computation sketch follows this list.)
• Test-Retest Reliability - Measures the similarity of participants’ scores at two different times. The greater the similarity between the two sets of scores, the higher the test-retest reliability.
• Parallel-Forms Reliability - An instrument has high parallel-forms reliability if similar, but not identical, versions of the same instrument have the same measurement characteristics.
• Inter-Rater Reliability - Often used for behavioral observations. A measure has high inter-rater reliability if two people who are observing a behavior agree on the nature of that behavior. (Test-retest and inter-rater reliability are illustrated in the second sketch after this list.)
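To make the Cronbach’s Alpha entry concrete, here is a minimal Python sketch of the standard formula, assuming scores are arranged as a matrix with one row per respondent and one column per item. The function name cronbach_alpha and the sample data are illustrative, not taken from the source.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 5 respondents answering 3 self-report items on a 1-5 scale (hypothetical data).
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
])
print(cronbach_alpha(scores))   # values closer to 1 indicate more strongly related items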
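Test-retest and inter-rater reliability are both questions of agreement, either across time or across observers. The sketch below assumes paired NumPy arrays and uses the Pearson correlation for test-retest reliability and Cohen’s kappa for inter-rater agreement; these are common choices, but the source itself does not name a specific statistic for either, so treat them as one reasonable option.

import numpy as np

def test_retest_reliability(time1, time2):
    """Pearson correlation between the same participants' scores at two times."""
    return np.corrcoef(time1, time2)[0, 1]

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical judgments."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    p_o = np.mean(rater_a == rater_b)                          # observed agreement
    p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c)    # agreement expected by chance
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Example: scores at two administrations, and two observers' behavior codes (hypothetical data).
print(test_retest_reliability([10, 12, 15, 9, 14], [11, 12, 14, 10, 15]))
print(cohen_kappa(["on-task", "off-task", "on-task", "on-task"],
                  ["on-task", "off-task", "on-task", "off-task"]))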
