Quality of a Good Instrument
Types of validity: face validity, construct validity (including discriminant validity), and criterion-related validity (including predictive validity).
1. Face Validity
Face validity is concerned with whether an instrument appears, on the surface, to be relevant and appropriate for what it is intended to assess.
Relevance ratings of each item by six raters (√ = rated relevant, — = not rated relevant):

Item  Rater 1  Rater 2  Rater 3  Rater 4  Rater 5  Rater 6  Rated relevant  Proportion
2     √        —        √        √        √        √        5               .83
3     √        √        —        √        √        √        5               .83
4     √        √        √        —        √        √        5               .83
5     √        √        √        √        —        √        5               .83
6     √        √        √        √        √        —        5               .83
7     √        √        √        √        √        √        6               1.00
8     √        √        √        √        √        √        6               1.00
9     √        √        √        √        √        √        6               1.00
10    √        √        √        √        √        √        6               1.00
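The proportion in the last column is the number of raters marking an item as relevant divided by the total number of raters. A minimal Python sketch, using a small hypothetical excerpt of the ratings coded as 1 (relevant) and 0 (not relevant):

```python
# Minimal sketch: proportion of raters rating each item relevant.
# Hypothetical excerpt in the spirit of the table above
# (rows = items, columns = raters; 1 = relevant, 0 = not relevant).
import numpy as np

ratings = np.array([
    [1, 0, 1, 1, 1, 1],   # e.g. item 2: 5 of 6 raters -> .83
    [1, 1, 0, 1, 1, 1],   # e.g. item 3: 5 of 6 raters -> .83
    [1, 1, 1, 1, 1, 1],   # e.g. item 7: 6 of 6 raters -> 1.00
])

# Mean across raters = proportion rating the item relevant (last column).
proportion_relevant = ratings.mean(axis=1)
print(proportion_relevant.round(2))   # [0.83 0.83 1.  ]
```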
3. Construct Validity
o Constructs are variables or concepts that are abstract in nature and cannot be measured directly.
o Construct validity is the extent to which an instrument actually measures the abstract construct it is intended to measure.
2. Discriminant Validity
Discriminant validity is demonstrated when two measures of constructs that should be unrelated are, in fact, found to be unrelated (they do not correlate with each other).
4. Criterion Validity
Criterion validity focuses on how well an instrument compares with another instrument that has already been established as a good measure of the behaviour.
2. Predictive Validity
This is demonstrated when an instrument can predict future performance.
The test must correlate with a variable that can only be assessed at a future date, after the test has been administered.
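One common way to examine predictive validity is to correlate the test scores with the criterion once it becomes available at the later date. A minimal sketch with hypothetical admission-test scores and a later first-year GPA as the criterion:

```python
# Sketch: predictive validity as the correlation between test scores
# and a criterion variable measured at a later date (hypothetical data).
import numpy as np

admission_test = np.array([55, 62, 70, 48, 80, 66, 73, 59])          # scores now
first_year_gpa = np.array([2.4, 2.9, 3.2, 2.1, 3.7, 3.0, 3.4, 2.6])  # criterion later

r = np.corrcoef(admission_test, first_year_gpa)[0, 1]
print(f"correlation with future criterion: r = {r:.2f}")
```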
Reliability of Research Instrument
Types of reliability: test-retest reliability, interrater reliability, parallel forms reliability, and internal consistency (split-half).
1. Test-Retest Reliability
Test-retest reliability measures the consistency of results when the same instrument is administered to the same sample at a different point in time.
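A common way to quantify this is the correlation between the scores from the two administrations. A minimal sketch with hypothetical scores for eight respondents:

```python
# Sketch: test-retest reliability as the correlation between scores from
# two administrations of the same instrument to the same sample
# (hypothetical scores).
import numpy as np

time_1 = np.array([20, 25, 31, 18, 27, 34, 22, 29])
time_2 = np.array([22, 24, 30, 19, 28, 33, 21, 30])

r_test_retest = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest reliability = {r_test_retest:.2f}")  # near 1 indicates stable results
```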
2. Interrater Reliability
Interrater reliability measures the degree of agreement between different raters assessing the same items.
If all the raters give similar ratings, the test has high interrater reliability.

Item  Rater 1  Rater 2  Rater 3  Rater 4  Rater 5  Agreement
1     √        √        √        √        √        5/5
2     √        √        0        √        √        4/5
3     √        0        √        0        √        3/5
4     0        √        √        √        √        4/5
5     √        0        √        √        √        4/5
6     0        √        0        √        √        3/5
7     √        √        √        √        √        5/5
8     √        √        √        √        √        5/5
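The Agreement column can be computed as the number of raters checking an item divided by the total number of raters. A minimal sketch, assuming each rating is coded 1 (√) or 0 (not checked):

```python
# Sketch: per-item interrater agreement (hypothetical coding of the table above;
# rows = items, columns = raters; 1 = checked, 0 = not checked).
import numpy as np

ratings = np.array([
    [1, 1, 1, 1, 1],   # item 1 -> 5/5
    [1, 1, 0, 1, 1],   # item 2 -> 4/5
    [1, 0, 1, 0, 1],   # item 3 -> 3/5
])

n_raters = ratings.shape[1]
agreement = ratings.sum(axis=1) / n_raters   # per-item agreement, as in the last column
print(agreement)                             # [1.  0.8 0.6]
print(f"mean agreement across items = {agreement.mean():.2f}")
```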
3. Parallel Forms Reliability
Parallel forms reliability measures the correlation between two equivalent versions of an instrument.
This is also referred to as alternate-forms or equivalent-forms reliability.
You use it when you have two different assessment tools designed to measure the same thing.
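In practice this usually means administering both forms to the same respondents, scoring each form, and correlating the two sets of total scores. A minimal sketch with hypothetical item responses for four respondents:

```python
# Sketch: parallel (equivalent) forms reliability with hypothetical item data.
# rows = respondents, columns = items; 1 = correct/agree, 0 = not.
import numpy as np

form_a = np.array([[1, 1, 0, 1, 1],
                   [0, 1, 0, 0, 1],
                   [1, 1, 1, 1, 1],
                   [0, 0, 1, 0, 1]])
form_b = np.array([[1, 1, 1, 0, 1],
                   [0, 1, 0, 1, 0],
                   [1, 1, 1, 1, 0],
                   [0, 0, 0, 1, 1]])

# Score each form, then correlate the two sets of total scores.
score_a = form_a.sum(axis=1)
score_b = form_b.sum(axis=1)
r_parallel = np.corrcoef(score_a, score_b)[0, 1]
print(f"parallel forms reliability = {r_parallel:.2f}")
```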
4. Split-half Reliability
Split-half reliability is a measure of internal consistency: the test items are split into two halves, each half is scored separately, and the agreement between the two half-scores is examined.
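A minimal sketch of the split-half approach, assuming 1/0 item scoring: the odd-numbered and even-numbered items form the two halves, their totals are correlated, and the Spearman-Brown formula (a standard step not stated above) converts that half-test correlation into an estimate for the full-length test:

```python
# Sketch: split-half reliability with hypothetical item responses
# (rows = respondents, columns = 8 items; 1/0 scoring assumed).
import numpy as np

responses = np.array([[1, 1, 0, 1, 1, 0, 1, 1],
                      [0, 1, 0, 0, 1, 0, 1, 0],
                      [1, 1, 1, 1, 1, 1, 1, 1],
                      [0, 0, 1, 0, 1, 0, 0, 1],
                      [1, 0, 1, 1, 0, 1, 1, 1]])

# Split the items into two halves: odd-numbered vs even-numbered items.
odd_half  = responses[:, 0::2].sum(axis=1)
even_half = responses[:, 1::2].sum(axis=1)

r_half = np.corrcoef(odd_half, even_half)[0, 1]
# Spearman-Brown step-up: estimate the reliability of the full-length test
# from the correlation between the two half-tests.
r_split_half = 2 * r_half / (1 + r_half)
print(f"half-test correlation  = {r_half:.2f}")
print(f"split-half reliability = {r_split_half:.2f}")
```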