Goodness of Measure

1. Validity refers to how well a scale measures the intended concept, while reliability indicates a scale is consistent and free of errors over time. 2. There are several types of validity: content validity ensures a scale adequately measures the intended concept. Criterion-related validity compares a scale to a criterion, while construct validity examines if scale results fit the underlying theory. 3. Reliability is measured through stability over time and internal consistency. Test-retest reliability examines consistency over multiple administrations, while parallel forms reliability compares alternative question wordings measuring the same construct.

Uploaded by

Abdullah Afzal

Goodness of Measure

Validity:
How well the scale/instrument measures the concept for which it was developed.

Reliability:
The instrument is error-free and bias-free, and produces consistent results over time.
Example
• What is the impact of civility on turnover intention?

• Variables
• Civility
• Turnover Intention
a. Employee Name    b. Marital Status    c. Age    d. Gender

1. Civility is defined as respectful treatment of others. It includes courtesy,
compassion, kindness, politeness, manners and sportsmanship.
Please indicate your agreement or disagreement regarding the extent to which
your coworkers are civil with you.

(1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree)

1. Do your co-workers treat you with respect?         1 2 3 4 5
2. Do your co-workers treat you with dignity?         1 2 3 4 5
3. Do your co-workers treat you in a polite manner?   1 2 3 4 5
4. Are your co-workers pleasant with you?             1 2 3 4 5

3. Rate the following statements by placing a tick in one option for each
statement that describes your opinion.

(1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree)

i.   You think about quitting your job.                       1 2 3 4 5
ii.  You would look for a new job in the near future.         1 2 3 4 5
iii. As soon as possible, you would leave this organization.  1 2 3 4 5
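Likert items like the ones above are usually combined into a single scale score per respondent, most simply by averaging the item answers. A minimal sketch with hypothetical responses and made-up item names (none of these values come from the slides):

```python
# Hypothetical single respondent's answers on the 1-5 Likert scale.
civility_items = {
    "respect": 4,     # treated with respect
    "dignity": 5,     # treated with dignity
    "polite": 4,      # treated politely
    "pleasant": 3,    # co-workers are pleasant
}
turnover_items = {
    "think_about_quitting": 2,
    "look_for_new_job": 1,
    "would_leave_asap": 2,
}

# Average the items of each scale into one score per concept.
civility_score = sum(civility_items.values()) / len(civility_items)
turnover_score = sum(turnover_items.values()) / len(turnover_items)

print(f"civility = {civility_score:.2f}")   # 4.00
print(f"turnover = {turnover_score:.2f}")
```

With scores like these in hand for many respondents, the civility-turnover relationship in the example research question can be examined statistically.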
TYPES OF VALIDITY

01 Content Validity
02 Criterion-related Validity
03 Construct Validity
Content Validity

Content validity ensures that the measure includes an adequate and
representative set of items that tap the concept.

• The more the scale items represent the domain or universe of the
concept being measured, the greater the content validity.

Content validity is measured as:

Criterion-related Validity

Criterion-related validity is established when the measure differentiates
individuals on a criterion it is expected to predict.

Criterion-related validity is further divided into:

Concurrent Validity
Concurrent validity is established when the scale discriminates between
individuals who are known to be different; that is, they should score
differently on the instrument.

Predictive Validity
Predictive validity indicates the ability of the measuring instrument to
differentiate among individuals with reference to a future criterion.
Construct Validity

Construct validity testifies to how well the results obtained from the use
of the measure fit the theories around which the test is designed,
i.e. the concept is tapped as it is theorized.

Construct validity is further divided into:

Convergent Validity
Convergent validity is established when the scores obtained with two
different instruments measuring the same concept are highly correlated.

Discriminant Validity
Discriminant validity is established when, based on theory, two variables
are predicted to be uncorrelated, and the scores obtained by measuring
them are indeed empirically found to be so.
The above forms of validity can be established through the following:

 Correlational analysis: establishing concurrent and predictive
validity, or convergent and discriminant validity.

 Factor analysis: a multivariate technique used to establish
construct validity.
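The correlational analysis mentioned above can be sketched in a few lines. This is a minimal illustration with invented scores: two instruments assumed to measure the same concept (convergent validity) and one theoretically unrelated variable (discriminant validity):

```python
import numpy as np

# Hypothetical scores for six respondents on two instruments that are
# supposed to measure the same concept (convergent validity check).
scale_a = np.array([4, 5, 3, 4, 2, 5], dtype=float)
scale_b = np.array([4, 4, 3, 5, 2, 5], dtype=float)

# A variable that theory predicts should NOT correlate with scale_a
# (discriminant validity check).
unrelated = np.array([2, 5, 3, 1, 4, 3], dtype=float)

# Pearson correlations via the correlation matrix.
r_convergent = np.corrcoef(scale_a, scale_b)[0, 1]
r_discriminant = np.corrcoef(scale_a, unrelated)[0, 1]

print(f"convergent r   = {r_convergent:.2f}")    # high r supports convergent validity
print(f"discriminant r = {r_discriminant:.2f}")  # near-zero r supports discriminant validity
```

A high first coefficient and a near-zero second coefficient are the pattern construct validity predicts.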
RELIABILITY

The reliability of a measure indicates the extent to which it is without
bias (error free) and hence ensures consistent measurement across time
and across the various items in the instrument.

• It is an indication of the stability and consistency of the measure.
Stability of Measures

Stability is the ability of a measure to remain the same over time,
indicative of its low vulnerability to changes in the situation:
 despite uncontrollable testing conditions, and
 despite the state of the respondents.

The two tests of the stability of a measure are:
• Test-retest reliability
• Parallel-form reliability
Test-retest Reliability
 The reliability coefficient obtained by repetition of the same
measure on a second occasion.
 Test-retest coefficient: the correlation between the scores obtained
at the two different times from one and the same set of respondents.
 The higher it is, the better the test–retest reliability.

Parallel-form Reliability
 Responses on two comparable sets of measures tapping the same
construct are highly correlated.
 The two forms differ in the wording and the order or sequence of
the questions.
 If the two comparable forms are highly correlated, the measure is
reasonably reliable, with minimal error variance caused by wording
and sequencing.
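The test-retest coefficient is simply the correlation between the two administrations. A minimal sketch, assuming hypothetical total scale scores for the same five respondents measured on two occasions:

```python
import numpy as np

# Hypothetical total scale scores for the SAME five respondents,
# measured on two occasions some weeks apart.
time_1 = np.array([12.0, 18.0, 9.0, 15.0, 20.0])
time_2 = np.array([13.0, 17.0, 10.0, 14.0, 19.0])

# Test-retest coefficient: correlation between the two administrations.
test_retest_r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest reliability = {test_retest_r:.2f}")
```

The closer the coefficient is to 1, the more stable the measure is over time.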
Internal Consistency of Measures

The internal consistency of measures is indicative of the homogeneity
of the items in the measure that tap the construct.

• It is assessed by examining whether the items and the subsets of
items in the measuring instrument are highly correlated.

Consistency can be examined through:

Inter-item consistency reliability
A test of the consistency of respondents' answers to all the items in a
measure. Because the items are independent measures of the same concept,
they should correlate with one another. The higher the coefficients, the
better the measuring instrument.
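Inter-item consistency is commonly quantified with Cronbach's alpha (the slide does not name a coefficient; alpha is the standard choice). A minimal sketch with hypothetical responses from five respondents to the four civility items:

```python
import numpy as np

# Hypothetical 1-5 Likert responses: 5 respondents x 4 items.
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
], dtype=float)

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values of alpha near 1 indicate that the items hang together as measures of one construct; a common rule of thumb treats 0.7 as acceptable.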
Split-half reliability

Split-half reliability reflects the correlation between two halves of an
instrument. Estimates will vary depending on how the items in the measure
are split into two halves.
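A split-half estimate can be sketched as follows, reusing the hypothetical 5 x 4 response matrix from the alpha example. The half-test correlation is stepped up with the Spearman-Brown formula (a standard adjustment, not named on the slide) to estimate full-length reliability:

```python
import numpy as np

# Hypothetical 1-5 Likert responses: 5 respondents x 4 items.
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
], dtype=float)

# Odd-even split: items 1 and 3 form one half, items 2 and 4 the other.
half_a = scores[:, 0::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)

# Correlation between the two half-test scores.
r_halves = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction: reliability of the full-length test.
split_half = (2 * r_halves) / (1 + r_halves)
print(f"split-half reliability = {split_half:.2f}")
```

A different split (e.g. first half vs. second half) would give a somewhat different estimate, which is exactly the caveat the slide notes.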
CONCLUSION

Well-validated and reliable measures should be used to show that the
research is scientific.

It is important to note that:
• Validity is a necessary but not sufficient condition of the test of
goodness of a measure.
• A measure should not only be valid but also reliable. A measure is
reliable if it provides consistent results.
