HRM 3470

Reliability and Validity

Lecture 2_Part I

Agenda

• The Recruitment and Selection (R&S) Process
• Reliability
• Validity
• Bias and Fairness

The R&S Process


● An employer's goal
● Hire an applicant with a good person-job fit: the applicant possesses the knowledge, skills, abilities, or other attributes (KSAOs) required to successfully perform the job being filled.
● Hire an applicant with a good person-organization fit.

The R&S Process


● A selection system

[Diagram: a selection system is assessed in terms of reliability and validity, and of biases, fairness, and utility.]

The R&S Process



● The hiring process


Hiring Process: Toronto Police Service

Stage 1: Physical Readiness Evaluation for Police (PREP, 60 mins); Police Analytical Thinking Inventory (PATI, 90 mins); Written Communication Test (WCT, 60 mins)
Stage 2: Video Simulation (B-PAD, 20 mins); Vision and Hearing (60 mins)
Stage 3: Blended Interviews, Parts I and III (Behavioural Interview: 120-150 mins); Blended Interviews, Part II (Pre-Background Questionnaire: 20 mins)
Stage 4: Background Investigation; Psychological Assessment and Minnesota Multiphasic Personality Inventory (MMPI-2)
Stage 5: Comprehensive Medical Examination
Stage 6: Conditional Offer of Employment

The R&S Process



● Science versus practice in selection

The R&S Process


● Science versus practice in selection (cont'd)

Reliability
● Reliability: the degree to which observed scores are free from random measurement errors; an indication of the stability or dependability of a set of measurements over repeated applications of the measurement procedure.

Reliability
● Random vs. systematic errors
● Random measurement errors occur by chance.

[Diagram: less random error (precise) and less systematic error (unbiased) → high reliability; more random error (imprecise) and less systematic error (unbiased) → low reliability.]

Reliability
● Random vs. systematic errors (cont'd)
● Systematic measurement errors occur in a consistent, or predictable, fashion.

[Diagram: less random error (precise) and more systematic error (biased) → high reliability; more random error (imprecise) and more systematic error (biased) → low reliability.]

Reliability
● Observed scores
● Random measurement errors affect the reliability of the measurement; systematic measurement errors do not affect reliability, but rather the meaning, or interpretation, of the scores (illustrated in the simulation below).

[Diagram: a 2×2 grid crossing less/more random error with less/more systematic error; reliability is high when random error is low and low when random error is high, whatever the level of systematic error.]
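
To make the distinction concrete, here is a minimal simulation (added here, not from the slides; NumPy-based, with illustrative error sizes). Reliability is estimated as the correlation between two parallel measurements: independent random noise lowers it, while a constant systematic bias leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(42)
true_scores = rng.normal(100, 15, size=1000)  # hypothetical true scores

def reliability(form_a, form_b):
    """Estimate reliability as the correlation between two parallel measurements."""
    return np.corrcoef(form_a, form_b)[0, 1]

# Random error: independent noise on each administration lowers the correlation.
obs_a = true_scores + rng.normal(0, 10, size=1000)
obs_b = true_scores + rng.normal(0, 10, size=1000)
print(reliability(obs_a, obs_b))          # noticeably below 1.0

# Systematic error: a constant +5 bias shifts every score, changing what the
# numbers mean, but the correlation (reliability) is unchanged.
print(reliability(obs_a + 5, obs_b + 5))  # identical to the value above
```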

Random vs. Systematic Errors: Examples in Testing

1. I have difficulty in making important decisions.

Scored 1 to 5:
1 = Highly Disagree; 2 = Disagree; 3 = Neither Disagree Nor Agree; 4 = Agree; 5 = Highly Agree

The same options scored -2 to +2:
-2 = Highly Disagree; -1 = Disagree; 0 = Neither Disagree Nor Agree; +1 = Agree; +2 = Highly Agree
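
One possible reading of this example (an interpretation added here, not stated on the slide): switching between the two answer keys is a linear rescoring of the same responses. Every score shifts by a constant of 3, a systematic change rather than a random one, so correlations with other measures are untouched. A minimal sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
scores_1to5 = rng.integers(1, 6, size=50)  # responses under the 1..5 key
scores_pm2 = scores_1to5 - 3               # the same responses under the -2..+2 key

# A second, related item, used only for the correlation check.
other_item = np.clip(scores_1to5 + rng.integers(-1, 2, size=50), 1, 5)

print(scores_1to5.mean() - scores_pm2.mean())      # exactly 3: a constant shift
print(np.corrcoef(scores_1to5, other_item)[0, 1])  # identical correlations:
print(np.corrcoef(scores_pm2, other_item)[0, 1])   # rescoring is not random error
```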

Reliability
● Observed scores (cont'd)
● Reliability refers to the degree to which observed scores are free from random measurement errors.

Reliability
● Reliability coefficients: the degree to which observed scores correlate with one another.
Observed Score (X) = True Score + Error Score
Example: an adult's true IQ score is 123.
IQ test, administration #1: 127 (127 = 123 + 4)
IQ test, administration #2: 125 (125 = 123 + 2)
IQ test, administration #3: 124 (124 = 123 + 1)
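
In classical test theory, this decomposition implies that the reliability coefficient equals the share of observed-score variance due to true scores. A minimal sketch (the spread values are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(100, 15, size=10_000)  # true scores (IQ-like)
error = rng.normal(0, 5, size=10_000)    # random error, mean zero
observed = true + error                  # Observed Score (X) = True Score + Error Score

# Reliability = Var(True) / Var(Observed); here about 15^2 / (15^2 + 5^2) = 0.90.
print(true.var() / observed.var())
```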

Reliability
● Reliability coefficients: the degree to which observed scores correlate with one another.
Observed Score (X) = True Score + Error Score
● True score: the average score that an individual would earn on an infinite number of administrations of the same test or parallel versions of the same test.
● Error score/measurement error: the hypothetical difference between an observed score and a true score, comprising both random and systematic error.
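
The true-score definition can be simulated directly: averaging many administrations of the same test makes random errors cancel, so the mean converges on the true score. A quick sketch (the error spread of 3 points is an assumed illustrative value):

```python
import numpy as np

rng = np.random.default_rng(7)
true_iq = 123
# Many simulated administrations of the same test, each with random error.
administrations = true_iq + rng.normal(0, 3, size=100_000)
print(administrations.mean())  # converges on the true score, about 123
```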

Reliability

● Reliability coefficients (cont’d)

Reliability
● Factors affecting reliability
● Temporary individual characteristics: e.g., health, motivation, fatigue, emotional state
● Environment and the assessment process: e.g., lack of standardization
● Chance: e.g., good luck
Reliability
● Methods of estimating reliability
● Parallel (alternate) forms: different forms
● Test and retest: different occasions
● Internal consistency: calculating the correlations between the scores of all possible pairs of items and then averaging all correlations (a sketch follows this list)
➢ Alpha or Cronbach's Alpha
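
A minimal sketch of the internal-consistency idea (the item-score matrix is hypothetical; the function uses the standard variance form of Cronbach's alpha):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 4-item scale (rows = people, columns = items).
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # about 0.96: items hang together well
```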

Reliability
● Methods of estimating reliability (cont'd)
● Inter-rater reliability: the correlation between the scores of different raters (see the sketch below)
➢ Classification consistency or inter-rater agreement
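
A minimal sketch of both indices with hypothetical ratings (the data and the pass/fail cutoff are illustrative assumptions):

```python
import numpy as np

# Hypothetical interview scores from two raters for the same ten candidates.
rater_a = np.array([4, 3, 5, 2, 4, 3, 5, 1, 2, 4])
rater_b = np.array([4, 2, 5, 3, 4, 3, 4, 1, 2, 5])

# Inter-rater reliability: the correlation between the raters' scores.
print(np.corrcoef(rater_a, rater_b)[0, 1])

# Inter-rater agreement for categorical decisions (e.g., pass at 3 or above):
pass_a = rater_a >= 3
pass_b = rater_b >= 3
print((pass_a == pass_b).mean())  # proportion of matching classifications (0.8 here)
```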


HRM 3470

Reliability and Validity

Lecture 2_Part II

Validity
● Validity: the degree to which accumulated evidence and theory support specific interpretations of test scores in the context of the test's proposed use.
● Face validity: the degree to which the test takers view the content of a test or test items as relevant to the context in which the test is being administered.

Validity
● Construct: ideas or concepts constructed or invoked to explain relationships between observations, e.g., cognitive ability vs. performance.
● A construct is a collection of related behaviours.
● A construct may be based on a theory.
● To be useful, a construct should be measurable.

Validity
● Variable: refers to how people vary on the construct of interest.
● When making a measurement, developers assign a numerical value to represent the degree of variation by the person or object within the construct of interest, e.g., General Mental Ability (GMA) scores represent variability in cognitive ability.

Validity
● Validation strategies
● Construct validity: the degree to which a test or procedure assesses an underlying theoretical construct it is supposed to measure.
● Content validity: the degree to which the items on a test appear to match the content or subject matter they are intended to assess.
● Criterion-related validity: the degree to which the scores from a test predict an outcome (a sketch follows this list).
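
For criterion-related validity, the validity coefficient is simply the correlation between test scores and the outcome they are meant to predict. A minimal sketch with hypothetical data (the numbers are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical selection-test scores and later job-performance ratings
# for the same ten hires.
test_scores = np.array([62, 75, 81, 55, 90, 70, 68, 85, 58, 77])
performance = np.array([3.1, 3.8, 4.0, 2.7, 4.5, 3.5, 3.2, 4.2, 2.9, 3.6])

# The criterion-related validity coefficient: closer to 1.0 = stronger prediction.
print(np.corrcoef(test_scores, performance)[0, 1])
```
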
Construct, Content and Criterion-Related Validity: Example

Purpose: Communication ability as a predictor of performance.

Please rate the employee on the following items (Unsatisfactory = 1; Satisfactory = 2; Good = 3; Excellent = 4):
1. Ability to use verbal communication to make clear and convincing presentations to multiple groups inside the organization.
2. Ability to clarify plans, policies, and role expectations to a target audience.
3. Ability to effectively ask questions of others in order to draw out necessary information.

Validity

● Validation strategies (cont’d)
