IO Chapter6 Final
Presented to:
Prof. Lloyd Sajol, MPsy
Presented by:
Mariel Bajo
Erica May Cambarihan
Mark Joseph Cayabyab
Jessa Jane Fiel
Ayana Megan Florido
Jevan Carl Montales
Stacey Nanie Plazos
Princess Joy Sumalinog
Rica Tesoro
Test-Retest Reliability: Each of several people takes the same test twice. The scores from the first administration of the test are correlated with scores from the second to determine whether they are similar. If they are, the test is said to have temporal stability.
Internal Reliability (Internal Consistency): The extent to which similar items are answered in similar ways; it measures item stability, or item homogeneity. That is, do all of the items measure the same thing, or do they measure different constructs? The more homogeneous the items, the higher the internal consistency.
Kuder-Richardson Formula 20 (K-R 20): A statistic used to determine the internal reliability of tests that use items with dichotomous answers (yes/no, true/false).
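As a minimal sketch of how K-R 20 is computed (the response data below are invented for illustration): each item's difficulty p and its complement q feed into the formula KR-20 = (k / (k - 1)) × (1 - Σpq / variance of total scores).

```python
# Hedged sketch: K-R 20 for dichotomous (0/1) item responses.
# Rows = examinees, columns = items; the data are illustrative only.
from statistics import pvariance

def kr20(responses):
    k = len(responses[0])                        # number of items
    n = len(responses)
    totals = [sum(row) for row in responses]     # each person's total score
    # p = proportion answering the item correctly, q = 1 - p
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / pvariance(totals))

data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(data), 3))  # prints 0.8
```

Higher values (closer to 1.0) indicate more internally consistent items.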
Split-half Method: A form of internal reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half of the items.
Spearman-Brown Prophecy Formula: Used to correct reliability coefficients resulting from the split-half method; that is, used to adjust correlations.
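Because the split-half method correlates only half-length tests, the correlation underestimates full-test reliability; the Spearman-Brown correction is 2r / (1 + r). A minimal sketch (the .70 input is an illustrative value):

```python
# Hedged sketch: Spearman-Brown prophecy formula, estimating full-test
# reliability from a split-half correlation r.
def spearman_brown(r_half):
    # corrected reliability = 2r / (1 + r)
    return 2 * r_half / (1 + r_half)

# e.g. a split-half correlation of .70 corrects to about .82
print(round(spearman_brown(0.70), 2))  # prints 0.82
```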
Coefficient Alpha: A statistic used to determine the internal reliability of tests that use interval or ratio scales.
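Coefficient (Cronbach's) alpha follows the same logic as K-R 20 but substitutes item variances for Σpq, which is what lets it handle interval or ratio scales. A minimal sketch with invented rating data:

```python
# Hedged sketch: coefficient alpha for interval-scale items.
# Rows = respondents, columns = items; the ratings are illustrative only.
from statistics import pvariance

def coefficient_alpha(responses):
    k = len(responses[0])                         # number of items
    totals = [sum(row) for row in responses]
    item_vars = [pvariance([row[j] for row in responses]) for j in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

ratings = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 1],
]
print(round(coefficient_alpha(ratings), 2))  # prints 0.99
```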
Scorer Reliability: The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.
6.1.2 Validity
It is the degree to which inferences from scores on tests or
assessments are justified by the evidence.
Unadjusted Top-Down Selection:
Advantage: By hiring the top scorers on a valid test, an organization will gain the most utility.
Disadvantage: Can result in high levels of adverse impact, and it reduces an organization's flexibility to use nontest factors such as references or organizational fit.
Compensatory Approach: The assumption is that, if multiple test scores are used, a low score on one test can be compensated for by a high score on another.
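A minimal sketch of compensatory scoring as a weighted composite (the weights, scores, and test names below are invented for illustration):

```python
# Hedged sketch of the compensatory approach: a high score on one test
# can offset a low score on another. All values are illustrative.
def composite(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

# Applicant A's weak cognitive score is offset by a strong interview
a = composite([60, 95], [0.5, 0.5])   # 77.5
b = composite([80, 70], [0.5, 0.5])   # 75.0
print(a > b)  # prints True
```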
6.4.2 Rule of Three
Rule of Three: Used in the public sector. This method ensures that the person hired will be well qualified but provides more choice than does top-down selection.
6.4.3 Passing Score
Passing Score: A means of reducing adverse impact and increasing flexibility. An organization determines the lowest score on a test that is associated with acceptable performance on the job.
Approaches that can be used when the relationship between the selection test and performance is not linear:
● multiple-cutoff approach
● multiple-hurdle approach
Multiple-cutoff Approach: The applicants are administered all of the tests at one time.
Multiple-hurdle Approach: Reduces the costs associated with applicants failing one or more tests. The applicant is administered one test at a time, usually beginning with the least expensive.
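The two approaches can be contrasted in a minimal sketch (test names, cutoffs, and applicant scores are all invented for illustration):

```python
# Hedged sketch contrasting multiple-cutoff and multiple-hurdle selection.
# All cutoffs and scores are illustrative, not from the chapter.
cutoffs = {"cognitive": 70, "integrity": 60, "work_sample": 50}

def multiple_cutoff(applicant):
    # all tests administered at once; pass only if every score meets its cutoff
    return all(applicant[t] >= c for t, c in cutoffs.items())

def multiple_hurdle(applicant, order=("cognitive", "integrity", "work_sample")):
    # tests administered one at a time (cheapest first); stop at the first failure,
    # so later, costlier tests are never administered
    for t in order:
        if applicant[t] < cutoffs[t]:
            return False
    return True

applicant = {"cognitive": 65, "integrity": 80, "work_sample": 90}
print(multiple_cutoff(applicant), multiple_hurdle(applicant))  # prints False False
```

Both approaches reject this applicant, but the hurdle version stops after the first (cheapest) test, which is where the cost savings come from.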
6.4.4 Banding
Banding: Attempts to hire the top test scorers while still allowing some flexibility for affirmative action. Banding takes into consideration the degree of error associated with any test score. Thus, even though one applicant might score two points higher than another, the two-point difference might be the result of chance (error) rather than actual differences in ability.
Standard Error of Measurement (SEM): The number of points that a test score could be off due to test unreliability.
Formula: SEM = SD × √(1 − reliability)
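A worked example of the SEM formula and the banding logic it supports (the SD, reliability, and applicant scores are illustrative values, not from the chapter):

```python
# Hedged sketch: standard error of measurement, SEM = SD * sqrt(1 - reliability).
# SD and reliability are illustrative values.
import math

def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

error = sem(sd=10, reliability=0.84)     # 10 * sqrt(0.16) = 4.0
print(round(error, 2))  # prints 4.0

# Banding idea: a 2-point gap between applicants falls within one SEM,
# so it may reflect measurement error rather than a real ability difference.
print(abs(88 - 86) <= error)  # prints True
```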