Job Analysis, Criteria, Reliability, Validity
Job Analysis

• A job analysis generates information about the job and the individuals performing the job.
  – Job description: tasks, responsibilities, working conditions, etc.
  – Job specification: employee characteristics (abilities, skills, knowledge, tools, etc.) needed to perform the job
  – Performance standards

[Figure: job performance shown in relation to abilities, motivation, technology, and the performance environment]
Occupational Information Network (O*NET)
U.S. Dept. of Labor

• O*NET
  – Worker Requirements (basic skills, knowledge, education)
  – Worker Characteristics (abilities, values, interests)
  – Occupational Characteristics (labor market information)
  – Occupation-Specific Requirements (tasks, duties, occupational knowledge)
  – Occupational Requirements (work context, organizational context)

Other Job Analysis Methods

• CIT (Critical Incidents Technique) – collects and categorizes incidents that are critical in performing the job.
• Task-Oriented Procedures
  1. Task Analysis – compiles and categorizes a list of tasks that are performed in the job.
  2. Functional Job Analysis – describes the content of the job in terms of things, data, and people.
Conceptual versus Actual

• Conceptual Criterion – the theoretical construct that we would like to measure.
• Actual Criterion – the operational definition (of the theoretical construct) that we end up measuring.

[Figure: overlap of the conceptual and actual criteria, labeled with criterion deficiency, criterion relevance, and criterion contamination]
Criterion Deficiency

• Criterion Deficiency – the degree to which the actual criterion fails to overlap with the conceptual criterion.
• Criterion Relevance – the degree of overlap or similarity between the actual and conceptual criterion.
• Contamination – the part of the actual criterion that is unrelated to the conceptual criterion.

Types of Performance

• Task Performance – generally affected by cognitive abilities, skills, knowledge, & experience.
• Contextual Performance – generally affected by personality traits and values; includes helping others, endorsing organizational objectives, & contributing to the organizational climate. Prosocial behavior that facilitates work in the organization.
• Adaptive Performance – engaging in new learning, coping with change, & developing new processes.
Types of Reliability

• Test-retest reliability
• Alternate-form reliability
• Split-half reliability
• Internal consistency (a.k.a. Kuder-Richardson reliability; a.k.a. Coefficient Alpha)
• Interrater reliability (a.k.a. interscorer reliability)

Test-Retest Reliability

• Test-retest reliability is estimated by comparing respondents’ scores on two administrations of a test (see the sketch after this slide).
• Test-retest reliability is used to assess the temporal stability of a measure; that is, how consistent respondents’ scores are across time.
• The higher the reliability, the less susceptible the scores are to the random daily changes in the condition of the test takers or of the testing environment.
• The longer the time interval between administrations, the lower the test-retest reliability will be.
  – The concept of test-retest reliability is generally restricted to short-range random changes (the time interval is usually a few weeks) that characterize the test performance itself rather than the entire behavior domain that is being tested.
  – Long-range (i.e., several years) time intervals are typically couched in terms of predictability rather than reliability.
  – Test-retest reliability is NOT appropriate for constructs that tend to fluctuate on an hourly, daily, or even weekly basis (e.g., mood).
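As a minimal illustration (not from the slides), the sketch below estimates test-retest reliability as the correlation between two administrations of the same test; the scores and sample size are made up purely for illustration.

```python
import numpy as np

# Hypothetical scores for the same six respondents on two administrations
# of the same test, given a few weeks apart (illustrative numbers only).
time1 = np.array([12, 15, 9, 20, 17, 11])
time2 = np.array([13, 14, 10, 19, 18, 10])

# The test-retest reliability estimate is the correlation between the two
# administrations; np.corrcoef returns a 2x2 correlation matrix.
r_tt = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability estimate: r = {r_tt:.2f}")
```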
Methods of Estimating Reliability

• Test-retest
• Parallel (alternate) forms
• Split-half (must use the Spearman-Brown adjustment; see the sketch after these lists)
• Kuder-Richardson (Alpha)
• Inter-rater

Problems With Reliability

• Homogeneous groups have lower reliability than heterogeneous groups.
• The longer the test, the higher the reliability.
• Most reliability estimates require that the test be one-dimensional.
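A minimal sketch of two of these estimates, assuming a small made-up matrix of item responses; the numbers, the odd/even item split, and the six-item test length are illustrative assumptions, not material from the slides.

```python
import numpy as np

# Hypothetical item responses: 8 respondents x 6 items (illustrative only).
items = np.array([
    [4, 5, 4, 3, 4, 5],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 2, 1],
    [4, 4, 5, 4, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [5, 5, 4, 5, 5, 5],
])

# Split-half: correlate odd-item and even-item half scores, then apply the
# Spearman-Brown adjustment r_full = 2r / (1 + r) to estimate the
# reliability of the full-length test.
odd_half = items[:, ::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_sb = 2 * r_half / (1 + r_half)

# Coefficient alpha: k/(k-1) * (1 - sum of item variances / total-score variance).
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))

print(f"Split-half r = {r_half:.2f}, Spearman-Brown adjusted = {r_sb:.2f}")
print(f"Coefficient alpha = {alpha:.2f}")
```

The Spearman-Brown adjustment is needed because the raw split-half correlation describes a test only half as long as the one actually administered.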
Validity

• 1. Whether a test is an adequate measure of the characteristics it is supposed to measure.
• 2. Whether inferences and actions based on the test scores are appropriate.
• Similar to reliability, validity is not an inherent property of a test.

Establishing Validity

• Content Validity – the degree to which the items in a test are a representative sample of the domain of knowledge the test purports to measure.
• Criterion-Related Validity – the degree to which a test is statistically related to a performance criterion (see the sketch after this slide).
  – Concurrent Validation
  – Predictive Validation
• Construct Validity – the degree to which a test is an accurate measure of the theoretical construct it purports to measure.
  – Multi-trait, Multi-method approach
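As a hedged illustration of criterion-related (here, predictive) validation: the sketch below correlates hypothetical selection-test scores with supervisor performance ratings collected later, and the validity coefficient is simply that correlation. All values are invented for illustration.

```python
import numpy as np

# Hypothetical predictive-validation data: selection-test scores at hiring
# and supervisor performance ratings gathered months later (invented values).
test_scores = np.array([55, 62, 48, 70, 66, 59, 73, 51])
job_perf = np.array([3.1, 3.6, 2.8, 4.2, 3.9, 3.3, 4.5, 2.9])

# The criterion-related validity coefficient is the test-criterion correlation.
validity = np.corrcoef(test_scores, job_perf)[0, 1]
print(f"Criterion-related validity coefficient: r = {validity:.2f}")
```

In a concurrent design the same calculation would use test scores and criterion data collected from current employees at roughly the same time.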
Good Reliability, Good Validity

Performance Appraisal Goals

• Assessment of work performance
• Identification of areas that need improvement
• Accomplishing organizational goals
• Pay raises
• Promotions
Methods of Performance Appraisals

• Basic Rating Forms
  – Graphic forms
  – BARS (Behaviorally anchored rating scales)
  – BOS (Behavioral observation scales)
  – Checklists (based on ratings of critical incidents)
  – Mixed scales
  – 360-degree feedback
• None have shown an overall advantage.

Assessments

• Supervisor’s assessment
• Self-assessment – generally people recognize their own strengths and weaknesses, but self-ratings are generally a bit inflated.
• Peer assessment – very accurate in predicting career advancement.
Performance Appraisals

• PA systems that have failed in court generally were:
  – Developed without the benefit of a Job Analysis
  – Conducted in the absence of specific instructions to raters
  – Trait oriented rather than behavior oriented
  – Lacking a review of the appraisal with the employee