Job Analysis, Criteria, Reliability, Validity

This document discusses factors that affect job performance and methods for analyzing jobs. It covers: 1. A job analysis generates information about job tasks, responsibilities, required employee characteristics, performance standards, and working conditions. This information is then used to inform compensation, performance appraisal criteria, selection, training, and job enrichment. 2. Common job analysis methods examine either the job, the worker, or both. Worker-oriented methods like the PAQ focus on the information and mental processes used, while task analysis compiles the tasks performed. The O*NET database provides standardized worker requirements, worker characteristics, and occupational information. 3. Developing valid performance criteria is important. Criteria should be relevant, free from contamination, not deficient, and reliable.


Factors Affecting Performance

[Diagram: Motivation, Environment, Technology, and Abilities all feed into Performance.]

Job Analysis

• A job analysis generates information about the job and the individuals performing the job.
– Job description: tasks, responsibilities, working conditions, etc.
– Job specification: employee characteristics (abilities, skills, knowledge, tools, etc.) needed to perform the job
– Performance standards

Job Analysis Methods

• Job analysis can focus on the job, on the worker, or both
– Job-oriented: focus on work activities
– Worker-oriented: focus on traits and talents necessary to perform the job
– Mixed: looks at both

Uses of Job Analysis

• Information from a job analysis is used to assist with
– Compensation
– Performance appraisal (criteria)
– Selection (identifying predictors)
– Training
– Enrichment and combination

Some Job Analysis Procedures

Worker Oriented

1. PAQ (Position Analysis Questionnaire)
– Information input (what kind of information does the worker use in the job)
– Mental processes (reasoning, decision making, etc.)
– Work output (what machines, tools, or devices are used)
– Relationships
– Job context (environment)
– Other characteristics

Threshold Traits Analysis

2. TTA: measures 33 traits in six areas
– Physical (stamina, agility, etc.)
– Mental (perception, memory, problem solving)
– Learned (planning, decision making, communication)
– Motivational (dependability, initiative, etc.)
– Social (cooperation, tolerance, influence)

Other Job Analysis Methods

• CIT (Critical Incidents Technique): collects and categorizes incidents that are critical in performing the job.
• Task-oriented procedures:
1. Task analysis: compiles and categorizes a list of tasks that are performed in the job.
2. Functional Job Analysis: describes the content of the job in terms of things, data, and people.

Occupational Information Network (O*NET), U.S. Dept. of Labor

• O*NET
– Worker requirements (basic skills, knowledge, education)
– Worker characteristics (abilities, values, interests)
– Occupational characteristics (labor market information)
– Occupation-specific requirements (tasks, duties, occupational knowledge)
– Occupational requirements (work context, organizational context)

O*NET Basic Skills

• Reading
• Active listening
• Writing
• Speaking
• Critical thinking
• Repairing
• Visioning

Issues to Consider in Developing Criteria for Performance

• Long-term or short-term performance
• Quality or quantity
• Individual or team performance
• Situational effects
• Multidimensional nature of performance at work
• What do we want to foster? Cooperation or competition, or both?

Conceptual versus Actual

• Conceptual criterion: the theoretical construct that we would like to measure.
• Actual criterion: the operational definition (of the theoretical construct) that we end up measuring.
• We want the conceptual criterion and the actual criterion to overlap as much as possible.

Criterion Deficiency

[Diagram: two overlapping circles labeled Conceptual Criterion and Actual Criterion. The part of the conceptual criterion outside the overlap is criterion deficiency; the overlap is relevance; the part of the actual criterion outside the overlap is criterion contamination.]

Criterion Deficiency

• Criterion deficiency: the degree to which the actual criterion fails to overlap with the conceptual criterion.
• Criterion relevance: the degree of overlap or similarity between the actual and conceptual criteria.
• Contamination: the part of the actual criterion that is unrelated to the conceptual criterion.

Types of Performance

• Task performance: generally affected by cognitive abilities, skills, knowledge, & experience.
• Contextual performance: generally affected by personality traits and values; includes helping others, endorsing organizational objectives, & contributing to the organizational climate. Prosocial behavior that facilitates work in the organization.
• Adaptive performance: engaging in new learning, coping with change, & developing new processes.

Criteria

• Criteria should be
– Relevant to the specific task
– Free from contamination (must not include factors unrelated to task performance)
– Not deficient (must not leave out factors relevant to the performance of the task)
– Reliable

Criteria Used by Industry to Validate Predictors

• Supervisory performance ratings
• Turnover
• Productivity
• Status change (e.g., promotions)
• Wages
• Sales
• Work samples (assessment centers)
• Absenteeism
• Accidents

Validities of predictors used by industry (Personnel Psychology: Schmitt, Gooding, Noe, & Kirsch, 1984):

Predictor              | No. of Studies | Average Validity
Special aptitudes      | 31             | .27
Personality            | 62             | .15
General mental ability | 53             | .25
Biodata                | 99             | .24
Work samples           | 18             | .38
Assessment centers     | 21             | .41
Physical ability       | 22             | .32
Overall                | 337            | .28

Reliability

• Classical model: an observation is viewed as the sum of two latent components, the true value of the trait plus an error: X = t + e
• The error and the true component are independent of each other.
• The true and error components can't be observed.

Types of Reliability

• Test-retest reliability
• Alternate-form reliability
• Split-half reliability
• Internal consistency (a.k.a. Kuder-Richardson reliability; a.k.a. coefficient alpha)
• Interrater reliability (a.k.a. interscorer reliability)

Test-Retest Reliability

• Test-retest reliability is estimated by comparing respondents' scores on two administrations of a test.
• Test-retest reliability is used to assess the temporal stability of a measure; that is, how consistent respondents' scores are across time.
• The higher the reliability, the less susceptible the scores are to random daily changes in the condition of the test takers or of the testing environment.
• The longer the time interval between administrations, the lower the test-retest reliability will be.
– The concept of test-retest reliability is generally restricted to short-range random changes (the time interval is usually a few weeks) that characterize the test performance itself rather than the entire behavior domain being tested.
– Long-range (i.e., several years) time intervals are typically couched in terms of predictability rather than reliability.
– Test-retest reliability is NOT appropriate for constructs that tend to fluctuate on an hourly, daily, or even weekly basis (e.g., mood).
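As a quick illustration (a sketch with invented data, not from the original slides), test-retest reliability is simply the Pearson correlation between scores from two administrations of the same test:

```python
import numpy as np

# Hypothetical scores for the same eight respondents on two
# administrations of a test given a few weeks apart.
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])

# Test-retest reliability: the correlation between the two administrations.
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability: r = {r_test_retest:.2f}")
```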


Reliability

• How consistent is a measure over repeated applications?
• Consistency is a function of the error in the measure.
• If we view an observation as X = T + E, we can define reliability as the ratio of two variances.

Signal to Noise

• Under the assumption of independence, we define reliability as

  ρ = σt² / (σt² + σe²)
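To make the variance-ratio definition concrete, here is a small simulation sketch (illustrative, with arbitrary numbers): generate latent true scores and independent errors, form X = T + E, and compare the theoretical ratio with the squared correlation between X and T, which equals reliability under the classical model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent true scores and independent measurement errors (arbitrary SDs).
true = rng.normal(loc=50, scale=10, size=n)  # sigma_t = 10
error = rng.normal(loc=0, scale=5, size=n)   # sigma_e = 5
x = true + error                             # observation: X = T + E

# Theoretical reliability: sigma_t^2 / (sigma_t^2 + sigma_e^2) = 100/125 = .80
rho_theory = 10**2 / (10**2 + 5**2)

# Empirical check: reliability equals the squared correlation between X and T.
rho_sim = np.corrcoef(x, true)[0, 1] ** 2
print(f"theoretical: {rho_theory:.3f}, simulated: {rho_sim:.3f}")
```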

Job Analysis of the Student: Development

• Cognitive skills: analysis, innovation, ability to learn
• People skills: cooperation, conflict resolution, & emotional intelligence
• Communication: written and verbal communication skills
• Motivation and commitment

Sources of Unreliability

• Item sampling
• Guessing
• Intending to choose one answer but marking another one
• Misreading a question
• Fatigue factors

Methods of Estimating Reliability

• Test-retest
• Parallel (alternate) forms
• Split-half (must use the Spearman-Brown adjustment)
• Kuder-Richardson (alpha)
• Inter-rater

Problems With Reliability

• Homogeneous groups have lower reliability than heterogeneous groups.
• The longer the test, the higher the reliability.
• Most reliability estimates require that the test be unidimensional.
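A minimal sketch (invented data) of two of these estimates: the split-half correlation with the Spearman-Brown adjustment, r_full = 2r / (1 + r), and coefficient alpha computed from the item variances.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical item-response matrix: 50 respondents x 10 items, scored 0/1.
latent = rng.normal(size=(50, 1))
items = (latent + rng.normal(size=(50, 10)) > 0).astype(int)

# Split-half: correlate total scores on the odd and even items.
odd = items[:, 0::2].sum(axis=1)
even = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]

# Spearman-Brown adjustment: each half is only half as long as the full
# test, so the half-test correlation is stepped up to full length.
r_full = 2 * r_half / (1 + r_half)

# Coefficient alpha: k/(k-1) * (1 - sum of item variances / total-score variance).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

print(f"split-half r = {r_half:.2f}, Spearman-Brown = {r_full:.2f}, alpha = {alpha:.2f}")
```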

Validity

• 1. Whether a test is an adequate measure of the characteristic it is supposed to measure.
• 2. Whether inferences and actions based on the test scores are appropriate.
• As with reliability, validity is not an inherent property of a test.

Establishing Validity

• Content validity: the degree to which the items in a test are a representative sample of the domain of knowledge the test purports to measure.
• Criterion-related validity: the degree to which a test is statistically related to a performance criterion.
– Concurrent validation
– Predictive validation
• Construct validity: the degree to which a test is an accurate measure of the theoretical construct it purports to measure.
– Multitrait-multimethod approach
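As an illustration of criterion-related validity (a sketch with invented numbers), the validity coefficient is the correlation between test scores and a performance criterion, here supervisor ratings collected concurrently from current employees:

```python
import numpy as np

# Hypothetical concurrent-validation data: selection-test scores and
# supervisor performance ratings for ten current employees.
test_scores = np.array([72, 85, 60, 90, 78, 55, 82, 68, 95, 74])
ratings = np.array([3.1, 4.0, 2.8, 4.5, 3.6, 2.5, 3.9, 3.0, 4.7, 3.3])

# Criterion-related validity: the test-criterion correlation.
validity = np.corrcoef(test_scores, ratings)[0, 1]
print(f"criterion-related validity: r = {validity:.2f}")
```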

[Figures: Poor Reliability, Poor Validity; Good Reliability, Poor Validity; Good Reliability, Good Validity]

Performance Appraisal Goals

• Assessment of work performance
• Identification of areas that need improvement
• Accomplishing organizational goals
• Pay raises
• Promotions

Potential Problems

• Single criterion: most jobs require more than one criterion
• Leniency: inflated evaluations
• Halo: one trait influences the entire evaluation
• Similarity effects: we like people who are like us
• Low differentiation: no variability
• Forcing information: making up our minds too soon

Possible Solutions

• Use of multiple criteria
• Focusing on behaviors
• Using multiple evaluators
• Forcing a distribution
• Important issues:
– Training the evaluators
– Raters' motivation

Methods of Performance Appraisals

• Basic rating forms
– Graphic forms
– BARS (behaviorally anchored rating scales)
– BOS (behavioral observation scales)
– Checklists (based on ratings of critical incidents)
– Mixed scales
– 360-degree feedback
• None have shown an overall advantage.

Assessments

• Supervisor's assessment
• Self-assessment: people generally recognize their own strengths and weaknesses, but their self-ratings tend to be a bit inflated.
• Peer assessment: very accurate in predicting career advancement.

Performance Appraisals

• PA systems that have failed in court generally
– were developed without the benefit of a job analysis
– were conducted in the absence of specific instructions to raters
– were trait-oriented rather than behavior-oriented
– did not include a review of the appraisal with the employee
