CHAPTER TEN

Training Evaluation
LEARNING OUTCOMES

• Define training evaluation and the main reasons for conducting evaluations
• Discuss the barriers to evaluation and the factors that affect whether or not an evaluation is conducted
• Describe the different types of evaluations
• Describe the models of training evaluation and the relationship among them
LEARNING OUTCOMES

• Describe the main variables to measure in a training evaluation and how they are measured
• Discuss the different types of designs for training evaluation as well as their requirements, limits, and when they should be used
INSTRUCTIONAL SYSTEMS
DESIGN MODEL

Training evaluation is the third step of the ISD model and consists of two parts:
• The evaluation criteria (what is being measured)
• The evaluation design (how it will be measured)

These concepts are covered in the next two chapters:
• Each has a specific and important role to play in the effective evaluation of training and the completion of the ISD model
TRAINING
EVALUATION

Process to assess the value – the worthiness – of training programs to employees and to organizations
TRAINING
EVALUATION
• Not a single procedure; a continuum of techniques, methods, and measures
• Ranges from simple to elaborate procedures
• The more elaborate the procedure, the more complete the results, yet usually the more costly (time, resources)
• Need to select the procedure based on what makes sense and what can add value within the resources available
WHY A TRAINING
EVALUATION?
• Improve managerial responsibility toward training
• Assist managers in identifying what, and who, should be trained
• Determine the costs and benefits of a program (see the ROI sketch below)
• Determine if a training program has achieved the expected results
• Diagnose the strengths and weaknesses of a program and pinpoint needed improvements
• Justify and reinforce the value of training
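As a rough illustration of the cost–benefit step, the sketch below computes a simple return on investment (ROI) for a training program. The dollar figures and the helper function are hypothetical; in practice the hard part is monetizing the program's results in the first place.

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return ROI as a percentage: net benefit relative to total cost."""
    return (benefits - costs) / costs * 100

# Hypothetical program: $120,000 in estimated benefits
# (e.g., productivity gains) against $80,000 in total costs.
print(f"ROI = {training_roi(120_000, 80_000):.0f}%")  # ROI = 50%
```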
DO WE EVALUATE?

• There has been a steady decline in conducting Level 3 and Level 4 evaluations and in determining ROI
BARRIERS TO
EVALUATION
Barriers fall into two categories:
1. Pragmatic
• Requires specialized knowledge and can be intimidating
• Data collection can be costly and time-consuming
2. Political
• Potential to reveal ineffectiveness of training
TYPES OF
EVALUATION
Evaluations may be distinguished from
each other with respect to:
1. The data gathered and analyzed
2. The fundamental purpose of the evaluation
TYPES OF
EVALUATION
1. The data gathered and analyzed
a. Trainee perceptions, learning, and behaviour
at the conclusion of training
b. Assessing psychological forces that operate
during training
c. Information about the work environment
• Transfer climate and learning culture
TYPES OF
EVALUATION
2. The purpose of the evaluation

a. Formative: Provides data about various aspects of a training program
b. Summative: Provides data about the worthiness or effectiveness of a training program
c. Descriptive: Provides information that describes trainees once they have completed a training program
d. Causal: Provides information to determine whether training caused the post-training behaviours
MODELS OF
EVALUATION
A. Kirkpatrick’s Hierarchical Model
The oldest, best known, and most frequently used model
The Four Levels of Training Evaluation:
– Level 1: Reactions
– Level 2: Learning
– Level 3: Behaviours
– Level 4: Results
– Level 5: Return on investment (ROI); see the sketch below
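To make the hierarchy concrete, here is a minimal sketch that pairs each level with an example measure. The level names come from the slide above; the example measures and the dict structure are illustrative assumptions, not the text's instrument.

```python
# Illustrative mapping of Kirkpatrick's levels (plus ROI) to
# example measures; the measures are hypothetical examples.
evaluation_plan = {
    "Level 1: Reactions": "Post-course satisfaction and utility ratings",
    "Level 2: Learning": "Pre/post knowledge test scores",
    "Level 3: Behaviours": "Supervisor observations on the job",
    "Level 4: Results": "Change in unit sales or error rates",
    "Level 5: ROI": "Monetized benefits relative to program costs",
}

for level, measure in evaluation_plan.items():
    print(f"{level} -> {measure}")
```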
CRITIQUE OF
KIRKPATRICK'S MODEL
• There is general agreement that the five levels are important outcomes to be assessed
• There are some critiques:
  • Doubts about its validity
  • Insufficiently diagnostic
  • It requires all training evaluations to rely on the same variables and outcome measures
MODELS OF
EVALUATION
B. COMA Model
A training evaluation model that involves the
measurement of four types of variables
1. Cognitive
2. Organizational Environment
3. Motivation
4. Attitudes
MODELS OF
EVALUATION
The COMA model improves on Kirkpatrick’s model
in four ways:
1. Transforms the typical reaction measure by incorporating a greater number of measures
2. Useful for formative evaluations
3. The measures are known to be causally related to
training success
4. Defines new variables with greater precision
Note: Relatively new model – too early to draw conclusions
as to its value
MODELS OF
EVALUATION
C. Decision-Based Evaluation Model
A training evaluation model that specifies the
target, focus, and methods of evaluation
MODELS OF
EVALUATION
Decision-Based Evaluation Model
• Goes further than either of the two preceding models:
1. Identifies the target of the evaluation
   – Trainee change, organizational payoff, program improvement
2. Identifies its focus (the variables measured)
3. Suggests methods
4. Generalizes to any evaluation goal
5. Flexibility: guided by the target of the evaluation
MODELS OF
EVALUATION
• As with COMA, the DBE model is recent and will need to be tested more fully
• All three models require specialized knowledge to complete the evaluation; this can limit their use in organizations without this knowledge
• Holton and colleagues' Learning Transfer System Inventory provides a more generic approach
  – See Training Today for more on its use for evaluation
EVALUATION
VARIABLES
• Training evaluation requires that data be collected on the important aspects of training
• Some of these variables have been identified in the three models of evaluation
• A more complete list of variables is presented, with sample questions and formats for measuring each type of variable
EVALUATION
VARIABLES
A. Reactions
B. Learning
C. Behaviour
D. Motivation
E. Self-efficacy
F. Perceived/anticipated support
G. Organizational perceptions
H. Organizational results

See Table 11.2 in the text
VARIABLES

A. Reactions
1. Affective reactions: Measures that assess trainees' likes and dislikes of a training program
2. Utility reactions: Measures that assess the perceived usefulness of a training program (see the sketch below)
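As a small illustration of how reaction data are often handled, the sketch below averages hypothetical 5-point ratings into separate affective and utility scores. The item wordings and the scale are assumptions for illustration, not the text's questionnaire.

```python
# Hypothetical 5-point ratings from one trainee's reaction form.
affective_items = {          # likes/dislikes of the program
    "I enjoyed this training": 4,
    "The facilitator kept me engaged": 5,
}
utility_items = {            # perceived usefulness
    "I can apply this material in my job": 3,
    "The training was worth my time": 4,
}

def mean_score(items: dict[str, int]) -> float:
    """Average the ratings for one reaction scale."""
    return sum(items.values()) / len(items)

print(f"Affective reaction: {mean_score(affective_items):.1f} / 5")
print(f"Utility reaction:   {mean_score(utility_items):.1f} / 5")
```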
VARIABLES

B. Learning
Learning outcomes can be measured by:
1. Declarative learning: Refers to the
acquisition of facts and information, and is
by far the most frequently assessed learning
measure
2. Procedural learning: Refers to the
organization of facts and information into a
smooth behavioural sequence
VARIABLES

C. Behaviour
Behaviours can be measured using three
approaches:
1. Self-reports
2. Observations
3. Production indicators
VARIABLES

D. Motivation
Two types of motivation in the training context:
1. Motivation to learn
2. Motivation to apply the skill on the job (transfer)

E. Self-Efficacy
Beliefs that trainees have about their ability to
perform the behaviours that were taught in a
training program
VARIABLES

F. Perceived and/or Anticipated Support
Two important measures of support:
1. Perceived support: The degree to which the trainee reports receiving support in attempts to transfer the learned skills
2. Anticipated support: The degree to which the trainee expects to be supported in attempts to transfer the learned skills
VARIABLES

G. Organizational Perceptions
Two scales designed to measure perceptions:
1. Transfer climate: Can be assessed via a questionnaire that identifies eight sets of "cues"
2. Continuous learning culture: Can be assessed via the questionnaire presented in the Trainer's Notebook in the text
VARIABLES

G. Organizational Perceptions (cont'd)


Transfer climate cues include:
• Goal cues
• Social cues
• Task and structural cues
• Positive feedback
• Negative feedback
• Punishment
• No feedback
• Self-control
VARIABLES

H. Organizational Results
Results information includes:
1. Hard data: Results measured objectively
(e.g., number of items sold)
2. Soft data: Results assessed through
perceptions and judgments (e.g., attitudes)
3. Return on expectations: Measurement of a
training program’s ability to meet managerial
expectations
DESIGNS IN TRAINING
EVALUATION
The manner in which data collection is organized and how the data will be analyzed
• All data collection designs compare the trained person to something
DESIGNS IN TRAINING
EVALUATION
1. Non-experimental designs: Comparison is made
to a standard and not to another group of
(untrained) people
2. Experimental designs: Trained group is compared to another group that does not receive the training; assignment is random (see the sketch below)
3. Quasi-experimental designs: Trained group is
compared to another group that does not receive
the training; assignment is not random
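To illustrate how data from a pre-post design with a control group might be analyzed, here is a minimal sketch that compares gain scores between the two groups with a Welch t-test. The scores are fabricated for illustration, and a real evaluation would choose a statistical test to match its design.

```python
from scipy import stats

# Hypothetical test scores (pre, post) for each group.
trained_pre  = [62, 55, 70, 58, 66]
trained_post = [78, 72, 85, 70, 80]
control_pre  = [60, 57, 68, 59, 64]
control_post = [63, 58, 70, 61, 66]

# Gain scores: improvement from pre-test to post-test.
gains_trained = [post - pre for pre, post in zip(trained_pre, trained_post)]
gains_control = [post - pre for pre, post in zip(control_pre, control_post)]

# Welch's t-test: did the trained group improve more than the control group?
t_stat, p_value = stats.ttest_ind(gains_trained, gains_control, equal_var=False)
print(f"Mean gain (trained): {sum(gains_trained) / len(gains_trained):.1f}")
print(f"Mean gain (control): {sum(gains_control) / len(gains_control):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```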
DATA COLLECTION
DESIGN

[Figure: A – Single-group post-only design (non-experimental); B – Single-group pre-post design (non-experimental)]
DATA COLLECTION
DESIGN
[Figure: C – Time series design (non-experimental); D – Single-group design with control group]
DATA COLLECTION
DESIGN
[Figure: E – Pre-post design with control group; F – Time series design with control group]
DATA COLLECTION
DESIGN

[Figure: G – Internal referencing strategy (pre-post comparison of training-relevant vs. training-irrelevant items); see the sketch below]
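As a rough sketch of the internal referencing strategy's logic, the code below compares pre-post improvement on items the training covered against improvement on items it did not; larger gains on relevant items suggest the training, not some outside factor, produced the change. The scores are fabricated for illustration.

```python
# Hypothetical mean test scores for one trained group.
relevant = {"pre": 55.0, "post": 82.0}    # items covered by the training
irrelevant = {"pre": 54.0, "post": 58.0}  # items the training did not cover

gain_relevant = relevant["post"] - relevant["pre"]        # 27.0
gain_irrelevant = irrelevant["post"] - irrelevant["pre"]  # 4.0

# The internal referencing strategy attributes the *difference* in
# gains to the training itself: both item sets share any outside
# influences (retesting, maturation), but only relevant items
# share the training content.
print(f"Gain on relevant items:   {gain_relevant:.1f}")
print(f"Gain on irrelevant items: {gain_irrelevant:.1f}")
print(f"Training effect estimate: {gain_relevant - gain_irrelevant:.1f}")
```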


SUMMARY

• Discussed the main purposes for evaluating training programs as well as the barriers
• Presented, critiqued, and contrasted three models of training evaluation (Kirkpatrick, COMA, and DBE)
• Recognized that the Kirkpatrick model is the most frequently used, yet has limitations
• Discussed the variables required for an evaluation as well as the methods and techniques required to measure them
• Presented the main types of data collection designs
• Discussed the factors influencing the choice of data collection design
