
Assessment Center Practices in South Africa

Diana E. Krause*, Robert J. Rossberger*, Kim Dowdeswell**,


Nadene Venter** and Tina Joubert**
*Alpen-Adria University Klagenfurt, Human Resource Management and Organizational Behavior, University Street
65-67, 9020 Klagenfurt, Austria. [email protected]
**SHL South Africa, New Muckleneuk, South Africa
Despite the popularity of assessment centers (AC) in South Africa, no recent study exists that describes AC practices in that region. Given this research gap, we conducted a survey study that analyzes the development, execution, and evaluation of ACs in N = 43 South African organizations. We report findings regarding AC design; job analysis and job requirements assessed; target groups and positions of the participants after the AC; number and kind of exercises used; additional diagnostic methods used; assessors and characteristics considered in the constitution of the assessor pool; observational systems and rotation plan; characteristics, contents, and methods of assessor training; types of information provided to participants; data integration process; use of self- and peer-ratings; characteristics of the feedback process; and features after the AC. Finally, we compare the results with professional suggestions to identify pros and cons in current South African AC practices and offer suggestions for improvement.
1. Introduction

"We know it well that none of us acting alone can achieve success" (Mandela, 1994). This statement is not only true for political, economic, and social circumstances but also with respect to assessment centers (AC). In an AC, candidates' abilities to perform successfully in a team and to communicate adequately with others are assessed. Among other competences, these skills of the job applicants are crucial for the future job performance of the candidate and consequently the success of the organization. A recent meta-analysis (Hermelin, Lievens, & Robertson, 2007) that considered 27 validity coefficients from 26 previous studies has shown that the predictive validity of an AC is r = .28 (using a relatively conservative method of estimation). AC results predict candidates' future job performance, training performance, salary, promotion, etc., in several occupations, sectors, and countries.
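To give a concrete sense of the size of this coefficient, consider an illustrative calculation (ours, not part of the meta-analysis): with standardized predictor and criterion, the expected job performance of a candidate who scores one standard deviation above the mean in the AC is

E(z_performance | z_AC = 1) = r · z_AC = .28 · 1 = .28,

that is, about a quarter of a standard deviation above the average performance level.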
AC programs continue to spread to more countries around the world (Thornton & Rupp, 2005). In recent years, ACs have been increasingly applied in international settings (Lievens & Thornton, 2005, pp. 244–245). One of the challenges faced by organizations operating in an international context is to understand cross-cultural variability in AC practices. It is very plausible that certain AC features that are acceptable and feasible in some countries (e.g., United States, United Kingdom, Switzerland) may not be acceptable and feasible in others (e.g., Indonesia, Philippines, South Africa). For this reason, it is important to increase our knowledge of AC practices in different countries, such as South Africa.
The AC program was introduced into a South African insurance company (the Old Mutual group) by Bill Byham in 1973. During the next few years, the Old Mutual group implemented developmental centers in its offshore companies in Zimbabwe, England, Thailand, Malaysia, and Hong Kong. One year later, the Edgars group was a pioneer in developing and running ACs in South Africa. In 1975, another South African organization, Transport Services, found out how companies in the United States such as AT&T and IBM identify their potential. During the following years, Transport Services assessed 670 managers and expanded the AC as a tool for selection and developmental purposes (Meiring, 2008). Other South African organizations (e.g., Stellenbosch Farmers' Winery, Department of Post and Telecommunication Services, Naspers, South African Army, and South African Police) soon followed in the development, execution, and validation of the AC.
While AC practices in South Africa have changed dramatically during the last three decades, to date no empirical study exists that describes AC practices in South Africa or the way in which AC practices there have changed over time. Yet, there are studies that describe AC practices in other countries, such as the United States (Spychalski, Quiñones, Gaugler, & Pohley, 1997) and German-speaking regions (Hoeft & Obermann, 2009; Krause & Gebert, 2003). Two notable exceptions to the trend to analyze AC practices at a national level are the worldwide study on AC practices conducted by Kudisch, Avis, Thibodeaux, and Fallon (2001) and the AC study by Krause and Thornton (2009). However, Kudisch et al. (2001) collapsed data across countries instead of reporting findings for specific regions, so that systematic differences between the countries are concealed. The most recent study on AC practices (Krause & Thornton, 2009) compares AC features in North American and Western European organizations. Previous studies conducted in the United States, Canada, and German-speaking regions have shown the dynamic and evolving nature of AC practices (Eurich, Krause, Cigularov, & Thornton, 2009; Krause & Thornton, 2009). Compared with ACs designed 20 years ago, current ACs are conducted within a shorter period of time, fewer exercises are used, job analyses are conducted with a great deal of methodological effort, a shift toward developmental assessment programs is typical, more appropriate dimensions are used, systematic revisions of the AC are made frequently, and the AC is frequently matched with the division's own needs.
The current study is the first and most comprehensive description of AC practices in South Africa to date. The country-specific approach is highly important because findings on AC applications from other countries cannot be generalized to South Africa, as the economic, social, political, and educational circumstances vary from one country to the next (Herriot & Anderson, 1997; Krause, 2010; Newell & Transley, 2001; Ryan, Wiechmann, & Hemingway, 2003) and, consequently, AC practices are highly heterogeneous not only within one country but also between countries (for differences in personnel selection practices in general, see Ryan, McFarland, Baron, & Page, 1999).
With respect to the differences in personnel selection between countries, a model that explains cross-cultural differences in general was proposed by Klehe (2004). The model distinguishes between causes, constituents, control forms, content, and contextual factors of personnel selection decisions. The causes and mechanisms lead to five strategic types of personnel selection decisions: acquiescence, compromise, avoidance, defiance, and manipulation. With respect to the causes of personnel selection, the model differentiates an economic fit and a social fit. Regarding economic fitness, a long-term and a short-term perspective are separated: two perspectives that are partially incompatible, are judged differently by scientists and practitioners, and require different control mechanisms. Depending on the perspective someone takes, that is, whether the primary goal is to maximize short-term profit or to invest budget, personnel, and time in a valid and reliable personnel selection system, the resulting personnel selection strategy will vary. Besides economic conditions, Klehe (2004) underlines social fitness, which includes perceived legality and the candidates' perceived acceptance of the personnel selection method. Subject to the dominant form of control in this social-legal structure, the resulting kind of personnel selection procedure will also vary. In addition, contextual factors as well as uncertainty and interdependencies need to be considered because these factors have an impact on the diffusion of a personnel selection system. Overall, this model can also be used as a theoretical basis to explain differences in AC practices between countries.
The present study aims to advance the AC literature by addressing the above-mentioned limitations of previous research. First, we portray a broad spectrum of AC practices with respect to all stages of the AC process: the analysis, the design, the execution, and the evaluation (Schlebusch & Roodt, 2008, p. 16). Second, we compare South African AC practices with the practices in other countries based on the aforementioned previous studies. Third, we identify pros and cons in South African AC practices. For this purpose we used three kinds of information: South African guidelines for AC procedures (Schlebusch & Roodt, 2008, appendix A), suggestions for cross-cultural AC applications (Task Force on Assessment Center Operations, 2009), and scholarly papers that indicate aspects relevant to increasing the predictive and construct validity evidence of an AC.
2. Method
Data were collected via an online survey completed by Human Resource (HR) managers of N = 43 South African organizations. The data collection took place from August to September 2009. The questionnaire was developed on the basis of previous surveys (Krause & Gebert, 2003; Krause & Thornton, 2009; Kudisch et al., 2001; Spychalski et al., 1997). A draft of the questionnaire was then evaluated by AC scholars and practitioners from South Africa, Europe, and the United States. The final version of the questionnaire contained N = 62 AC features, presented in multiple-choice and open-ended format. Organizations were selected by sampling by economic sector, predominantly in consulting as well as the banking, mining, and public sectors. While SHL's South African client list was used as a starting point to compile the master list, several colleagues knowledgeable in AC usage and consulting to a variety of organizations in different industries nominated individuals to be included in the sample, to ensure coverage, as far as possible, of applicable participants in the South African context. SHL South Africa contacted the organizations via email. Letters of invitation and follow-up reminders were sent by SHL. While the survey was presented anonymously to encourage completion, to verify the identity of the respondents they were offered the opportunity at the end of the survey to enter their email address in order to receive a summary of the results. In total, 60.5% (26 of 43) of the respondents did so, and all of the email addresses so provided were included on the original master list. The response rate was 38.6%, which is relatively high given the length of the questionnaire.
Respondents were asked to describe the development, execution, and evaluation of their AC. The respondents worked in their companies as HR managers (e.g., 15% head of HR department, 3% chief department head, 18% division manager) or as personnel specialists (64%). Their functions in the AC included developer (16%), moderator (26%), and assessor (63%) (multiple responses were possible). The respondents indicated that the AC they described takes place in the whole company (59%) or in their specific division (41%).
The sample was heterogeneous in terms of economic sector (banking and insurance: 16%; consulting: 16%; manufacturing: 16%; automobiles: 8%; government: 8%; telecommunication: 8%; services: 6%; trade: 3%; heavy industry: 3%; others: 16%). We tested whether sectors diverge in terms of AC use, but no significant sectoral differences in the development, operation, and evaluation of ACs were found. In this sense, there is no reason to assume that the specific composition of the sample has distorted the results of our study.
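To indicate the kind of analysis behind such a check, the following is a minimal sketch of a sector-by-feature independence test in Python; the cell counts are hypothetical placeholders, not the survey data, and scipy is assumed to be available.

# Chi-square test of independence between economic sector and one
# binary AC feature (e.g., "job analysis conducted before the AC").
# The counts below are hypothetical placeholders, not the survey data.
from scipy.stats import chi2_contingency

observed = [  # rows: sectors; columns: feature present / absent
    [6, 1],   # banking and insurance (hypothetical)
    [5, 2],   # consulting (hypothetical)
    [6, 1],   # manufacturing (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A p-value above .05 would mirror the finding of no significant
# sectoral differences; with expected cell counts this small, an
# exact test would normally be preferable.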
The distribution of the organizations regarding their size (measured by the number of employees in the whole corporation) is: up to 500 employees: 35%; 501–2,000 employees: 19%; 2,001–5,000 employees: 19%; 5,001–10,000 employees: 8%; 10,001–20,000 employees: 0%; more than 20,000 employees: 19%. The assumption that the administration of AC features covaries with organizational size was tested. We found that large organizations did indeed differ significantly from small ones in the way they conduct the individual measures. (Additional information about the measures that covary significantly with the size of an organization is available upon request.) Large organizations generally have a larger budget for personnel purposes, enabling them to invest more in quality ACs than smaller organizations can. We also note that some of the large organizations operate multinationally. In principle, this makes it possible that an AC was developed in one country (e.g., United States) and then transferred to South Africa. However, 67% of the respondents indicated that the country of origin and the country of operation were identical. Only in one third of the cases was the AC developed elsewhere and executed in South Africa.
3. Results
Results of the present study are presented in the following categories: (a) AC design; (b) job analysis methods and job requirements assessed; (c) target groups and positions of the participants after the AC; (d) number and kind of exercises used; (e) additional diagnostic methods used; (f) assessors and characteristics considered in the constitution of the assessor pool; (g) observational systems and rotation plan; (h) characteristics, contents, and methods of assessor training; (i) types of information provided to participants; (j) data integration process and use of self- and peer-ratings; (k) characteristics of the feedback process; and (l) features after the AC. The percentages for each AC practice are summarized in Table 1 and will not be repeated in the text.
3.1. AC design
Professional experts state that the AC should be designed to achieve a stated objective. As shown in Table 1, two thirds of the organizations in South Africa use the AC for both goals: personnel selection as well as personnel development. Only a few organizations state that the main objective of their AC is personnel development. This finding contradicts previous results on AC practices in other countries (Krause & Gebert, 2003; Krause & Thornton, 2009), in which an increasing trend toward developmental centers has been observed during the last few years. In a developmental center, candidates' learning and development over time plays a dominant role. Of those South African organizations that use the AC for personnel development, more than half indicate that the main subgoal is to diagnose personnel development and training needs, followed by HR planning/succession planning and promoting to the next level or identifying potential.
With respect to variants of assessee selection, supervisor nominations are common, but self-nomination and personnel ratings are not. This finding is also not in line with practices for participant selection in organizations in other countries (i.e., Western Europe and North America), in which self-nomination plays a more dominant role than in organizations in South Africa (Krause & Thornton, 2009). However, in more than half of the organizations in South Africa it is typical that external experts design the AC for the particular organization or that the AC development is conducted as teamwork.
Regarding the duration of the AC, we found that in 82% of the organizations the ACs last up to 1 day. Compared with previous studies by Spychalski et al. (1997) (2–3 days), Krause and Gebert (2003) (up to 3 days), and Krause and Thornton (2009) (1–2 days), our finding reflects that ACs in South Africa are leaner than those in other countries. Given the need for lean assessments (i.e., organizations have to make valid personnel decisions in a timely fashion), this result is understandable. Nevertheless, as Lievens and Thornton (2005) pointed out, these cutbacks reduce the accuracy and effectiveness of the AC.
Table 1. Assessment center (AC) practices in South Africa (entries are percentages of organizations; N = 43)

Main objectives of the AC
  Personnel selection: 22
  Personnel development: 13
  Personnel selection and development: 65
If main objective is personnel development, most important sub-goals are
  Promoting to the next level or identifying potential: 37
  Diagnosis of personnel development or training needs: 51
  HR planning/succession planning: 44
Basis for assessee selection
  Self-nomination: 19
  Supervisor recommendation: 54
  Personnel ratings: 26
Development of the AC
  Internal experts: 22
  External experts: 53
  Teamwork: 19
  Other: 6
Length of average AC procedure
  Less than half a day: 19
  1 day: 63
  2 days: 9
  3 days: 6
  4 days: 3
  More than 4 days: –
Fit of AC to division
  Use of standard AC: 41
  Adaptation of standard AC to the given division: 50
  Development entirely according to the division's own needs: 9
Frequency of systematic improvements of the AC procedure
  Every 7–10 years: 3
  Every 4–6 years: 19
  Every 2–3 years: 56
  Yearly: 22
Job analysis conducted before the AC: 84
Kind of job analyses
  Job description: 47
  Interview with job incumbents: 23
  Questionnaire to job incumbents: 16
  Interview with supervisor: 26
  Questionnaire to supervisor: 16
  Critical incident technique: 2
  Observation of job incumbents: 5
  Workshop or teamwork: 9
  Competency modeling: 44
  Other: 7
Kind of job requirements (dimensions) assessed
  Communication: 63
  Consideration/awareness of others: 33
  Drive: 33
  Influencing others: 58
  Organizing and planning: 67
  Problem solving: 67
Number of observed job requirements/dimensions per exercise
  1 characteristic: –
  2–3 characteristics: 33
  4–5 characteristics: 54
  6–7 characteristics: 10
  8–9 characteristics: 3
  >9 characteristics: –
Number of observed job requirements/dimensions per AC
  <3 characteristics: 3
  4–5 characteristics: 17
  6–7 characteristics: 37
  8–10 characteristics: 33
  11–15 characteristics: 3
  >15 characteristics: 7
Target groups of the AC
  Internal employees: 23
  External applicants: 7
  Both internal and external candidates: 70
Average number of participants per AC
  2–4: 43
  5–7: 30
  8–10: 23
  11–13: –
  More than 13: 3
Number of participants assessed during the last period (6 months/1 year)
  Up to 100: 94
  101–500: 3
  501–1,000: –
  More than 1,000: 3
Groups the participants belong to
  Internal managers (first line): 56
  External managers (second line): 42
  Internal leadership trainees: 21
  External leadership trainees: 7
  Entry level: 14
Position of placement for participants after the AC
  Trainee: –
  First-line manager: 17
  Second-line manager: 30
  Third-line manager: 23
  Other: 30
Number of exercises used in one AC
  <3 exercises: 43
  4–5 exercises: 46
  6–7 exercises: 11
  8–9 exercises: –
  10–11 exercises: –
  >11 exercises: –
Linkage between job requirements and exercises documented in a competency by exercise matrix: 93
Pretest of exercises before they are implemented: 50
Kind of exercises/simulations
  In-basket: 54
  Presentation: 51
  Background interview: 16
  Situational interview: 23
  Role playing: 49
  Case study: 26
  Fact finding: 16
  Planning exercises: 16
  Sociometric devices: 2
  Group discussion: 37
  Other: 14
If one-on-one talks are simulated, who plays the role of the other person?
  Another participant: –
  An observer: 17
  A role player: 75
  A professionally trained actor: 4
  Other: 4
Other diagnostic methods used
  None: 2
  Biographical questionnaire: 14
  Intelligence tests (GMA): 7
  Personality tests: 54
  Skills/ability tests: 49
  Knowledge tests: 5
  Work sample tests: 7
  Graphology: –
Ratio of participants to observers
  1:1: 32
  1:2: 29
  1:3: 29
  1:4 or more: 10
Groups represented in the observer pool
  Line managers: 16
  Internal human resource experts: 23
  External human resource experts: 9
  Labor union: 2
  A participant's direct supervisor: 4
  Company officer for women's affairs: –
  Internal psychologists: 28
  External psychologists: 42
Criteria considered in selecting the assessor pool
  Race: 9
  Ethnicity: 7
  Age: 2
  Gender: 9
  Organizational level: 9
  Functional work area: 28
  Educational level: 28
  Other: 33
Observational systems I
  None: –
  Qualitative aids (e.g., handwritten notes of the participants' behavior): 51
  Quantitative aids, such as certain forms/systems of observation: 63
Observational systems II: quantitative observational systems used
  Behaviorally anchored rating scales (BARS): 49
  Behavioral checklists: 47
  Realistic behavioral descriptions: 26
  Computer-aided profiles: 19
  Graphic rating scales: 9
Rotation plan used: 46
Duration of observer training
  Less than half a day: 11
  1 day: 25
  2 days: 21
  3 days: 7
  4 days: 7
  More than 4 days: 4
  Observer training is not conducted: 25
Methods of observer training
  Lectures: 28
  Discussion: 47
  Video demonstration/camera: 16
  Observing other assessors: 33
  Observation of practice candidates: 28
  Other: 2
Contents of observer training
  Knowledge of the exercises: 47
  Knowledge of the target job: 23
  Knowledge of the job requirements (definitions, demarcations): 30
  Knowledge of and sensitization to errors of judgment: 40
  Professional behavior with the participants during the AC: 47
  Method of behavioral observation, including use of behavioral systems: 47
  Ability to observe, record, and classify the participants' behavior in job requirements: 49
  Consistency in role playing: 37
  Ability to give accurate oral or written feedback: 37
  Limits of the AC method: 40
  Forms of reciprocal influence in the data integration process: 19
  Types of forming judgments (statistical, nonstatistical): 26
Evaluation of the observational and rating skills of each observer after the observer training: 75
Types of information provided to participants before the AC
  How individuals are selected for participation: 26
  Tips for preparing: 21
  Objective of the AC: 65
  Kinds of exercises: 37
  The storage and use of the data: 21
  Staff and roles of observers: 21
  The results of the AC: 30
  How feedback will be given: 60
Job requirements/dimensions assessed in the individual exercises are explicitly communicated to the participants before the exercise starts: 46
Data integration process
  Assessor consensus (OAR): 32
  Statistical aggregation: 7
  Combination of OAR and statistical aggregation: 61
  Voting: –
Observers complete report before integration process begins: 75
Poor results in some exercises can be compensated by good results in other exercises: 86
Poor results regarding certain characteristics can be compensated by good results regarding other characteristics: 54
Use of peer-ratings: 18
Use of self-ratings: 29
Kind of feedback
  Oral: 18
  Written: 3
  Oral and written: 79
When do participants receive feedback?
  Directly upon completion: 7
  Up to 1 week after the AC: 36
  More than 1 week after the AC: 57
Who gives the feedback?
  Observer: 26
  Direct supervisor: –
  Employee of personnel department: 12
  External expert: 30
  Other: 12
Length of feedback (minutes)
  Less than 15: 4
  15–30: –
  30–45: 11
  45–60: 44
  60–90: 37
  More than 90: 4
In what form is the feedback?
  On specific dimensions: 46
  On specific exercises: 14
  OAR (overall assessment rating): 29
  Other: 11
Who is notified of the AC results of the participants?
  Participant: 42
  Head of department: 33
  Direct supervisor: 47
  Personnel file: 30
  Other: 12
Possibility of reassessment for participants: 46
Systematic evaluation of the AC: 71
If an evaluation exists
  Written documents exist describing the evaluation: 40
  Evaluation conducted by
    Developer: 12
    External expert: 21
    Internal expert: 19
    Teamwork: 12
    Other: 5
Criteria evaluated and results of the evaluation
  Evaluation of objectivity (interrater agreement): 75
  Evaluation of reliability: 70
  Evaluation of predictive validity: 75
  Evaluation of concurrent validity: 45
  Evaluation of construct validity: 75
  Content validity by expert judgment: 65
Another practice is that most South African companies do not match their ACs to their division's own needs. This trend is reflected in two findings. First, more than 90% of the organizations use a standard AC or an adaptation of a standard AC to the given division. Only a few organizations develop the AC entirely according to the division's own needs. With respect to the validity of the AC program, this practice needs to be seen in terms of the dimensions being assessed (see Section 3.2). Whether a division-specific or an organization-specific AC is more valid depends on the dimensions being used. For example, communication is a job requirement that is not division-specific, whereas problem solving is a division-specific job requirement. Second, systematic revisions of the ACs are made every second to third year by more than half of the organizations and annually by only a few organizations in South Africa (see Table 1). Compared with North America and Western Europe (see Krause & Thornton, 2009), South African organizations revise their ACs less frequently. This result reflects a need for improvement: Revisions should be made more frequently than is current practice. This improvement would enhance the fit between the AC and the needs of the division, and consequently the effectiveness of the overall AC. This could result in the following advantage for HR departments: Especially in times of economic crisis, HR departments have to legitimize their existence. If potential errors in the personnel decision-making process were reduced due to a better fit between the AC and the division and an increased frequency of revisions of the AC, HR departments could better substantiate the results of their work.
3.2. Job analysis methods and job requirements assessed

Professional recommendations in South Africa and elsewhere (Schlebusch & Roodt, 2008; Task Force on Assessment Center Operations, 2009) indicate that a job analysis should be conducted before the AC. In fact, nearly all of the organizations in South Africa report doing so (see Table 1), which is a positive sign in South African AC practices. Another positive sign is that a wide variety of job analysis techniques is used (see Table 1). This finding is encouraging because many argue that no single method will suffice (Thornton & Rupp, 2005). Still, the absolute frequency of each job analysis technique is lower than the frequency with which each technique is used in North America and in countries in Western Europe (Krause & Thornton, 2009). In South Africa, the most frequently used job analysis techniques are job descriptions and competency modeling. The wide use of competency modeling shows that job analyses are conducted carefully and in great detail. On the other hand, a method that is relatively unused is the critical incident technique (Flanagan, 1954). This method facilitates the identification of critical behaviors related to the target position. The resulting information makes it possible to distinguish between successful and unsuccessful job candidates. For both managers and employees, the critical incident technique would be well suited to clustering the job requirements related to a specific position.
Regarding the kind of job requirements assessed, we drew on the results of two meta-analyses (Arthur, Day, McNelly, & Edens, 2003; Bowler & Woehr, 2006). These two studies found six construct- and criterion-valid dimensions: communication, consideration/awareness of others, drive, influencing others, organizing and planning, and problem solving. In the present study, we found that two thirds of the South African organizations assess communication, organizing and planning, problem solving, and influencing others (see Table 1). These four dimensions were among the most popular in Kudisch et al.'s (2001) sample as well as in Krause and Thornton's sample. These four job requirements accounted for 20% of the variance in performance in the meta-analysis by Arthur, Day, McNelly, and Edens (2003). In the recent meta-analysis by Dilchert and Ones (2009), the relevance of these dimensions for job performance was supported: The best dimensional predictor of job performance was problem solving, followed by influencing others, organizing and planning, and communication skills. Given that, it should not be too surprising that these four dimensions were also predictive of a work-related criterion (salary) in a recent study by Lievens, Dilchert, and Ones (2009). Therefore, we can conclude that the most popular dimensions assessed in South Africa are also the ones with the most predictive validity evidence.
With respect to the number of job requirements assessed, 80% of the organizations assess more than five dimensions per AC, and more than two thirds assess up to nine dimensions per exercise. While this practice is reflected in the Guidelines for Assessment and Development Centres in South Africa, which allow for typically no more than 10 dimensions per AC and five to seven per exercise (Assessment Centre Study Group, 2007), compared with other countries (Krause & Gebert, 2003; Krause & Thornton, 2009; Spychalski et al., 1997), organizations in South Africa assess more job requirements per AC and per exercise. The assessment of more than five dimensions per AC increases the likelihood that the dimensions are not distinguishable and, consequently, that the assessors cannot differentiate among the behavioral categories. Several studies have shown that an AC's construct validity decreases as the number of assessed dimensions increases (Bowler & Woehr, 2006). In conclusion, results from the present study illustrate that current ACs in South Africa could be improved by using fewer job requirements.
3.3. Target groups and positions of the participants after the AC

It is most common (see Table 1) to conduct the AC for internal and external candidates. Forty-three percent of the organizations assess two to four candidates per AC; 30% assess five to seven candidates per AC; and 23% assess eight to 10 candidates per AC. As shown, the AC is conducted for candidates at all organizational levels (see Table 1). Almost all organizations assess up to 100 candidates within a period of 6 months to 1 year. It is most typical to assess internal and external first- and second-line managers. The AC program is used less frequently for internal and external trainees or entry-level staff. After the AC, the candidates become first-, second-, or third-line managers.
3.4. Number and kind of exercises used

South African organizations use a wide variety of exercises (see Table 1). However, the absolute number of exercises used in South Africa is lower compared with other countries in North America and Western Europe (Krause & Thornton, 2009). In line with the trend toward leaner AC programs, nearly half of the organizations use fewer than three exercises per AC. The other half uses four to five exercises or more. Overall, the number of exercises used is in need of improvement: An AC's predictive validity evidence increases as the number of exercises increases (Gaugler, Rosenthal, Thornton, & Benson, 1987).
A positive sign in current South African AC practices is that in nearly all organizations the linkages between the assessed job requirements and the exercises are documented in a competency by exercise matrix (see Table 1). However, counter to suggestions (Task Force on Assessment Center Operations, 2009), only half of the organizations in South Africa pretest the exercises before implementation. Although this is understandable given the cost involved, organizations in South Africa should invest more time, money, and personnel in pilot tests of exercises to maximize the validity of the AC program.
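To illustrate what this documentation amounts to in practice, below is a minimal sketch of a competency by exercise matrix as a simple data structure; the dimension and exercise names are illustrative assumptions, not those of any surveyed organization.

# Competency by exercise matrix: rows are assessed dimensions, columns
# are exercises; a 1 means the exercise is designed to elicit behavior
# on that dimension. All names are illustrative.
matrix = {
    "communication":           {"in-basket": 0, "presentation": 1,
                                "role play": 1, "group discussion": 1},
    "organizing and planning": {"in-basket": 1, "presentation": 1,
                                "role play": 0, "group discussion": 0},
    "problem solving":         {"in-basket": 1, "presentation": 0,
                                "role play": 0, "group discussion": 1},
    "influencing others":      {"in-basket": 0, "presentation": 0,
                                "role play": 1, "group discussion": 1},
}

# A common design check: each dimension should be observable in at
# least two exercises, so that no rating hinges on a single task.
for dimension, row in matrix.items():
    n = sum(row.values())
    status = "ok" if n >= 2 else "UNDER-COVERED"
    print(f"{dimension:<24} measured in {n} exercises: {status}")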
The most frequently used exercises in South Africa (see Table 1) are in-basket exercises, presentations, and role playing, followed by group discussions. These findings are in line with the most frequently used exercises in the United States and Canada (Krause & Thornton, 2009). In these countries, in-baskets, presentations, and role playing are also very popular. These results are similar to the findings of Krause and Gebert (2003), who found for German-speaking regions that presentations and group discussions were the most frequently used exercises. The frequent use of these and not other exercises can be explained in terms of the AC's social acceptance. For example, organizations in many countries prefer exercises that demonstrate the candidates' ability to deal with complex tasks. These kinds of exercises are presumably perceived to be more activity specific than other kinds of exercises. The use of presentations and role playing is consistent with Thornton and Rupp's (2005) argument that situational exercises are still the mainstay of ACs (for details regarding task-based ACs, see Jackson, Stillman, & Englert, 2010). It might be that the popularity of presentations and role playing has to do with the increasing people-focused demands of the workplace.
In terms of role playing, we were interested in the question of who plays the other person. As shown (see Table 1), in nearly all cases a role player or an observer plays the role of the other person when one-on-one talks are simulated. Although it would increase the costs involved in the AC process, we suggest that a professionally trained actor should play the role of the other person in one-on-one simulations because it would increase the objectivity of the exercise. Conversely, an AC's construct validity decreases if an assessor is involved in one-on-one talks (Thornton & Mueller-Hanson, 2004).
3.5. Additional diagnostic methods used

In addition to the behavioral exercises, only a minority of organizations use at least one other assessment method. Only half of the organizations include a personality test or a skills/ability test within the context of the AC. It is not very common to include other diagnostic methods in the AC, such as biographical questionnaires, work sample tests, intelligence (general mental ability [GMA]) tests, or knowledge tests. These results parallel those of previous studies in North America (Krause & Thornton, 2009) as well as in Western Europe (Krause & Gebert, 2003; Krause & Thornton, 2009). The rare use of testing procedures such as biographical questionnaires, work sample tests, and intelligence tests can be explained by the fact that they are not always well accepted by HR experts. The reluctance to use tests such as knowledge and intelligence tests in South Africa as part of the AC program is particularly strong because of racial subgroup differences: findings of large mean differences across racial and ethnic groups make validation more imperative and more difficult. Furthermore, the use of tests within the context of the AC itself is usually limited because of an interest in focusing on overt behavior. As a whole, intelligence tests and knowledge tests are used by a minority of South African organizations as part of the AC. Nonetheless, there is empirical evidence supporting higher predictive validity when ACs are combined with cognitive ability tests (Dayan, Fox, & Kasten, 2008; Dayan, Kasten, & Fox, 2002; Dilchert & Ones, 2009; Krause, Kersting, Heggestad, & Thornton, 2006; Lievens, Harris, Van Keer, & Bisqueret, 2003; Meriac, Hoffman, Woehr, & Fleisher, 2008). Furthermore, work sample tests, which have a high predictive validity (r = .54; see Schmidt & Hunter, 1998) and are evaluated favorably by candidates (Anderson, Salgado, & Huelsheger, 2010), are also used rarely by most South African organizations. Given this state of the art, we encourage South African organizations to reconsider the integration of at least one additional diagnostic method within the context of the AC program. This practice could be beneficial for the predictive validity evidence of the overall AC program.
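As a sketch of how such incremental validity is commonly quantified, the following Python example compares the variance in job performance explained by AC ratings alone with that explained by AC ratings plus a GMA test; the data are simulated for illustration and do not reproduce any of the cited analyses.

# Incremental validity sketch: R^2 gain from adding a GMA test to AC
# ratings when predicting job performance. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 300
gma = rng.normal(size=n)                          # cognitive ability test
ac = 0.5 * gma + rng.normal(scale=0.9, size=n)    # AC rating, correlated with GMA
perf = 0.3 * ac + 0.3 * gma + rng.normal(size=n)  # later job performance

def r_squared(predictors, criterion):
    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(len(criterion)), *predictors])
    beta, *_ = np.linalg.lstsq(X, criterion, rcond=None)
    residuals = criterion - X @ beta
    return 1 - residuals.var() / criterion.var()

r2_ac = r_squared([ac], perf)
r2_both = r_squared([ac, gma], perf)
print(f"R^2 (AC only)   = {r2_ac:.3f}")
print(f"R^2 (AC + GMA)  = {r2_both:.3f}")
print(f"incremental R^2 = {r2_both - r2_ac:.3f}")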
3.6. Assessors and characteristics considered in the constitution of the assessor pool

With regard to the ratio of participants to assessors, we found that the most typical ratio is 1:2, which is in line with professional recommendations and with practices in other countries (Hoeft & Obermann, 2009; Krause & Gebert, 2003; Krause & Thornton, 2009; Spychalski et al., 1997). Two other aspects of AC programs in South Africa are of interest, namely, which groups are represented in the observer pool and which criteria are considered in the constitution of the observer pool. Consistent with ACs in other countries (Krause & Gebert, 2003; Spychalski et al., 1997), the assessor pool in South Africa consists of various functional groups, creating a broad composition for judging the assessees, which is a positive sign in current AC practices in South Africa. Assessors are, to a large extent, HR professionals. In comparison with other countries, line managers serve significantly less often as assessors than in North America or Western Europe (Krause & Thornton, 2009). The lower integration of line managers as assessors can be interpreted in the context of specific labor legislation: Due to the South African Employment Equity Act's (no. 55 of 1998) prohibition of unfair discrimination in employment practices (including assessments), organizations are typically fairly conscious of the need to be able to defend the legality of their actions. To facilitate this process, guidance can be drawn from best-practice publications such as the Assessment Centre Study Group's Guidelines for Assessment and Development Centres in South Africa (2007), which recommend as a minimum qualification for an assessor an honors or master's degree in behavioral science (i.e., Industrial and Organisational Psychology, or HR Management). In this sense, personnel decisions in South Africa are less strongly legitimized by hierarchy than in North America or Western Europe. However, research has documented that the integration of line managers into the assessor pool increases an AC's construct validity (Lievens, 2002). It is also shown that one third of the South African organizations use internal psychologists and nearly half of the organizations use external psychologists as assessors. There is evidence that when psychologists serve as assessors, the predictive validity and the construct validity of an AC rise (Gaugler et al., 1987; Lievens, 2002; Sagie & Magnezy, 1997). In this respect, South African organizations might consider the type of assessor as an important moderator variable of an AC's validity.
With respect to the observer pool, the Task Force on AC Operations (2009) offered suggestions on which criteria need to be considered in the constitution of the observer pool. As shown, only a few organizations in South Africa take these criteria seriously. Educational level and functional work area are considered by only one third of the organizations. By contrast, only a very small minority of organizations appear to select the observer pool with an eye toward organizational level, gender, race, ethnicity, and age. It goes without saying that these criteria are extremely important and influence an AC's predictive validity evidence. In this respect, most of the current assessor pools in South Africa seem to be imbalanced in terms of those important criteria, a fact that might be a dangerous strategy for organizations in terms of an AC's accuracy. Overall, there is reason to assume that organizations in South Africa should improve their practices when it comes to the criteria considered in the constitution of the assessor pool.
3.7. Observational systems and rotation plan

With respect to the kind of observational systems used (i.e., quantitative and qualitative aids), we found that quantitative aids and qualitative systems are used to a similar degree. In other countries (see Krause & Thornton, 2009), quantitative aids are more frequently used than qualitative aids. Although both types of observational systems have certain advantages and disadvantages (see Hennessy, Mabey, & Warr, 1998), there is empirical evidence illustrating that quantitative aids (e.g., dimension ratings, exercise ratings, overall assessment ratings) lead to higher accuracy in prediction with reduced time and costs as compared with qualitative observational systems (Lance, Lambert, Gewin, Lievens, & Conway, 2004).

Of those South African organizations using quantitative aids, the most frequently used forms are behaviorally anchored rating scales and behavioral checklists. This finding is consistent with the practice in North America and Western Europe (Krause & Thornton, 2009). Less frequently used are realistic behavioral descriptions and graphic rating scales. Reilly, Henry, and Smither (1990) found that the use of behavioral checklists improved construct validity because the assessors are able to relate assessees' behavior more clearly to the various behavioral dimensions, compared with other types of observational systems. Hennessy, Mabey, and Warr (1998) experimentally demonstrated the superiority of behavioral checklists and behavioral coding over other observational systems. Given these results, our findings indicate that the most frequently used observational systems in South Africa are those which produce construct validity.
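To make concrete how a behavioral checklist converts observations into a dimension score, here is a minimal sketch; the checklist items and the simple scoring rule are invented for illustration, as actual checklists are derived from the job analysis.

# A behavioral checklist for one dimension: the assessor ticks the
# behaviors actually observed; negative indicators are reverse-scored.
# Items and observations are invented for illustration.
checklist = [  # (behavior, is_positive_indicator)
    ("summarizes others' points before responding", True),
    ("structures the argument with a clear opening and close", True),
    ("interrupts other speakers", False),
]
observed = {  # one assessor's ticks for one candidate
    "summarizes others' points before responding": True,
    "structures the argument with a clear opening and close": False,
    "interrupts other speakers": True,
}

credits = sum(1 for behavior, positive in checklist
              if observed[behavior] == positive)
print(f"communication: {credits}/{len(checklist)} behavioral credits")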
Only half of the organizations in South Africa use a rotation plan in their ACs (i.e., a plan ensuring that each participant is seen by more than one assessor). Kleinmann (2003) found that the use of rotation plans minimized rating bias, which increases an AC's construct validity. Therefore, it would be advantageous if more South African organizations considered this important moderator variable of an AC's construct validity. Additionally, a recent study showed that assessors' judgments in group discussions are more accurate if assessors have to observe only a few candidates (instead of a large number of candidates) per exercise (Melchers, Kleinmann, & Prinz, 2010). The observation of fewer candidates leads to higher construct and criterion-related validity. Consequently, South African organizations should also consider the number of candidates that assessors have to observe as a moderator variable of an AC's validity.
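A rotation plan of this kind can be generated mechanically. The following is a minimal round-robin sketch in Python; the assessor, candidate, and exercise names are hypothetical.

# Round-robin rotation: shift assessor assignments by one position per
# exercise, so each participant is seen by several different assessors.
# Names and counts are hypothetical.
candidates = ["C1", "C2", "C3", "C4", "C5", "C6"]
assessors = ["A1", "A2", "A3"]
exercises = ["in-basket", "presentation", "group discussion"]

for e_idx, exercise in enumerate(exercises):
    print(exercise)
    for c_idx, candidate in enumerate(candidates):
        assessor = assessors[(c_idx + e_idx) % len(assessors)]
        print(f"  {candidate} observed by {assessor}")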
3.8. Characteristics, contents, and methods of assessor training

Assessor training was found to be conducted in two thirds of the organizations. In most cases, the assessor training lasts from half a day to two full days. Meta-analytic evidence suggests that the length of the training is unrelated to the predictive validity of an AC (Gaugler et al., 1987). The quality of the training is more important than its duration (Lievens, 2002). Therefore, we analyzed the methods and the contents of the observer training. With respect to the methods of the training sessions, discussion is the most frequently used format (see Table 1). Besides discussions, various other methods of assessor training are used to a lesser extent: video demonstration/camera, observation of other assessors, observation of practice candidates, and lectures. We argue that observation of practice candidates as a method of observer training is more effective for increasing the ability to form reliable and valid judgments about assessees' behavior than discussions are. As a whole, our results show that the methods used are not the most appropriate for training assessors to make valid and reliable judgments about candidates' behavior. Consequently, the methods of observer training are in need of improvement.
Regarding the quality of the observer training, we have to clarify its contents. As shown (see Table 1), in most cases the assessors learn how to observe, record, and classify participants' behavior. They receive knowledge about the method of behavioral observation, about the exercises, and about professional behavior with participants. It is also obvious that many features of assessor training are covered less frequently, for example, knowledge of the relation between dimensions and job performance, how to observe each job requirement independently, how to focus on the various job requirements for which the exercise has been designed, and how to distinguish between the various job requirements. The finding that these content areas are trained less frequently has to be seen as counterproductive because the quality of the AC, measured by its predictive and construct validity, is reduced. These findings suggest that organizations need to improve the contents of assessor training. Finally, we found that following the completion of assessor training, South African organizations often evaluate each assessor on his or her observational and rating skills.
3.9. Types of information provided to participants

Virtually all organizations provide some sort of information to participants before the AC (see Table 1). Typically, participants in South Africa receive information about the objective of the AC and about the feedback process. Other kinds of information, such as how the results will be used, the type of exercises, the storage of data, the staff and observers, how individuals are selected, and how candidates can prepare themselves for the center, are rarely provided. Another question is whether the job requirements are explicitly communicated before the exercise starts. Kleinmann (1997) called this the principle of transparency. Following this principle increases the validity of the center. As shown, half of the organizations communicate the job requirements to the participants before the exercise starts; the other half ignores the principle of transparency.
Compared with other countries (see Krause & Thornton, 2009), South African candidates receive relatively little information before they participate in the AC. Given the emphasis on informed participation in both the international Guidelines and Ethical Considerations for Assessment Center Operations (Task Force on Assessment Center Guidelines, 2009) and South Africa's Guidelines for Assessment and Development Centres in South Africa (Assessment Centre Study Group, 2007), the information policy toward the participants is in need of improvement. Thornton and Rupp (2005) found that when sufficient and frequent information was provided to participants, the ACs were generally better accepted, compared with instances where insufficient and less frequent information was provided. To improve the acceptance of ACs and their results, organizations may need to provide participants with more information. That might also have positive effects on the commitment of the internal candidates after the AC and on personnel marketing on the labor market. Frequent information about the topics involved in the AC process should influence the candidates' reactions toward the AC positively (for detailed information regarding candidates' reactions toward 10 personnel selection methods in 17 countries, see Anderson, Salgado, & Huelsheger, 2010).
3.10. Data integration process and the use of self- and peer-ratings

With respect to data integration, approximately two thirds of the organizations use a combination of assessor consensus discussion and statistical aggregation. In addition, approximately one third use assessor consensus discussion alone, while the least frequently used method is purely statistical aggregation. These findings contrast with earlier studies (Krause & Gebert, 2003; Kudisch et al., 2001; Spychalski et al., 1997) in which a much higher proportion of organizations used a consensus discussion. The trend to combine assessors' consensus information with statistical aggregation may be a result of at least two factors. First, statistical integration may ensure overall ratings that are just as accurate as consensus ratings (Thornton & Rupp, 2005). Second, public organizations may need to increase the appearance of objectivity associated with statistical integration, in contrast to the apparently subjective consensus discussion.
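As an illustration of the purely statistical component of this combination, here is a minimal sketch that aggregates post-exercise dimension ratings into an overall assessment rating (OAR) by unit weighting; the ratings are invented, and real programs may weight dimensions differentially before any consensus discussion.

# Statistical aggregation sketch: post-exercise dimension ratings on a
# 1-5 scale are averaged with unit weights into the OAR. Ratings are
# invented; in the modal practice described above, this figure would
# then feed into an assessor consensus discussion.
ratings = {  # (exercise, dimension) -> rating for one candidate
    ("in-basket", "organizing and planning"): 4,
    ("in-basket", "problem solving"): 3,
    ("role play", "communication"): 2,
    ("role play", "influencing others"): 4,
    ("group discussion", "communication"): 3,
    ("group discussion", "problem solving"): 5,
}

# Dimension scores: mean across the exercises tapping each dimension.
# The averaging is compensatory, mirroring the compensation most of
# the surveyed organizations allow across exercises and dimensions.
dimension_scores = {}
for (exercise, dimension), value in ratings.items():
    dimension_scores.setdefault(dimension, []).append(value)
for dimension, values in sorted(dimension_scores.items()):
    print(f"{dimension:<24} {sum(values) / len(values):.2f}")

oar = sum(ratings.values()) / len(ratings)
print(f"OAR (unit-weighted mean): {oar:.2f}")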
Furthermore, in many South African organizations the observers complete a report before the data integration process starts. It is also worth mentioning that in most organizations candidates can compensate for poor performance in some exercises with good performance in other exercises, or for poor performance on some dimensions with good performance on other dimensions. In terms of the integration of self-ratings (i.e., the candidate's judgment of his or her own performance) and the use of peer-ratings (i.e., the candidate's evaluation of the performance of his or her fellow participants), we have to note that these are rarely used in South African organizations. This finding is consistent with the frequency with which self- and peer-ratings are used in other countries. The use of self- and peer-ratings has decreased during the last 15 years, although these ratings can provide new insights about the participants. Self- and peer-ratings can be used as diagnostic information in addition to the ratings made by the assessors.
3.11. Characteristics of the feedback process

With regard to the feedback process, the most common way of delivering feedback is a combination of oral and written methods (see Table 1). Because AC feedback is likely to be complex, written feedback alone could lead to frustration, confusion, and lack of understanding and could therefore lead to negative work outcomes, including reduced organizational commitment. The frequencies of the kinds of feedback are similar to those identified in earlier studies (Krause & Gebert, 2003; Kudisch et al., 2001; Spychalski et al., 1997).
Research on the timing of feedback indicates that feedback is most valuable when it is given immediately after a behavior (Thornton & Rupp, 2005). Unfortunately, only 7% of the organizations in South Africa provide feedback to participants immediately after AC completion. The majority of organizations provide feedback within 1 week, or more than 1 week, after the AC. South African organizations need to be encouraged to provide more timely feedback: Thornton et al. (1992) found that maximum learning occurred and the most behaviors were corrected when feedback was immediate. Feedback is given by an observer, an external expert, or an employee of the personnel department. Finally, the feedback includes information about the overall assessment rating and specific dimensions. In South Africa, it is relatively unusual to provide feedback on ratings in each exercise. Organizations standardize their feedback procedure in terms of its content and its medium to reduce uncertainty during this final AC stage.
3.12. Features after the AC

As shown (see Table 1), the participants, the department head, and the direct supervisor are the parties most often informed about the participants' AC performance. In terms of confidentiality and storage of results, access should be restricted to those with a need to know and should accord with what has been agreed with the respondent during the AC administration. Interestingly, in contrast to data protection regulations such as the European Union Directive on Data Protection and the US Safe Harbor Privacy Principles, the South African Protection of Personal Information Bill is not yet law and as such is not yet legally binding. While the privacy of communications is covered in the South African Electronic Communications and Transactions Act (no. 25 of 2002), there is no case law on data protection, nor any legislation dealing specifically with data privacy (Michalson, 2009). In one third of the cases, a candidate's AC performance is stored in the candidate's personnel file. Only half of the South African organizations provide the possibility of reassessment. This point depends on the time period that has passed before reassessment is requested; in selection scenarios, AC data should be utilized within 2 years of administration (Task Force on Assessment Center Guidelines, 2009). Another essential feature after the AC program is the evaluation procedure. Only two thirds of the organizations reported that any method of evaluation exists, although the evaluation stage is part of the legislative requirements in South Africa. Section 8 of the Employment Equity Act (1998) prohibits psychological testing and other similar assessments of an employee unless the test or assessment being used has been scientifically shown to be valid and reliable, can be applied fairly to all employees, and is not biased against any employee or group. Options for demonstrating the validity and reliability of assessment measures include in-house studies, detailed analysis of the job supporting content validity, or validity generalization from previous studies to the position in question. This result is consistent with the reported validation frequency in Kudisch et al.'s (2001) study in the United States, which found that two thirds of organizations carry out some sort of validation. It might be strategically risky for one third of the South African organizations to neglect an evaluation process; at a minimum, organizations should document the content validity evidence or validity generalization evidence supporting the applicability of the AC for the role in question, because no organization today can afford ineffective, inefficient, or indefensible AC procedures. Among those reporting some form of evaluation, only 40% of the organizations reported that written documents existed describing the evaluation, and only 21% stated that an external expert was involved in the evaluation process. In the two thirds of cases where systematic evaluation was carried out, the most common evaluation criteria were objectivity, reliability, predictive validity, content validity, and construct validity. Statistical testing of concurrent validity evidence is one feature missing in most South African organizations. Based on these findings, we conclude that the evaluation process is in need of improvement insofar as written documents should be used and an external expert should conduct the evaluation of the overall AC program.
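To indicate what the statistical side of such an evaluation involves, the following is a minimal sketch that computes interrater agreement and predictive validity from simulated assessor data; it is not based on the surveyed organizations' records.

# Evaluation sketch: interrater agreement as the correlation between
# two assessors' overall ratings, and predictive validity as the
# correlation of the final rating with later job performance.
# All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 80
true_ability = rng.normal(size=n)
assessor_a = true_ability + rng.normal(scale=0.5, size=n)
assessor_b = true_ability + rng.normal(scale=0.5, size=n)
oar = (assessor_a + assessor_b) / 2
performance = 0.4 * true_ability + rng.normal(scale=0.9, size=n)

interrater = np.corrcoef(assessor_a, assessor_b)[0, 1]
validity = np.corrcoef(oar, performance)[0, 1]
print(f"interrater agreement r = {interrater:.2f}")
print(f"predictive validity  r = {validity:.2f}")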
4. Discussion

This study fills two gaps in research on AC practices. The first comprehensive South African survey of a wide variety of AC features was conducted, and positive and negative trends in current South African AC practices were identified and compared with previous surveys of AC practices in other countries. In the following sections, we discuss study limitations and directions for future AC research. Finally, we offer suggestions for ways in which South African HR experts can improve their ACs.
4.1. Study limitations
Our study goes well beyond previous AC research by
involving a country in which no empirical study on AC
practices had been conducted. In using our approach,
however, there are a number of limitations worth noting.
Whereas past studies on AC use have had samples of
over 100 organizations (Kudisch et al., 2001: N = 115;
Spychalski et al., 1997: N = 215), our sample is more
modest and consistent with two studies of similar sample
sizes (Krause & Gebert, 2003: N = 75; Krause & Thornton,
2009: Western Europe N = 45, North America
N = 52). As past work has pointed out, many HR
departments are overwhelmed with surveys, thus causing
many to be "dropped in the bin" (Fletcher, 1994, p. 173).
Nevertheless, future research is encouraged to replicate
our findings with a larger sample size and broader
representation of industries than the current study.
Another concern is that most of our measures were
based on single survey questions. We only surveyed one
individual per organization. One assumption inherent in
this approach is that HR experts provide accurate
descriptions about their AC practices (see Fletcher,
1994). Seeking to obtain parallel descriptions of AC use
from additional experts within each company would have
seriously jeopardized the return rate. Consequently, our
method does not allow interrater reliability to be calcu-
lated. Future research is encouraged to replicate our
findings using an approach in which two or three experts
per organization are surveyed; a minimal sketch of the
interrater agreement check such a design would permit
follows at the end of this subsection. Follow-up studies are
also encouraged to analyze the kind of adaptations
required to operate AC practices in South African
organizations that operate multinationally.
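As a minimal sketch of that interrater agreement check, the following Python fragment computes Cohen's kappa for one categorical survey item answered by two HR experts from the same organizations; the paired answers are hypothetical examples, not data from this study.

    # Chance-corrected agreement between two HR experts from the same
    # organizations answering the same categorical survey item.
    from sklearn.metrics import cohen_kappa_score

    expert_a = ["yes", "no", "yes", "yes", "no", "yes"]   # hypothetical answers
    expert_b = ["yes", "no", "no", "yes", "no", "yes"]

    kappa = cohen_kappa_score(expert_a, expert_b)
    print(f"Cohen's kappa = {kappa:.2f}")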
4.2. Suggestions to improve South African AC
practices
The results show that South African organizations could
improve their AC practices. Before we summarize these
aspects, we point out the pros in South African AC
practices: AC features that should remain the same in
the future. The findings have shown that sophisticated
methods of job analysis (e.g., competency models) are
used, a trend that is positive compared with other
countries. Furthermore, South African organizations assess
those dimensions with high construct validity and
predictive validity (Arthur et al., 2003; Bowler & Woehr,
2006; Dilchert & Ones, 2009); the four dimensions that
are assessed by most of the organizations are those that
predict candidates' future job performance accurately.
However, future AC programs are encouraged to consider
assessor constructs in use as an important part of
the validity of their programs (see Jones & Born, 2008).
Another positive trend is that a mixture of a broad
spectrum of exercises is used. It is also worth mentioning
that a combination of OAR and statistical aggregation is
used to integrate the data of the AC; a minimal sketch of
the statistical part of this integration follows below. Although
these features are carried out in an adequate manner, there
is still room for improvement.
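The following minimal Python sketch illustrates only the statistical arm of that integration: assessor ratings are averaged per dimension and the dimension scores are weighted into an OAR. The dimension names, weights, and rating scale are hypothetical; in practice they would be derived from the job analysis, and the statistical result would inform, not replace, the assessors' consensus discussion.

    # Statistical aggregation of AC data: average assessor ratings per
    # dimension, then weight the dimension scores into an overall
    # assessment rating (OAR). All names, weights, and values are
    # hypothetical illustrations.
    import statistics

    ratings = {                                # assessor ratings on a 1-5 scale
        "problem_solving":     [4, 3, 4],
        "influencing_others":  [3, 3, 2],
        "planning_organizing": [4, 4, 5],
        "communication":       [3, 4, 4],
    }
    weights = {
        "problem_solving": 0.30, "influencing_others": 0.20,
        "planning_organizing": 0.25, "communication": 0.25,
    }

    dimension_scores = {d: statistics.mean(r) for d, r in ratings.items()}
    oar = sum(weights[d] * s for d, s in dimension_scores.items())
    print(f"OAR = {oar:.2f}")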
South African organizations should assess fewer job
requirements. Doing so increases the predictive
validity and construct validity evidence of the AC program.
By contrast, if too many dimensions are assessed,
the observers cannot distinguish among them, which
decreases the construct validity of the AC. To increase
the accuracy and effectiveness of the program, one should
also increase the duration of the AC. In the design stage, it
is highly important to tailor the AC entirely to the
division's own needs. Standard ACs and adaptations of
standard ACs should be used less frequently to derive
valid predictions from the AC results. Additionally, organi-
zations need to improve their current AC practices by
conducting pilot tests of exercises before implementation.
Moreover, HR experts should consider whether it is
meaningful to integrate additional diagnostic procedures
more frequently than in the past to increase the validity of
their AC. Organizations should also consider relevant
criteria (e.g., gender, race, ethnicity, educational level,
age, functional work area, organizational level) in select-
ing the assessor pool. This strategy would enhance the
probability that the assessor pool is balanced in terms of
these criteria. To improve the AC, it is also important to
enhance the contents of the observer training. It seems
necessary to enlarge coverage of topics, such as the
relationship between dimensions and job performance,
the ability to observe the dimensions independently, the
ability to distinguish between the various dimensions, and
the ability to focus on those dimensions for which the
exercise has been designed. To facilitate the assessors'
learning, organizations should think about the appropriate
methods used in observer training. It might be helpful
not only to use the discussion format but also video
demonstrations or camera recordings, the observation of
real candidates, or the observation of other assessors. During the
final stages of the AC, the information policy toward
participants should be improved, which would lead to
higher acceptance of the AC program and commitment
to the organization. The perceptions and reactions of
candidates after the AC should be considered in more
detail, as is common in personnel selection in other
countries (Anderson & Goltsi, 2006; Huelsheger &
Anderson, 2009). In addition, feedback should be pro-
vided in a timely fashion, ideally immediately after the
completion of the AC. Furthermore, continual statistical
evaluation of the AC is needed by all organizations to
monitor the quality of AC practices. Organizations
should also consider third-party involvement in the
AC evaluation and document the evaluation process
and its outcomes in writing. Although an
evaluation procedure is costly and time-intensive, it
seems unavoidable in order to improve the
quality control of an organization's personnel selection,
promotion, and development decisions.
Acknowledgements
Portions of this paper were presented as a keynote
address at the 30th Annual Assessment Center Study
Group Conference, Stellenbosch, Western Cape, South
Africa, March 17–19, 2010. We thank two anonymous
reviewers and the editor for their constructive feedback
on a previous version of this paper.
References
Anderson, N., & Goltsi, V. (2006). Negative psychological effects
of selection methods: Construct formulation and an empirical
investigation into an assessment center. International Journal of
Selection and Assessment, 14, 236–255.
Anderson, N., Salgado, J. F., & Huelsheger, U. R. (2010).
Applicant reactions in selection: Comprehensive meta-analy-
sis into reaction generalization versus situational specificity.
International Journal of Selection and Assessment, 18, 291–304.
Arthur, W. Jr., Day, E. A., McNelly, T. L., & Edens, P. S. (2003). A
meta-analysis of the criterion-related validity of assessment
center dimensions. Personnel Psychology, 56, 125–154.
Assessment Centre Study Group. (2007). Guidelines for assessment
and development centres in South Africa (4th ed.). Stellenbosch:
Assessment Centre Study Group (ACSG). Available at
http://www.acsg.co.za.
Bowler, M. C., & Woehr, D. J. (2006). A meta-analytic evaluation
of the impact of dimension and exercise factors on assess-
ment center ratings. Journal of Applied Psychology, 91,
1114–1124.
Dayan, K., Fox, S., & Kasten, R. (2008). The preliminary
employment interview as a predictor of assessment center
outcomes. International Journal of Selection and Assessment, 16,
102–111.
Dayan, K., Kasten, R., & Fox, S. (2002). Entry-level police
candidate assessment center: An efficient tool or a hammer
to kill a fly? Personnel Psychology, 55, 827–849.
Dilchert, S., & Ones, D. S. (2009). Assessment center dimen-
sions: Individual differences correlates and meta-analytic
incremental validity. International Journal of Selection and
Assessment, 17, 254–270.
Electronic Communications and Transactions Act, no 25 (2002).
Government Gazette, 446 (23708). Cape Town: Government
Printers.
Employment Equity Act, no 55 (1998). Government Gazette, 400
(19370). Cape Town: Government Printers.
Eurich, T., Krause, D. E., Cigularov, K., & Thornton, G. C. III
(2009). Assessment centers: Current practices in the United
States. Journal of Business and Psychology, 24, 387–407.
Flanagan, J. C. (1954). The critical incident technique. Psychological
Bulletin, 51, 327–358.
Fletcher, C. (1994). Questionnaire surveys of organizational
assessment practices: A critique of their methodology and
validity, and a query about their future relevance. International
Journal of Selection and Assessment, 2, 172–175.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C. III, & Benson,
C. (1987). Meta-analysis of assessment center validity. Journal
of Applied Psychology, 72, 493–511.
Hennessy, J., Mabey, B., & Warr, P. (1998). Assessment centre
observation procedures: An experimental comparison of
traditional, checklist and coding method. International Journal
of Selection and Assessment, 6, 222–231.
Hermelin, E., Lievens, F., & Robertson, I. T. (2007). The validity
of assessment centres for the prediction of supervisory
performance ratings: A meta-analysis. International Journal of
Selection and Assessment, 15, 405–411.
Herriot, P., & Anderson, N. (1997). Selecting for change: How
will personnel and selection psychology survive? In N. R. Anderson
& P. Herriot (Eds.), International handbook of selection
and assessment (pp. 1–32). London: Wiley.
Hoeft, S., & Obermann, C. (2009). Was ist ein Assessment
Center? Annäherung an eine unscharfe Verfahrensklasse [What is
an assessment center? An approximation to a fuzzy class of methods].
Presented at the 6th Congress of Work and Organizational
Psychology, September 9–11, Vienna, Austria.
Huelsheger, U. R., & Anderson, N. (2009). Applicant perspec-
tives in selection: Going beyond preference reactions. Inter-
national Journal of Selection and Assessment, 17, 335–345.
Jackson, D., Stillman, J. A., & Englert, P. (2010). Task-based
assessment centers: Empirical support for a systems model.
International Journal of Selection and Assessment, 18, 141–154.
Jones, R., & Born, M. (2008). Assessor constructs in use as the
missing component in validation of assessment center dimen-
sions: A critique and directions for research. International
Journal of Selection and Assessment, 16, 229–238.
Klehe, U. C. (2004). Choosing how to choose: Institutional
pressures affecting the adoption of personnel selection
procedures. International Journal of Selection and Assessment,
12, 327–342.
Kleinmann, M. (1997). Assessment Center: Stand der Forschung –
Konsequenzen für die Praxis [The state of research on the
assessment center – consequences for practice]. Goettingen:
Hogrefe.
Kleinmann, M. (2003). Assessment center. Goettingen: Hogrefe.
Krause, D. E. (2010). Trends in international personnel selection.
Goettingen: Hogrefe.
Krause, D. E., & Gebert, D. (2003). A comparison of assessment
center practices in organizations in German-speaking regions
and the United States. International Journal of Selection and
Assessment, 11, 297–312.
Krause, D. E., Kersting, M., Heggestad, E. D., & Thornton, G. C.
(2006). Incremental validity of assessment center ratings over
cognitive ability tests. A study at the executive management
level. International Journal of Selection and Assessment, 14(4),
360–371.
Krause, D. E., & Thornton, G. C. III (2009). A cross-cultural look
at assessment center practices: A survey in Western Europe
and Northern America. Applied Psychology: An International
Review, 58(4), 557–585.
Kudisch, J. D., Avis, J. M., Thibodeaux, H., & Fallon, J. D. (2001).
A survey of assessment center practices in organizations world-
wide: Maximizing innovation or business as usual? Paper pre-
sented at the 16th annual conference of the Society for
Industrial and Organizational Psychology, San Diego, CA.
Lance, C. E., Lambert, T. A., Gewin, A. G., Lievens, F., & Conway,
J. M. (2004). Revised estimates of dimension and exercise
variance components in assessment center post exercise
dimension ratings. Journal of Applied Psychology, 89, 377–385.
Lievens, F. (2002). Trying to understand the different pieces of
the construct validity puzzle of assessment centers: An
examination of assessor and assessee effects. Journal of
Applied Psychology, 87, 675–686.
Lievens, F., Dilchert, S., & Ones, D. S. (2009). The importance of
exercise and dimension factors in assessment centers: Simul-
taneous examinations of construct-related and criterion-
related validity. Human Performance, 22, 375–390.
Lievens, F., Harris, M. M., Van Keer, E., & Bisqueret, C. (2003).
Predicting cross-cultural training performance: The validity of
personality, cognitive ability, and dimensions measured by an
assessment center and a behavioral description interview.
Journal of Applied Psychology, 88, 476–489.
Lievens, F., & Thornton, G. C. III (2005). Assessment centers:
Recent developments in practice and research. In A. Evers, N.
Anderson, & O. Voskuijl (Eds.), The Blackwell handbook of
personnel selection (pp. 243–264). Malden, MA: Blackwell.
Mandela, N. (1994). Statement of the President of the African
National Congress, Nelson Rolihlahla Mandela, at his inauguration
as President of the Democratic Republic of South Africa, Union
Buildings, Pretoria, South Africa, May 10.
Meiring, D. (2008). In G. Roodt & S. Schlebusch (Eds.), Assessment
centers (pp. 21–32). Johannesburg: Knowres Publishing.
Melchers, K. G., Kleinmann, M., & Prinz, M. A. (2010). Do
assessors have too much on their plates? The effects of
simultaneously rating multiple assessment center candidates
on rating quality. International Journal of Selection and Assess-
ment, 18, 329–341.
Meriac, J. P., Hoffman, B. J., Woehr, D. J., & Fleisher, M. S. (2008).
Further evidence for the validity of assessment center
dimensions: A meta-analysis of the incremental criterion-
related validity of dimension ratings. Journal of Applied Psychol-
ogy, 93, 1042–1052.
Michalson, L. (2009). Protection of Personal Information Bill – the
implications for you. Available at http://www.michalsons.com/
protection-of-personal-information-bill-the-implications-for-you/
Newell, S., & Tansley, C. (2001). International use of selection
methods. In C. L. Cooper, & I. T. Robertson (Eds.), Interna-
tional review of industrial and organizational psychology (Vol. 21,
pp. 195–213). Chichester: Wiley.
Reilly, R. R., Henry, S., & Smither, J. W. (1990). An examination
of the effects of using behavior checklists on the construct
validity of assessment center dimensions. Personnel Psychology,
43, 71–84.
Ryan, A. M., McFarland, L., Baron, H., & Page, R. (1999). An
international look at selection practices: Nation and culture
as explanations for variability in practice. Personnel Psychology,
52, 359–391.
Ryan, A. M., Wiechmann, D., & Hemingway, M. (2003). Design-
ing and implementing global staffing systems: Part II – Best
practices. Human Resource Management, 42, 85–94.
Sagie, A., & Magnezy, R. (1997). Assessor type, number of
distinguishable categories, and assessment centre construct
validity. Journal of Occupational and Organizational Psychology,
70, 103–108.
Schlebusch, S., & Roodt, G. (2008). Assessment centers. Johan-
nesburg: Knowres Publishing.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of
selection methods in personnel psychology: Practical and
theoretical implications of 85 years of research findings.
Psychological Bulletin, 124, 262–274.
Spychalski, A. C., Quiñones, M. A., Gaugler, B. B., & Pohley, K.
(1997). A survey of assessment center practices in
organizations in the United States. Personnel Psychology, 50,
71–90.
Task Force on Assessment Center Guidelines. (2009).
Guidelines and ethical considerations for assessment center
operations. International Journal of Selection and Assessment, 17,
243–254.
Thornton, G. C. III, Gaugler, B. B., Rosenthal, D., & Bentson, C.
(1992). Die prädiktive Validität des Assessment Center – eine
Metaanalyse [The predictive validity of the assessment center –
a meta-analysis]. In H. Schuler & W. Stehle (Eds.),
Assessment-Center als Methode der Personalentwicklung (2nd
ed., pp. 36–60). Goettingen: Hogrefe.
Thornton, G. C. III, & Mueller-Hanson, R. (2004). Developing
organizational simulations: A guide for practitioners and students.
Mahwah, NJ: Lawrence Erlbaum Associates.
Thornton, G. C. III, & Rupp, D. R. (2005). Assessment centers in
human resource management: Strategies for prediction, diagnosis,
and development. Mahwah, NJ: Erlbaum.