2002 - Where and Why G Matters
The general mental ability factor g is the best single predictor of job performance. It is probably the best measured and most studied human trait in all of psychology. Much is known about its meaning, distribution, and origins thanks to research across a wide variety of disciplines (Jensen, 1998). Many questions about g
remain unanswered, including its exact nature, but g is hardly the mystery that
some people suggest. The totality of evidence on g, taken as a pattern, tells us a lot
about where and why it is important in the real world. Theoretical obtuseness about
g is too often used to justify so-called technical advances in personnel selection
that minimize, for sociopolitical purposes, the use of g in hiring.
Requests for reprints should be sent to Linda S. Gottfredson, School of Education, University of
Delaware, Newark, DE 19716. E-mail: [email protected]
GOTTFREDSON
Specific clerical aptitude, for instance, enhances performance somewhat in clerical jobs (beyond
that afforded by higher g), but g enhances performance in all domains of work.
The g factor shows up in nonpsychometric tests as well, providing more evidence for both its reality and generality. Exceedingly simple reaction time and inspection time tasks, which measure speed of reaction in milliseconds, also yield a
strong information processing factor that coincides with psychometric g.
In short, the g continuum is a reliable, stable phenomenon in human populations. Individual differences along that continuum are also a reliable, stable phenomenon. IQ tests are good measures of individual variation in g, and people's IQ
scores become quite stable by adolescence. Large changes in IQ from year to year
are rare even in childhood, and efforts to link them to particular causes have failed.
Indeed, mental tests would not have the pervasive and high predictive validities
that they do, and often over long stretches of the life span, if people's rankings in
IQ level were unstable.
Theorists have long debated the definition of intelligence, but that verbal exercise is now moot. g has become the working definition of intelligence for most
researchers, because it is a stable, replicable phenomenon that, unlike the IQ
score, is independent of the vehicles (tests) for measuring it. Researchers are
far from fully understanding the physiology and genetics of intelligence, but they
can be confident that, whatever its nature, they are studying the same phenomenon
when they study g. That was never the case with IQ scores, which fed the unproductive wrangling to define intelligence. The task is no longer to define intelligence, but to understand g.
Meaning of g as a Construct
Understanding g as a construct (its substantive meaning as an ability) is essential
for understanding why and where g enhances performance of everyday tasks.
Some sense of its practical meaning can be gleaned from the overt behaviors and
mental skills that are prototypical of g, that is, those that best distinguish people
with high g levels from those with low g. Intelligence tests are intended to measure
a variety of higher order thinking skills, such as reasoning, abstract thinking, and
problem solving, which experts and laypeople alike consider crucial aspects of intelligence. g does indeed correlate highly with specific tests of such aptitudes.
These higher order skills are context- and content-independent mental skills of
high general applicability. The need to reason, learn, and solve problems is ubiquitous and lifelong, so we begin to get an intuitive grasp of why g has such pervasive
value and is more than mere "book smarts."
We can get closer to the meaning of g, however, by looking beyond the close
correlates of g in the domain of human abilities and instead inspect the nature of
the tasks that call it forth. For this, we must analyze data on tasks, not people. Recall that the very definition of an ability is rooted in the tasks that people can perform.
FIGURE 1
adaptable tool for processing any sort of information, whether on the job or off, in
training or after.
TABLE 1
Job Attributes That Correlate Most With the Job Complexity Factor

[The body of Table 1 did not survive extraction: its columns of attribute labels and correlations were separated and cannot be reliably realigned. See Gottfredson (1997) for the full table.]

Note. Source of data: Gottfredson (1997). DOT = Dictionary of Occupational Titles; Temme = Temme's ratings of occupational prestige and self-direction; Holland = Holland's vocational personality type codes for occupations (see Gottfredson, 1994, for description and use of these scales).
ent of the job's overall complexity. The extent of use of most forms of information
(behavioral, oral, written, quantitative) is also strongly correlated with overall job
complexity (.59 to .84) but with no other factor. The primary exception, once again, is visual information (use of patterns and pictorial materials).
Many job duties can be described as general kinds of problem solving, for instance, advising, planning, negotiating, instructing, and coordinating employees
without line authority. As Table 1 shows, they are also consistently and substantially
correlated with job complexity (.74 to .86). In contrast, the requirements for amusing,
entertaining, and pleasing people mostly distinguish among jobs at the same complexity level, for they help to define the independent factor of catering to people.
Complex dealings with data (.83) and people (.68) are more typical of highly
complex than simple jobs, as might be expected. Complex dealings with things
(material objects) help to define a separate and independent factor, work with
complex things (which distinguishes the work of engineers and physicians, e.g.,
from that of lawyers and professors). Constant change in duties or the data to be
processed (variety and change, .41) also increases a job's complexity. As the data
show, the more repetitive (−.49, −.74), tightly structured (−.79), and highly supervised (−.73) a job is, the less complex it is. Complexity does not rule out the need
for tight adherence to procedure, a set work pace, cycled activities, or other particular forms of structure required in some moderately complex domains of work. As
can be seen in Table 1, these attributes typify work that is high on the operating
machines (and vehicles) factor of work.
That the overall complexity of a job might be enhanced by the greater complexity of its component parts is no surprise. However, Table 1 reveals a less well-appreciated point, namely, that job complexity also depends on the configuration of
tasks, not just on the sum of their individual demands. Any configuration of tasks
or circumstances that strains one's information processing abilities puts a premium
on higher g. Consider dual-processing and multitasking, for instance, which tax
people's ability to perform simultaneously tasks that they have no trouble doing sequentially. The data in Table 1 suggest that information processing may also be
strained by the pressures imposed by deadlines (.55), frustration (.77), and interpersonal conflict (.76), and by the need to work in situations where distractions (.78)
compete for limited cognitive resources. Certain personality traits would aid performance in these situations, but higher g would also allow for more effective handling of these competing stresses.
The importance of performing well tends to rise with job complexity, because
both the criticality of the position for the organization (.71) and the general responsibility it entails (.76) correlate strongly with job complexity. Responsibility for
materials and safety are more domain specific, however, because they correlate
most with the vigilance with machines factor.
Education and training are highly g-loaded activities, as virtually everyone recognizes. Table 1 shows, however, that more complex jobs tend not only to require
higher levels of education (.88), but also lengthier specific vocational training (.76)
and experience (.62). The data on experience are especially important in this context, because experience signals knowledge picked up on the job. It reflects a form
of self-instruction, which becomes less effective the lower one's g level. Consistent with this interpretation, the importance of updating job knowledge correlates very highly (.85) with job complexity.
More complex jobs tend to require more education and pay better, which in turn
garners them greater social regard. Hence, the job complexity factor closely tracks
the prestige hierarchy among occupations (.82), another dimension of work that
sociologists documented decades ago.
The other attributes that correlate most highly with complexity, as well as those
that do not, support the conclusion that the job complexity factor rests on distinctions among jobs in their information processing demands, generally without regard to the type of information being processed. Of the six Holland fields of work,
only one, Realistic, correlates best (and negatively) with the complexity factor
(−.74). Such work, which emphasizes manipulating concrete things rather than
people or abstract processes, comprises the vast bulk of low-level jobs in the
American economy. The nature of these jobs comports with the data on vocational
interests associated with the complexity factor. Complex work is associated with
interests in creative rather than routine work (.63), with data (.73) and with social
welfare (.55) rather than things and machines, respectively, and with social esteem
rather than having tangible products (.48). This characterization of low-level, frequently Realistic work is also consistent with the data on physical requirements:
All the physically unpleasant conditions of work (working in wet, hazardous,
noisy, or highly polluted conditions) are most characteristic of the simplest, lowest-level jobs (−.37 to −.45). In contrast, the skill and activity demands associated
with the other factors of work are consistently specific to particular functional domains (fields) of work, for example, selling with enterprising work and coordination without sight (such as typing) with conventional (mostly clerical) work.
So, too, are various other circumstances of work, such as how workers are paid
(salary, wages, tips, commissions), which tend to distinguish jobs that require selling from those that do not, whatever their complexity level.
As we saw, the job analysis items that correlate most highly with overall job
complexity use the very language of information processing, such as compiling
and combining information. Some of the most highly correlated mental demands,
such as reasoning and analyzing, are known as prototypical manifestations of intelligence in action. The other dimensions of difference among jobs rarely involve
such language. Instead, they generally relate to the material in different domains of
work activity, how (not how much) such activity is remunerated, and the vocational interests they satisfy. They are noncognitive by contrast.
The information processing requirements that distinguish complex jobs from
simple ones are therefore essentially the same as the task requirements that distinguish highly g-loaded mental tests, such as IQ tests, from less g-loaded ones, such
as tests of short-term memory. In short, jobs are like (unstandardized) mental tests.
They differ systematically in g-loading, depending on the complexity of their information processing demands. Because we know the relative complexity of different occupations, we can predict where job performance (when well measured)
will be most sensitive to differences in workers g levels. This allows us to predict
major trends in the predictive validity of g across the full landscape of work in
modern life. One prediction, which has already been borne out, is that mental tests
predict job performance best in the most complex jobs.
The important point is that the predictive validities of g behave lawfully. They
vary, but they vary systematically and for reasons that are beginning to be well understood. Over 2 decades of meta-analyses have shown that they are not sensitive
to small variations in job duties and circumstance, after controlling for sampling
error and other statistical artifacts. Complex jobs will always put a premium on
higher g. Their performance will always be notably enhanced by higher g, all else
equal. Higher g will also enhance performance in simple jobs, but to a much
smaller degree.
This lawfulness can, in turn, be used to evaluate the credibility of claims in personnel selection research concerning the importance, or lack thereof, of mental
ability in jobs of at least moderate complexity, such as police work. If a mental test
fails to predict performance in a job of at least moderate complexity (which includes most jobs), we cannot jump to the conclusion that differences in mental
ability are unimportant on that job. Instead, we must suspect either that the test
does not measure g well or that the job performance criterion does not measure the
most crucial aspects of job performance. The law-like relation between job complexity and the value of g demands such doubt. Credulous acceptance of the null
result requires ignoring the vast web of well-known evidence on g, much of it emanating from industrial-organizational (I/O) psychology itself.
RELATIVE IMPORTANCE OF g
FOR JOB PERFORMANCE
The I/O literature has been especially useful in documenting the value of other predictors, such as personality traits and job experience, in forecasting various dimensions of performance. It thus illuminates the ways in which g's predictive validities
can be moderated by the performance criteria and other predictors considered.
These relations, too, are lawful. They must be understood to appreciate where, and
to what degree, higher levels of g actually have functional value on the job. I/O research has shown, for instance, how g's absolute and relative levels of predictive
validity both vary according to the kind of performance criterion used. A failure to
understand these gradients of effect sustains the mistaken view that g's impact on
performance is capricious or highly specific across different settings and samples.
The Appendix outlines the topography of g, that is, its gradients of effect relative to other predictors. It summarizes much evidence on the prediction of job performance, which is discussed more fully elsewhere (Gottfredson, 2002). This summary is organized around two distinctions, one among performance criteria and
one among predictors, that are absolutely essential for understanding the topography of g and other precursors of performance. First, job performance criteria differ
in whether they measure mostly the core technical aspects of job performance
rather than a job's often discretionary contextual (citizenship) aspects. Second,
predictors can be classified as "can do" (ability), "will do" (motivation), or "have
done" (experience) factors.
The Appendix repeats some of the points already made, specifically that (a) g
has pervasive value but its value varies by the complexity of the task at hand,
and (b) specific mental abilities have little incremental validity net of g, and then
only in limited domains of activity. The summary points to other important regularities. As shown in the Appendix, personality traits generally have more incremental validity than do specific abilities, because "will do" traits are correlated
little or not at all with g, the dominant "can do" trait, and thus have greater opportunity to add to prediction. These noncognitive traits do, however, tend to
show the same high domain specificity that specific abilities do. The exception is
the personality factor representing conscientiousness and integrity, which substantially enhances performance in all kinds of work, although generally not as
much as does g.
An especially important aspect of g's topography is that the functional value of
g increases, both in absolute and relative terms, as performance criteria focus more
on the core technical aspects of performance rather than on worker citizenship
(helping coworkers, representing the profession well, and so on). The reverse is
generally true for the noncognitive "will do" predictors, such as temperaments and
interests: They predict the noncore elements best. Another important regularity is
that, although the predictive validities of g rise with job complexity, the opposite is
true for two other major predictors of performance: length of experience and
psychomotor abilities. The latter's predictive validities are sometimes high, but
they tend to be highest in the simplest work.
Another regularity is that "have done" factors sometimes rival g in predicting
complex performance, but they are highly job specific. Take job experience: long
experience as a carpenter does not enhance performance as a bank teller. The same
is true of job sample or tacit knowledge tests, which assess workers' developed
competence in a particular job: Potential bank tellers cannot be screened with a
sample of carpentry work. In any case, these "have done" predictors can be used to
select only among experienced applicants. Measures of g (or personality) pose no
such constraints. g is generalizable, but experience is not.
As for g, there are also consistent gradients of effect for job experience. The
value of longer experience relative to one's peers fades with time on the job, but the
advantages of higher g do not. Experience is therefore not a substitute for g. After
controlling for differences in experience, g's validities are revealed to be stable and
substantial over many years of experience. Large relative differences in experience
among workers with low absolute levels of experience can obscure the advantages
of higher g. The reason is that a little experience provides a big advantage when
other workers still have little or none. The advantage is only temporary, however.
As all workers gain experience, the brighter ones will glean relatively more from
their experience and, as research shows, soon surpass the performance of more experienced but less able peers. Research that ignores large relative differences in experience fuels mistaken conceptions about g. Such research is often cited to support the view that everyday competence depends more on a separate "practical
intelligence" than on g, for example, that we need to posit a practical intelligence
to explain why inexperienced college students cannot pack boxes in a factory as efficiently as do experienced workers who have little education (e.g., see Sternberg,
Wagner, Williams, & Horvath, 1995).
The foregoing gradients of g's impact, when appreciated, can be used to guide
personnel selection practice. They confirm that selection batteries should select for
more than g if the goal is to maximize aggregate performance, but that g should be
a progressively more important part of the mix for increasingly complex jobs (unless applicants have somehow already been winnowed by g). Many kinds of mental tests will work well for screening people yet to be trained, if the tests are highly
g-loaded. Their validity derives from their ability to assess the operation of critical
thinking skills, either on the spot (fluid g) or in past endeavors (crystallized g).
Their validity does not depend on their manifest content or fidelity, that is,
whether they look like the job. Face validity is useful for gaining acceptance of a
test, but it has no relation to the test's ability to measure key cognitive skills. Cognitive tests that look like the job can measure g well (as do tests of mathematical
reasoning) or poorly (as do tests of arithmetic computation).
Tests of noncognitive traits are useful supplements to g-loaded tests in a selection battery, but they cannot substitute for tests of g. The reason is that noncognitive traits cannot substitute for the information-processing skills that g provides. Noncognitive traits also cannot be considered as useful as g even when they
have the same predictive validity (say, .3) against a multidimensional criterion
(say, supervisor ratings), because they predict different aspects of job performance. The former predict primarily citizenship and the latter primarily core performance. You get what you select for, and the wise organization will never forego
selecting for core performance.
There are circumstances where one might want to trade away some g to gain
higher levels of experience. The magnitude of the appropriate trade-off, if any,
would depend on the sensitivity of job performance to higher levels of g (the complexity of the work), the importance of short-term performance relative to long-term performance (probable tenure), and the feasibility and cost of training
brighter recruits rather than hiring more experienced ones (more complex jobs require longer, more complex training). In short, understanding the gradients of effect outlined in the Appendix can help practitioners systematically improve, or
knowingly degrade, their selection procedures.
FIGURE 2 Adapted from Figure 3 in Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79–132, with permission from Elsevier Science.
aWPT = Wonderlic Personnel Test. bNALS = National Adult Literacy Survey. See Gottfredson
(1997) for translation of NALS scores into IQ equivalents. cWAIS = Wechsler Adult Intelligence Scale. dSee Gottfredson (1997) for calculation of percentiles.
modes of training that are possible (at the higher ranges of IQ) or required (at
the lower ranges) at different IQ levels are also shown.
The cumulative percentages of American Blacks and Whites at each IQ level
are shown at the bottom of Figure 2. The ratios in the last row represent the
proportion of all Blacks to the proportion of all Whites within five different broad
ranges of IQ. Blacks are overrepresented (5:1) in the lowest range (below IQ 75, labeled here as the "high risk" zone) and extremely underrepresented (1:30) in the
highest (above IQ 125, the range where "success is yours to lose"). These ratios
represent racial differences in the per capita availability of applicants who will be
competitive for different levels of work, and they portend a clear trend in disparate
impact. Under race-neutral hiring, disparate impact will generally be high enough
to fail the 80% rule (which triggers the presumption of racial discrimination under
federal guidelines) in hiring for all but the simplest jobs.
When Black and White applicants are drawn from the same IQ ranges, disparate impact will therefore be the rule, not the exception, even in jobs of modest
complexity. It will get progressively worse at successively higher levels of education, training, and employment, and it will be extremely high in the most desirable
jobs. Cognitive tests cannot meet the 80% rule with these two populations until the
threshold for consideration falls to about IQ 77 to 78 (Gottfredson, 2000b). This
low estimate is consistent with other research showing that mental tests have to be
virtually eliminated from test batteries to satisfy the 80% rule under typical conditions (Schmitt, Rogers, Chan, Sheppard, & Jennings, 1997). The estimate also falls
below the minimum mental standard (about IQ 80) that federal law sets for inducting recruits into the military.
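The arithmetic behind the 80% rule can be sketched with idealized normal curves. The means (100 and 85) and common SD of 15 come from the figures discussed in this article; treating both score distributions as exactly normal is a simplification, so the crossover cutoff this sketch implies differs somewhat from the empirically derived IQ 77 to 78 cited above.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def impact_ratio(cutoff, mean_a=85.0, mean_b=100.0, sd=15.0):
    """Adverse-impact ratio: group A's selection rate divided by group B's
    when every applicant at or above `cutoff` is selected."""
    rate_a = 1.0 - phi((cutoff - mean_a) / sd)
    rate_b = 1.0 - phi((cutoff - mean_b) / sd)
    return rate_a / rate_b

# The ratio falls steadily as the cutoff rises; the 80% rule is failed
# wherever the ratio drops below 0.8.
for cutoff in (70, 75, 80, 90, 100):
    print(f"cutoff IQ {cutoff}: ratio = {impact_ratio(cutoff):.2f}")
```

Under these assumptions the ratio is about 0.86 at a cutoff of IQ 70 but only about 0.69 at IQ 80, illustrating why any cutoff much above the mid-70s guarantees disparate impact under race-neutral selection.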
To take some more specific examples, about 22% of Whites and 59% of Blacks
have IQs below 90, which makes considerably fewer Blacks competitive for midlevel jobs, such as firefighting, the skilled trades, and many clerical jobs. The average
IQ of incumbents in such jobs is nearer IQ 100, one standard deviation above the
Black average of roughly IQ 85. IQ 80 seems to be the threshold for competitiveness
in even the lowest level jobs, and four times as many Blacks (30%) as Whites (7%)
fall below that threshold. Looking toward the other tail of the IQ distribution, IQ 125
is about average for professionals (e.g., lawyers, physicians, engineers, professors)
and high-level executives. The Black-White ratio of availability is only 1:30 at this
level. Disparate impact, and therefore political and legal tension, is thus particularly
acute in the most complex, most socially desirable jobs.
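The tail percentages quoted above can be approximated the same way, with idealized normal curves (White mean 100, Black mean 85, SD 15 for both). The article's figures come from empirical distributions, so this approximation lands close to, but not exactly on, the reported numbers.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def share_below(iq, mean, sd=15.0):
    """Fraction of a normal IQ distribution falling below `iq`."""
    return phi((iq - mean) / sd)

# Tail shares under the normal idealization, for comparison with the
# empirical figures quoted in the text (22%/59% below 90, 7%/30% below 80).
for label, mean in (("White", 100.0), ("Black", 85.0)):
    print(f"{label}: {share_below(90, mean):.0%} below IQ 90, "
          f"{share_below(80, mean):.0%} below IQ 80, "
          f"{1 - share_below(125, mean):.1%} above IQ 125")
```

The normal idealization yields roughly 25% of Whites and 63% of Blacks below IQ 90, versus the 22% and 59% reported from empirical data, and it understates the 1:30 availability ratio above IQ 125, which is sensitive to departures from normality in the far tail.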
Actual employment ratios are not as extreme as the per capita availability ratios
shown here (other factors matter in hiring), but they follow the same systematic decline up the job complexity continuum. There is considerable IQ variability among
incumbents in any occupation, of course, the standard deviation among incumbents generally averaging about 8 IQ points. The average Black-White difference
is twice that large, however, which guarantees that Blacks will often cluster at the
mance criteria. The vexing fact, which no tinkering with measurement can eliminate, is that Blacks and Whites differ most, on the average, on the most important
predictor of job performance.
Some panelists also retreated into the unsubstantiated claim that there are multiple forms of intelligence, independent of g, that could predict job performance
with less disparate impact. However, even the strongest body of evidence, that for
so-called practical intelligence and its associated triarchic theory of intelligence
(Sternberg et al., 2000), provides only scant and contradictory bits of evidence
for such a claim. Coming from a mere six studies (four of which remain unpublished) of five occupations, those data provide no support whatsoever (see
Gottfredson, in press; also Brody, in press) for Sternberg et al.'s (2000, p. xi) assertion that practical intelligence is "a construct that is distinct from general intelligence and is at least as good a predictor of future success as is the academic
form of intelligence [g]."
Reducing disparate impact is a worthy goal to which probably all selection professionals subscribe. What is troubling are the new means being promulgated:
minimizing or eliminating the best overall predictor of job performance. They
amount to a call for reducing test validity and thereby violating personnel psychology's primary testing standard. Reducing the role of g in selection may be legally
and politically expedient in the short term, but it delays more effective responses to
the huge racial gaps in job-relevant skills, abilities, and knowledges.
REFERENCES
Arvey, R. D. (1986). General ability in employment: A discussion. Journal of Vocational Behavior, 29, 415–420.
Brody, N. (in press). Construct validation of the Sternberg Triarchic Abilities Test (STAT): Comment and reanalysis. Intelligence, 30.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.
Gottfredson, L. S. (1985). Education as a valid but fallible signal of worker quality: Reorienting an old debate about the functional basis of the occupational hierarchy. In A. C. Kerckhoff (Ed.), Research in sociology of education and socialization, Vol. 5 (pp. 119–165). Greenwich, CT: JAI.
Gottfredson, L. S. (1994). The role of intelligence and education in the division of labor (Report No. 355). Baltimore, MD: Johns Hopkins University, Center for Social Organization of Schools.
Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79–132.
Gottfredson, L. S. (2000a). Intelligence. In E. F. Borgatta & R. J. V. Montgomery (Eds.), Encyclopedia of sociology (rev. ed., pp. 1359–1386). New York: Macmillan.
Gottfredson, L. S. (2000b). Skills gaps, not mental tests, make racial proportionality impossible. Psychology, Public Policy, and Law, 6, 129–143.
Gottfredson, L. S. (2002). g: Highly general and highly practical. In R. J. Sternberg & E. L. Grigorenko (Eds.), The general intelligence factor: How general is it? (pp. 331–380). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Gottfredson, L. S. (in press). Dissecting practical intelligence theory: Its claims and evidence. Intelligence, 30.
Jensen, A. R. (1980). Bias in mental testing. New York: Free Press.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Kirsch, I. S., & Mosenthal, P. B. (1990). Exploring document literacy: Variables underlying the performance of young adults. Reading Research Quarterly, 25, 5–30.
Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 77–101.
Reder, S. (1998). Dimensionality and construct validity of the NALS assessment. In M. C. Smith (Ed.), Literacy for the twenty-first century (pp. 37–57). Westport, CT: Praeger.
Schmitt, N., Rogers, W., Chan, D., Sheppard, L., & Jennings, D. (1997). Adverse impact and predictive efficiency of various predictor combinations. Journal of Applied Psychology, 82, 719–730.
Sternberg, R. J., Forsythe, G. B., Hedlund, J., Horvath, J. A., Wagner, R. K., Williams, W. M., Snook, S. A., & Grigorenko, E. L. (2000). Practical intelligence in everyday life. New York: Cambridge University Press.
Sternberg, R. J., Wagner, R. K., Williams, W. M., & Horvath, J. A. (1995). Testing common sense. American Psychologist, 50, 912–926.
Wood, R. E. (1986). Task complexity: Definition of the construct. Organizational Behavior and Human Decision Processes, 37, 60–82.
APPENDIX
Major Findings on g's Impact on Job Performance

Utility of g
1. Higher levels of g lead to higher levels of performance in all jobs and along
all dimensions of performance. The average correlation of mental tests with overall rated job performance is around .5 (corrected for statistical artifacts).
2. There is no ability threshold above which more g does not enhance performance. The effects of g are linear: successive increments in g lead to successive increments in job performance.
3. (a) The value of higher levels of g does not fade with longer experience on the
job. Criterion validities remain high even among highly experienced workers. (b)
That they sometimes even appear to rise with experience may be due to a confound: the least experienced groups tend to vary more in relative level of experience, which obscures the advantages of higher g.
4. g predicts job performance better in more complex jobs. Its (corrected) criterion validities range from about .2 in the simplest jobs to .8 in the most complex.
5. g predicts the core technical dimensions of performance better than it does
the non-core "citizenship" dimension of performance.
6. Perhaps as a consequence, g predicts objectively measured performance (either job knowledge or job sample performance) better than it does subjectively
measured performance (such as supervisor ratings).
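The "corrected for statistical artifacts" in finding 1 refers to the standard psychometric corrections for criterion unreliability and range restriction. A minimal sketch of both formulas follows; the input values (observed r, rating reliability, SD ratio) are purely illustrative assumptions, not figures from this article:

```python
from math import sqrt

def correct_for_attenuation(r_obs: float, r_yy: float) -> float:
    """Disattenuate an observed validity for criterion unreliability
    (classical correction, applied to the criterion side only)."""
    return r_obs / sqrt(r_yy)

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction;
    u = SD(applicant pool) / SD(restricted incumbent sample)."""
    return r * u / sqrt(1.0 + r * r * (u * u - 1.0))

# Purely illustrative inputs: observed r = .25, rating reliability = .52,
# SD ratio = 1.5 (assumed values, not figures from this article).
r1 = correct_for_attenuation(0.25, 0.52)
r2 = correct_for_range_restriction(r1, 1.5)
print(round(r1, 2), round(r2, 2))  # 0.35 0.48
```

With these assumed inputs, a modest observed correlation climbs toward the corrected ~.5 that finding 1 reports, which is why uncorrected and corrected validities can look so different.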
Utility of g Relative to Other "Can Do" Components of Performance
7. Specific mental abilities (such as spatial, mechanical, or verbal ability) add
very little, beyond g, to the prediction of job performance. g generally accounts for
at least 85–95% of a full mental test battery's (cross-validated) ability to predict
performance in training or on the job.
8. Specific mental abilities (such as clerical ability) sometimes add usefully to
prediction, net of g, but only in certain classes of jobs. They do not have general
utility.
9. General psychomotor ability is often useful, but primarily in less complex
work. Its predictive validities fall with complexity while those for g rise.
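Finding 7's incremental-validity claim can be illustrated with a small simulation: regress performance on g alone, then on g plus a specific ability, and compare the two R² values. Everything below — the sample size, path weights, and variable names — is a hypothetical sketch, not data from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical score structure (illustration only): a specific ability
# that is itself substantially g-loaded, and a criterion driven mostly by g.
g = rng.standard_normal(n)
spatial = 0.7 * g + 0.7 * rng.standard_normal(n)
perf = 0.5 * g + 0.3 * spatial + rng.standard_normal(n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_g = r_squared(g.reshape(-1, 1), perf)                  # g alone
r2_both = r_squared(np.column_stack([g, spatial]), perf)  # g + specific
print(f"g alone: {r2_g:.3f}, g + spatial: {r2_both:.3f}, "
      f"g's share: {r2_g / r2_both:.0%}")
```

Because the specific ability is itself g-loaded, adding it raises R² only slightly, so g's share of the joint prediction lands in the high range that finding 7 describes.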
Utility of g Relative to the "Will Do" Component of Job Performance
10. g predicts core performance much better than do non-cognitive (less
g-loaded) traits, such as vocational interests and different personality traits. The
latter add virtually nothing to the prediction of core performance, net of g.
11. g predicts most dimensions of non-core performance (such as personal discipline and soldier bearing) much less well than do non-cognitive traits of personality and temperament. When a performance dimension reflects both core and
non-core performance (effort and leadership), g predicts to about the same modest
degree as do non-cognitive (less g-loaded) traits.
12. Different non-cognitive traits appear to usefully supplement g in different
jobs, just as specific abilities sometimes add to the prediction of performance in
certain classes of jobs. Only one such non-cognitive trait appears to be as generalizable as g: the personality trait of conscientiousness/integrity. Its effect sizes
for core performance are substantially smaller than gs, however.
Utility of g Relative to Job Knowledge
13. g affects job performance primarily indirectly through its effect on job-specific knowledge.
14. g's direct effects on job performance increase when jobs are less routinized,
training is less complete, and workers retain more discretion.
15. Job-specific knowledge generally predicts job performance as well as does
g among experienced workers. However, job knowledge is not generalizable (net
of its g component), even among experienced workers. The value of job knowledge is highly job specific; g's value is unrestricted.
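Findings 13–15 describe a simple path model in which g works largely through job-specific knowledge. A minimal sketch of the indirect-effect arithmetic, with purely hypothetical standardized coefficients (none of these numbers come from the article):

```python
# Hypothetical standardized path coefficients (illustrative only):
a = 0.6   # g -> job-specific knowledge
b = 0.5   # job-specific knowledge -> job performance
c = 0.2   # direct path g -> job performance

indirect = a * b       # g's effect routed through job knowledge
total = c + indirect   # total standardized effect of g on performance
print(indirect, total)  # 0.3 0.5
```

In this toy model the indirect path exceeds the direct one, matching finding 13's claim that g operates primarily through knowledge; finding 14's conditions (less routine, less training, more discretion) would correspond to a larger c.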
Utility of g Relative to the "Have Done" (Experience) Component of Job Performance
16. Like job knowledge, the effect sizes of job-specific experience are sometimes high, but they are not generalizable.
17. In fact, experience predicts performance less well as all workers become
more experienced. In contrast, higher levels of g remain an asset regardless of
length of experience.
18. Experience predicts job performance less well as job complexity rises,
which is opposite the trend for g. Like general psychomotor ability, experience
matters least where g matters most to individuals and their organizations.