
Technical Report 1183

Review of Aviator Selection

Cheryl Paullin
Personnel Decisions Research Institutes, Inc.

Lawrence Katz
U.S. Army Research Institute

Kenneth T. Bruskiewicz and Janis Houston


Personnel Decisions Research Institutes, Inc.
Diane Damos
Damos Aviation Services

July 2006


United States Army Research Institute
for the Behavioral and Social Sciences

Approved for public release; distribution is unlimited.


U.S. Army Research Institute
for the Behavioral and Social Sciences

A Directorate of the Department of the Army


Deputy Chief of Staff, G1

Authorized and approved for distribution:

STEPHEN GOLDBERG, Acting Technical Director
MICHELLE SAMS, Acting Director
Research accomplished under contract
for the Department of the Army

Personnel Decisions Research Institutes, Inc.

Technical review by

David M. Johnson, U.S. Army Research Institute


Tonia Heffner, U.S. Army Research Institute

NOTICES

DISTRIBUTION: Primary distribution of this Technical Report has been made by ARI.
Please address correspondence concerning distribution of reports to: U.S. Army
Research Institute for the Behavioral and Social Sciences, Attn: DAPC-ARI-MS, 2511
Jefferson Davis Highway, Arlington, Virginia 22202-3926.

FINAL DISPOSITION: This Technical Report may be destroyed when it is no longer needed. Please do not return it to the U.S. Army Research Institute for the Behavioral
and Social Sciences.

NOTE: The findings in this Technical Report are not to be construed as an official
Department of the Army position, unless so designated by other authorized documents.
REPORT DOCUMENTATION PAGE
1. REPORT DATE (dd-mm-yy): July 2006
2. REPORT TYPE: Interim
3. DATES COVERED (from... to): June 2004 - June 2005
4. TITLE AND SUBTITLE: Review of Aviator Selection
5a. CONTRACT OR GRANT NUMBER: DASW01-03-D-0008
5b. PROGRAM ELEMENT NUMBER: 633007
5c. PROJECT NUMBER: A792
5d. TASK NUMBER: 308
5e. WORK UNIT NUMBER:
6. AUTHOR(S): Cheryl Paullin (Personnel Decisions Research Institutes, Inc.); Lawrence C. Katz (U.S. Army Research Institute); Kenneth T. Bruskiewicz and Janis Houston (Personnel Decisions Research Institutes, Inc.); Diane Damos (Damos Aviation Services)
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Personnel Decisions Research Institutes, Inc., 650 Third Ave. South, Suite 1350, Minneapolis, Minnesota 55402
8. PERFORMING ORGANIZATION REPORT NUMBER: Technical Report No. 493
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): U.S. Army Research Institute for the Behavioral & Social Sciences, ATTN: DAPE-ARI-IR, 2511 Jefferson Davis Highway, Arlington, VA 22202-3926
10. MONITOR ACRONYM: ARI-RWARU
11. MONITOR REPORT NUMBER: Technical Report 1183
12. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution unlimited.
13. SUPPLEMENTARY NOTES: Contracting Officer's Representative and Subject Matter POC: Lawrence Katz
14. ABSTRACT (Maximum 200 words): This report presents a review of research in the aviator selection and general personnel selection domains. That information was used to identify knowledge, skills, attributes, and other factors that should be included in a job analysis focusing on the Army aviator job. It was further used to develop a recommended strategy for an Army aviator selection battery.
15. SUBJECT TERMS: Aviator selection; selection; military; personality; psychomotor; cognitive; job analysis; review
16. REPORT: Unclassified
17. ABSTRACT: Unclassified
18. THIS PAGE: Unclassified
19. LIMITATION OF ABSTRACT: Unlimited
20. NUMBER OF PAGES:
21. RESPONSIBLE PERSON: Ellen Kinzer, Technical Publication Specialist, (703) 602-8047
Technical Report 1183

Review of Aviator Selection

Cheryl Paullin
Personnel Decisions Research Institutes, Inc.

Lawrence Katz
U.S. Army Research Institute

Kenneth T. Bruskiewicz and Janis Houston


Personnel Decisions Research Institutes, Inc.
Diane Damos
Damos Aviation Services

Rotary-Wing Aviation Research Unit


William R. Howse, Chief

U.S. Army Research Institute for the Behavioral and Social Sciences

2511 Jefferson Davis Highway, Arlington, VA 22202-3926

July 2006

Army Project Number: 633007A792
Personnel Performance and Training

Approved for public release; distribution is unlimited

REVIEW OF AVIATOR SELECTION

EXECUTIVE SUMMARY

Research Requirement:

In June 2004, the U.S. Army Research Institute for the Behavioral and Social Sciences
(ARI) was tasked with conducting the research and development towards a new Selection
Instrument for Flight Training (SIFT). The Army's stated objectives were: 1. Develop a
computer-based and web-administered selection instrument for Army flight training with
emphasis upon aptitudes for Future Force aviator performance within the Future Combat
Systems environment; 2. Develop an aviator selection instrument that corrects or minimizes risks
associated with several deficiencies identified in the current selection instrument - the Alternate
Flight Aptitude Selection Test (AFAST); 3. Develop the selection instrument so that the Army
will be able to rapidly assess its current performance as a predictor, revise the instrument when
necessary and adapt its application to selection for related occupational categories such as
Unmanned Aerial Vehicle Operators and Special Operations Aviators; and, 4. Maximize
utilization (by inclusion or adaptation) of existing tests as may be found in use or under
development within the Department of Defense. The first task was to review the relevant
selection literature. The overall goal of this initial task was to collect information that could be
used to produce a rational recommendation for a specific selection and testing strategy for Army
aviation.

Procedure:

A focused review of aviator selection research, supplemented by relevant research from


the general personnel selection domain, was conducted. The review identified more than 150
potentially relevant articles. Rather than rely entirely on a narrative summary, a spreadsheet was
developed to summarize information about various test batteries and to facilitate comparison of
the test batteries when deriving a recommended selection strategy. From this analysis, a
selection strategy for replacing the Army's current aviator selection battery was recommended.
The results of this review also informed the job analysis study conducted as part of the SIFT
project.

Findings:

Research clearly suggests that cognitive ability, or general intelligence (g), will be an
important predictor of aviator performance. However, there is reason to believe that measures of
the following constructs may add incremental validity beyond that achieved by a battery that
reliably and accurately measures general intelligence: psychomotor skills; selective and divided
attention; working memory; aviation interest/knowledge; flying experience; and, personality.
The recommended selection strategy is a two-stage testing process. The first stage of testing will
measure cognitive and personality/motivational traits important for the aviator job. These tests do not require any non-standard computer peripherals and can be administered via the Internet in
virtually any location with access to a desktop computer, the Internet, and a test proctor. The
second stage of the test battery will include performance-based measures of psychomotor and
information processing skills. These tests do require non-standard computer peripherals and may
better serve the needs of Army aviation as classification instruments, for tracking selected
aviators into one of the four mission platforms. Both the U.S. Navy and the U.S. Air Force
currently use an aviator selection test battery that measures cognitive abilities important for U.S.
Army Aviators, and one of these two batteries should be adopted for Army aviator selection. The
U.S. Army also possesses two non-cognitive inventories that can be adapted for use with the
Army aviator applicant population. Finally, it is recommended that a small number of new
ability tests and non-cognitive scales be developed to measure abilities or traits that are not
currently measured by any of the readily-accessible test batteries or non-cognitive instruments.

Utilization and Dissemination of Findings:

This product is one of many emanating from the SIFT effort. The contents of this report
flow mainly into decision processes internal to the project, but also document the
overall conduct of the effort. Documentation of the development of this selection instrument is
necessary to provide a basis to defend the scientific and theoretical underpinnings of the test and
to provide a detailed base from which revisions can be made in time. This report provides
information for use in transition of the selection instrument into operation.

REVIEW OF AVIATOR SELECTION

CONTENTS

INTRODUCTION
    Overview of Existing Army Aviation Accession Procedures
    Brief History of Aviator Selection
FOCUSED LITERATURE REVIEW
    Literature Review Methodology
    Findings from Aviator Selection Research Literature
        General Aviator Selection Reviews
OBSTACLES AND ISSUES IN CONDUCTING AVIATOR SELECTION RESEARCH
    Training Performance as a Criterion Measure
    Statistical/Methodological Issues
    Low Base Rate
        Predictor Variables
        Criterion Variables
FACTOR-ANALYTIC WORK IN THE AVIATOR SELECTION RESEARCH LITERATURE
MODELS OF SKILL ACQUISITION
EVIDENCE OF PREDICTIVE VALIDITY FOR FLIGHT TRAINING PERFORMANCE
    Damos (1993) Meta-Analysis
    Hunter and Burke (1994) Meta-Analysis
    Martinussen (1996) Meta-Analysis
    Martinussen and Torjussen (1998) Meta-Analysis
    Summary of Meta-Analytic Validation Studies
PERSONALITY RESEARCH IN THE AVIATOR SELECTION ARENA
FINDINGS FROM GENERAL SELECTION RESEARCH LITERATURE
INCREMENTAL PREDICTIVE VALIDITY
    Aviator Selection Research Literature
    General Selection Research Literature
    Summary of Incremental Validity Evidence
GROUP DIFFERENCES
    Cognitive Ability Tests
    Psychomotor Tests
    Speeded Information Processing Tests
    Personality and Temperament Measures
WHAT SHOULD THE ARMY MEASURE?
REVIEW OF EXISTING AVIATOR SELECTION TEST BATTERIES
SELECTION STRATEGY RECOMMENDATIONS
BEST BET PREDICTOR MEASURES
    Stage 1: Cognitive Measures
        Aviation Selection Test Battery (ASTB)
        Air Force Officer Qualification Test (AFOQT)
        Cognitive Prioritization (Popcorn Test)
        Perceptual Speed and Accuracy
    Stage 1: Non-Cognitive Measures
        Test of Adaptable Personality (TAP)
        Assessment of Individual Motivation (AIM)
        Self-Description Inventory Plus (SDI+)
        Armstrong Laboratory Aviation Personality Scale (ALAPS)
        New Non-cognitive Scales
    Stage 2: Psychomotor Skills and Multiple-Task Performance (Performance-Based Measures)
        Test of Basic Aviation Skills (TBAS)
        Wombat©
        New Performance-Based Measure
CONCLUSIONS
REFERENCES
APPENDICES
    A. Overview of Aviator Selection Test Batteries
    B. Overview of Non-Cognitive Inventories that may be Relevant for Aviator Selection
    C. Recommended Selection Strategy for Army Aviators

LIST OF TABLES

TABLE 1. HUNTER & BURKE (1994) META-ANALYTIC RESULTS FOR VARIOUS PREDICTOR TYPES
TABLE 2. MARTINUSSEN (1996) META-ANALYTIC RESULTS FOR VARIOUS MEASUREMENT METHODS
REVIEW OF AVIATOR SELECTION

Introduction

In June 2004, the US Army Research Institute for the Behavioral and Social Sciences
(ARI) awarded the Selection Instrument for Army Flight Training (SIFT) contract to Personnel
Decisions Research Institutes (PDRI). The Army's stated objectives were: 1) Develop a
computer-based and web-administered selection instrument for Army flight training with
emphasis upon aptitudes for Future Force aviator performance within the Future Combat
Systems environment; 2) Develop an aviator selection instrument that corrects or minimizes risks
associated with several deficiencies identified in the current selection instrument - the Alternate
Flight Aptitude Selection Test (AFAST); 3) Develop the selection instrument so that the Army
will be able to rapidly assess its current performance as a predictor, revise the instrument when
necessary and adapt its application to selection for related occupational categories such as
Unmanned Aerial Vehicle Operators and Special Operations Aviators; and, 4) Maximize
utilization (by inclusion or adaptation) of existing tests as may be found in use or under
development within the Department of Defense.

The project was divided into several tasks. This report summarizes efforts conducted in
relation to Task 1: Review the existing Army aviation accession process and relevant literature.
The overall goal of Task 1 was to collect information that could be used to produce a rational
decision on a specific selection and testing strategy.

Overview of Existing Army Aviation Accession Procedures

A review of existing Army aviation accession procedures was conducted to provide the
context for recommending a replacement for the AFAST. This included reviewing Army
regulations and other documents. US Army aviators are Commissioned or Warrant Officers.
Commissioned Officers primarily come from a military academy, or from a Reserve Officer
Training Corps (ROTC) or Officer Candidate School (OCS) program. Civilians and enlisted
personnel from any branch of the US military may apply to become an Army Aviation Warrant
Officer. Prior to volunteering for aviation duty, candidates must meet standards for becoming a
Commissioned Officer or a Warrant Officer in the US Army. Among other things, this includes
meeting physical and medical standards, and earning a qualifying score on the relevant
admission exam (Scholastic Aptitude Test or the American College Test for Commissioned
Officers; Armed Services Vocational Aptitude Battery (ASVAB) General-Technical (GT)
Composite for Warrant Officers).

Candidates who apply to become an Army aviator must meet additional standards beyond
those described above. The selection process is rigorous and there are typically five to ten
applicants for every available training seat. Selection standards are highly similar across all
accession sources but the exact procedures vary to some degree, depending on whether the
applicant is a Commissioned versus a Warrant Officer, the source from which he/she comes
(e.g., US Army versus US Army National Guard or Reserve), and whether or not the applicant is
already on active duty at the time of application. In general, all Army Aviator applicants must
meet physical fitness and medical standards beyond those required to become a Commissioned
or Warrant Officer, meet minimum and maximum age requirements, earn a qualifying score on the AFAST, and be recommended by a selection board. Flight experience and post high-school
coursework or degree are preferred but not required.

The following Army regulations (AR) and other documents outline selection and testing
requirements:
"* Selection and Training of Army Aviation Officers (AR 611-110, 14 Nov 2003)
"* Aviation Warrant Officer Training (AR 611-85, 15 June 1981)
"* Army Personnel Selection and Classification Testing (AR 611-5, 10 June 2002)
"* Appointment of Commissioned and Warrant Officers of the Army (AR 135-100, 1
Sept 1994)
"* Warrant Officer Procurement Program (Department of the Army Circular 601-99-1, 23
April 1999)
"* Warrant Officer Professional Development (Department of the Army Pamphlet 600-
11, 30 Dec 1996)
"* Order to Active Duties as Individuals Other than a Presidential Selected Reserve Call-
up, Partial or Full Mobilization (AR 135-210, 17 Sept 1999)
"* Policies and Procedures for Active-Duty List Officer Selection Boards (Department of
the Army Memo 600-2, 24 Sept 1999)

After candidates are selected as Army aviators, they report to Ft. Rucker, AL for training.
All candidates complete an 18-week Initial Entry Rotary Wing (IERW) core training program
and a two-week Basic Navigation course, followed by 12 to 20 weeks of training in a specific
operational aircraft. Student aviators are assigned, or "classified" into one of four tracks for
aircraft-specific training: Scout, Attack, Cargo, or Utility. Classification decisions are currently
based in part on academic grades in IERW and in part on the needs of the Army. Upon
completion of aircraft-specific training, aviators are assigned to a Military Occupational
Specialty (MOS) that corresponds to the type of aircraft they are qualified to fly, and they begin
their first operational tour as an Army Aviator.
Brief History of Aviator Selection

The prediction of aviator performance played a prominent role in the military research
and development arena for most of the last century. In a review of aviator selection research,
Hunter (1989) explained that this continued emphasis is a result of the expense involved in
aviator training, noting that, almost without exception, aviator training is the most expensive of
the training programs conducted by the military services. The US Navy estimates that the sunk
costs for student aviators who fail training range from $500,000 to $1,000,000, depending on the
stage at which failure occurs (Helm & Reid, 2003). According to Carretta and Ree (2000),
estimates of the cost of each person who failed to complete US Air Force (USAF) undergraduate
aviator training range from $50,000 (Hunter, 1989) to $80,000 (Siem, Carretta, & Mercatante,
1988). The amount approaches $500,000 per candidate by the end of flight school for US Army
aviators.

Since World War I, the military services have explored the relationships between
measures of a wide variety of personal characteristics and aviator performance. As early as
World War I, tests of mental alertness and emotional stability were found to be predictive of
aviator success (North & Griffin, 1977). Between World War I and World War II, measures of
psychomotor coordination received the primary emphasis in aviator selection research. A flurry
of developmental activity produced "aircraft-like controls" for use in measuring complex
coordination, two-hand coordination, rudder control skills, dual-task performance, and the like.
A number of these psychomotor tests, especially those of a more complex nature, were found to
be valid for aviator selection. However, in the early 1950s psychomotor tests were largely
abandoned as a result of persistent problems with reliability and maintainability of these
electromechanical devices (Hunter, 1989).

With the advent of World War II, research on aviator selection and classification
expanded to include measurement of additional abilities such as spatial orientation and the use of
new testing tools (e.g., motion pictures, photographs). Much of what is known today about
spatial and psychomotor abilities, as well as several other related attributes, stems from the
classic Army Air Force (AAF) work (Guilford & Lacey, 1947; Melton, 1947) and the Navy's
Pensacola 1000 Aviator Study (Franzen & McFarland, 1945). After the war, Fleishman and his
colleagues continued psychomotor abilities research (e.g., Fleishman, 1967, 1972; Fleishman &
Hempel, 1954). Researchers also investigated personality characteristics related to attrition from
aviator training and/or aviator performance (Griffin & Mosko, 1977).

Within the last few decades, innovations in aviator selection and classification have
centered on attributes such as multi-task performance (e.g., Griffin & McBride, 1986), division
of attention (e.g., Carretta, 1987d), decision making speed (e.g., Carretta, 1988), and attitudinal
and motivational traits (Foushee & Helmreich, 1986; Helmreich, Foushee, Benson & Russini,
1986). Personality also received a good deal of attention in the past two decades. Much of the
early work was exploratory in nature, attempting to determine which personality traits were
related to various outcomes relevant for aviators, but not necessarily guided by any particular
theory of personality or aviator performance. For example, several researchers administered
personality inventories that had been well established as useful for purposes other than aviator
selection, including the Minnesota Multiphasic Personality Inventory (Caldwell, O'Hara,
Caldwell, Stephens, & Krueger, 1993), Eysenck Personality Inventory (Bartram & Dale, 1982;
Jessup & Jessup, 1971), and the Edwards Personal Preference Schedule (Fry & Reinhardt, 1969).
Other researchers developed their own inventory, for example, the programmatic research
conducted by the USAF that eventually led to the NEO-PI and the Self-Description Inventory
(Christal, 1975; Christal, Barucky, Driskill, & Collis, 1997; Tupes & Christal, 1961).

Some of the research specifically focused on developing personality profiles for helicopter aviators (Caldwell, et al., 1993; Geist & Boyd, 1980; Harrs, Kastner, & Beerman,
1991; Howse, 1995). Another arena of increasing importance is selection of individuals to fly
unmanned aerial vehicles (UAVs). For example, US Navy researchers have examined the
validity of a test battery designed to measure psychomotor, multi-tasking, and visuospatial
abilities in a small sample of UAV operators, with promising results (Phillips, Arnold, &
Fatolitis, 2003).

This report describes the specific procedures, findings, and implications of a focused
review of the aviator selection literature. As an initial step in the development of SIFT, the goal
of this review was to produce a rational recommendation for a specific selection and testing
strategy for Army aviation. Therefore, consideration was given to methodological limitations
and obstacles in conducting selection research, as well as to the incremental validities and
practical issues associated with the tests being studied.
Focused Literature Review

As noted above, aviator selection and classification research has been conducted since the
1920's and a tremendous amount has been written on this subject. This focused literature review
was designed to provide a research-based foundation for a recommended selection strategy.
Therefore, no attempt was made to review every aviator selection study that has ever been
conducted. Rather, the focus was on key studies related to currently or recently available
selection batteries, particularly those studies conducted by the US military.

The specific goals for conducting this literature review were to:
1. Review studies that delineate the knowledge, skill, ability, and other characteristics
(KSAOs) important for performing the aviator job, with particular emphasis on studies
that involve helicopter aviators. This information would help inform the job analysis
phase of the project.
2. Review studies that focus on aviator selection batteries currently (or recently) in use
by the US Air Force, US Navy, and other relevant organizations (e.g., foreign military,
commercial airlines).

Literature Review Methodology

The first step in this task was to identify currently or recently available test batteries that
might be viable candidates for consideration as a replacement for the AFAST and, once those
were identified, to locate and summarize key research about them. This step required consideration of a wide range of possible tests or test batteries, with the expectation that, at a later date, a number of potential candidates would be ruled out with relative ease (e.g., test batteries that cannot be computerized or ones that involve prohibitively expensive licensing fees). It was also possible that one or more existing test batteries would be recommended as an intact entity, with minimal changes, or that specific subtests from a variety of existing batteries would be recommended.

Seven on-line databases were searched first to obtain pertinent literature. These included PsycINFO, the Defense Technical Information Center (DTIC), the Air Force Research Laboratory Research Archive Library, the Civil Aeromedical Institute database of technical reports, the Naval Medical Research Laboratory database of technical reports, and the archives of the Human Factors and Ergonomics Society (HFES). The HFES database covers all of the Society's publications, including the Society's bulletin and magazine. The seventh database searched was the
United States Air Force Human Resources Laboratory (AFHRL, 1968-1998) Topics, hosted by
the Innovation Center for Occupational Data, Applications, and Practices. All of these databases
were searched using terms such as "aviator selection," "ab initio" (from the beginning),
"personality," and "psychomotor." Personnel Decisions Research Institutes (PDRI) also

4
searched its archives for articles and technical reports related to aviator selection, based on prior
work with the US Air Force, particularly in the area of Crew Resource Management (CRM).

In addition, the Damos Aviation Services (DAS) database was searched, which consists
primarily of articles related to aviator selection and performance. This database currently has
over 3800 entries. The earliest entry pertaining to aviator selection in the DAS library dates
from 1921. It contains references to both civilian and military aviator selection, and a substantial
proportion of the entries are concerned with foreign aviator selection. The DAS database covers
all of the International Journal of Aviation Psychology, all of the Proceedings of the International Symposium on Aviation Psychology, and the last 19 years of Aviation, Space, and Environmental Medicine. Any recent materials that had not yet been entered into the database
were searched by hand. Hand searches also were conducted on recently edited books that had
not yet been entered into the database. Several individuals involved with aviator selection were
also contacted to obtain updates on their current aviator selection research projects.
Findings from Aviator Selection Research Literature

Most of the research on aviator selection has been conducted by the military in the United
States, the United Kingdom, and Norway. Some research was also published by military
organizations in other countries (e.g., Israel, Turkey) and in the commercial sector. The Federal
Aviation Administration (FAA) and National Aeronautics and Space Administration (NASA)
have both conducted research in the arenas of cognitive and non-cognitive testing. Of most
relevance for the present research is work conducted by NASA in the area of personality traits
impacting aircrew performance (e.g., Helmreich, Foushee, Benson, & Russini, 1986; Musson,
Sandal, & Helmreich, 2004) and work originated by the FAA's Civil Aeromedical Institute
(CAMI) on a test battery called CogScreen (King & Flynn, 1995). The following sections
summarize key research found in the aviator selection research literature, as well as in the
general selection research literature.

General aviator selection reviews. A number of reviews of the aviator selection literature
have been published (Carretta & Ree, 2000, 2003; Dolgin & Gibb, 1988; Griffin & Koonce,
1996; Hunter, 1989; North & Griffin, 1977; Ree & Carretta, 1996, 1998; Rogers, Roach, &
Short, 1986; Tirre, 1997; Turnbull, 1992), including one that focuses specifically on
methodological difficulties and common shortfalls associated with such research (Damos, 1996).
In their review of aviator selection methods, Carretta & Ree (2000) state, "Research results point
to g [general intelligence] as the most important underlying construct in the prediction of aviator
success. Clearly, three others have been shown to be important but to a smaller degree: flying
job knowledge, personality, and general psychomotor ability" (p. 31). These authors note that,
"Simulation-based tests may significantly increment the validity of cognitive tests when the two
approaches are used together. These results are consistent with a large-scale meta-analysis of 19
commonly used personnel selection methods across many occupations (Schmidt & Hunter,
1998)" (p. 24). Regarding personality measures, Carretta and Ree comment that a great deal of
research has been conducted in this area, with contradictory results. They go on to say that
organizing the results according to the Big Five personality variables of Neuroticism,
Extraversion, Openness, Agreeableness, and Conscientiousness (Norman, 1963; Tupes & Christal, 1961) would likely be enlightening, but has not (yet) been done in the aviator selection
arena.

Griffin and Koonce (1996) also wrote a comprehensive review of aviator selection, with
particular emphasis on measures of psychomotor skills. They review numerous research studies
showing that several types of predictor measures are valid for predicting aviator performance,
including:
"* aptitude (cognitive ability);
"* psychomotor skills;
* work simulation;
* divided attention (or multiple-task performance);
* flying experience; and,
* biographical information.

According to these authors, uncorrected, zero-order correlations for psychomotor skills are in the .30 to .40 range and multiple regression correlations are in the .50 range in research
studies involving continuous criterion measures such as instructor check/flight ride ratings. With
regard to measures of psychomotor skills, the authors concluded,

Automated versions of vintage psychomotor tests (developed in the 1930s and 1940s)
seem to be as predictive of military aviator performance today as in the past. The
use of computers may have enhanced the predictive power of the psychomotor tests by
making their functioning dependent on digital electronic circuitry, rather than analog
electromechanical devices, resulting in more reliable performance measurement. The
psychomotor tests receiving the most attention today are the CCT [complex coordination
test] and the THCT [two-hand coordination test], originally developed by Mashburn and
colleagues before World War II (Mashburn, 1934). These tests were significant
predictors of USAF and Navy pass-fail criteria in the past, and automated versions are
predictive today. However, the tests are better predictors of normally distributed,
continuous criteria such as flight grades and number of flight hours for the Navy and
check rides and advanced training ratings for the USAF [than of traditional pass-fail] (p.
143).

Tirre (1997) made a useful distinction between two different approaches to aviator
selection:
" Basic attributes - In this approach, the test battery measures specific attributes that
are assumed to underlie aviator performance. Examples of this approach include the
USAF's Air Force Officer Qualifications Test (AFOQT) and Basic Aviator Test
(BAT; Carretta, 1987a).
" Learning sample (simulation) - In this approach, the test battery simulates tasks
performed in flight, with varying degrees of realism. An example is the Canadian
Automated Pilot Selection System (CAPSS).

Each approach has advantages and disadvantages, many of which are outlined by Tirre.
The basic attributes approach has a long history outside aviator selection and is generally less
costly and time-consuming to develop and administer than the learning sample approach. In fact,
it has only been possible to use the learning sample approach widely and effectively with the
advent of powerful desktop computers. The learning sample approach offers the advantage of
dynamic (as opposed to static) measurement of cognitive processing skills and often involves
measures that appear very realistic to test-takers. With either approach, the reliability and
validity of the measurement tool depends critically on how carefully it was developed.

Obstacles and Issues in Conducting Aviator Selection Research

There are several obstacles to conducting research studies in the aviator selection domain,
many of which have been recognized for a long time and many of which are exceedingly
difficult to overcome. These issues have been described in several of the preceding reviews
(e.g., Carretta & Ree, 2000; Damos, 1996), and the most important ones are summarized below.
When reviewing the literature, it became clear that some researchers recognized these obstacles
and acknowledged how their study results and conclusions were likely impacted; many others
did not.

Training Performance as a Criterion Measure

The criterion measure in aviator selection research studies is almost always a measure of
training performance. While training performance is clearly an important outcome measure, it
certainly is not the only outcome variable of interest. Unfortunately, it is exceedingly difficult to
obtain reliable and accurate measures of aviator performance after training. The reliance on
training performance as a criterion measure is particularly problematic because researchers are
typically unable to differentiate various types of "failure." Different abilities or traits may
underlie different types of failure, but the pattern of relationships will be difficult or impossible
to detect if there is no way to identify and code the reason(s) for failure.

The reliance on training outcome measures is also problematic when attempting to evaluate the validity of predictor measures that would not necessarily be expected to predict
training performance (e.g., personality measures). Research conducted as part of the US Army's
Project A shows that measures of cognitive ability predict declarative knowledge and technical
components of performance (McCloy, Campbell, & Cudeck, 1994) while measures of non-
cognitive characteristics predict motivational aspects of job performance (Campbell, Hanson, &
Oppler, 2001; McCloy, Campbell, & Cudeck, 1994) and contextual performance (Borman,
Penner, Allen, & Motowidlo, 2001; Campbell, Harris, & Knapp, 2001; Campbell & Knapp,
2001). While motivational factors certainly play a role in training performance and most
students are highly motivated to succeed, the types of training criterion measures typically used in aviator selection research do not separate technical performance from motivational aspects of performance. Thus, training criterion measures are likely weighted more heavily toward academic and technical aspects of performance (e.g., flight instructor ratings, grades, pass-fail status) and less heavily toward motivational aspects of performance.

Statistical/Methodological Issues

Aviator selection research is plagued by a number of statistical and methodological issues. Some of them are extremely difficult, if not impossible, to overcome.
1. Using predictor or criterion measures of low or unknown reliability. In many cases,
the reliability of predictor and criterion measures is not reported and may be quite
low, particularly in the case of criterion measures. Thus, the impact of unreliable
measurement on the outcomes of the study cannot be evaluated.
2. The most common criterion measure is a dichotomous variable: pass-fail status at
the end of training. When working with dichotomous criterion variables, the highest
possible value of a correlation between any predictor measure and that criterion
variable depends on the distribution of the dichotomous variable. The maximum
possible value of the correlation is lower the more the distribution varies from a 50-50
split. For aviator training pass-fail status, the pass-fail distribution is usually much
more extreme than 50-50. It is possible to correct the correlation coefficient for dichotomization (a standard formula is shown following this list), and some researchers did this. It is important to note that, while
pass-fail performance in training is impacted by the attitudes and skills of student
aviators, it is also impacted by the policies of aviator accession and training
organizations. When there is a strong need for aviators, for example during war,
there is strong pressure to ensure that virtually all students will pass training. In
addition, most aviator training programs make every effort to ensure that most
students pass training because it is very costly to fail a candidate after several weeks
of expensive training.
3. Aviator selection research is based on a highly selected and homogeneous
population. Before they begin an aviator training program, all applicants have been
extensively screened, including meeting a required minimum score to enter the
military, meeting a required minimum score on an aviator aptitude battery, meeting
education requirements, and/or earning strong, positive evaluations and
recommendations from a superior officer or a selection board. The samples used in
most aviator selection research are also typically highly homogeneous in terms of
race and gender. Screening occurs in multiple stages, with each stage serving to
further restrict the sample relative to the general population. Correlations can be
corrected for some types of range restriction, but there is disagreement about the
extent to which such corrections should be made. Damos (1996) argues that it is not
appropriate to make such corrections because aviators will never be selected from an
unrestricted sample. In addition, some types of restriction cannot be corrected for,
including the demographic composition of the sample.
4. Failure to correct for capitalization on chance. A number of researchers in the
aviator selection domain have used regression techniques to evaluate the validity of a
test battery, without recognizing or correcting for the fact that such techniques
capitalize on chance variations present in their sample. The reported multiple
correlation may not generalize to a new sample, especially if the original sample was not very large (a standard shrinkage adjustment is shown following this list).

5. Small sample sizes. In some aviator selection research, the sample is very small.
This means there may have been very little power to detect significant relationships
even if they did exist.
6. Measurement method is confounded with measurement target. As Carretta and Ree
(2000), Hough (2001), and others have noted, in some research studies, measurement
method (e.g., biodata or personality inventory) is confounded with the measurement
target (i.e., KSAOs). In some cases, there is a close correspondence between
measurement method and measurement target. For example, "psychomotor tests"
virtually always measure one or more psychomotor abilities, and typically very little
else. In contrast, the "biodata" or "personality" measurement method can be used to
target leadership tendencies, conscientiousness, stress tolerance, psychopathology,
motivation, or other KSAOs. Summarizing findings across all biodata inventories or
all personality inventories tells us little about which underlying traits are more and
less predictive of aviator performance. The situation is worsened by the fact that not
all biodata and personality inventories measure the same targets. Thus, across
studies, there may be a great deal of variation in the extent to which relevant and
irrelevant KSAOs are measured.
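For reference, the corrections mentioned in issues 2 through 4 above take standard textbook forms; the expressions below summarize those conventions and are not formulas or values drawn from the studies reviewed here. The biserial correlation implied by an observed point-biserial correlation r_pb with a criterion split into proportions p and q = 1 - p is

    r_bis = r_pb √(pq) / h,

where h is the ordinate of the standard normal density at the point dividing the distribution into p and q. Thorndike's Case II correction for direct range restriction on the predictor is

    r_c = r (S/s) / √(1 - r^2 + r^2 (S/s)^2),

where r is the correlation in the restricted sample and S and s are the unrestricted and restricted predictor standard deviations. The shrinkage-adjusted squared multiple correlation for a regression with k predictors and sample size n is

    R^2_adj = 1 - (1 - R^2)(n - 1) / (n - k - 1).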

Low Base Rate

Predictor variables. Some tests are designed to identify applicants, aviator trainees, or experienced aviators who have a severe psychopathological problem or a neurological deficit.
Tools such as CogScreen, dichotic listening tests, and the MMPI have been used for this purpose.
King and Flynn (1995) describe CogScreen as "a self-administered screening tool, in which the
subject uses a light pen on a cathode ray tube monitor. CogScreen may be superior to traditional
neuropsychological testing in determining cognitive deficits after a central nervous system injury
or dementing disease .... The CogScreen is very sensitive to the nuances of neuropsychological
functioning and can be administered in a group setting" (p. 954). No validity studies for
CogScreen could be located. The USAF explored the possibility of using CogScreen for aviator
medical screening, but it was never used operationally for aviator selection. The US Navy is
currently including CogScreen, or a variation of it, in their ongoing studies to enhance aviator
selection.

Severe psychopathology and neurological deficits are rare in the general population, and
are even rarer in the highly-selected population of aviators (including applicants and trainees).
While it may be exceedingly important to identify individuals in the aviator population who
might or will experience these problems, doing so is like looking for the proverbial "needle in a haystack." The low base rate for these problems makes it extremely difficult to show a statistically
significant relationship between test scores and outcome measures, even if the test is valid. In
past research, the failure to find significant correlations for these types of tests was sometimes
inappropriately generalized to all tools of a particular type, for example, all personality
inventories. Callister, King, Retzlaff, and Marsh (1999) point out, "Testing for psychopathology
has been shown to be of limited value in the assessment of the highly-functioning aviator
population. On the other hand, measures of normal personality characteristics have been shown
to be useful in a variety of settings and populations" (p. 885).

Criterion variables. As noted above, the most common criterion measure is pass-fail
status at the end of aviator training, and the base rate for failure is typically low. The low base
rate issue becomes even more extreme when researchers attempt to categorize failure according
to type or reason, for example, failure due to lack of technical competence versus failure due to
attitudinal problems, or when the training failure rate is mandated by policy to be extremely low.
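As an illustrative calculation (a hypothetical example, not a value taken from any study reviewed here): with a 5% failure rate, even a predictor whose latent (biserial) relationship with the pass-fail criterion were perfect could yield an observed point-biserial correlation of only about h / √(pq) = .103 / √(.05 × .95) ≈ .47, where h is the normal ordinate at the cut separating the 5% who fail from the 95% who pass; a more plausible latent validity of .50 would appear as an observed correlation near .24. Low base rates thus compress observed validities well before any question of test quality arises.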
Factor-Analytic Work in the Aviator Selection Research Literature

In spite of the aforementioned limitations, selection test developers have continued to search for measures that might predict aviation performance. Accordingly, researchers have
factor analyzed scores on several aviator selection batteries to uncover which constructs yield
incremental predictive validity. Most of the work was conducted by USAF researchers. Several
of these studies are summarized below.

Carretta and Ree (1997a) administered the Armed Services Vocational Aptitude Battery
(ASVAB) and 17 psychomotor tests to enlisted USAF personnel (n = 429). They summarized
their findings as follows:

Confirmatory factor analysis yielded higher-order factors of general cognitive ability (g)
and psychomotor/technical knowledge (PM/TK). PM/TK was interpreted as Vernon's
(1969) practical factor (k:m). In the joint analysis of these batteries, g and PM/TK each
accounted for about 31% of the common variance. No residualized lower-order factor
accounted for more than 7%. PM/TK influenced a broad range of lower-order
psychomotor factors. The first practical implication of these findings is that psychomotor
tests are expected to be at least generally interchangeable. A second implication is that
the incremental validity of psychomotor tests beyond cognitive tests is expected to be
small (p. 165).

Ree & Carretta (1992) conducted a similar study, using the ASVAB and three
psychomotor tests from the Basic Aviator Test (Carretta, 1987a). The sample was 354 USAF
enlisted recruits. They found that the two types of tests correlated with each other, with average
correlations in the .30's (corrected for range restriction, but not for test unreliability). They also
found that, as expected, there was a large first factor, which they labeled "psychometric g," and
that both the ASVAB and the psychomotor tests loaded on it. Confirmatory factor analyses
revealed that both a seven-factor and a nine-factor model fit the data equally well. The more
parsimonious seven-factor model includes psychometric g and a higher-order general
psychomotor factor which accounted for 57% and 9% of the total variance respectively. Other
factors included 1) Verbal-Technical (accounted for an additional 8% of the variance), 2) Non-
technical General Knowledge (10%), 3) Time-Sharing (4%), 4) Two Hand Coordination (7%),
and 5) Complex Coordination (5%).

Carretta and Ree (1998) compared the factor structure of the ASVAB with the factor
structure of the AFOQT. The factor structure for each test battery was derived in a different
sample of USAF personnel, because the two batteries differ in difficulty level and intended
audience (with the ASVAB being taken by all Air Force applicants and the AFOQT being taken
by Flight Officer applicants). The authors conclude "The AFOQT is comprised of five lower-
order factors: verbal, math, spatial, aircrew, and perceptual speed which accounted for 20% of
the total variance, and g in hierarchical position accounted for 41% of the total variance. Compared with the ASVAB, the AFOQT was less saturated [with g] but had more common
factors and had a greater proportion of its variance associated with common factors" (p. 12).

Carretta, Retzlaff, and King (1997) compared the AFOQT and the Multidimensional
Aptitude Battery (MAB). The MAB is a broad-based test of intellectual ability patterned after
the Wechsler Adult Intelligence Scale but designed for group administration. The sample in this
study was approximately 2,200 USAF aviator candidates. A joint factor analysis of the AFOQT
and the MAB revealed that each battery had a hierarchical structure. The correlation between the
higher-order factors from the two batteries was .981, indicating that both measured the same
thing, which these authors conclude is general intelligence (g).

Ambler and Smith (1974) analyzed data for the seven tests of the Guilford-Zimmerman
Aptitude Survey, the Hidden Figures Test, and four subtests from the US Navy-Marine Corps
aviation selection battery [1) Aviation Qualification, which includes reading, math, and science
questions related to a typical college experience; 2) Mechanical Comprehension; 3) Spatial
Apperception; and 4) Biographical Inventory]. Scores were available for approximately 1,700
aviation trainees (presumably all male, given that the study was published in 1974). The
researchers factor analyzed the subtest scores in the total sample and in various subsamples and
found that six factors appeared consistently across samples, which they labeled Mechanical,
Spatial Manipulation, Perceptual Flexibility, Verbal Intelligence, Numerical Intelligence, and
Flight Motivation.

Martinussen and Torjussen (1998) factor analyzed scores on a multi-aptitude test battery
used for aviator selection into the Norwegian Air Force. The battery is administered in a multi-
stage process. Stage 1 includes 12 subtests intended to measure General Intelligence, Technical
Comprehension, and Spatial Ability. Stage 2 includes seven subtests intended to measure
Simultaneous Capacity and Orientation Ability. Finally, Stage 3 includes a personality inventory
called the Defense Mechanism Test (DMT) which is described as a measure of psychodynamic
defense mechanisms and was developed for use in selecting persons into high-risk professions.
Very little information is provided about any of the subtests.

The authors randomly selected 450 applicants from the applicant pool who had Stage 1
and Stage 2 scores, and factor-analyzed the scores using Principal Component Analysis with
Varimax rotation. The tests included in each stage were factor-analyzed separately. Three
factors, labeled Mechanical Comprehension and Spatial Ability, Verbal Ability, and Numerical
Reasoning accounted for 61% of the variance in the Stage 1 tests and three factors, labeled
Spatial Ability, Time Estimation, and Perceptual Speed and Coordination, accounted for 62% of
the variance in the Stage 2 tests.

In summary, factor analyses of several aviator selection batteries suggest that it is possible to derive a hierarchical general intelligence factor, with sub-factors related to verbal
ability, numerical ability, mechanical ability, spatial ability, and perceptual speed/flexibility. A
general psychomotor factor, with some specific sub-factors also appears when the test battery
explicitly contains psychomotor tests.

The factor-analytic work in the aviator selection domain is consistent with research
conducted on the structure of human abilities that is not entirely based on military or aviator test data (e.g., see Fleishman & Mumford, 1988; Lubinski & Dawis, 1992; McHenry & Rose, 1988;
Russell, Reynolds, & Campbell, 1994). It is worth noting that, with the exception of one subtest
in the US Navy-Marine Corps selection battery (Ambler & Smith, 1974), the aviator selection
batteries included in the factor-analytic studies described above did not include measures of non-
cognitive traits. It is not particularly surprising, then, that no underlying non-cognitive factors
were found.
Models of Skill Acquisition

This section examines more closely the hierarchical general intelligence factor derived by
the factor analyses described above. Specifically, the question, "How does intelligence relate to
skill acquisition during flight training?" is addressed.

Ackerman (1987; 1988; 1990) developed a model of skill acquisition that is applicable to
the development of piloting skills. The theory is founded on the concept of attentional resource
allocation, that is, the amount of attentional resources required by various tasks at various points
in time, and the amount of attentional resources that individuals can bring to bear in any given
situation. Ackerman's model divides skill acquisition into three broad phases, with a
corresponding type of ability that is the primary predictor of performance within each phase. In
Phase I, the primary learning task is to comprehend the new task. Declarative knowledge and
general intelligence are the primary predictors of performance in this phase. In Phase II, the
primary learning task involves integrating the cognitive and motor processes required to perform
the task. In this phase, knowledge compilation and perceptual speed are the primary predictors of
performance. In Phase III, task performance becomes proceduralized (or automatic), and thus
requires fewer attentional resources. Procedural knowledge and psychomotor abilities are the
most important predictors in this phase. Tasks vary in the extent to which they can be
proceduralized. In Ackerman's terminology, tasks that can become proceduralized are called
consistent tasks; those that cannot become proceduralized are called inconsistent tasks.

According to Ackerman's theory, general cognitive ability is expected to be most important during the early stages of skill acquisition for all tasks and to remain important for
inconsistent tasks. Processing speed is expected to be most important during intermediate stages
of learning for any task. Psychomotor skills will become increasingly important as a task
becomes better-learned, but may only outstrip cognitive ability in importance for inconsistent
tasks. Keil and Cortina (2001) found confirmatory evidence for the relationship between
cognitive ability and performance on consistent and inconsistent tasks but did not find support
for the relationship between perceptual speed and psychomotor skills and consistent and
inconsistent tasks. Additional research by Ackerman and colleagues shows that both ability and
non-ability factors (e.g., personality, vocational interests, motivation, and self-concept) play a
role in determining performance on complex (inconsistent) tasks (Ackerman, Kanfer, & Goff,
1995; Ackerman & Woltz, 1994).

Ree, Carretta, and Teachout (1995) developed a causal model to explore the role played
by general intelligence (g) and prior knowledge of flying on performance during aviator training.
The measures of g and prior flying knowledge were based on AFOQT composite and subtest
scores collected at the time of application to flight training. Criterion measures included
measures of job knowledge (academic classroom performance) and work samples (check ride
performance) collected at various points during a 53-week training program. When the model was tested in a large sample of USAF aviator trainees (n = 3,428 males), the authors found that g
directly influenced the acquisition of flight knowledge both prior to and during training and
indirectly influenced work sample performance through the acquisition of job knowledge. Prior
knowledge of flying had almost no influence on acquisition of job knowledge during the
academic portions of aviator training, but directly influenced performance on early work sample
measures. Early work sample performance was very strongly related to later work sample
performance. Carretta and Ree (1997b) tested the same model in a sample of male USAF
aviators (n = 3,369) and in a small sample of female USAF aviators (n = 59). The basic model
was supported and appeared to work similarly for males and females, although the female
sample was too small to draw any strong conclusions.

Evidence of Predictive Validity for Flight Training Performance

An enormous number of validation studies have been conducted in the aviator selection
domain - too many to cover in this report. Fortunately, several meta-analyses focusing on the
validity of selection tests have been published. In all the studies, measurement method is
confounded with measurement target (KSAOs) to at least some degree. This section describes
meta-analyses addressing validity evidence.

Damos (1993) Meta-Analysis

The first meta-analysis of aviation performance predictors was published by Damos in 1993. She meta-analyzed 12 studies that involved a single-task performance-based measure, for
example, tracking or dichotic listening, and 14 studies that involved multiple-task performance-
based measures, that is, two or more single-task measures administered simultaneously, such as
tracking plus dichotic listening. The mean correlation (uncorrected) between single-task
performance and flight grades was .18 (n = 5,378); the correlation between multiple-task
performance and the same criterion was .23 (n = 6,920). Moderator analyses suggested that the
level of validity for multiple-task performance-based measures depended on the type of sample
(military versus civilian) and level of flight experience (students versus fully-trained aviators),
with higher validity in studies with a civilian sample or with a fully-trained aviator sample.

Hunter and Burke (1994) Meta-Analysis

The second meta-analysis was published by Hunter and Burke in 1994.1 They reviewed
200 studies published between 1940 and 1990 that involved aircrew selection. Sixty-nine studies
contained one or more usable validity coefficients, and the authors located or derived 468
validity coefficients from these studies. It is worth noting that studies reporting only a composite
score based on a multi-aptitude test battery were excluded from the meta-analysis. The majority
of the validity coefficients were based on studies conducted in the US (77%), involving a
military sample (94%), and/or a sample that was training to fly fixed-wing aircraft (86%). Most
of the studies used dichotomous pass-fail criterion measures (84%) which, as noted above, places

1 An earlier version of this meta-analysis was also published in Hunter and Burke (1992). The general
findings are the same in the two versions, but the specific values cited for various predictor types are not
exactly the same.

a ceiling on the maximum possible correlation, with a lower ceiling to the extent the criterion
distribution departs from a 50-50 split (as is likely the case in virtually all of the studies).
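To make that ceiling concrete, the short sketch below (an illustration added here, not part of the original studies) computes the attenuation factor implied by splitting a normally distributed criterion into pass/fail at a given pass rate; the function name and the example pass rates are hypothetical, and bivariate normality is assumed.

# Illustrative sketch: attenuation of an underlying validity coefficient when a
# continuous criterion is dichotomized at pass rate p (assumes bivariate
# normality; observed r is roughly r_true * phi(z_p) / sqrt(p * (1 - p))).
from scipy.stats import norm

def dichotomization_factor(pass_rate):
    """Multiplier applied to the underlying correlation after a pass/fail split."""
    z = norm.ppf(pass_rate)                                   # latent cut point
    return norm.pdf(z) / (pass_rate * (1.0 - pass_rate)) ** 0.5

for p in (0.50, 0.70, 0.90):
    print(f"pass rate {p:.2f}: attenuation factor = {dichotomization_factor(p):.2f}")
# Roughly .80 at a 50-50 split, .76 at 70-30, and .59 at 90-10, so observed
# validities shrink as the pass/fail distribution moves away from an even split.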

Hunter and Burke (1994) categorized each validity coefficient according to one of 16
predictor types. They then applied bare-bones meta-analytic procedures (Hunter & Schmidt,
1990). Table 1 is adapted from a table of results published in Hunter and Burke (1994). It
shows, for each predictor type, the mean sample-weighted validity (uncorrected), the percentage
of variance explained by sampling error, and the lower bound for the 95% confidence interval.2
The predictor type with the highest mean sample-weighted validity is "Job Sample." The
authors do not describe this predictor type, but one might speculate that it includes flight
simulation tests. Mechanical ability, gross dexterity, reaction time, biodata inventory, and
information (General or Aviation) predictors also showed relatively high validity and a
confidence interval that did not include zero. Recall that "biodata" is a measurement method.
There is no way of determining, from this meta-analytic review, what KSAOs were measured.
After conducting the bare-bones meta-analysis, Hunter and Burke applied two validity
generalization decision rules: 1) Does sampling error account for more than 75% of the variance
in observed validities? and 2) Does the 90% credibility limit include zero? Answering "yes" to
the first question allows one to conclude that validity is generalizable across samples and
settings. Answering "no" to the second question allows one to conclude that the true
validity in the population is greater than zero. None of the predictor types included in this meta-
analysis met the first decision rule, but several met the second. For these predictor types, it is
reasonable to believe that the true validity is greater than zero in any setting or sample, but the
level of validity may vary from one setting or sample to another: Quantitative Ability; Spatial
Ability; Mechanical; Aviation Information; General Information; Gross Dexterity; Perceptual
Speed; Reaction Time; Biodata Inventory; and, Job Sample.
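For readers unfamiliar with the bare-bones procedure, the sketch below shows how the quantities behind these two decision rules are computed. The study correlations and sample sizes are made-up values, not figures from Hunter and Burke; the formulas are the standard bare-bones ones, and the 1.28 multiplier reflects one common convention for the lower 90% credibility value.

# Bare-bones meta-analysis sketch (hypothetical study values). Computes the
# sample-weighted mean r, the proportion of observed variance attributable to
# sampling error, and a lower 90% credibility value from the residual variance.
rs = [0.25, 0.10, 0.32, 0.18, 0.05]        # observed validities (made up)
ns = [150, 300, 90, 220, 400]              # study sample sizes (made up)

total_n = sum(ns)
mean_r = sum(n * r for n, r in zip(ns, rs)) / total_n
var_obs = sum(n * (r - mean_r) ** 2 for n, r in zip(ns, rs)) / total_n
var_error = (1 - mean_r ** 2) ** 2 / (total_n / len(ns) - 1)   # average sampling error variance
pct_explained = 100 * var_error / var_obs
sd_rho = max(var_obs - var_error, 0) ** 0.5
cred_90_lower = mean_r - 1.28 * sd_rho     # one-tailed 10th percentile convention

print(f"mean r = {mean_r:.2f}; {pct_explained:.0f}% of variance from sampling error")
print(f"lower 90% credibility value = {cred_90_lower:.2f}")
# Rule 1: if sampling error explains more than 75% of the observed variance,
# validity is treated as generalizable. Rule 2: if the credibility value is
# above zero, the true validity is taken to be positive even if its size varies.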

2 Hunter and Burke (1994) claim that, in keeping with decision rules established by Hunter & Schmidt
(1990), they calculated and used the 90% credibility limit, rather than the 95% confidence interval. In their
table of results, however, they report the 95% confidence interval.

Table 1

Hunter and Burke (1994) Meta-analytic Results for Various Predictor Types

                                      Total             Variance          95% CI
                           # of       Sample           Explained by       Lower
Predictor Type           Correlations  Size    Mean r  Sampling Error     Bound
General Ability               14       8,071     .13        21%           -.05
Verbal Ability                17      22,841     .12         6%           -.09
Quantitative Ability          34      46,884     .11        28%            .01
Spatial Ability               37      52,153     .19        14%            .05
Mechanical                    36      42,418     .29         8%            .11
General Information           13      29,951     .25         4%            .06
Aviation Information          23      25,295     .22        12%            .06
Gross Dexterity               60      48,988     .32        13%            .15
Fine Dexterity                12       2,792     .10        45%           -.09
Perceptual Speed              41      33,511     .20        19%            .05
Reaction Time                  7      10,633     .28        16%            .16
Biodata Inventory             21      27,004     .27         6%            .07
Age                            9      13,810    -.10        11%           -.25
Education                      9       6,163     .06        12%           -.16
Job Sample                    16       2,814     .34        37%            .19
Personality                   46      22,486     .10        11%           -.16

Notes.
1. Mean r is weighted by sample size, but has not been corrected for any other artifacts.
2. When analyzing the data, validity coefficients were reflected for predictor types that would be expected to
show a negative correlation with the criterion variable, that is, those involving measures of speed. Thus, in the
table above, positive correlations indicate that better performance on the predictor is associated with better
performance on the criterion measures.

According to Hunter and Burke (1994), the following predictor types may show non-zero
validity in some settings or samples:
"* General Ability
"* Verbal Ability
"* Fine Dexterity
"* Age
* Education
* Personality - Recall that "personality" is a measurement method. Across studies,
some of the personality scales likely were expected to show a negative correlation with
criterion performance (e.g., Anxiety), while other scales were likely expected to show a
positive correlation (e.g., Self-Confidence). For still other personality scales, there likely
was no clear a priori expectation about the direction of the correlation (e.g., Risk-
Taking). One might argue that averaging across all the different types of scales does not
provide an accurate representation of the true level of validity that might be achieved by
measures of specific personality traits.

Hunter and Burke conducted moderator analyses for a subset of the predictor types for
which there were sufficient data. They examined four possible moderators: 1) time period in
which the study was conducted (1940-1960 versus 1961-1990), 2) nationality of the study
sample (US versus other), 3) service branch (Air Force versus other), and 4) aircraft type (fixed-
wing versus rotary-wing). The most consistent finding was that the time period in which the
study was conducted moderated the validity of several predictor types, with lower mean validity
in more recent studies. The authors speculate that the decline in validity over time could be due
to reduced variability in the applicant pool, more extreme splits on dichotomous criterion
measures (e.g., farther away from a 50-50 split in the proportion of trainees who pass versus fail
UPT), or changes in the nature of aviator training. The other moderator variables, at least as
coded in this meta-analysis, provided very little explanatory power.

Martinussen (1996) Meta-Analysis

The third meta-analysis was published by Martinussen in 1996. She conducted a standard computerized literature search and also made a special effort to collect unpublished
validation studies focusing on military aircrew selection from researchers in NATO countries.
Studies that did not report the magnitude of nonsignificant correlations or only reported
corrected correlations were excluded. (Hunter and Burke do not say how they handled such
studies). Martinussen reports that she reviewed 134 studies, and located 66 independent samples
in 50 studies that met her criteria for inclusion. Fifty percent of the studies were conducted in
the United States. Most samples involved military aviators, with the bulk of those belonging to
the Air Force. Two-thirds of the studies involved fixed-wing aviators and 21% involved rotary-
wing aviators (12% did not specify the type of aircraft). Twenty (40%) of the studies were
unpublished material. All of the studies used performance during aviator training as the criterion
variable - dichotomous pass/fail status, instructor ratings, or course grades. While the
distribution of study types is similar to that described by Hunter and Burke (1994), comparison

of the reference lists reveals very little overlap in the studies included in each review. In fact,
fewer than 20 studies appeared in both meta-analyses.

Martinussen categorized each predictor measure into one of nine measurement methods
(predictor types). Each is described below:
1. Cognitive includes all tests designed to measure a specific type of cognitive ability
(e.g., mechanical, spatial, verbal, quantitative).
2. Intelligence includes tests specifically designed to measure global intelligence.
3. Psychomotor/Information Processing includes all tests involving apparatus or a
computer. Obviously, this could encompass several different types of ability
measures (e.g., psychomotor skills, reaction time, etc.).
4. Aviation information includes tests with questions about aviation. Martinussen points
out that most psychologists interpret such tests as measures of motivation to become
an aviator.
5. Biographical inventories collect background information about applicants, and then
summarize the information according to a total score. Although Martinussen does not
comment on the nature of the inventories, it is likely that many of them were
empirically-scored.
6. Personality tests include a variety of personality inventories. The data are not
organized according to personality trait but, unlike Hunter and Burke (1994),
Martinussen did attempt to take the expected relationship between the underlying
scale and the criterion variable into account by reflecting the sign of the correlation, if
needed, based on information in the original study. In cases where no expectation
about the direction of the relationship could be derived from the original study,
Martinussen coded the absolute value of the correlation, in effect making it positive.
This has the overall effect of inflating the mean validity coefficient (a brief
simulation following this list illustrates why).
7. Combined index was used when a validity coefficient was reported only for a
combination of predictor measures. Martinussen does not report how, or if, she took
account of the fact that such measures may capitalize on chance, for example, if they
were created using a regression procedure. (Hunter and Burke excluded these
studies.)
8. Academics includes school grades or tests that measured mathematical or language
proficiency.
9. Training experience includes measures of flying performance prior to selection into
the training program that was the focus of the study. It is not clear if these included
self-reported or verified, objective measures of prior flight hours/performance, or
both.
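As flagged under the personality category above, coding the absolute value of a correlation whose expected sign is unknown cannot lower, and will usually raise, the average. The simulation below is purely illustrative (it does not use Martinussen's data); the study count, sample size, and random seed are arbitrary.

# Illustrative simulation: k "studies" in which a scale has zero true validity.
# Averaging signed correlations gives a value near zero; averaging absolute
# values gives a clearly positive mean, i.e., an inflated validity estimate.
import numpy as np

rng = np.random.default_rng(0)
k, n = 200, 100                              # number of studies, sample size per study
signed, absolute = [], []
for _ in range(k):
    x = rng.standard_normal(n)               # predictor scores
    y = rng.standard_normal(n)               # criterion, unrelated to x
    r = np.corrcoef(x, y)[0, 1]
    signed.append(r)
    absolute.append(abs(r))

print(f"mean signed r   = {np.mean(signed):+.3f}")   # near .00
print(f"mean absolute r = {np.mean(absolute):+.3f}") # near .08 when n = 100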

Table 2 shows the number of correlations, total sample size, mean sample-weighted
correlation (observed and corrected for dichotomization), percent variance explained by
sampling error, and 90% credibility limit for each measurement method. Using decision rules
similar to those applied by Hunter and Burke (1994), Martinussen suggests that the mean validity
of the Academics measurement method (r = .15) is likely to generalize across samples and

settings, given that 70% of the variance in observed validity is explained by sampling error. For
the remaining measurement methods, it appears that there may be moderator variables that
impact the level of validity across settings and samples, but the 90% credibility limit is greater
than zero for all but two of them (biographical inventory and personality).

Martinussen (1996) also found a negative correlation between year of study publication
and validity coefficients for each type of predictor measure except Training Experience. This
is consistent with the finding of a decline in validity across time reported by Hunter and Burke
(1994). She also conducted several moderator analyses. Of most interest for the present effort
was her finding of a significant difference in the mean validity of two measurement methods -
(general) intelligence and training experience - depending on type of aircraft. General
intelligence tests showed higher validity in samples of rotary-wing aviators (mean uncorrected
r = .27) than in samples of fixed-wing aviators (mean uncorrected r = .11).

Table 2

Martinussen (1996) Meta-analytic Results for Various Measurement Methods

                                             Total              Variance            90%
                                 # of        Sample            Explained by      Credibility
Measurement Method            Correlations    Size     Mean r  Sampling Error       Limit
Cognitive                          35        17,900   .22 (.24)     12%              .07
Intelligence                       26        15,403   .13 (.16)     18%              .03
Psychomotor/Info Processing        29         8,522   .20 (.24)     28%              .10
Aviation Information               16         3,736   .22 (.24)     46%              .14
Personality                        21         6,304   .13 (.14)     24%              .00
Biographical Inventory             13        11,347   .21 (.23)      4%              .00
Combined Index                     14         5,362   .31 (.37)     13%              .19
Academics                           9         4,267   .15           70%              .11
Training Experience                10         5,806   .25            7%              .07

Note. Mean r is weighted by sample size. The value enclosed in parentheses is the sample-weighted mean r
corrected for criterion dichotomization.

In contrast, training experience showed higher validity in samples of fixed-wing aviators
(mean uncorrected r = .35) than in samples of rotary-wing aviators (mean uncorrected r = .12).
The latter finding may be due to the fact that individuals who pursue a private pilot's license
prior to entering a formal aviator training program are more likely to do so in a fixed-wing
aircraft. As a consequence, the training experience may more directly transfer to, and thus
positively affect, performance in fixed-wing aviator training than in rotary-wing aviator training.
This finding is also consistent with anecdotal evidence that "too much" prior experience or
training in fixed-wing aircraft can be detrimental when learning to fly a rotary-wing aircraft.

Martinussen and Torjussen (1998) Meta-Analysis

The fourth meta-analysis, conducted by Martinussen and Torjussen (1998), focused exclusively on a test battery used for aviator selection into the Norwegian Air Force (NAF).
Four studies were included, with two to five independent samples for each of 19 subtests
included in the test battery. Sample sizes ranged from 244 to 977 per subtest. In all four studies,
the test battery was used in the aviator selection process so there was direct restriction of range
on the subtest scores. Furthermore, spatial abilities were measured in each of two successive
stages of the battery, albeit with different tests. As a consequence, the final sample was highly
restricted in terms of spatial ability. Criterion measures were based on training performance,
primarily pass/fail status, but also instructor ratings and course grades.

Out of 19 subtests, 10 showed a 90% credibility limit greater than zero. The mean
uncorrected validity was lower than .20 for all but two of them - Aviation Information (mean
uncorrected r = .21) and Instrument Comprehension (mean uncorrected r = .26), both of which
were administered in the first stage of testing. Martinussen corrected the validities for
dichotomization of the criterion measure (when appropriate), but did not correct them for range
restriction. The corrected validities are consistently somewhat higher. One can only speculate
how high they might be if corrected for range restriction as well.
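Although the original authors did not apply it, the standard correction for direct range restriction (Thorndike's Case II) gives a sense of how much higher the operational validities might be. The sketch below uses hypothetical values; in particular, the ratio of unrestricted to restricted predictor standard deviations is assumed, since it was not reported.

# Thorndike Case II correction for direct range restriction (hypothetical
# values, not figures reported by Martinussen and Torjussen). u is the ratio
# of the unrestricted to the restricted predictor standard deviation.
def correct_direct_range_restriction(r_restricted, u):
    """Estimate the unrestricted correlation from a range-restricted one."""
    num = r_restricted * u
    den = (1 - r_restricted ** 2 + (r_restricted ** 2) * (u ** 2)) ** 0.5
    return num / den

r_obs = 0.26      # e.g., an observed subtest validity from a selected sample
u = 1.5           # assumed SD ratio; the true value would come from applicant data
print(f"corrected r = {correct_direct_range_restriction(r_obs, u):.2f}")
# With these assumed values the corrected validity is roughly .37, illustrating
# why uncorrected coefficients from selected samples understate validity.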

Interestingly, this is the only study in which the 90% credibility limit was greater than
zero for a personality measure, although the mean validity was still low and consistent with the
level reported in other meta-analytic reviews (mean r = .06 and .12 for two non-independent
scoring methods used within the same inventory). According to Martinussen and Torjussen, the
personality inventory - the DMT - measures psychodynamic defense mechanisms, and was
specifically developed to select personnel for high-risk professions. The Norwegian Air Force
used it as a post-selection screening device for individuals who had already been selected into
aviator training.

Summary of Meta-Analytic Validation Studies

As noted above, there was very little overlap in the studies included in three of the meta-
analytic reviews. (The obvious exception is that all four of the studies included in the
Martinussen and Torjussen (1998) review also appeared in Martinussen's (1996) broader
review.) Only four of the studies reviewed by Damos (1993) appear in the Hunter and Burke
(1994) citation list, and only one appears in the Martinussen (1996) citation list. Fewer than 20
references appear in both the Hunter and Burke (1994) and Martinussen (1996) reviews.
Different authors also categorized the predictor measures differently, making it difficult to

compare the results from different reviews. Nevertheless, the following summary statements can
be made:
" Global intelligence tests showed about the same, relatively low level of validity in the
two meta-analyses in which they were included (mean uncorrected r =.13), with
support for validity generalizability in one study but not in the other.
" The validity of specific cognitive ability tests seems to vary depending on the type of
ability being measured but tends to be higher than that of more global measures of
intelligence. This statement is supported by the mean uncorrected validity of .22 for
the cognitive measurement method, as opposed to the mean uncorrected validity of .13
for the global intelligence measurement method, as reported in Martinussen (1996). It
is also supported, to some degree, by Hunter and Burke's finding that the mean
uncorrected validities for two specific cognitive ability predictors types, Spatial (r
.19) and Mechanical (r = .29), are higher than that for the global intelligence predictor
type (r =.13). However, the mean uncorrected validity of verbal and quantitative
ability predictor types reported by Hunter and Burke (r = .12 and 11, respectively) is
about the same as that reported for the global intelligence predictor type. This finding
may be at least partially due to higher content overlap between global intelligence and
verbal and quantitative ability tests than between global intelligence and spatial or
mechanical ability tests.
"* There is some evidence that Mechanical ability tests are among the more valid
measures of performance during aviator training, as evidenced by the mean uncorrected
validity of .29 in the Hunter and Burke (1994) meta-analysis and the mean uncorrected
validity of .26 for the Instrument Comprehension subtest in the Martinussen and
Torjussen (1998) meta-analysis. (Factor analyses of the Norwegian Air Force test
battery suggested that the Instrument Comprehension measures both mechanical and
spatial abilities.)
"* Aviation Information tests showed about the same level of validity in the three meta-
analyses in which they were included - about .22 (uncorrected).
" The biographical inventory measurement method showed a relatively high mean
validity in the two meta-analyses in which it was included (mean uncorrected r = .27
and .21, respectively). However, sampling error explained very little of the variability
in validity estimates across studies, suggesting that there are other factors that impact
the validity of such inventories. One of the most important factors may be the extent to
which the inventory was designed to measure KSAOs relevant for the aviator job.
" At least some types of psychomotor and information processing tests are likely to
exhibit a reasonable level of validity in almost any sample or setting, as evidenced by
the Damos (1993) finding of mean uncorrected validities of.18 and .23 for single-task
and multiple-task performance-based measures, respectively. This is supported by the
range of mean validities from. 10 to .32 for measures of dexterity, reaction time, and
perceptual speed in Hunter and Burke (1994) and by the Martinussen (1996) mean
validity of .20 for psychomotor/information processing tests.
"* Measures of spatial ability showed a mean uncorrected validity of.19 in the Hunter
and Burke meta-analysis. In the Norwegian Air Force battery, subtests with titles that

20
appear most like traditional measures of spatial ability (Paper Forming, Rotating
Patterns, and Figure Pattern) showed very low validity. However, two other subtests
that contain a spatial ability component, Raven's Matrices and Instrument
Comprehension, showed much higher validity (mean uncorrected r = .16 and .29,
respectively). Stage two of the NAF battery also includes spatial ability tests, and they
showed very low validity, but this could be due to the extreme restriction of range
given that applicants had already been directly screened on spatial abilities during the
first stage of the testing process.
Personality measures, in general, showed low validity for predicting performance in
training. However, as noted above, the meta-analytic reviews did not calculate the
mean validity for different types of personality traits, and averaging across scales more
and less relevant for the aviator job likely obscured the true level of validity that such
measures can achieve.

Personality Research in the Aviator Selection Arena

As noted earlier in this report, a great deal of research has been conducted in the area of
personality measurement for use in aviator selection, with contradictory results. Lambirth,
Dolgin, Rentmeister-Bryant, and Moore (2003) commented, "The US Navy, Air Force, and
Army have investigated a variety of personality tests for use in pilot selection batteries. These
efforts have had little impact on the selection of pilots or other aircrew because of response bias
and the inappropriateness of the clinical measures selected for a homogeneous, non-clinical
population. However, personality tests that emphasize positive attributes, rather than
psychopathology, and performance-based personality measures, have proven to be more accurate
descriptors of personality and predictors of performance" (p. 416).

Job analyses and other studies suggest that non-cognitive characteristics are important for
aviator performance. Musson, Sandal, and Helmreich (2004) say, "Superior performance [among
pilots] has consistently been linked to a personality profile characterized by a combination of
high levels of instrumentality and expressivity along with lower levels of interpersonal
aggressiveness. This personality profile has sometimes been referred to as the 'Right Stuff,'
suggesting this is the ideal description of an astronaut or pilot. Inferior performance has been
linked to personality profiles typified by a hostile and competitive interpersonal orientation...
(the 'Wrong Stuff')... or to low achievement motivation combined with passive-aggressive
characteristics ('No Stuff')" (p. 342). The authors point out that these profiles seem to be
especially important in terms of working as part of a crew.

As noted above, several of the obstacles to conducting good research in the aviator
selection domain are particularly problematic for personality measures. These include the
reliance on training outcomes as the criterion measure and summarizing results by averaging
validity estimates across several different personality scales.
Findings from General Selection Research Literature

There is a great deal of information about the validity of measurement methods in the
general selection research literature. Cognitive ability tests have been shown to predict job
performance, particularly technical or "can-do" aspects of job performance, in a wide variety of

jobs (Hunter & Hunter, 1984; McHenry, Hough, Toquam, Hanson, & Ashworth, 1990; Schmidt
& Hunter, 1998; Vernon, 1969). Schmidt and Hunter (1998) collected meta-analytic evidence
from a large number of sources and summarized it according to different types of personnel
measures, that is, measurement methods, for predicting performance in training programs and for
predicting overall job performance. Personnel measures with the highest validity for predicting
performance in job training programs include general mental ability (GMA) tests (mean r
= .56),3 integrity tests (mean r = .38), peer ratings (mean r = .36), employment interviews
(structured and unstructured) (mean r = .35), conscientiousness tests (mean r = .30), and
biographical data (mean r = .30). Personnel measures with the highest levels of validity for
predicting overall job performance include work sample tests (mean r = .54), GMA tests (mean
r = .51), structured employment interviews (mean r = .51), peer ratings (mean r = .49), job
knowledge tests (mean r = .48), training and education ratings (mean r = .45), job tryout
procedures (mean r = .44), and integrity tests (mean r = .41). As noted previously, there is a
high degree of correspondence between method and target of measurement for some of the
personnel measures, for example GMA tests, but a low degree of correspondence for other
personnel measures, for example employment interviews.

In many jobs, technical aspects of performance are not the only aspects that matter to the
organization. For example, there is a great deal of interest in predicting organizational
citizenship (Organ, 1994; Organ & Ryan, 1995), contextual performance (Borman & Motowidlo,
1993), and "will-do" aspects of job performance (Campbell, Hanson, & Oppler, 2001). To
identify attributes that underlie these non-technical aspects of job performance, personnel
selection researchers turned to the vast literature on non-cognitive attributes. Research in the
personality, biodata, and vocational interest domains has clearly shown that measures of non-
cognitive attributes can predict job performance (Barrick & Mount, 1991; Gellatly, Paunonen,
Meyer, Jackson, & Goffin, 1991; Hunter & Hunter, 1984; McHenry, et al. 1990; Ones,
Viswesvaran, & Schmidt, 1993; Tett, Jackson, & Rothstein, 1991), particularly when a careful
effort is made to identify and measure attributes that one would expect to underlie different
criterion constructs, and when the presence or importance of those criterion constructs is
considered for different types of jobs (e.g., Hough, 1992; Hough & Ones, 2002; Hurtz &
Donovan, 2000; Mount, Barrick, & Stewart, 1998; Ones & Viswesvaran, 2001a, 2001b; Reilly &
Chao, 1982; Robertson & Kinder, 1993). Several researchers have meta-analyzed validity for
personality measures, using the "Big 5" model (Norman, 1963; Tupes & Christal, 1961) or some
other model (e.g., Hogan, 1991; Hough, Eaton, Dunnette, Kamp, & McCloy, 1990) as an
organizing structure. One well-established finding is that measures of conscientiousness appear
to be a valid predictor of job performance in virtually all jobs (Barrick & Mount, 1991;
Schmidt & Hunter, 1998). The validity of other personality characteristics seems to depend, to a
greater extent, on the type of job. For example, extraversion appears to be more valid for
predicting performance in sales and managerial jobs than in other types of jobs (Barrick &
Mount, 1991; Hough, 1992).

Finally, there is evidence that vocational interests, for example interest in becoming an
aviator, can be a valid predictor of relevant job outcomes. It is generally assumed that interest in

3 In the Schmidt and Hunter (1998) meta-analysis, correlations were corrected for criterion unreliability and range
restriction (if present).

a particular occupation will lead a person to be motivated to pursue that occupation and
motivated to gain knowledge about it. In aviator selection research, there typically has not been
a clear distinction between measures of interests and measures of knowledge or background
experience, so it is not possible to estimate the likely validity of a stand-alone self-report
measure of interest in aviation. There is, however, evidence from the US Army's Project A that
scores on a self-report vocational interest inventory are valid for predicting technical job
performance in a variety of Army enlisted military occupations (McHenry, et al., 1990; Oppler,
McCloy, Peterson, Russell, & Campbell, 2001).

Incremental Predictive Validity

It is clear from the information described in the preceding section that an aviator selection
battery focusing on cognitive ability is likely to be a valid predictor of performance in training
and on the job. So, is there anything to be gained by including measures of other KSAOs in the
aviator selection process? Research suggests that there is. In addition, given the enormous cost
of aviator training, even a small increase in validity can offer significant utility to the US Army.
Aviator Selection Research Literature

Most of the research on this topic in the aviator selection arena was conducted by the
USAF, and is based on adding the Basic Attributes Test (BAT) to the AFOQT. The BAT consists
of several computer-administered tests measuring psychomotor skills, short-term memory, time-
sharing ability, and attitudes toward risk-taking. Across several studies, the BAT demonstrated
increases in the amount of variance accounted for (i.e., R2) ranging from zero to .08 (e.g.,
Carretta, 1987b, 1988; Carretta & Ree, 1996a), with higher incremental validity for criterion
measures other than training pass-fail (e.g., Advanced Training Recommendation Board [ATRB]
ratings).

Only one study systematically examined the incremental validity of the individual BAT
subtests (Carretta & Ree, 1993). This study found that a BAT-Psychomotor composite score and
a BAT-Risk composite score each (separately), when added to the AFOQT-Aviator composite
score, increased the multiple correlation (R) by about .04 for predicting pass/fail and class rank.
Adding the BAT measure of Flying Experience to the AFOQT-Aviator composite score
increased R by about .07 while adding BAT Information Processing scores did not increase R
significantly. Adding all BAT scores to the AFOQT-Aviator composite score increased R by
approximately .13 for both pass/fail and class rank (which translates into an increase in amount
of variance accounted for, R2, by about .02).
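To clarify how such increments are computed, the sketch below derives the multiple correlation for two predictors from their validities and intercorrelation and reports the gain over the first predictor alone. The correlations used are hypothetical, not the actual AFOQT or BAT statistics.

# Incremental validity sketch with hypothetical correlations. For two
# predictors, R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2).
def incremental_r(r1, r2, r12):
    """Return (multiple R, gain in R over predictor 1 alone)."""
    r_sq = (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)
    big_r = r_sq ** 0.5
    return big_r, big_r - r1

r1 = 0.30    # assumed validity of the composite aptitude score
r2 = 0.25    # assumed validity of the added (e.g., psychomotor) score
r12 = 0.20   # assumed correlation between the two predictors
R, delta = incremental_r(r1, r2, r12)
print(f"multiple R = {R:.2f}; increase over first predictor = {delta:.2f}")
# With these assumed values R is about .36, a gain of about .06; the gain
# shrinks as the two predictors become more highly correlated.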

In research that did not involve the BAT, Ree (2004c) found that two dependent variables
derived from the Test of Basic Aviation Skills (TBAS) increased the amount of variance
accounted for in basic flight training performance scores by .02 to .03. TBAS includes measures
of psychomotor skills, selective attention, spatial ability, and noticing and responding quickly
and appropriately to an "emergency." The report does not specify exactly on what the dependent
variables are based, but they appear to involve psychomotor and spatial aspects of test
performance. Retzlaff, King, and Callister (1995) and Carretta, Retzlaff, and King (1997) report
that tests of aviation interest/aptitude included in the AFOQT have been shown to be useful for
predicting aviator performance beyond measures of g and of specific cognitive abilities such as
verbal, math, spatial, and perceptual speed.

Blower and Dolgin (1990) used a hierarchical regression model to examine the
incremental validity exhibited by three tests: (1) Absolute Difference-Horizontal Tracking, (2)
Complex Visual Information Processing, and (3) Risk Taking in predicting success in primary
flight training over and above that of intelligence and demographic variables. Each resulted in
approximately a 3.5% increase in variance explained. This study also found that a
psychomotor/dichotic listening test, a Manikin test (a mental rotation task), and a Baddeley test
(an assessment of working memory) did not add incremental validity.

Several other studies used a regression approach to examine the validity of various types
of predictor measures for predicting undergraduate training performance (Bartram & Dale, 1985;
Carretta, 1989, 1990; Morrison, 1988; Olea & Ree, 1994), but did not report incremental
validity. However, the studies did report that psychomotor, spatial orientation, biographical data,
working memory, and to a lesser degree personality, all predicted at least some unique variance
in undergraduate aviator training.
General Selection Research Literature

In the general selection research literature, Schmidt and Hunter (1998) meta-analytically
derived an estimate of the incremental validity likely to occur when any of several personnel
measures were added to a measure of general mental ability for predicting a) performance in a
training program or b) overall job performance. For predicting performance in a training
program, their results show that the greatest incremental validity can be achieved by
supplementing a measure of general mental ability with an integrity test or a conscientiousness
test (increase in multiple R of .11 and .09, respectively). For predicting overall job performance,
the greatest incremental validity can be achieved by supplementing a measure of general mental
ability with an integrity test (increase in validity of .14), a conscientiousness test (increase in
validity of .12), a work sample test (increase in validity of .12), or an employment interview
(increase in validity of .09).

Other research has also shown that measures of non-cognitive attributes can provide
incremental validity beyond measures of cognitive attributes. This is especially true for
predicting non-technical aspects or "will-do" aspects of job performance (Day & Silverman,
1989; Mount, Witt, & Barrick, 2000; Ones & Viswesvaran, 2001b; Oppler, et al., 2001;
Robertson & Kinder, 1993; Russell, Mattson, Devlin, & Atwater, 1990; Salgado, 1998).
However, incremental validities have also been found in the vocational interest domain, even for
cognitive (or "can-do") aspects of job performance (Gellatly, et al., 1991; Hough, Barge, &
Kamp, 2001).
Summary of Incremental Validity Evidence

Based on the research evidence described above, there is reason to believe that measures
of the following constructs may add incremental validity beyond that achieved by a battery that
reliably and accurately measures general intelligence:
* Psychomotor skills
* Working memory
* Aviation interest/knowledge

" Flying experience - although the type of flying experience may make a difference
(fixed-wing versus rotary-wing)
" Personality (including factors such as conscientiousness and risk-taking)

Group Differences

As mentioned previously, the Army aviation applicant sample has historically been
homogeneous, that is, relatively young, male, and Caucasian, with at least a high school degree
and usually with some post-high school education. Some applicants already have or are working
on a private pilot's license, but very few are already certified to fly rotary-wing aircraft. Some
are already in the military, while others come from the civilian population. In fact, the Army is
the only branch of the US military that allows civilians and military enlisted personnel to apply
for slots in the aviation training program. (All branches allow military Commissioned Officers
to apply for aviator training.) In the Army, those applicants who are accepted, but who are not a
Commissioned Officer at the time of application, must complete Warrant Officer training before
they enter aviator training.

In the future, it is likely that the applicant population will become more diverse in terms
of race and gender but, barring major policy changes, will likely not become more diverse on the
other characteristics listed above. One of ARI's objectives is to minimize, to the extent possible,
adverse impact exhibited by the new aviator selection battery. The level of adverse impact
exhibited by a test battery depends on various factors, including the selection ratio, the general
characteristics of the applicant population, and placement of the pass-fail cutoff on a test battery.
While the adverse impact cannot be estimated at this point, the research that might help to
anticipate how race and gender subgroups are likely to score on an aviator selection test battery
can be examined.
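Although adverse impact cannot be estimated yet, the way subgroup score differences, the cutoff, and selection rates interact can be illustrated. The sketch below uses hypothetical values and assumes normally distributed scores; it computes each group's passing rate at a common cutoff and the ratio of the lower-scoring group's rate to the higher-scoring group's rate, the figure commonly compared against the four-fifths (80%) benchmark used in US employment guidelines.

# Illustrative sketch (hypothetical values; assumes normal score distributions).
# Given a standardized mean difference d between two groups and a common cutoff,
# compute each group's selection rate and the adverse impact ratio.
from scipy.stats import norm

def selection_rates(d, cutoff_z):
    """cutoff_z is expressed in SD units of the higher-scoring group."""
    rate_high = 1 - norm.cdf(cutoff_z)            # higher-scoring group
    rate_low = 1 - norm.cdf(cutoff_z + d)         # lower-scoring group sits d SDs lower
    return rate_high, rate_low

d = 0.8           # assumed subgroup difference on a cognitive composite
for cutoff in (0.0, 0.5, 1.0):                    # increasingly selective cutoffs
    hi, lo = selection_rates(d, cutoff)
    print(f"cutoff {cutoff:.1f} SD: rates = {hi:.2f} vs {lo:.2f}, impact ratio = {lo / hi:.2f}")
# The impact ratio falls as the cutoff rises, which is why adverse impact depends
# on the selection ratio and cutoff placement, not just the size of d.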
Cognitive Ability Tests

Research conducted on cognitive ability tests using military and civilian samples suggests
there will be mean score differences on most cognitive ability tests when racial groups are
compared, but that the tests will not be unfair to any racial subgroup (Campbell, 1996; Carretta
& Ree, 2000; Roth, Bevier, Bobko, Switzer, & Tyler, 2001; Russell, Reynolds, & Campbell,
1994; Sackett, Schmitt, Ellingson, & Kabin, 2001; Toquam, Corpe, & Dunnette, 1989; Wise,
Welsh, Grafton, Foley, Earles, Sawin, & Divgi, 1992). Research suggests that there will be a
standardized mean score difference of 0.6-1.0 between African-Americans and Whites, with
Whites scoring higher on average. Other evidence suggests that the Hispanic-White
standardized mean score difference will be about half as large as the African American-White
subgroup difference, again with the White mean being higher. Finally, Asian subgroups
sometimes earn a higher mean score than the White subgroup, and sometimes earn a lower mean
score. Many different interpretations of and explanations for these findings have been offered
(e.g., educational differences, subtle or overt racism, cultural bias), but no one has yet found a
way to entirely explain or eliminate the differences, and efforts to ameliorate the differences
have met with limited success (Sackett, et al., 2001; Schmitt, Sackett, & Ellingson, 2002).

Research suggests that there are no gender differences in general cognitive ability (g), but
that gender differences will appear on specific types of cognitive ability tests. Specifically,

females tend to perform better than males on tests of verbal ability and more poorly than males
on tests of spatial, mathematical, and mechanical abilities (Geary, Saults, Liu, & Hoard, 2000;
Maccoby & Jacklin, 1974; Maitland, Intrieri, Schaie, & Willis, 2000; Weiss, Kemmler,
Deisenhammer, Fleischhacker, & Delazer, 2003; Wise, et al., 1992). Burke (1995) meta-
analyzed gender subgroup differences on aviator aptitude tests and reported findings consistent
with those from the general literature. As with the race subgroup differences, a variety of
explanations have been offered for these findings, for example, differences in socialization
experiences, but no one has fully explained or eliminated them to date.

The magnitude of gender differences on spatial ability tests appears to vary considerably
with the type of test (Linn & Peterson, 1985), with the largest differences occurring on tests that
involve three-dimensional spatial rotation and the smallest differences occurring on tests that
involve spatial visualization (e.g., paper-folding tests). Boer (1991) reviewed construct validity
evidence for a variety of spatial ability tests and concluded that "the most important aspects of
spatial ability are the identification of the optimal solution strategy and, perhaps, a final process
called evaluation and confirmation. It seems that the actual execution of the solution process,
including mental rotation, is less important" (p. 108).

The US Army's Project A included several different spatial ability tests. Factor analyses
suggested that all the tests load on a single underlying factor, but that some of the tests produced
much larger race and gender subgroup differences than others (Russell & Peterson, 2001).
Specifically, a spatial abilities test called Assembling Objects showed smaller gender differences
than other spatial ability tests but was a valid predictor of behavior. A similar pattern of
findings, using most of the same spatial ability tests developed during Project A, occurred in a
large-scale study focused on revising the ASVAB (Russell, Reynolds, & Campbell, 1994).
These researchers recommended adding the Assembling Objects subtest to the ASVAB, a
recommendation that has since been enacted.
Psychomotor Tests

Males typically score considerably higher on psychomotor tests than females (Burke,
1995; Carretta, 1997b; McHenry & Rose, 1988; Russell & Peterson, 2001) and the standardized
mean score difference is often larger than 1.0. There is much less reported evidence for race
subgroup differences on psychomotor tests but Russell and Peterson (2001) found standardized
mean score differences ranging from .38 to .87 between African American and White enlisted
personnel on Project A psychomotor tests. This finding may be at least partially explained by
the correlation between psychomotor and cognitive abilities (Carretta, 1997a; Ree & Carretta,
1992).
Speeded Information Processing Tests

In Project A and in a joint-services project (Russell & Peterson, 2001; Russell, Reynolds,
& Campbell, 1994), there were small to no race or gender subgroup differences on speeded
measures of information processing, for example, reaction time. Interestingly, males tended to
perform somewhat better than females on measures that focus only on perceptual speed, while
the reverse was true for measures that focused on both speed and accuracy. Both Carretta
(1997b) and Burke (1995) report similar findings when examining gender differences in
performance in samples of USAF and UK Royal Air Force aviator applicants respectively.

Personality and Temperament Measures

Research suggests that personality and temperament measures typically show small or no
racial subgroup differences (Bobko, Roth, & Potosky, 1999; Hough, 1998; Ones & Viswesvaran,
1998; Russell & Peterson, 2001; Schmitt, Rogers, Chan, Sheppard, & Jennings, 1997). In
contrast, there often are gender differences on personality and temperament inventories and,
prior to passage of the Civil Rights Act of 1991, many test batteries used separate within-group
norms for scoring and reporting purposes, that is, separate norms for males and females, and for
persons of different racial backgrounds. The Civil Rights Act of 1991 prohibits adjusting scores,
using different cutoffs, or otherwise altering the results of employment related tests on the basis
of race, color, religion, sex, or national origin. As a consequence, the use of within-group norms
has essentially disappeared for cognitive ability tests. In the personality measurement arena, it
appears that within-group norms are generally accepted when the results will be used for
descriptive or diagnostic purposes (e.g., in a counseling setting) but are much more controversial
if the results will be used for employment related decisions (see Sackett & Wilk, 1994).

While research suggests there are practically meaningful subgroup differences between
males and females on personality or temperament inventories, the direction and size of the
difference depends on which personality or temperament characteristic is being measured
(Sackett & Wilk, 1994) and, even then, is not always consistent. Furthermore, very little
research has been conducted to determine whether or not these differences lead to differential
prediction of job performance. Saad and Sackett (2002) analyzed data from the US Army's
Project A and found some evidence that personality scores over-predicted female performance,
but no evidence of bias.

Sackett and Wilk (1994) reviewed male-female effect sizes on the scales of several well-
known personality inventories. The results are difficult to summarize across inventories because,
when the inventories were developed, there was no common set of scale labels and no agreed-
upon set of underlying constructs. Generally, males scored higher than females on scales
measuring dominance, independence, aggression, and risk-taking, while females scored higher
than males on scales measuring nurturance, agreeableness, affiliation, and conscientiousness.
Many of the differences were not large, however.

The temperament inventory developed as part of Project A (the Assessment of Background and Life Experience - ABLE) was constructed with the intention of using the same
set of norms for both males and females. In a large sample of enlisted US Army Soldiers, the
Male-Female effect sizes ranged from .00 to .54 across the 11 ABLE content scales. With the
exception of the Physical Condition scale, all of the Male-Female effect sizes were .25 or lower.
Female Soldiers scored at least somewhat higher, on average, than males on cooperativeness,
conscientiousness, non-delinquency, traditional values, work orientation, internal locus of
control, and energy level. Male Soldiers scored at least somewhat higher, on average, than
females on emotional stability, self-esteem, dominance, and physical condition (Russell &
Peterson, 2001). Finally, Ones and Viswesvaran (1998) meta-analyzed subgroup differences on
overt integrity tests and found that females scored .16 standard deviations higher than males.

What Should the Army Measure?

Research focused specifically on aviator selection, as well as general research, clearly suggests that cognitive ability, or general intelligence (g), will be an important predictor of
aviator performance. Researchers debate the usefulness of identifying more specific abilities
within the cognitive ability domain (e.g., Jensen, 1993; Ree & Earles, 1992, 1993; Schmidt &
Hunter, 1993; Sternberg & Wagner, 1993), but many personnel selection batteries include
measures of different types of cognitive ability, including some combination of:
1. General Reasoning;
2. Spatial Ability;
3. Mechanical Reasoning;
4. Quantitative Ability;
5. Verbal Ability;
6. Multiple-Task Performance (also known as Timesharing or Divided Attention); and,
7. Information Processing (e.g., perceptual speed and accuracy, working memory,
cognitive task prioritization).

Research also suggests that including measures of the following abilities and
characteristics is also likely to enhance the validity of the overall selection process:
8. Aviation or Helicopter Knowledge
9. Interest in Aviation
10. Flying Experience - although the type of flying experience may make a difference
(fixed-wing versus rotary-wing)
11. Normal-Range Personality Characteristics - Based on the aviator selection and
general research literature, traits that seem relevant for the aviator job
include:
a. Conscientiousness/Integrity;
b. Achievement Orientation;
c. Stress Tolerance/Emotional Stability;
d. Adaptability/Cognitive Flexibility;
e. Interpersonal/Crew Interaction skills;
f. Risk Tolerance;
g. Internal Locus of Control; and,
h. Dominance/Potency (including Self-Confidence/Self-Esteem).

These, then, are KSAOs that should be included, at a minimum, in a job analysis study.
Some of them may be defined more narrowly than shown here, based on taxonomic work in the

field of individual differences. There are other areas that should be considered in the overall
Army aviator selection process, for example, screening for serious medical conditions,
neurological deficiencies, or psychological disorders. However, these measures would be
designed to "select out" applicants that do not belong in Army aviation training, while the
present project is intended to identify those batteries that would be useful in "selecting in" the
most qualified applicants.
Review of Existing Aviator Selection Test Batteries

Even using a focused approach to the literature review, more than 150 potentially
relevant articles were identified. Rather than rely entirely on a narrative summary, a spreadsheet
was developed to summarize standard information about various test batteries and to facilitate
comparison of the test batteries when deriving a recommended selection strategy. The following
questions were identified as potentially having some bearing on testing recommendations:
1. What subtests, if any, are part of the battery?
2. Who uses the battery?
3. How long does it take to administer the battery?
4. Is the battery already computerized? Web-enabled?
5. Does the battery require non-standard equipment (e.g., joystick, timing card)?
6. What validity evidence is available for the battery?
7. What are the key studies and references describing validation efforts?

An answer for each question was provided (when possible) for a number of current or
recently available test batteries, and documented in the aforementioned spreadsheet. The results
are shown in Appendix A.

When considering potential measures of non-cognitive characteristics such as conscientiousness, it is clear that no single existing inventory measures every characteristic that
might be important for the aviator job. Therefore, a second spreadsheet was created, shown in
Appendix B, to summarize information available for several inventories that have been
administered in an aviator selection setting, or that are already owned by the US military. This is
not intended to be a comprehensive review of all possible personality inventories. There are
dozens of commercially-available personality and biodata inventories that could be used to
measure characteristics important for aviators, but none that are specifically designed for aviator
selection and none that appear to be significantly more comprehensive or likely to exhibit
significantly higher validity than those already available to the US military.

Selection Strategy Recommendations

The following approach was used to develop a recommended selection strategy:


1. Identify KSAOs important for the aviator job through the literature review and a job
analysis. In other words, take a construct-oriented approach to this effort.

2. Identify, to the extent possible, existing measures of those KSAOs with known
validity.
3. Construct a set of recommendations outlining best-bet choices for predictors that
measure critical KSAOs, taking into account what is known about expected subgroup
differences.

As noted above, one of the primary considerations in recommending an existing test battery is whether or not there is validity evidence to support its use. Based on the literature
review, and as summarized in Appendix A, there are several aviator selection batteries that have
demonstrated a reasonable level of validity for predicting Undergraduate Pilot Training, which is
typically operationalized as pass/fail, but sometimes also includes measures of training grades or
instructor aviator ratings. Few researchers have attempted to predict aviator performance outside of
training, and those who have tried have not had a great deal of success. This is likely due to issues
with criterion quality, as there are significant obstacles to developing good measures of aviator
performance on the job.

The most viable candidates for replacing the AFAST appear to be test batteries developed
by the US military, several of which are described below.4 There are also some aviator selection
batteries developed by foreign military services or commercial organizations with demonstrated
evidence of validity, as shown in Appendix A. The latter test batteries are less viable than
batteries developed by the US military because: 1) there is no evidence that they are any more
valid than test batteries developed by the US military, and 2) it would likely be difficult and/or
expensive for the US Army to gain access to them.

The recommendations resulting from this review are presented in overview form in
Appendix C. It would seem to be an efficient use of Army testing resources to create a two-stage
testing process. The first stage would include measures of cognitive abilities such as spatial
ability, mechanical reasoning, verbal ability, numerical reasoning, and perceptual speed and
accuracy, as well as a measure that would attempt to tap motivation to become an aviator, such
as an Information subtest. The Army may be able to take advantage of the fact that the US Navy
has a web-enabled aviator selection battery that currently consists of a reasonable set of cognitive
tests and an Information subtest that assesses aviation and nautical knowledge. Including a non-
cognitive inventory in Stage 1 is also recommended. Such an inventory may provide
incremental validity beyond the cognitive test battery, and it may help ameliorate race subgroup
differences on the cognitive tests. The inventory could include scales from several different
inventories that have been developed by the US military. The Stage 1 battery could be
administered via the Internet on any standard desktop computer and would not require any non-
standard peripherals or hardware. Thus, it could be administered virtually anywhere that a
computer is available, along with a reliable Internet connection and a test control officer.

4 Re-using any of the existing AFAST subtests was not considered because Army researchers believe that the
content may have been compromised over the several years in which it has been used.

The second stage of the test battery would focus on psychomotor skills and multiple-task
performance. These types of tests are often combined and labeled "performance-based"
measures. Alternately, given practical considerations, this Stage 2 test battery might assist in the
classification of selected Army aviators into mission/aircraft types.
Best Bet Predictor Measures

Stage 1: Cognitive Measures

Aviation Selection Test Battery (ASTB). The ASTB is the US Navy's primary aviator
selection instrument. It grew out of the Pensacola 1000 Pilot Study, which examined over 60
psychological, psychomotor, and physical tests (North & Griffin, 1977). The current version of
the ASTB includes subtests measuring Reading Comprehension, Mathematical Ability,
Mechanical Comprehension, Spatial Apperception, and Aviation and Nautical Interests. Navy
researchers are currently building an adaptive version of the ASTB and anticipate transitioning to
adaptive testing within three to five years.

The ASTB subtests are used to create several composite scores, including the Academic
Qualification Rating (AQR) and the Pilot Flight Aptitude Rating (PFAR). Validity data for
FY98-FY04 are summarized or reported in graphic form in a series of ASTB Workshop Briefing
Slides (Operational Psychology Department, 20 July 2004). Navy researchers found that AQR
scores predict performance in ground school (r = .46 for USN student aviators and r = .39 for
USMC student aviators) while PFAR scores predict performance in primary flight school
(Primary NSS) (r = .32 for USN student aviators and r = .21 for USMC student aviators). This
research also indicated that validity for predicting attrition from aviator training was in the
high teens for student aviators in both the USN and the USMC, using AQR or PFAR scores.
Sample sizes were not provided in the briefing slides, but include thousands of cases (personal
communication, Captain John Schmidt, Operational Psychology Department, USN, October 29,
2004).

The US Navy has developed a web-administration system for the ASTB, called
Automated Pilot Examination (APEX). The APEX system is being widely used throughout the
Navy and is expected to account for the bulk of ASTB administrations by the end of FY 2005
(personal communication, Captain John Schmidt, USN, October 29, 2004). This is currently the
only web-administered aviator selection test battery.

The ASTB was designed to select Commissioned Officers who will enter training to
become a Navy or USMC aviator. All Commissioned Officers, by definition, have completed a
four-year college degree. Therefore, to ensure that the ASTB is not too difficult for the Army
aviator applicant population (which includes persons with less than a four-year college degree),
the US Navy administered the ASTB to a sample of incoming Army student aviators. In
February 2005, the Operational Psychology Department of the Naval Operational Medicine
Institute administered a paper-and-pencil version of the ASTB to 73 student aviators at Ft.
Rucker, AL. The Navy scored the data and provided summary information to the ARI monitor
and PDRI project team. There was a reasonable degree of variability in the Army scores, and no evidence that the ASTB was too difficult for Army student aviators (i.e., no floor effect). The Navy also
provided, for comparison purposes, ASTB summary data for Navy personnel with varying levels
of education. (The ASTB is administered to Navy personnel for some purposes other than aviator selection.) Overall, the Army sample scores were similar to those in a mixed-education
Navy sample, and to those in a sample of Navy personnel who had at least a bachelor's degree.
The Army aviator sample included in this effort had already been selected via the AFAST, and
thus does not represent the full range of intelligence in the Army aviator applicant population.
Nevertheless, it appears that the ASTB will not prove to be too difficult, overall, for the US
Army aviator applicant sample.

Air Force Officer Qualification Test (AFOQT). The US Air Force developed the AFOQT
in the early 1950s as a tool for selecting civilian applicants for officer precommissioning training
programs and for classifying commissionees into aircrew job specialties (Rogers, Roach, &
Short, 1986; Skinner & Ree, 1987). The Air Force has periodically revised the AFOQT to
update items, ensure test security, and improve predictive validity. The first form of the AFOQT
was implemented in 1953, Form R is currently in use, and Form S is scheduled for
implementation in the near future. Form R has 16 subtests and Form S has 11 subtests that tap
verbal, quantitative, spatial, and mechanical aptitudes. Form S also includes a measure of non-
cognitive characteristics, called the Self-Description Inventory. Scores on the AFOQT subtests
are used to form five distinct but partially overlapping composites: Pilot, Navigator-Technical,
Academic Aptitude, Verbal, and Quantitative (Sperl & Ree, 1990). The Pilot and Navigator-
Technical composites are used for classification into Undergraduate Pilot Training (UPT) and
Undergraduate Navigator Training (UNT), respectively. The AFOQT has been validated for
more than 36 officer jobs as a predictor of technical training grades (Arth, 1986; Carretta & Ree,
1998). Carretta and Ree (1994) found that the AFOQT showed a multiple correlation of .20 for predicting rank in UPT, and Shore and Gould (2003) reported a multiple correlation (uncorrected) of .34 with UPT final grade for Form S of the AFOQT. According to Carretta (2002), "the predictiveness of the AFOQT for aviator training performance comes almost entirely from its measurement of g and aviation job knowledge" (p. 1).

As new forms of the AFOQT have been constructed in recent years, key features of the
subtests have deliberately been held constant to ensure equivalent measurement. Thus, the more
recent versions are equivalent in terms of subtest content, subtest length, item difficulty, testing
time, and stylistic features. Further, about one-half of the items in each form are taken directly
from the previous form, and analyses are conducted to equate the new form to the old (Glomb &
Earles, 1997). None of the AFOQT forms, to date, have been computerized, but the USAF
intends to develop this capability.

Like the ASTB, the AFOQT is designed primarily for use with a Commissioned Officer
population. Therefore, USAF provided access to, and permission to analyze, a normative
database containing scores on the soon-to-be-implemented AFOQT Form S for Basic Military Training (BMT) enlisted personnel likely to apply for the Airman Education and Commissioning Program (n = 509), Air Force Reserve Officer Training Corps cadets (n = 679), and Officer Training
School cadets (n = 462). The analyses are described in Gould and Damos (2005). They
conclude, "As expected, the AFOQT was more difficult for the Air Force enlisted personnel than
for other commissioning source applicants. However, the subtest and composite score
distributions are sufficient to discriminate well between enlisted personnel if the AFOQT or
similar aptitude test is used for [aviator] selection" (p. 1). If the US Army chooses to implement
the AFOQT, these authors recommend that Army-specific norms and passing score(s) be
established.

Cognitive Prioritization (Popcorn Test). The cognitive prioritization test follows a
format originally developed by NASA researchers, and is colloquially known as a "popcorn"
test. It is a measure of cognitive processing, specifically the ability to prioritize several moving
stimuli that appear on a computer screen. No operational pilot selection test battery has included
a popcorn test, although other test batteries, for example, Wombat© (Aero Innovation, 1998),
likely measure the same or a similar underlying ability. It is recommended that the US Army
include this type of test in its Army aviator selection battery because this ability may become
increasingly important in the future, as the cognitive load associated with flying rotary-wing
aircraft increases. Scores on this test may also be related to measures of situational awareness.

Perceptual Speed and Accuracy. One possible measure of perceptual speed and accuracy
is the Table Reading test that is a subtest of the AFOQT. This test has been in use for aviator
selection since 1942. It continues to account for unique variance in prediction of aviator
performance, and is part of the AFOQT Pilot Composite score. A commercial version of the test
is also available.

Alternatively, it would be possible to develop a new measure of perceptual speed and accuracy, using stimulus materials that are face valid for Army aviators. PDRI has developed
many different measures of perceptual speed and accuracy, and could do so efficiently in the
current project.

Stage 1: Non-Cognitive Measures

Test of Adaptable Personality (TAP). The TAP was developed by the US Army for use
in training and developing Special Forces Soldiers and officers. It consists of biodata items that
were written to target constructs such as achievement orientation, fitness motivation, cognitive
flexibility, peer leadership, and interpersonal skills. In Special Forces samples, the achievement
orientation, fitness motivation, and cognitive flexibility scales have proven valid for predicting
peer and supervisor ratings of performance (personal communication, R. Kilcullen, November,
2005; Kilcullen, Goodwin, Chen, Wisecarver, & Sanders, undated; Kilcullen, Mael, Goodwin, &
Zazanis, 1999).

Assessment of Individual Motivation (AIM). The AIM is a forced-choice non-cognitive inventory that measures several constructs potentially important for aviator selection. It was
developed by researchers at ARI to measure most of the same constructs as
the Assessment of Background and Life Experiences (ABLE) developed during Project A. In
Project A, the ABLE was predictive of volitional aspects of performance in a variety of military
enlisted jobs, and it exhibited incremental validity when added to a cognitive test battery (Russell
& Peterson, 2001). However, the ABLE was never implemented for selection purposes due to
concerns about its fakability (White, Young, & Rumsey, 2001).

The AIM specifically addresses fakability concerns by using the forced-choice methodology. This methodology has long been suggested as a way to make an inventory
resistant to faking, and there is evidence to support this claim, some of it specifically based on
the AIM (Jackson, Wroblewski, & Ashton, 2000; White, et al., 2001). ARI is also currently
funding efforts to explore an Item Response Theory (IRT)-based approach to administering and scoring the AIM, in an attempt to make it even more resistant to faking (Stark, Chernyshenko, &
Drasgow, 2003).
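
As a rough illustration of the forced-choice logic described above, the sketch below pairs statements of similar social desirability from different scales and credits the scale of whichever statement is chosen. The items, scale names, and scoring rule are hypothetical and are not actual AIM content.

    # Hypothetical forced-choice items: each pairs two similarly desirable statements
    # from different scales, so endorsing one does not obviously "look better."
    items = [
        {"A": ("achievement", "I set demanding goals for myself."),
         "B": ("adjustment", "I stay calm when things go wrong.")},
        {"A": ("adjustment", "Setbacks rarely discourage me."),
         "B": ("achievement", "I keep working until a task is finished.")},
    ]

    def score_forced_choice(responses):
        """responses: one 'A' or 'B' per item; returns points per scale."""
        totals = {}
        for item, choice in zip(items, responses):
            scale, _statement = item[choice]
            totals[scale] = totals.get(scale, 0) + 1
        return totals

    print(score_forced_choice(["A", "B"]))  # e.g., {'achievement': 2}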

To date, research on the AIM has focused primarily on predicting attrition, but there is
some evidence that it predicts job performance and personal discipline among correctional
officers in military prisons as well as success in explosive ordnance disposal training for
military personnel (White, et al., 2001). Project A results suggest it is reasonable to believe that
the AIM will predict volitional aspects of job performance for the Army aviator job, because it
measures characteristics important for performing that job.

The AIM is currently used for operational recruit screening as part of the US Army's
GED Plus program, and it has shown promise for use in pre-enlistment screening of Non High
School Graduate (NHSG) recruits (White, Young, Heggestad, Stark, Drasgow, & Piskator, 2004,
2005). It is also being evaluated for potential use in screening of US Army recruiters and drill
sergeants. Researchers have also developed various scoring methods in an effort to enhance the
validity of the AIM for predicting attrition, including empirical scoring procedures (White,
Young, Heggestad, Stark, Drasgow, & Piskator, 2005), an IRT-based scoring approach
(Chernyshenko, Stark, & Drasgow, 2003), and a decision tree approach (Lee & Drasgow,
2003).

Self-Description Inventory Plus (SDI+). The SDI was developed by the USAF, and is
currently considered an experimental subtest within Form S of the AFOQT. It was originally
developed to measure the Big Five personality factors. In recent years, USAF researchers wrote
two additional scales to measure Team Orientation and Commitment to Military Service (Service
Orientation). It contains 220 items (see Christal, et al., 1997). According to USAF researchers,
the value of the SDI is in generating profiles for people and ultimately profiles for organizations
and job families to facilitate person-job match and strategic force development (J. Weissmuller,
personal communication, February 28, 2005). It is not specifically intended for personnel
selection. Nevertheless, validity data are currently being collected in a broad USAF sample,
including some aviators.

Armstrong Laboratory Aviation Personality Survey (ALAPS). The ALAPS was also developed by the USAF (Retzlaff, King, Callister, Orme, & Marsh, 2002). It includes five
"personality" scales (confidence, socialness, aggressiveness, orderliness, and negativity), six
"crew interaction" scales (dogmatism, deference, team orientation, organization, impulsivity, and
risk-taking), and four "psychopathology" scales (affective lability, anxiety, depression, and
alcohol abuse). A large-scale validation study is currently underway by the USAF. The US
Navy is also planning to conduct validation research on this inventory. However, further
investigation into this inventory revealed the unfortunate fact that the items and scoring key have
been published in a USAF technical report that is available to members of the general public who
are savvy enough to locate it. Therefore, it would be unwise for the US Army to use this
inventory for selection.

New Non-cognitive Scales. Based on our review of the existing inventories, there are
several non-cognitive characteristics that may be predictive of aviator performance that are not
measured by any of the inventories readily accessible to the US Army. Therefore, it may be advisable to write new scales targeting these characteristics, emulating the style and format of
the items in the TAP.

Stage 2: Psychomotor Skills and Multiple-Task Performance (Performance-Based Measures)

Test of Basic Aviation Skills (TBAS). This test battery was developed by the USAF as a
replacement for the BAT. It includes three subtests designed to measure spatial orientation
(tracking tasks) and multiple task performance skills (tracking plus directed listening), as well as
the ability to make decisions under stress. TBAS is scheduled for fielding in 2006 and the US
Navy is also considering adding it to their aviator selection process. Ree (2003) analyzed TBAS
data from USAF aviator trainees (n = 531) who had already been selected on other measures and
found that the spatial orientation and decision-making subtests showed low but significant
correlations with various training criterion measures. The multiple correlation for predicting a
combined training performance measure (based on check ride scores, instructor ratings, and quiz
scores, among other things) was .33; the multiple correlation for predicting UPT pass/fail was
.31. It does not appear that these correlations were corrected for shrinkage but the author notes
that they were downwardly biased due to a high degree of range restriction on the predictor
measures. There are some concerns about the stability of scores on the decision-making subtest.
Ree (2004b) examined 90-day and 180-day test-retest reliability in a small sample (n = 126) of
USAF aviator trainees. Reliability was very low for the decision-making subtest (.15) and
acceptable for the other subtest scores (.56-.75). Further investigation of TBAS with USAF
personnel revealed the unfortunate fact that no documentation regarding the computer
programming appears to exist, nor any documentation about how the dependent variables are
calculated. For this reason, it is not recommended that the TBAS be used for Army aviator
selection unless and until program documentation can be located.
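
The downward bias from range restriction noted above can be illustrated with the standard Thorndike Case II correction, which estimates what an observed validity coefficient would be in the unrestricted applicant pool. The sketch below uses hypothetical numbers, not the actual TBAS statistics.

    import math

    def correct_range_restriction(r_restricted, sd_restricted, sd_unrestricted):
        """Thorndike Case II: correct a validity coefficient observed in a sample
        that was directly selected (restricted) on the predictor."""
        u = sd_unrestricted / sd_restricted
        return (r_restricted * u) / math.sqrt(1.0 - r_restricted**2 + (r_restricted**2) * u**2)

    # Hypothetical example: r = .31 observed in a selected sample whose predictor
    # standard deviation is 60% of the applicant-pool standard deviation.
    print(round(correct_range_restriction(0.31, 6.0, 10.0), 2))  # about .48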

Wombat©. The Wombat© (Aero Innovation, 1998) is a commercially available, computerized test battery that involves learning and operating a complex system. It does not
involve discrete subtests, but rather involves continuous performance on a primary tracking task,
with secondary performance on any of three "bonus" tasks. The bonus tasks are worth varying
amounts of points at different times. All of the measures are combined to create a total
efficiency score. During the testing period, examinees are given continuous feedback on their
performance which can help them maximize their task performance strategies, to the extent that
they have the attentional and cognitive capacity to do so. There has been little published on the
validity of the Wombat©, but two studies suggest that scores are correlated with academic
performance in flight school and flight hours (Cain, 2002; Frey, Thomas, Walton, & Wheeler,
2001). The Wombat© has been used extensively for aviator selection in Canada, but has not
been used operationally by the US military, as the advertised pricing is prohibitive.
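
A rough sketch of the kind of scoring the paragraph describes is shown below: continuous primary-task performance combined with whatever bonus points were earned on the secondary tasks. The weighting is purely illustrative; the actual Wombat© scoring algorithm is proprietary and is not reproduced here.

    def total_efficiency(tracking_samples, bonus_points):
        """Combine mean primary tracking accuracy (0-1 samples) with secondary
        bonus-task points into a single efficiency score (hypothetical weighting)."""
        primary = sum(tracking_samples) / len(tracking_samples)
        return 100.0 * primary + sum(bonus_points)

    # Hypothetical examinee: steady tracking plus three bonus-task captures.
    print(round(total_efficiency([0.80, 0.75, 0.90], [12, 5, 20]), 1))  # 118.7 (arbitrary units)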

New Performance-Based Measure. If neither the TBAS nor the Wombat© is a viable alternative, it is recommended that the US Army develop its own performance-based measure of psychomotor skills and multiple-task performance. This recommendation is being made because there does not appear to be another performance-based measure that: 1) has proven validity; 2) is programmed in a modern programming language; and 3) is readily available and free to the US
Army.

The new test battery could include subtests similar to psychomotor tests with a long
history and proven validity, for example, the Complex Coordination test and the Rotary Pursuit
test, but programmed in a modern programming language. Multiple-task performance could be
assessed by combining a directed listening test, or some other secondary task, with a
psychomotor task. For example, it might be possible to use the TBAS as a model for
development but with careful documentation of the programming and development of scoring
variables.
Conclusions

This report presents a review of a great deal of research in the aviator selection and
general personnel selection domains. That information was used to identify KSAOs that should
be included in a job analysis study focusing on the Army aviator job. It was further used to
develop a recommended strategy for an Army aviator selection battery.

Research focused specifically on aviator selection, as well as general personnel selection research, clearly suggests that cognitive ability, or general intelligence (g), will be an important
predictor of aviator performance. More specific cognitive abilities that may be of importance
include: general reasoning; spatial ability; mechanical reasoning; quantitative ability; verbal
ability; multiple-task performance (also known as timesharing or divided attention); and
information processing (e.g., perceptual speed and accuracy, working memory, cognitive task
prioritization). Research also suggests that measures of aviation or helicopter knowledge,
interest in aviation, flying experience, and normal-range personality characteristics are likely to
enhance the validity of the overall selection process. Non-cognitive traits that seem relevant for
the aviator job include: conscientiousness/integrity; achievement orientation; stress
tolerance/emotional stability; adaptability/cognitive flexibility; interpersonal/crew interaction
skills; risk tolerance; internal locus of control; and, dominance/potency (including self-
confidence/self-esteem).

The results of this review, then, suggest a selection strategy for Army aviation that
includes measures of cognitive abilities such as spatial ability, mechanical reasoning, verbal
ability, numerical reasoning, perceptual speed and accuracy, and cognitive prioritization, as well
as a measure that would attempt to tap motivation to become an aviator. In addition, incremental
validity may be achieved by including non-cognitive measures such as the TAP and AIM, as
well as other normal-range personality inventories.
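
The two-stage strategy summarized above can be sketched as a simple multiple-hurdle flow: a Stage 1 composite of cognitive and non-cognitive scores screens applicants, and Stage 2 performance-based scores inform classification of those selected. All cut scores, weights, and aircraft categories below are hypothetical placeholders, not recommended operational values.

    def stage1_select(cognitive, noncognitive, cut_score=0.0, w_cog=0.7, w_noncog=0.3):
        """Hypothetical Stage 1 screen: weighted composite compared to a cut score."""
        return (w_cog * cognitive + w_noncog * noncognitive) >= cut_score

    def stage2_classify(psychomotor, multitask):
        """Hypothetical Stage 2 classification of selected aviators by
        performance-based scores (categories are placeholders only)."""
        return "attack/scout" if (psychomotor + multitask) / 2 >= 0.5 else "utility/cargo"

    # Example applicant (standardized scores).
    if stage1_select(cognitive=0.8, noncognitive=0.4):
        print(stage2_classify(psychomotor=0.6, multitask=0.7))  # attack/scout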

From a practical perspective, the test battery could be administered via the Internet on
any standard desktop computer and would not require any non-standard peripherals or hardware.
Thus, it could be administered virtually anywhere that a computer is available, along with a
reliable Internet connection and a test control officer. In fact, the Army may be able to take
advantage of the fact that the US Navy has a web-enabled aviator selection battery that currently
consists of a reasonable set of cognitive tests.

The addition of measures that focus on psychomotor skills and multiple-task performance, often labeled "performance-based" measures, is recommended. However, practical
constraints on time and resources might suggest that these tests be considered as candidates for
inclusion in an aviator tracking battery, to assist in the classification of selected Army aviators
into mission/aircraft types. This recommended "Stage 2" in the selection/classification process is, in fact, the next scheduled research and development effort for the ARI Rotary-Wing Aviation
Research Unit at Fort Rucker.

References

Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin, 102, 3-27.

Ackerman, P. L. (1988). Determinants of individual differences during skill acquisition: Cognitive abilities and information processing. Journal of Experimental Psychology: General, 117, 288-318.

Ackerman, P. L. (1990). A correlational analysis of skill specificity: Learning, abilities, and individual differences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 883-901.

Ackerman, P. L., Kanfer, R., & Goff, M. (1995). Cognitive and non-cognitive determinants and consequences of complex skill acquisition. Journal of Experimental Psychology: Applied, 1, 270-304.

Ackerman, P. L., & Woltz, D. J. (1994). Determinants of learning performance in an associative memory/substitution task: Task constraints, individual differences, volition, and motivation. Journal of Educational Psychology, 86, 487-515.

*Aero Innovation Inc. (1998, June). WOMBAT-CS Candidate's Manual (21st edition) (for software version CS 4.9). Montreal, Quebec: Author.

Ambler, R. K., & Smith, M. J. (1974). Differentiating aptitude factors among current aviation specialties (NAMRL-1207). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

*Anesgart, M. N., & Callister, J. D. (1999). Predicting training success with the NEO: The use of logistic regression to determine the odds of completing a pilot's screening program. In Proceedings of the Tenth International Symposium on Aviation Psychology. Ohio State University: Department of Aerospace Engineering.

Arth, T. O. (1986). Validation of the AFOQT for non-rated officers (AFHRL-TP-85-50). Brooks AFB, TX: Air Force Human Resources Laboratory, Air Force Systems Command.

*Arth, T. O., Steuck, K. W., Sorrentino, C. T., & Burke, E. F. (1990). Air Force Officer Qualifying Test (AFOQT): Predictors of Undergraduate Pilot Training and Undergraduate Navigator Training success (Interim Technical Paper AFHRL-TP-89-52). Air Force Human Resources Laboratory.

*References marked with an asterisk are cited in one of the appendices.

Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.

*Bartram, D. (1987). The development of an automated testing system for aviator selection: The MICROPAT project. Applied Psychology: An International Review, 36(3/4), 279-298.

*Bartram, D., & Dale, H. C. A. (1982). The Eysenck Personality Inventory as a selection test for military pilots. Journal of Occupational Psychology, 55, 287-296.

Bartram, D., & Dale, H. C. A. (1985). The prediction of success in helicopter pilot training. Report of the XVI Conference of the Western European Association of Aviation Psychology (pp. 92-101). Helsinki, Finland: Finnair Training Center.

*Bishop, S. L., Faulk, D., & Santy, P. A. (1996). The use of IQ assessment in astronaut screening and evaluation. Aviation, Space and Environmental Medicine, 67(12), 1130-1138.

*Blower, D. J. (1998). Psychometric equivalency issues for the APEX system (NAMRL Special Report 98-1). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

*Blower, D. J., & Dolgin, D. L. (1990). An evaluation of performance based tests designed to predict success in primary flight training. Paper presented at the 34th Annual Meeting of the Human Factors Society.

Bobko, P., Roth, P. L., & Potosky, D. (1999). Derivation and implications of a meta-analytic matrix incorporating cognitive ability, alternative predictors, and job performance. Personnel Psychology, 52, 561-589.

Boer, L. C. (1991). Spatial ability and the orientation of pilots. In R. Gal, & A. D. Mangelsdorf (Eds.), Handbook of Military Psychology. New York: Wiley & Sons.

Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt, & W. C. Borman (Eds.), Personnel selection in organizations. San Francisco, CA: Jossey-Bass Publishers.

Borman, W. C., Penner, L. A., Allen, T. D., & Motowidlo, S. J. (2001). Personality predictors of citizenship performance. International Journal of Selection and Assessment, 9, 52-69.

Burke, E. (1995). Male-female differences on aviation selection tests: Their implications for research and practice. In N. Johnston, R. Fuller, & N. McDonald (Eds.), Aviation Psychology: Training and Selection (pp. 188-193). Aldershot, England: Avebury Aviation.

*Burke, E., Hobson, C., & Linsky, C. (1997). Large sample validations of three general predictors of pilot training success. The International Journal of Aviation Psychology, 7, 225-234.

*Burke, E., Kitching, A., & Valsler, C. (1997). The Pilot Aptitude Tester (PILAPT): On the development and validation of a new computer-based test battery for selection of pilots. In R. S. Jensen, & L. Rakovan (Eds.), Ninth International Symposium on Aviation Psychology (pp. 1286-1291). Columbus, OH: The Ohio State University, Department of Aerospace Engineering.

*Cain, R. E. (2002). The relationships of metacognition, self-efficacy, and educational and/or flight experience to situational awareness in aviation students. Unpublished dissertation, University of Missouri, Columbia, MO. [Dissertation Abstracts International Section A: 2986.]

*Caldwell, J. A., O'Hara, C., Caldwell, J. L., Stephens, R. L., & Krueger, G. P. (1993). Personality profiles of U.S. Army helicopter pilots screened for special operations duty. Military Psychology, 5, 187-199.

*Callister, J. D., King, R. E., & Retzlaff, P. (1996). Cognitive assessment of USAF pilot training candidates. Aviation, Space and Environmental Medicine, 67(12), 1124-1129.

*Callister, J. D., King, R. E., Retzlaff, P. D., & Marsh, R. W. (1999). Revised NEO personality inventory profiles of male and female U.S. Air Force pilots. Military Medicine, 164(12), 885-890.

Campbell, J. P. (1996). Group differences and personnel decisions: Validity, fairness, and affirmative action. Journal of Vocational Behavior, 49, 122-158.

Campbell, J. P., Hanson, M. A., & Oppler, S. H. (2001). Modeling performance in a population of jobs. In J. P. Campbell, & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification. Mahwah, NJ: Lawrence Erlbaum Associates.

Campbell, J. P., Harris, J. H., & Knapp, D. J. (2001). The Army Selection and Classification Research Program: Goals, overall design, and organization. In J. P. Campbell, & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification. Mahwah, NJ: Lawrence Erlbaum Associates.

Campbell, J. P., & Knapp, D. J. (Eds.) (2001). Exploring the limits in personnel selection and classification. Mahwah, NJ: Lawrence Erlbaum Associates.

*Carretta, T. R. (1987a). Basic attributes tests (BAT) system: Development of an automated test battery for pilot selection (AFHRL-TR-87-9). Brooks AFB, TX: Air Force Human Resources Laboratory.

*Carretta, T. R. (1987b). Field dependence independence and its relationship to flight training performance (Interim Technical Paper AFHRL-TP-87-36). San Antonio, TX: Brooks AFB, Air Force Human Resources Laboratory.

*Carretta, T. R. (1987c). Spatial ability as a predictor of flight training performance (Interim Technical AFHRL-TP-86-70). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

*Carretta, T. R. (1987d). Time-sharing ability as a predictor of flight training performance (Interim Technical AFHRL-TP-86-69). Brooks AFB, TX: Manpower and Personnel Division, Air Force Human Resources Laboratory.

*Carretta, T. R. (1988). Relationship of encoding speed and memory tests to flight training performance (AFHRL-TP-87-49). San Antonio, TX: Brooks AFB, Air Force Human Resources Laboratory.

Carretta, T. R. (1989). USAF pilot selection and classification systems. Aviation, Space and Environmental Medicine, 60, 46-49.

Carretta, T. R. (1990). Basic attributes test (BAT): A preliminary comparison between Reserve Officer Training Corps (ROTC) and Officer Training School (OTS) pilot candidates (AFHRL-TR-89-50). Brooks AFB, TX: Air Force Human Resources Laboratory.

Carretta, T. R. (1997a). Group differences on US Air Force pilot selection tests. International Journal of Selection and Assessment, 5, 115-127.

Carretta, T. R. (1997b). Sex differences on US Air Force pilot selection tests. Proceedings of the Ninth International Symposium on Aviation Psychology (pp. 1292-1297). Columbus, OH: The Ohio State University.

*Carretta, T. R. (2000). US Air Force pilot selection and training methods. Aviation, Space and Environmental Medicine, 71, 950-956.

Carretta, T. R. (2002). Common military pilot selection practices. Human Systems IAC Gateway, XIII(1), 1-4.

*Carretta, T. R., & Ree, M. J. (1993). Pilot Candidate Selection Method (PCSM): What makes it work? (AL-TP-1992-0063). Brooks AFB, TX: Manpower and Personnel Research Division, Human Resources Directorate, Air Force Systems Command.

*Carretta, T. R., & Ree, M. J. (1994). Pilot-candidate selection method: Source of validity. International Journal of Aviation Psychology, 4, 103-118.

Carretta, T. R., & Ree, M. J. (1996a). Central role of g in military pilot selection. The International Journal of Aviation Psychology, 6(2), 111-123.

Carretta, T. R., & Ree, M. J. (1997a). Negligible sex differences in the relation of cognitive and psychomotor abilities. Personality and Individual Differences, 22, 165-172.

Carretta, T. R., & Ree, M. J. (1997b). A preliminary evaluation of causal models of male and female acquisition of pilot skills. The International Journal of Aviation Psychology, 7(4), 353-364.

Carretta, T. R., & Ree, M. J. (1998). Factor structure of the Air Force Officer Qualifying Test: Analysis and comparison (AL/HR-TP-1997-0005). Mesa, AZ: Air Force Materiel Command, Air Force Research Laboratory, Human Resources Directorate, Aircrew Training Research Division.

Carretta, T. R., & Ree, M. J. (2000). Pilot selection methods (AFRL-HE-WP-TR-2000-0116). Wright-Patterson AFB, OH: Human Effectiveness Directorate, Crew System Interface Division.

Carretta, T. R., & Ree, M. J. (2003). Pilot selection methods. In P. S. Tsang, & M. A. Vidulich (Eds.), Principles and practice of aviation psychology (pp. 357-396). Mahwah, NJ: Lawrence Erlbaum Associates.

*Carretta, T. R., Ree, M. J., & Callister, J. D. (1999). Factor structure of the CogScreen-Aeronautical Edition Test Battery (AFRL-HE-AZ-TR-1998-0076). Mesa, AZ: Air Force Materiel Command, Air Force Research Laboratory, Human Resources Directorate, Aircrew Training Research Division.

Carretta, T. R., Retzlaff, P. D., & King, R. E. (1997). A tale of two test batteries: A comparison of the Air Force Officer Qualifying Test and the Multidimensional Aptitude Battery (AL/HR-TP-1997-0052). Brooks AFB, TX: Air Force Materiel Command, Air Force Research Laboratory, Human Resources Directorate, Aircrew Training Research Division.

*Carretta, T. R., Zelenski, W. E., & Ree, M. J. (1997). Basic Attributes Test retest performance (AL/HR-TP-1997-0040). Mesa, AZ: Air Force Materiel Command, Air Force Research Laboratory, Human Resources Directorate, Aircrew Training Research Division.

Chernyshenko, O. S., Stark, S. E., & Drasgow, F. (2003). Predicting attrition of Army recruits using optimal appropriateness measurement. In Proceedings of the 45th Annual Conference of the Military Testing Association (pp. 317-322), Pensacola, FL.

*Chidester, T. R., & Foushee, H. C. (1991). Leader personality and crew effectiveness: A full-mission simulation experiment. In R. S. Jensen (Ed.), Proceedings of the Fifth International Symposium on Aviation Psychology, Vol. II. Columbus, OH: The Ohio State University.

Christal, R. L. (1975). Personality factors in selection and flight proficiency. Aviation, Space and Environmental Medicine, 46, 309-311.

*Christal, R., Barucky, J. M., Driskill, W. E., & Collis, J. M. (1997). The Air Force Self Description Inventory (AFSDI): A summary of continuing research (Informal Technical Final Report F33615-91-D-0010). San Antonio, TX: Metrica, Inc.

Damos, D. L. (1993). Using meta-analysis to compare the predictive validity of single- and multiple-task measures to flight performance. Human Factors, 35(4), 615-628.

Damos, D. L. (1996). Aviator selection batteries: Shortcomings and perspectives. The International Journal of Aviation Psychology, 6(2), 199-209.

*Davis, W., Koonce, J., Herold, D., Fedor, D., & Parsons, C. (1997). Personality variables and simulator performance in the prediction of flight training performance. Proceedings of the Ninth International Symposium of Aviation Psychology (pp. 1105-1109). Columbus, OH: The Ohio State University.

Day, D. V., & Silverman, S. B. (1989). Personality and job performance: Evidence of incremental validity. Personnel Psychology, 42, 25-36.

*Delaney, H. D. (1990). Validation of dichotic listening and psychomotor task performance as predictors of primary flight training criteria: Highlighting relevant statistical issues (Technical report NAMRL-1357). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

Department of the Army. (1981). Army Regulation 611-85, Aviation Warrant Officer Training. Washington, D.C.: Headquarters.

Department of the Army. (1994). Army Regulation 135-100, Appointment of Commissioned and Warrant Officers of the Army. Washington, D.C.: Headquarters.

Department of the Army. (1996). Army Pamphlet 600-11, Warrant Officer Professional Development. Washington, D.C.: Headquarters.

Department of the Army. (1999). Army Circular 601-99-1, Warrant Officer Procurement Program. Washington, D.C.: Headquarters.

Department of the Army. (1999). Army Memo 600-2, Policies and Procedures for Active-Duty List Officer Selection Boards. Washington, D.C.: Headquarters.

Department of the Army. (1999). Army Regulation 135-210, Order to Active Duty as Individuals Other than a Presidential Selected Reserve Call-up, Partial or Full Mobilization. Washington, D.C.: Headquarters.

Department of the Army. (2002). Army Regulation 611-5, Army Personnel Selection and Classification Testing. Washington, D.C.: Headquarters.

Department of the Army. (2003). Army Regulation 611-110, Selection and Training of Army Aviation Officers (Revised). Washington, D.C.: Headquarters.

Dolgin, D. L., & Gibb, G. D. (1988). A review of personality measurement in aircrew selection (NAMRL Monograph-36). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

*Duke, A. P., & Ree, M. J. (1996). Better candidates fly fewer training hours: Another time testing pays off. International Journal of Selection and Assessment, 4(3), 115-121.

Fleishman, E. A. (1967). Performance assessment based on an empirically derived task taxonomy. Human Factors, 9, 349-366.

Fleishman, E. A. (1972). On the relation between abilities, learning, and human performance. American Psychologist, 27, 1017-1032.

Fleishman, E. A., & Hempel, W. E. (1954). Changes in factor structure of a complex psychomotor test as a function of practice. Psychometrika, 19, 239-252.

Fleishman, E. A., & Mumford, M. D. (1988). Ability requirements scales. In S. Gael (Ed.), Job analysis handbook for business, industry, and government (Vol. 2, pp. 917-935). New York, NY: Wiley.

Foushee, H. C., & Helmreich, R. L. (1986). Group interaction and flightcrew performance. In E. L. Wiener, & D. C. Nagel (Eds.), Human Factors in Modern Aviation. New York, NY: Academic Press.

Franzen, R., & McFarland, R. A. (1945). Detailed statistical analysis of data obtained in the Pensacola study of naval pilots (Report 41). Washington, DC: Civil Aeronautics Administration.

*Frey, B. F., Thomas, M., Walton, A. J., & Wheeler, A. (2001). WOMBAT as an example of situational awareness testing in pilot selection: An argument for the alignment of selection, training, and performance. Paper presented at the 11th International Symposium on Aviation Psychology. Columbus, OH: The Ohio State University.

*Fry, G. E., & Reinhardt, R. F. (1969). Personality characteristics of jet pilots as measured by the Edwards Personal Preference Schedule. Aerospace Medicine, 40, 484-486.

*Garvin, J. D., Acosta, S. C., & Murphy, T. E., II (1995). Flight training selection using simulators - a validity assessment. In R. S. Jensen (Ed.), Proceedings of the Eighth International Symposium on Aviation Psychology (pp. 1132-1136). Columbus, OH: The Ohio State University.

Geary, D. C., Saults, S. J., Liu, F., & Hoard, M. K. (2000). Sex differences in spatial cognition, computational fluency, and arithmetical reasoning. Journal of Experimental Child Psychology, 77, 337-353.

Geist, C. R., & Boyd, S. T. (1980). Personality characteristics of Army helicopter pilots. Perceptual and Motor Skills, 51(1), 253-254.

Gellatly, I. R., Paunonen, S. V., Meyer, J. P., Jackson, D. N., & Goffin, R. D. (1991). Personality, vocational interest, and cognitive predictors of managerial job performance and satisfaction. Personality and Individual Differences, 12, 221-231.

*Glomb, T. M., & Earles, J. A. (1997). Air Force Officer Qualifying Test (AFOQT): Forms Q development, preliminary equating and operational equating (AL/HR-TP-1996-0036). Air Force Materiel Command, Air Force Research Laboratory, Human Resources Directorate, Manpower and Personnel Research Division.

*Gopher, D. (1982). A selective attention test as a predictor of success in flight training. Human Factors, 24(2), 173-183.

*Gopher, D., & Kahneman, D. (1971). Individual differences in attention and the prediction of flight criteria. Perceptual and Motor Skills, 33, 1335-1342.

*Gordon, H. W., & Leighty, R. (1988). Importance of specialized cognitive function in the selection of military pilots. Journal of Applied Psychology, 73(1), 38-45.

*Gough, H. G., & Bradley, P. (1996). CPI Manual (3rd ed.). Mountain View, CA: Consulting Psychologists Press.

Gould, R. B., & Damos, D. L. (2005). Feasibility of developing a common US Army Helicopter Pilot Candidate Selection System: Analysis of US Air Force data. Gurnee, IL: Damos Aviation Services.

*Gould, R. B., & Shore, C. W. (2002). Reduction of AFOQT Administration Time. San Antonio, TX: Operational Technologies Corporation.

*Gregorich, S., Helmreich, R. L., Wilhelm, J. A., & Chidester, T. (1989). Personality based clusters as predictors of aviator attitudes and performance. In R. S. Jensen (Ed.), Proceedings of the Fifth International Symposium on Aviation Psychology (pp. 686-691). Columbus, OH: The Ohio State University.

Griffin, G. R., & Koonce, M. J. (1996). Review of psychomotor skills in pilot selection research of the U.S. military services. International Journal of Aviation Psychology, 6(2), 125-147.

Griffin, G. R., & McBride, D. K. (1986). Multitask performance: Predicting success in Naval aviation primary flight training (NAMRL-1316). Pensacola Air Station, FL: Naval Aerospace Medical Research Laboratory.

Griffin, G. R., & Mosko, J. D. (1977). A review of naval aviation attrition research (1950-1976): A base for the development of future research and evaluation (NAMRL-1237). Pensacola Air Station, FL: Naval Aerospace Medical Research Laboratory.

Guilford, J. P., & Lacey, J. I. (1947). Printed Classification Tests. A.A.F. Aviation Psychological Program Progress Research Report, 5. Washington, DC: U.S. Government Printing Office.

Harss, C., Kastner, M., & Beerman, L. (1991). The impact of personality and task characteristics on stress and strain during helicopter flight. The International Journal of Aviation Psychology, 1(4), 301-318.

Helm, W. R., & Reid, J. D. (2003). Race and gender as factors in flight training success. In Proceedings of the 45th Annual Conference of the International Military Testing Association (pp. 123-128), Pensacola, FL.

Helmreich, R. L., Foushee, H. C., Benson, R., & Russini, W. (1986). Cockpit resource management: Exploring the attitude-performance linkage. Aviation, Space and Environmental Medicine, 57, 1198-1200.

Hogan, R. T. (1991). Personality and personality measurement. In M. D. Dunnette, & L. M. Hough (Eds.), Handbook of Industrial & Organizational Psychology (2nd ed.): Volume 2. Palo Alto, CA: Consulting Psychologists Press.

*Horst, R. L., & Kay, G. G. (1991). CogScreen: Personal computer-based tests of cognitive function for occupational medical certification. In R. S. Jensen (Ed.), Proceedings of the Sixth International Symposium on Aviation Psychology. Columbus, OH: The Ohio State University.

Hough, L. M. (1992). The "big five" personality variables - construction confusion: Description versus prediction. Human Performance, 5, 139-155.

Hough, L. M. (1998). Personality at work: Issues and evidence. In M. Hakel (Ed.), Beyond multiple choice: Evaluating alternatives to traditional testing for selection. Mahwah, NJ: Lawrence Erlbaum & Associates.

Hough, L. M. (2001). I/O owes its advances to personality. In B. W. Roberts, & R. Hogan (Eds.), Personality Psychology in the workplace. Washington, DC: American Psychological Association.

Hough, L. M., Barge, B., & Kamp, J. (2001). Assessment of personality, temperament, vocational interests, and work outcome preferences. In J. P. Campbell, & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification. Mahwah, NJ: Lawrence Erlbaum & Associates.

Hough, L. M., Eaton, N. L., Dunnette, M. D., Kamp, J. D., & McCloy, R. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities [Monograph]. Journal of Applied Psychology, 75, 581-595.

Hough, L. M., & Ones, D. S. (2002). The structure, measurement, validity, and use of personality variables in industrial, work, and organizational psychology. In H. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), International Handbook of Industrial, Work and Organizational Psychology. Newbury Park, CA: Sage Publications.

*Howse, W. R. (1995, November). Personality factors in Army aircrew selection (unpublished handout). Fort Rucker, AL: US Army Research Institute Rotary Wing Aviation Research Unit.

Hunter, D. R. (1989). Pilot selection. In M. F. Wiskoff, & G. M. Rampton (Eds.), Military personnel measurement: Testing, assignment, evaluation. New York: Praeger.

Hunter, D. R., & Burke, E. F. (1992). Meta analysis of aircraft pilot selection measures (ARI Research Note 92-51). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Hunter, D. R., & Burke, E. F. (1994). Predicting aircraft pilot-training success: A meta-analysis of published research. The International Journal of Aviation Psychology, 4, 297-313.

Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96(1), 72-98.

Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting for error and bias in research findings. Newbury Park, CA: Sage Publications.

Hurtz, G. M., & Donovan, J. J. (2000). Personality and job performance: The Big Five revisited. Journal of Applied Psychology, 85, 869-879.

*Intano, G. P., & Howse, W. R. (1991). Predicting performance in Army aviation primary flight training (ARI Research Note 92-06). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

*Intano, G. P., & Howse, W. R. (1992). Predicting performance in Army aviation flight training. Proceedings of the 36th Annual Meeting of the Human Factors Society (pp. 907-911).

*Intano, G. P., Howse, W. R., & Lofaro, R. J. (1991a). Initial validation of the Army aviator classification process (ARI Research Note 91-38). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

*Intano, G. P., Howse, W. R., & Lofaro, R. J. (1991b). The selection of an experimental test battery for aviator cognitive, psychomotor abilities and personal traits (ARI Research Note 91-21). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Jackson, D. N., Wroblewski, V. R., & Ashton, M. C. (2000). The impact of faking on employment tests: Does forced choice offer a solution? Human Performance, 13, 371-388.

Jensen, A. R. (1993). Test validity: g versus "Tacit Knowledge." Current Directions in Psychological Science, 2, 9-10.

*Jessup, G., & Jessup, H. (1971). Validity of the Eysenck Personality Inventory in pilot selection. Occupational Psychology, 45, 111-123.

Keil, C. T., & Cortina, J. M. (2001). Degradation of validity over time: A test and extension of Ackerman's model. Psychological Bulletin, 127, 673-697.

*Kilcullen, R., Goodwin, J., Chen, G., Wisecarver, M., & Sanders, M. (undated). Identifying agile and versatile officers to serve in the objective force. Unpublished manuscript.

*Kilcullen, R. N., Mael, F. A., Goodwin, G. F., & Zazanis, M. M. (1999). Predicting US Army Special Forces Field Performance. Unpublished manuscript.

*Kilcullen, R. N., White, L., Sanders, M., & Hazlett, G. (2003). Assessment of Right Conduct (ARC) Administrator's Manual. Alexandria, VA: US Army Research Institute for the Behavioral and Social Sciences.

*King, R. E., & Flynn, C. F. (1995). Defining and measuring the "right stuff": Neuropsychiatrically enhanced flight screening. Aviation, Space, and Environmental Medicine, 66(10), 951-956.

*King, R. E., Retzlaff, P. D., & McGlohn, S. E. (1997). Female United States Air Force pilot personality: The new right stuff. Military Medicine, 162, 695-697.

*King, R. E., Retzlaff, P. D., & Orme, D. R. (2001). A comparison of US Air Force pilot psychological baseline information to safety outcomes (AFSC-TR-2001-0001). Kirtland AFB, NM: Air Force Safety Center.

*Koonce, J. (1998). Effects of individual differences in propensity for feedback in the training of ab initio pilots. Proceedings of the 42nd Annual Meeting of the Human Factors Society.

*Koonce, J., Moore, S., & Benton, C. J. (1995). Initial validation of a basic flight instruction tutoring system (BFITS). In R. S. Jensen (Ed.), Proceedings of the Eighth International Symposium of Aviation Psychology (Vol. 2, pp. 1037-1040). Columbus, OH: The Ohio State University.

*Lambirth, T. T., Dolgin, D. L., Rentmeister-Bryant, H. K., & Moore, J. L. (2003). Selected personality characteristics of student naval pilots and student naval flight officers. International Journal of Aviation Psychology, 13(4), 415-427.

Lee, W. C., & Drasgow, F. (2003). Using decision tree methodology to predict attrition with the AIM. In Proceedings of the 45th Annual Conference of the International Military Testing Association (pp. 310-316), Pensacola, FL.

Linn, M. C., & Petersen, A. C. (1985). Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56, 1479-1498.

Lubinski, D., & Dawis, R. V. (1992). Attitudes, skills, and proficiencies. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of Industrial and Organizational Psychology (2nd ed., Vol. 3, pp. 1-59). Palo Alto, CA: Consulting Psychologists Press.

Maccoby, E. E., & Jacklin, C. N. (1974). The psychology of sex differences. Stanford, CA: Stanford University Press.

Maitland, S. B., Intrieri, R. C., Schaie, K. W., & Willis, S. L. (2000). Gender differences and changes in cognitive abilities across the adult life span. Aging, Neuropsychology, and Cognition, 7(1), 32-53.

Martinussen, M. (1996). Psychological measures as predictors of pilot performance: A meta-analysis. The International Journal of Aviation Psychology, 6, 1-20.

*Martinussen, M., & Torjussen, T. (1998). Pilot selection in the Norwegian Air Force: A validation and meta-analysis of the test battery. The International Journal of Aviation Psychology, 8, 33-45.

Mashburn, N. C. (1934). Mashburn automatic serial action apparatus for detecting flying aptitude. Journal of Aviation Medicine, 5, 155-160.

McCloy, R. A., Campbell, J. P., & Cudeck, R. (1994). A confirmatory test of a model of performance determinants. Journal of Applied Psychology, 79, 493-505.

McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43, 335-354.

McHenry, J. J., & Rose, S. R. (1988). Literature review: Validity and potential usefulness of psychomotor ability tests for personnel selection and classification (ARI Research Note 88-13). Alexandria, VA: US Army Research Institute for the Behavioral and Social Sciences.

Melton, A. W. (1947). Apparatus tests. A.A.F. aviation psychology research report, 4. Washington, DC: U.S. Government Printing Office.

*Morrison, T. R. (1988). Complex visual information processing: A test for predicting Navy primary flight training success (NAMRL-1338). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

*Mount, M. K., Barrick, M. R., & Stewart, G. L. (1998). Five-factor model of personality and performance in jobs involving interpersonal interactions. Human Performance, 11, 145-165.

Mount, M. K., Witt, L. A., & Barrick, M. R. (2000). Incremental validity of empirically keyed biodata scales over GMA and the Five Factor personality constructs. Personnel Psychology, 53, 299-323.

*Musson, D. M., Sandal, G. M., & Helmreich, R. L. (2004). Personality characteristics and trait clusters in final stage astronaut selection. Aviation, Space and Environmental Medicine, 75(4), 342-349.

Norman, W. T. (1963). Toward an adequate taxonomy of personality attributes: Replicated factor structure in peer nomination personality ratings. Journal of Abnormal & Social Psychology, 66, 574-583.

North, R. A., & Griffin, G. R. (1977). Pilot selection 1919-1977 (NAMRL Special Report 77-2). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

Olea, M. M., & Ree, M. J. (1994). Predicting pilot and navigator criteria: Not much more than g. Journal of Applied Psychology, 79(6), 845-851.

Ones, D. S., & Viswesvaran, C. (1998). Gender, age, and race differences on overt integrity tests: Results across four large-scale applicant data sets. Journal of Applied Psychology, 83, 35-42.

Ones, D. S., & Viswesvaran, C. (2001a). Integrity tests and other criterion-focused occupational scales (COPS) used in personnel selection. International Journal of Selection and Assessment, 9, 31-39.

Ones, D. S., & Viswesvaran, C. (2001b). Personality at work: Criterion-focused occupational personality scales used in personnel selection. In B. W. Roberts, & R. Hogan (Eds.), Personality in the workplace. Washington, DC: American Psychological Association.

Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance [Monograph]. Journal of Applied Psychology, 78, 670-703.

*Operational Psychology Department. (20 July 2004). 2nd ASTB Workshop [Briefing Slides]. Naval Air Station Pensacola, FL: Naval Operational Medicine Institute, Naval Aerospace Medical Institute.

*Oppler, S. H., McCloy, R. A., Peterson, N. G., Russell, T. L., & Campbell, J. P. (2001). The prediction of multiple components of entry-level performance. In J. P. Campbell, & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification. Mahwah, NJ: Lawrence Erlbaum Associates.

Organ, D. W. (1994). Organizational citizenship behavior and the good soldier. In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Personnel selection and classification. Hillsdale, NJ: Lawrence Erlbaum Associates.

Organ, D. W., & Ryan, K. (1995). A meta-analytic review of attitudinal and dispositional predictors of organizational citizenship behavior. Personnel Psychology, 48, 775-802.

*Pelchat, D. (1997). The Canadian Automated Pilot Selection System (CAPSS): Validation and cross-validation results. Paper presented at the Ninth International Symposium on Aviation Psychology. Columbus, OH: The Ohio State University.

*Pettitt, M. A., & Dunlap, J. H. (1995). Psychological factors that predict successful performance in a professional pilot program. Paper presented at the Eighth International Symposium on Aviation Psychology. Columbus, OH: The Ohio State University.

Phillips, H. L., Arnold, R. D., & Fatolitis, P. (2003). Validation of an unmanned aerial vehicle operator selection system. In Proceedings of the 45th Annual Conference of the International Military Testing Association (pp. 129-139), Pensacola, FL.

*Portman-Tiller, C. A., Biggerstaff, S., & Blower, D. (1998). Relationship between the aviation selection test and a psychomotor battery. Proceedings of the 40th Annual Conference of the International Military Testing Association, Pensacola, FL, USA.

Ree, M. J. (2003). Test of Basic Aviation Skills (TBAS): Scoring the tests and compliance of the tests with standards of the American Psychological Association (unpublished technical report). San Antonio, TX: Operational Technologies Corporation.

Ree, M. J. (2004a). Making scores equivalent for TBAS and BAT (unpublished technical report). San Antonio, TX: Operational Technologies Corporation.

Ree, M. J. (2004b). Reliability of the Test of Basic Aviation Skills (TBAS) (unpublished technical report). San Antonio, TX: Operational Technologies Corporation.

Ree, M. J. (2004c). Test of Basic Aviation Skills (TBAS): Incremental validity beyond AFOQT Aviator composite for predicting pilot criteria (unpublished technical report). San Antonio, TX: Operational Technologies Corporation.

Ree, M. J., & Carretta, T. R. (1992). The correlation of cognitive and psychomotor tests (AL-TP-1992-0037). Brooks AFB, TX: Armstrong Laboratory, Air Force Materiel Command.

Ree, M. J., & Carretta, T. R. (1996). Central role of g in military pilot selection. The International Journal of Aviation Psychology, 6, 111-123.

Ree, M. J., & Carretta, T. R. (1998). Computerized testing in the United States Air Force. International Journal of Selection and Assessment, 6(2), 82-89.

*Ree, M. J., Carretta, T. R., & Teachout, M. S. (1995). Role of ability and prior job knowledge in complex training performance. Journal of Applied Psychology, 80(6), 721-730.

Ree, M. J., & Earles, J. A. (1992). Intelligence is the best predictor of job performance. Current Directions in Psychological Science, 1, 86-89.

Ree, M. J., & Earles, J. A. (1993). g is to Psychology what carbon is to Chemistry: A reply to Sternberg and Wagner, McClelland, and Calfee. Current Directions in Psychological Science, 2, 8-9.

Reilly, R. R., & Chao, G. T. (1982). Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35, 1-62.

Retzlaff, P. D., King, R. E., & Callister, J. D. (1995). USAF pilot training completion and retention: A ten year follow-up on psychological testing (AL/AO-TR-1995-0124). Brooks AFB, TX: Armstrong Laboratory, Air Force Materiel Command.

*Retzlaff, P. D., King, R. E., Callister, J. D., Orme, D. R., & Marsh, R. W. (2002). The Armstrong Laboratory Aviation Personality Survey: Development, norming, and validation. Military Medicine, 167(12), 1026-1032.

*Retzlaff, P. D., King, R. E., McGlohn, S. E., & Callister, J. D. (1996). The development of the Armstrong Laboratory Aviation Personality Survey (ALAPS) (AL/AO-TR-1996-0108). Brooks AFB, TX: Armstrong Laboratory, Air Force Materiel Command.

Robertson, I. T., & Kinder, A. (1993). Personality and job competences: The criterion-related validity of some personality variables. Journal of Occupational Psychology, 66, 225-244.

Rogers, D. L., Roach, B. W., & Short, L. O. (1986). Mental ability testing in the selection of Air Force officers: A brief historical overview (AFHRL-TP-86-23). Brooks Air Force Base, TX: U.S. Air Force Human Resources Laboratory.

*Roscoe, S. N., Corl, L., & LaRoche, J. (2001). Predicting human performance. Saint-Laurent, Quebec: Helio Press.

Roth, P. L., Bevier, C. A., Bobko, P., Switzer, F. S., III, & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54(2), 297-330.

Russell, C. J., Mattson, J., Devlin, S. E., & Atwater, D. (1990). Predictive validity of biodata items generated from retrospective life experience essays. Journal of Applied Psychology, 75, 569-580.

Russell, T. L., & Peterson, N. G. (2001). The Experimental Battery: Basic attribute scores for predicting performance in a population of jobs. In J. P. Campbell, & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification. Mahwah, NJ: Lawrence Erlbaum Associates.

Russell, T. L., Reynolds, D. H., & Campbell, J. P. (Eds.) (1994). Building a joint-service classification research roadmap: Individual differences measurement (AL/HR-TP-1994-0009). Brooks AFB, TX: Armstrong Laboratory.

Saad, S., & Sackett, P. R. (2002). Investigating differential prediction by gender in employment-oriented personality measures. Journal of Applied Psychology, 87(4), 667-674.

Sackett, P. R., Schmitt, N., Ellingson, J. E., & Kabin, M. B. (2001). High-stakes testing in employment, credentialing, and higher education. American Psychologist, 56(4), 302-318.

Sackett, P. R., & Wilk, S. L. (1994). Within-group norming and other forms of score adjustment in preemployment testing. American Psychologist, 49(11), 929-954.

Salgado, J. F. (1998). Big Five personality dimensions and job performance in Army and civil occupations: A European perspective. Human Performance, 11, 271-288.

Schmidt, F. L., & Hunter, J. E. (1993). Tacit knowledge, practical intelligence, general mental ability, and job knowledge. Current Directions in Psychological Science, 2, 11-12.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.

Schmitt, N., Rogers, W., Chan, D., Sheppard, L., & Jennings, D. (1997). Adverse impact and predictive efficiency of various predictor combinations. Journal of Applied Psychology, 82, 719-730.

Schmitt, N., Sackett, P. R., & Ellingson, J. E. (2002). No easy solution to subgroup differences. American Psychologist, 58(4), 305-306.

*Shipley, B. D. (1983). Maintenance of Level Flight in a UH-1 Flight Simulator as a Predictor of Success in Army Flight Training. Unpublished manuscript: Army Research Institute for the Behavioral and Social Sciences.

*Shore, W., & Gould, R. B. (2003). Developing pilot and navigator/technical composites for the Air Force Officer Qualifying Test (AFOQT) Form S (unpublished technical report). San Antonio, TX: Operational Technologies Corporation.

*Shull, R. N., & Dolgin, D. L. (1989). Personality and flight training performance. Proceedings of the 33rd Annual Meeting of the Human Factors Society, Vol. 2, 891-895.

*Shull, R. N., Dolgin, D. L., & Gibb, G. D. (1988). The relationship between flight training performance, a risk assessment task, and the Jenkins Activity Survey (Interim NAMRL-1339). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

*Shull, R. N., & Griffin, G. R. (1990). Performance of several different naval pilot communities on a cognitive/psychomotor test battery: Pipeline comparison and prediction (Interim NAMRL-1361). Pensacola, FL: Naval Aerospace Medical Research Laboratory.

*Siem, F. M. (1991). Predictive validity of response latencies from computer-administered personality tests. Paper presented at the 33rd annual conference of the Military Testing Association, 28-31 October, 1991.

54
*Siem, F. M. (1992). Predictive validity of an automated personality inventory for Air Force
pilot selection. The InternationalJournal ofAviation Psychology, 2, 261-270.
*Siem, F. M., Carretta, T. R., & Mercatante, T. A. (1988). Personality,attitudes andpilot
trainingperformance: Preliminaryanalysis (AFHRL-TP-87-62). Brooks AFB, TX: Air
Force Human Resources Laboratory.
*Skinner, J., & Alley, W. E. (2002). Air Force Officer Qualifying Test (AFOQT): Form R
and S development and norms (unpublished technical report). San Antonio, TX:
Operational Technologies Incorporated.

Skinner, J., & Ree, M. J. (1987). Air Force Officer Qualifying Test (AFOQT): Item and
factor analysis ofform 0 (AFHRL-TR-86-68). Brooks Air Force Base, TX: U.S. Air
Force Human Resources Laboratory.

Sperl, T. C., & Ree, M. J. (1990). Air Force Officer Qualifying Test (AFOQT):Development
of quick score compositesforforms P1 and P2 (AFHRL-TR-90-3). Brooks Air Force
Base, TX: U.S. Air Force Human Resources Laboratory.

Stark, S. E., Chernyshenko, 0. S., & Drasgow, F. (2003). A new approach to constructing
and scoring fake-resistant personality measures. In Proceedingsof the 4 5 th Annual
Conference of the Military Testing Association (pp 323-329), Pensacola, FL.

Sternberg, R. J., & Wagner, R. K. (1993). Thinking Styles Inventory. In R. J. Sternberg,


Thinking Styles. New York: Cambridge University Press.
*Street, D. R., Jr., Dolgin, D. L., & Helton, K. T. (1993). Personality tests in an enhanced
selection model. In R. S. Jensen & D. Neumeister (Eds). Proceedingsof the Seventh
InternationalSymposium on Aviation Psychology (pp. 428-433). Columbus, OH: The
Ohio State University.

Street, D. R., Jr., Helton, K. T., & Dolgin, D. L. (1992). The unique contributionof selected
personality tests to the prediction of success in naval pilot training(NAMRL-1374).
Pensacola, FL: Naval Aerospace Medical Research Laboratory.
*Taylor, J. L., O'Hara, R., Mumenthaler, M. S., & Yesavage, J. (2000). Relationship of
CogScreen to flight simulator performance and pilot age. Aviation, Space and
EnvironmentalMedicine, 71(4), 373-380.
*Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of
job performance: A meta-analytic review. PersonnelPsychology, 44, 703-742.

Tirre, W. C. (1997). Steps toward an improved pilot selection battery. In R. F. Dillon (Ed.),
Handbook on testing (pp.220-255). Westport, CT: Greenwood Press.

Toquam, J. L., Corpe, V. A., & Dunnette, M. D. (1989). Literature review: Cognitive
abilities - theory, history, and validity (ARI Research Note 91-28). Alexandria, VA:
U.S. Army Research Institute for the Behavioral and Social Sciences.

55
Tupes, E. C., & Christal, R. E. (1961). Recurrentpersonalityfactors based on trait ratings
(ASTD-TR-61-97). Lackland AFB, TX: Aeronautical Systems Division, Personnel
Laboratory.

Tumbull, G. J. (1992). A review of military pilot selection. Aviation, Space and


EnvironmentalMedicine, 63, 825-830.

Vernon, P. E. (1969). Intelligence and culturalenvironment. London, England: Methuen.

Weiss, E. M., Kemmler, G., Deisenhammer, E. A., Fleischhacker, W. W., & Delazer, M.
(2003). Sex differences in cognitive function. Personalityand IndividualDifferences,
35 (4), 863-875.
*Weissmuller, J. J., Schwartz, K. L., Kenney, S. D., Shore, C. W., & Gould, R. B. (2004).
Recent developments in USAF officer testing and selection. Paper presented at the 4 6 th
Annual Conference of the International Military Testing Association, Brussels,
Belgium.
*White, L. A., & Young, M. C. (1998). Development and validation of the Assessment of
Individual Motivation (AIM). Paper presented at the Annual Meeting of the American
Psychological Association, San Francisco, CA.
*White, L. A., Young, M. C., Heggestad, E. D., Stark, S., Drasgow, F., & Piskator, G.
(2004). Development of a non-high school diploma graduatepre-enlistment screening
model to enhance the future force. Paper presented at the 2 4 th Army Science
Conference, Orlando, FL. (www.asc2004.com/manuscripts)
*White, L. A., Young, M. C., Heggestad, E. D., Stark, S., Drasgow, F., & Piskator, G.
(2005). Army Tier Two Attrition Screen (TTAS) Update [Briefing Slides]. Presented to
The Manpower Accession Policy Working Group, Monterey, CA.
*White, L. A., Young, M. C., & Rumsey, M. G. (2001). ABLE implementation issues and
related research. In J. P. Campbell, & D. J. Knapp (Eds.), Exploring the limits in
personnel selection and classification.Mahwah, NJ: Lawrence Erlbaum Associates.

Wise, L., Welsh, J., Grafton, F., Foley, P., Earles, J., Sawin, L., & Divgi, D. R. (1992).
Sensitivity andfairness of the Armed Services Vocational Aptitude Battery (ASVAB)
Technical Composites (DMDC Technical Report 92-002). Washington, DC: Defense
Manpower Data Center, Personnel Testing Division.
*Woycheshin, D. E. (2002). CAPSS: The CanadianAutomated PilotSelection System. Paper
presented at the Workshop of the RTO Human Factors and Medicine Panel (HFM),
Monterey, CA, USA.

56