Abstract
Keywords: Learning; Instruction; Element interactivity; Cognitive load; Working memory; Long-term memory; Schemas
Consider a student faced with the task of understanding and learning very taxing
material that will require considerable time, effort and thought. What mental pro-
cesses does the student engage in and what instructional procedures and designs can
best facilitate learning? Cognitive load theory deals with issues associated with cog-
nitive processes and instructional design and may assist in answering these questions.
Nevertheless, as will be seen below, when dealing with highly complex information,
there is a gap in the theory as currently formulated. This paper considers theoretical
points and provides data intended to address some of the issues associated with
learning intellectually difficult material.
Cognitive load theory (Sweller, 1999; Sweller, van Merrienboer, & Paas, 1998)
uses some aspects of human cognitive architecture and of the structure of information
to provide instructional designs that facilitate understanding, learning and problem
solving. The theory assumes: (1) a limited working memory that can process only
a few elements of current information at any given time (Miller, 1956); (2) an effec-
tively unlimited long-term memory holding knowledge that can be used to overcome
the limitations of working memory; (3) schemas (Chi, Glaser, & Rees, 1982; Larkin,
McDermott, Simon, & Simon, 1980), held in long-term memory and used to structure
knowledge by organising elements of information comprising lower order schemas
into higher order schemas that require less working memory capacity; and (4) auto-
mation that allows schemas to be processed automatically rather than consciously in
working memory thus reducing working memory load (Kotovsky, Hayes, & Simon,
1985; Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977).
Cognitive load theory distinguishes between two sources of cognitive
load. Extraneous cognitive load is generated by the manner in which information is
presented to learners and is under the control of instructional designers. A major
assumption of cognitive load theory is that instruction should be structured to reduce
unnecessary extraneous working memory load (see Sweller, 1999; Sweller, van Mer-
rienboer, & Paas, 1998 for a range of instructional techniques designed to reduce
cognitive load, such as the completion effect discussed by van Merriënboer, Schuurman,
de Croock, & Paas, 2002; the use of worked examples, Stark, Mandl, Gruber, &
Renkl, 2002; van Gerven, Paas, van Merriënboer, & Schmidt, 2002; split-attention,
discussed by Mayer & Moreno, 2002; van Bruggen, Kirschner, & Jochems, 2002;
or the modality effect, Mayer & Moreno, 2002). In contrast, intrinsic load is imposed
by the intellectual complexity of information. The intrinsic characteristics of some
information are such that, if understanding is to occur, it will impose a heavy working
memory load irrespective of instructional design considerations. It is that complex
information that is the subject of this paper.
The intrinsic cognitive load of information is determined by the extent to which
various elements interact. An element is the information that can be processed by a
particular learner as a single unit in working memory. Some information can be
understood and learned element by element because the
elements do not interact. Learning new vocabulary provides an example. Each new
vocabulary item can be learned without reference to any of the other items. For
instance, the Spanish word for bird can be assimilated and learned independently of
the word for cat. The information is low in element interactivity. It imposes a low
intrinsic cognitive load because to understand and learn it, only a limited number
of elements need to be processed in working memory at any given time. In contrast,
high-element interactivity material consists of elements that cannot be understood in
isolation because they interact. We can learn vocabulary items individually but we
cannot learn grammatical syntax without considering several vocabulary items and
their relations; we can learn the names and perhaps even the functions of individual
electrical components one at a time but we cannot understand an electrical circuit
without considering how those components interact.
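The distinction between low- and high-element interactivity can be caricatured computationally. As a purely illustrative sketch (the graph representation and all names are invented for this example, not the authors' analysis), elements that must be processed together in working memory can be modelled as connected components of an interaction graph:

```python
from collections import defaultdict

def co_processed_sets(elements, interactions):
    """Group elements into clusters that must be held in working
    memory together: connected components of the interaction graph."""
    graph = defaultdict(set)
    for a, b in interactions:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for e in elements:
        if e in seen:
            continue
        stack, cluster = [e], set()
        while stack:
            node = stack.pop()
            if node in cluster:
                continue
            cluster.add(node)
            stack.extend(graph[node])
        seen |= cluster
        clusters.append(cluster)
    return clusters

# Vocabulary items do not interact: each is its own unit (low load)
vocab = co_processed_sets(["pajaro", "gato"], [])

# Circuit components interact: all must be processed together (high load)
circuit = co_processed_sets(
    ["switch", "fuse", "element"],
    [("switch", "fuse"), ("fuse", "element")])
```

On this toy model, the vocabulary items form two singleton clusters while the circuit components collapse into one three-element cluster, mirroring the claim that intrinsic load depends on how many elements must be processed simultaneously.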
E. Pollock et al. / Learning and Instruction 12 (2002) 61–86 63
2. A paradox
Fig. 1. Insulation resistance test: interacting elements version of instructions. Note: normal text — inter-
acting elements version; bold text — isolated elements version.
of the test; and the criteria by which to judge whether the voltmeter readings are safe. For
novices, these elements far exceed working memory limits. While an expert electrical
engineer can process all these elements simultaneously in working memory via pre-
viously acquired schemas, a relative novice (e.g. first year apprentice or trainee), not
possessing the requisite schemas, can only hold and process a few of the elements
simultaneously in working memory. If only a few of the elements are processed in
working memory simultaneously, the material cannot be understood because the
elements interact. Accordingly, this material, designed to facilitate understanding,
must inevitably fail in its aim with both understanding and learning inhibited. This
type of instructional procedure will be referred to as the “interacting elements” pro-
cedure. It is characterised by the inclusion of all elements required for understanding
but at the cost of an impossibly high working memory load.
In contrast, consider the alternative instructional material consisting solely of the
figure and the bold type text in Fig. 1. These materials are intended to be learned
element by element without a high degree of understanding. They will be referred
to as the “isolated elements” procedure. Unlike the interacting elements procedure,
isolated elements can be easily held and processed in working memory. The cost is
reduced understanding but the potential benefit is an increased probability of learning
because the reduced working memory load permits necessary elements to be held
and processed in working memory. Once these materials are learned (albeit with
limited understanding), it may be possible for learners to process the interacting
elements materials in working memory allowing full understanding to occur. We
predict that studying isolated elements materials followed by interacting elements
materials will be more beneficial to learning than repeated exposure to the interacting
elements materials. This prediction is supported by research into learning hierarchies
which suggests that lower level skills are prerequisites to higher order skills (Bloom,
1956; Gagne, 1970). This prediction follows only when the
knowledge base of the learners is such that they are unable to process all the inter-
acting elements in working memory. Experiments 1 and 3 test this prediction. In
contrast, if learners have acquired sufficient schemas to process the required elements
and their interactions simultaneously in working memory, an isolated elements
approach may be unnecessary. Experiments 2 and 4 test this prediction.
3. Experiment 1
The first experiment used two groups. The isolated-interacting elements group
studied isolated elements instructions at phase one followed by instructions involving
interacting elements at phase two. The isolated-interacting elements group was com-
pared to a more conventional method of instruction for promoting understanding,
the interacting elements only group, in which instructions containing interacting
elements were presented at both phases of instruction. It was expected that the inter-
acting elements only group would perform more poorly at both phases. Compared
to the isolated-interacting elements group, the working memory resources of the
interacting elements only group would be overwhelmed because of the many
elements they were required to mentally integrate due to the high-element interactiv-
ity of the material.
4. Method
4.1. Participants
4.2. Materials
The isolated elements version of instructions explained the procedural steps for
the electrical tests. For example, in the bold text of Fig. 1, the step by step procedure
for performing an Insulation Resistance test of an electrical appliance is displayed.
The isolated elements instructions included no text other than the bold text. The
interacting elements instructions not only replicated the isolated elements version of
the information (bold text in Fig. 1), but also included other information relevant to
the tests such as the aim of the test and the logic, in terms of electrical circuit theory,
behind each step of the test (see normal text in Fig. 1). The interacting elements
instructions therefore provided all the information for gaining a meaningful under-
standing of the electrical test as a whole.
An analysis of the element interactivity of the Insulation Resistance test has been
included in Appendix A. The analysis estimates the number of elements that would
need to be simultaneously processed for a first year electrical trainee with only a
very limited electrical knowledge to fully understand the test.
Subjective ratings of task difficulty were used as a measure of cognitive load (see
Paas & van Merrienboer, 1993, 1994). The subjective mental load rating measure
used was a seven-point Likert type scale. Above the scale was the instruction “How
easy or difficult was it to learn and understand the electrical tests from the instruc-
tions you were given? Place a cross on the rating scale below”. Each number on the
scale was labelled with a description ranging from extremely easy (1) to extremely
hard (7).
The test material consisted of test items and equipment for both written and practi-
cal tests. Each test item was designed to be objective and was marked as either
correct or incorrect. The written test consisted of 23 items. Some problems had
several parts so the test was marked out of a total of 31 points.
There were two categories of questions in the written test: those tapping low-
element interactivity knowledge about the test procedures and those tapping high-
element interactivity knowledge of the electrical testing procedures and electrical
circuitry. Low-element interactivity questions were factually based, asking about a
specific part of, or reason behind the procedure of a specified electrical test. This
category contained 18 questions. Each correct answer to these items was worth
one mark.
High-element interactivity knowledge questions required the students to practically
apply their factual knowledge of the electrical tests. This category contained five
questions, some with several parts. Each correct answer to each item or part was
worth one mark; total marks possible for this category of questions was 13.
The practical test consisted of two items, and both items required transfer of
knowledge to a novel situation. Specifically, students were required to apply knowl-
edge of an electrical kettle to safety tests of a fluorescent light. The performance of
each electrical test was marked either correct or incorrect, so the practical test was
scored out of a total of two marks. The element interactivity of knowledge tapped
by the practical tests was high as students had to remember the entire test procedure
and apply it to a novel situation.
4.3. Procedure
All the students were randomly assigned to either the isolated-interacting elements
instruction or the interacting elements only group with 11 students in each group.
They were tested individually, in a quiet room. The students were informed that they
were to be given some electrical safety testing instructional material which would
be followed by a written and practical test and that this session was to be repeated
within 48 h. In the first experimental session, the students in the isolated-interacting
elements instruction group were given the isolated elements instructional booklet
while the students in the interacting elements only group were given the interacting
elements instructional booklet. An unlimited time was allowed to read through the
instructions and this time was recorded by the experimenter. All the students were
provided with a metal electric kettle and a voltmeter. They were instructed to perform
the tests with this equipment as they read through the instructions. At the completion
of the study phase, the students were provided with a subjective mental load-rating
scale, the format of which was explained to both groups. They were asked to rate
the mental effort involved in understanding all the electrical tests described in their
training booklet on the scale described above. The test section of the experiment
followed. The students were asked to complete the written test, described in the
materials section, which was common to both groups. They were allowed an unlimi-
ted time to complete the test. Students were prohibited from reattempting any ques-
tion once they had attempted it. The instructional material was not available during
the test period. The practical test followed. Students were required to attempt both
parts of the practical test in a set order. (Earth Continuity and then Insulation
Resistance.)
The second phase of the experiment was held within 48 h of the first phase. The
procedure of the second phase of the experiment was identical to the first, but for
two differences. (1) In the practical section, the order in which the two electrical
safety tests were performed by the students was reversed (i.e. Insulation Resistance
test followed by Earth Continuity test). (2) All students studied the interacting
elements instructions.
The test score variables analysed were the written test scores, divided into low-
element interactivity knowledge scores and high-element interactivity knowledge
scores, practical test scores and mental load ratings. Means and standard deviations
for these variables are provided in Table 1.
Low- and high-element interactivity knowledge scores were analysed separately
by 2 (Instruction Condition)×2 (Test Phases) ANOVAs with repeated measures on
the second factor. The results for low-element interactivity knowledge question
scores indicated there was no difference between the instructional groups,
F(1,20)=2.65, MSe=288.82. (The 0.05 level of significance is used throughout this
paper.) The test phases main effect indicated a significant difference, F(1,20)=11.77,
Table 1
Test score, instruction time (in seconds) and mental load ratings for Experiment 1
                                      Isolated-interacting        Interacting elements
                                      elements group              only group
                                      Mean      SD                Mean      SD
Phase 1
Low-element interactivity (100%) 36.37 15.19 31.82 14.51
High-element interactivity (100%) 17.73 8.95 12.27 11.93
Practical test (2) 0.18 0.41 0.10a 0.32a
Mental load (7) 2.91 1.38 4.09 0.94
Instruction time (–) 645.27 274.05 917.27 263.01
Phase 2
Low-element interactivity (100%) 54.04 19.26 41.92 11.21
High-element interactivity (100%) 35.00 20.22 17.58 8.44
Practical test (2) 0.36 0.51 0.00 0.00
Mental load (7) 3.27 1.27 4.00 1.18
Instruction time (–) 603.46 328.11 676.09 240.94
a Sample size reduced to 10 students due to equipment failure.
taken over the two instruction phases, F(1, 20)=3.88, MSe=56831.73, p=0.06. There
was no significant interaction, F(1, 20)=1.92, MSe=56831.73.
Mental load ratings showed a significant instruction condition main effect in the
expected direction, F(1, 20)=4.74, MSe=2.11, suggesting that the isolated-interacting
elements instruction group found their instructions less demanding than the inter-
acting elements only group over both phases. In phase one this difference was to be
expected given a disparity in the amount of information in the different sets of
instructions. At phase 2 however, when both groups received the same interacting
elements version of the instructions, (the interacting elements only group for the
second time), a difference is still maintained (see Table 1). In fact, the mean of the
isolated-interacting elements group at Phase 2 is still well below that of the inter-
acting elements only group at Phase 1. There was neither an improvement over time,
F(1, 20)=0.26, MSe=0.77, nor a significant interaction, F(1, 20)=0.72,
MSe=0.77. The lack of a significant interaction also suggests that the difference in
mental load ratings between groups was similar through both instructional phases.
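The statistical comparisons reported throughout can be illustrated in outline. As a simplified, hypothetical sketch (this is not the authors' full 2 (Instruction Condition)×2 (Test Phases) mixed-model analysis, and the data are invented for illustration), the between-groups main effect can be approximated by a one-way ANOVA on each learner's mean score across the two phases:

```python
def one_way_f(group_a, group_b):
    """F statistic for a two-group one-way ANOVA (between-groups
    effect with 1 and n_a + n_b - 2 degrees of freedom)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    grand = (sum(group_a) + sum(group_b)) / (n_a + n_b)
    # Between-groups sum of squares (1 df for two groups)
    ss_between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2
    # Within-groups (error) sum of squares (n_a + n_b - 2 df)
    ss_within = sum((x - mean_a) ** 2 for x in group_a) + \
                sum((x - mean_b) ** 2 for x in group_b)
    ms_within = ss_within / (n_a + n_b - 2)
    return ss_between / ms_within

# Hypothetical phase-averaged test scores for two instruction groups
iso_means = [45.0, 50.0, 40.0, 48.0]
int_means = [37.0, 34.0, 40.0, 35.0]
f_stat = one_way_f(iso_means, int_means)
```

The F ratio grows as the gap between group means widens relative to within-group variability; the repeated-measures designs in the paper additionally partition variance into phase and interaction terms, which this sketch deliberately omits.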
Paas and van Merrienboer (1993, 1994) devised a method of combining perform-
ance measures with subjective mental load ratings to investigate the relative
efficiency of instruction conditions. Instructional efficiency increases when learners
either demonstrate higher performance than would be expected from their level
of mental effort or require less mental effort to achieve a given performance score.
Paas and van Merrienboer (1993, p. 742) suggested that “combined measures (of
performance and mental effort) may provide a better insight into the cognitive conse-
quences of training and task environments and thus improve attempts to design opti-
mal environments”. Performance can be measured by an objective test score relevant
to the learning task; mental effort ratings require the learner to translate the perceived
mental effort required by the task into a numerical value. Efficiency (E) is calculated
by transforming both mental effort (M) and performance scores (P) to standardised
z scores. These z scores for each instruction condition are then combined in the
following manner to give a single efficiency score for each condition:
E = |M − P| / √2

The sign of the relative condition efficiency depends on the relation between mental
load and performance scores. It is calculated according to these rules: if M − P < 0,
then E is positive; if M − P > 0, then E is negative. The greater the value of E, the
more efficient is the instruction condition. It was hypothesised that the isolated-
interacting elements instruction group would be more efficient than the interacting
elements only group. In other words, it was expected that the isolated-interacting
elements instruction group would achieve a greater performance for less mental
effort invested.
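As a minimal, hypothetical sketch (function names and data are invented for illustration, not taken from the paper), the standardisation and efficiency calculation can be expressed as follows; computing (zP − zM)/√2 directly yields the same value and sign as the magnitude-plus-sign rules above:

```python
import math

def z_scores(values):
    """Standardise raw scores to z scores (population SD)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

def efficiency(z_mental, z_performance):
    """Condition efficiency: positive when standardised performance
    exceeds standardised mental load (M - P < 0), negative otherwise."""
    return (z_performance - z_mental) / math.sqrt(2)

# Hypothetical mental load ratings (M) and test scores (P) for six learners
mental = [2, 3, 4, 5, 4, 3]
performance = [80, 75, 60, 50, 55, 70]

zm, zp = z_scores(mental), z_scores(performance)
e_scores = [efficiency(m, p) for m, p in zip(zm, zp)]
```

A learner with low rated effort and a high score receives a positive efficiency value, while high effort paired with a low score yields a negative value, matching the interpretation of E in the text.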
Efficiency measures for both high- and low-element interactivity questions were
calculated and analysed using 2 (Instruction Condition)×2 (Test Phases) ANOVAs
with repeated measures on the last factor. Table 2 shows the mean and standard
Table 2
Normalised (Z) scores and condition efficiency scores for performance and mental load for Experiment
1 (note: the efficiency scores are based on the normalised mental load scores given in Table 1 for each
instructional group for each learning phase)
                             Isolated-interacting                Interacting elements
                             elements group                      only group
                             Z              Efficiency           Z              Efficiency
                             Mean (SD)      Mean (SD)            Mean (SD)      Mean (SD)
Phase 1
Low-element interactivity    −0.27 (0.89)   0.17 (1.00)          −0.54 (0.89)   −0.68 (0.87)
High-element interactivity   −0.19 (0.57)   0.24 (0.86)          −0.54 (0.77)   −0.68 (0.51)
Practical score              0.06 (1.16)    0.41 (1.26)          −0.18 (0.81)a  −0.42 (0.65)a
Mental load                  −0.52 (1.09)   –                    0.41 (0.75)    –
Phase 2
Low-element interactivity    0.77 (1.13)    0.71 (1.20)          0.05 (0.66)    −0.20 (0.93)
High-element interactivity   0.93 (1.31)    0.82 (1.33)          −0.20 (0.55)   −0.38 (0.59)
Practical score              0.55 (1.36)    0.57 (1.45)          −0.43 (0.00)   −0.55 (0.66)
Mental load                  −0.23 (1.01)   –                    0.34 (0.94)    –
a Sample size reduced to 10 students due to equipment failure.
deviations for normalised (Z) mental effort and performance scores and the condition
efficiency scores for each group. Results for low-element interactivity question
efficiency scores indicated that there was a significant instruction condition main
effect favouring the isolated-interacting elements group, F(1, 20)=6.75, MSe=1.26.
There was no improvement over time, F(1, 20)=3.59, MSe=0.77, and no significant
interaction between the factors, F(1, 20)=0.01, MSe=0.77. The results for high-
element interactivity question efficiency scores also indicated a significant difference
between the instructional conditions favouring the isolated-interacting elements
instruction group, F(1, 20)=12.64, MSe=0.97. There was no improvement over time,
F(1, 20)=3.63, MSe=0.59, and no significant interaction between the factors, F(1,
20)=0.40, MSe=0.59.
Practical score efficiency measures were analysed in a 2 (Instruction Condition)×2
(Test Phases) ANOVA with repeated measures on the second factor. Table 2 shows
the means and standard deviations for standardised (Z) mental effort and practical
performance scores and the condition efficiency scores for each group. Results indi-
cated there was a significant instruction condition main effect favouring the isolated-
interacting elements instruction group, F(1, 20)=5.64, MSe=1.83. There was no
improvement over time, F(1, 20)=0.00, MSe=0.44, and no interaction effect, F(1,
20)=0.45, MSe=0.44.
The results from the written test and the subjective mental load measures support
the hypothesis that the isolated-interacting elements instruction group performed at
a superior level to the interacting elements only group because this method of instruc-
tion imposed a lower cognitive load. We assume that in the initial learning phase, the
isolated-interacting elements group of students did not fully understand the complex
concept of electrical safety testing, but they acquired rudimentary schemas. By obvi-
ating the need to process all interacting elements required for understanding in work-
ing memory, students may have been able to acquire at least some of the necessary
elements, resolving the paradox of learning without processing all interacting
elements required for understanding in working memory. Subsequently, the interac-
tions between these elements could be learned in the second phase allowing a more
complete understanding of the material. In contrast, the interacting elements only
group may have acquired few elements in either the first or the second phase because
the interacting elements may have exceeded working memory capacity.
6. Experiment 2
If schema formation was the key to the success of the isolated-interacting elements
method of instruction with inexperienced learners in Experiment 1, that method of instruction
may be of little or no benefit to more experienced or expert learners. These learners,
by definition of their expertise, already possess more sophisticated schemas in the
given area and so may be able to process the interacting elements in working memory
without undue difficulty. The purpose of Experiment 2 was to investigate the success
of the isolated-interacting elements method of instruction compared with the inter-
acting elements only method, over two learning periods using participants with rela-
tively more expertise than those of Experiment 1. Experiment 1 was replicated in
all respects except for the expertise of the participants.
7. Method
7.1. Participants
The instructional material used in this experiment was identical to that used in
Experiment 1. The procedure was almost identical to that used in Experiment 1.
The test score variables analysed were high-element interactivity knowledge score,
low-element interactivity knowledge score and practical test score. Means and stan-
dard deviations for these variables are provided in Table 3. Other variables analysed
included mental load rating and instruction time. Means and standard deviations for
these variables are provided in Table 4.
The only significant differences found pertained to instruction time. There
was a significant difference between groups in instruction time, F(1,23)=5.79,
MSe=43048.96, in time taken over the two instructional phases, F(1,23)=10.158,
MSe=8699.05 and a significant interaction between the factors, F(1,23)=60.36,
MSe=8699.05. In order to assess the interaction effect, a test of simple effects was
carried out at each phase. At Phase 1, a significant difference was found between
the instructional groups, t(23)=5.66 with the isolated-interacting elements group
requiring less learning time. At Phase 2, when the groups studied the same version
of the experimental material, no difference was found between the groups,
t(23)=0.95. These differences between groups were entirely because of the isolated-
interacting elements group requiring less time to read their instructions in Phase 1.
Table 3
Test score, instruction time (in seconds) and mental load ratings for Experiment 2
                                      Isolated-interacting        Interacting elements
                                      elements group              only group
                                      Mean      SD                Mean      SD
Phase 1
Low-element interactivity (100%) 60.76 13.47 58.65 18.36
High-element interactivity (100%) 31.81 25.32 33.85 19.20
Practical score (2) 0.67 0.65 0.54 0.66
Mental load (7) 2.17 0.94 2.85 0.80
Instruction time (–) 392.92 105.47 739.31 186.18
Phase 2
Low-element interactivity (100%) 72.22 17.97 71.15 17.05
High-element interactivity (100%) 48.33 18.95 46.28 19.52
Practical score (2) 0.50 0.52 0.62 0.77
Mental load (7) 2.42 1.38 2.46 0.97
Instruction time (–) 522.17 109.32 458.31 208.25
Table 4
Normalised (Z) scores and condition efficiency scores for performance and mental load for Experiment
2 (note: the efficiency scores are based on the normalised mental load scores given in Table 3 for each
instructional group for each learning phase)
                             Isolated-interacting                Interacting elements
                             elements group                      only group
                             Z              Efficiency           Z              Efficiency
                             Mean (SD)      Mean (SD)            Mean (SD)      Mean (SD)
Phase 1
Low-element interactivity    −0.26 (0.77)   0.01 (0.91)          −0.40 (1.05)   −0.53 (1.09)
High-element interactivity   −0.36 (1.13)   −0.05 (1.18)         −0.29 (0.89)   −0.45 (0.90)
Practical score              0.12 (0.97)    0.29 (0.88)          −0.06 (1.04)   −0.30 (1.08)
Mental load                  −0.28 (0.87)   –                    0.35 (0.77)    –
Phase 2
Low-element interactivity    0.35 (0.99)    0.29 (1.29)          0.32 (0.98)    0.24 (1.07)
High-element interactivity   0.36 (0.85)    0.29 (1.33)          0.29 (0.91)    0.22 (1.12)
Practical score              −0.12 (0.78)   −0.04 (1.20)         0.06 (1.21)    0.05 (1.10)
Mental load                  −0.06 (1.28)   –                    −0.02 (0.93)   –
The results from Experiment 2 contrast sharply with those of Experiment 1, where
instruction by the isolated-interacting elements method significantly improved the
test performance. The level of expertise of the participants was the only identifiable
difference between the two experiments. This observation lends credence to the
hypothesis that the isolated-interacting elements method of instruction benefits lear-
ners through assisting the process of schema construction. This hypothesis explains
why the isolated-interacting elements method of instruction showed distinct perform-
ance advantages for novice learners (Experiment 1), who previously had only very
primitive schemas (e.g. “a” represents active) in the area. The more expert learners
in Experiment 2, already in possession of schemas for the general area of electrical
circuitry, learnt equally effectively, regardless of the instruction method used.
10. Experiment 3
11. Method
11.1. Participants
Eighteen first year industrial trade students enrolled in a training course at a Syd-
ney company participated in the experiment. All the students had completed at least
Year 10 of High School. They were selected from students enrolled in one of the
following courses: a six-month manufacturing and engineering traineeship course
(13 students) or the practical section of a Year 11 school electronics course (5
students). Students were tested within their first fortnight of training so both groups
had little electrical knowledge and what had been covered, with respect to simple
electrical concepts, was very similar for both courses. Both groups had been intro-
duced to very basic electrical principles (e.g. A represents active, N neutral and
current flows from active to neutral) but none of the participants had ever studied
the electrical circuit or any circuits similar to that used in the experiment.
11.2. Materials
Two sets of instructional materials were developed to introduce the electrical cir-
cuit: an isolated elements version and an interacting elements version. The difference
between the two versions lay in the emphasis the interacting elements version placed
upon the cause and effect relationships that drove the system.
The isolated elements version was presented on a single A4 (21×29.5 cm) sized
page. Numbered text points were integrated into the circuit diagram. The interacting
elements version was presented in an identical format to the isolated elements version
except that it was on an A3 size page (42×29.5 cm). This larger diagram was neces-
sary because of the difference in the amount of text contained in the interacting
elements version. (The size of the instructional diagram made it impossible to include
as a figure in this paper. It may be obtained from the authors.)
Both groups also received a single, A4 (21×29.5 cm) sized page that depicted a
diagram representing the oven-baking process. The diagram included key features
of the oven such as the motor, the heating elements, the conveyor and the oven
control panel realistically rather than as electrical circuitry. Another diagram,
depicting only the oven electrical circuit was available to students during testing.
Switches and symbols were labelled on the diagram but no textual description was
included. It was presented on an A4 (21×29.5 cm) sized page.
The subjective mental load rating measure used a nine point Likert type scale.
The number of points on the mental effort Likert scale was increased from seven in
Experiments 1 and 2, to nine in this experiment, to give the students a greater range
of options to describe their subjective mental load when learning.
There were two categories of questions in the written test: those tapping low- and
high-element interactivity knowledge. Both types of questions included textual and
diagrammatical test items. Low-element interactivity questions asked about a specific
part or fact of the electrical circuit. This section of questions included four test items
with the total marks possible for this category of questions being eight. High-element
interactivity questions dealt with issues such as the progression of events in the
circuit or the structure and function of the oven circuit. These questions required
students to have a holistic understanding of the circuit and its processes. There were
eight test items in the high-element interactivity category of questions and the total
marks possible for this category of questions was 27 because some questions con-
tained subparts.
11.3. Procedure
All the students were randomly assigned to either the isolated-interacting elements
instruction group or the interacting elements only group. All students were tested
individually, in a quiet room. There were nine students in each group. The students
were informed that they were to be given instructional material regarding an electrical
circuit which would be followed by a written test and that this session was to be
repeated within 48 h. Students filled in an information sheet at this stage that asked
for biographical and educational details. Any student considered to have prior knowl-
edge (see criteria outlined in the section on “participants”) was eliminated at this
stage. In the first experimental session, the students in the isolated-interacting
elements instruction group studied the isolated elements instructional booklet while
the students in the interacting elements only group were given the interacting
elements instructional booklet. Both groups also received the schematic diagram of
the oven-baking process. Students were allowed a maximum of 10 min to read
through the instructional material. If a student finished before this time, they were
asked to review the information for the remaining time. This procedure was used to
provide additional control by equalising study times rather than allow unlimited times
as occurred in Experiments 1 and 2. At the completion of the study phase, the stu-
dents were provided with a subjective mental load-rating scale, the format of which
was explained to both groups. They were asked to rate the mental effort expended
in understanding the oven circuit and to translate this to a numerical value. The test
section of the experiment followed. The students were asked to complete the written
test, which was common to both groups. The instructional materials were not avail-
able to the students during this test period. The circuit diagram (labelled circuit
elements with no text description) was available to the students at this stage but was
removed for diagrammatical questions that required the students to find faults in the
circuit. At all stages of the test period, students were not permitted to return to
questions they had already attempted or to review previously completed test
answers.
The second phase of the experiment was held within 48 h of the first phase. The
procedure for the second phase of the experiment was identical to the first, except
that all students studied the interacting elements instructions.
12. Results and discussion
The test score variables under analysis were the low- and high-element interactivity
knowledge scores and the mental load rating. Means and standard deviations for
these variables are given in Table 5.
Table 5
Test scores and mental load ratings for Experiment 3
                                           Isolated-interacting       Interacting elements
                                           elements group             only group
                                           Mean       SD              Mean       SD
Phase 1
Low-element interactivity (max = 100%)     50.00      25.00           26.39      17.05
High-element interactivity (max = 100%)    25.51      10.55           19.75      15.16
Mental load (max = 9)                       5.67       1.00            6.00       1.12
Phase 2
Low-element interactivity (max = 100%)     55.56      25.85           38.89      26.10
High-element interactivity (max = 100%)    40.33      14.87           23.46      14.58
Mental load (max = 9)                       5.22       1.09            6.00       1.73
Table 6
Normalised (Z) scores and condition efficiency scores for performance and mental load for Experiment
3 (note: the efficiency scores are derived from the performance and mental load scores given in Table 5
for each instructional group for each learning phase)

                                Isolated-interacting           Interacting elements
Group                           elements group                 only group
                                Z              Efficiency      Z              Efficiency
                                Mean (SD)      Mean (SD)       Mean (SD)      Mean (SD)
Phase 1
Low-element interactivity       0.29 (0.98)    0.23 (1.05)     −0.64 (0.67)   −0.61 (0.85)
High-element interactivity      −0.11 (0.68)   −0.05 (0.86)    −0.49 (0.98)   −0.50 (0.77)
Mental load                     −0.04 (0.80)   –               0.22 (0.89)    –
Phase 2
Low-element interactivity       0.51 (1.02)    0.64 (1.13)     −0.15 (1.03)   −0.26 (1.42)
High-element interactivity      0.84 (0.96)    0.88 (1.21)     −0.25 (0.94)   −0.33 (1.22)
Mental load                     −0.40 (0.87)   –               0.22 (1.38)    –
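Although the computation is not spelled out in this section, the condition efficiency scores reported in Tables 6 and 8 follow the approach of Paas and van Merrienboer (1993, cited in the references): performance scores and mental effort ratings are standardised as Z scores and combined as E = (Z_performance − Z_effort)/√2, so that a positive E indicates relatively high performance obtained for relatively low mental effort. A minimal sketch of that computation follows; the raw scores below are illustrative only, not data from the experiments.

```python
import math
from statistics import mean, pstdev

def z_scores(values):
    """Standardise raw scores: z = (x - mean) / sd, computed over the sample."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def efficiency(z_performance, z_effort):
    """Condition efficiency in the style of Paas & van Merrienboer (1993):
    E = (Z_performance - Z_effort) / sqrt(2).
    Positive E: high performance achieved with low rated mental effort."""
    return (z_performance - z_effort) / math.sqrt(2)

# Illustrative (hypothetical) raw scores for a small group of learners
performance = [50, 26, 40, 23, 56, 39]   # test scores (% correct)
mental_load = [6, 5, 6, 7, 5, 4]         # 9-point subjective effort ratings

zp = z_scores(performance)
ze = z_scores(mental_load)
e_scores = [efficiency(p, e) for p, e in zip(zp, ze)]
print([round(e, 2) for e in e_scores])
```

Group-level efficiency means such as those in Tables 6 and 8 would then simply average these per-learner E values within each instructional condition and phase.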
13. Experiment 4
Experiment 4 replicated the design of the previous experiments, using the materials
and procedure of Experiment 3 but with a sample of more experienced learners. The
participants were industrial apprentices with experience in the area to be studied,
and so it was predicted that no performance differences would be obtained between
the two instructional conditions. These more expert learners were not expected to
benefit from the isolated-interacting elements method of instruction because, for
them, instructions requiring understanding should not overburden working memory.
14. Method
14.1. Participants
The participants were industrial apprentices approximately three-quarters of the
way through their eight-month training course in industrial and domestic electricity
(students had completed a mean of 28.55 units of a 34-unit course).
14.2. Materials and procedure
The instructional materials used in this experiment were identical to those used
in Experiment 3. The procedure was also virtually identical to that of Experiment
3; the only difference was in the allocation of participants to instructional groups.
Participants were matched for apprenticeship type (i.e. electrical, mechanical, dual
apprenticeship, fitter–turner, and tool maker) before being randomly assigned to
either the isolated-interacting elements instruction group or the interacting elements
only group.
15. Results and discussion
The test score variables under analysis were the low- and high-element interactivity
knowledge scores and the mental load rating. Means and standard deviations for
these variables are provided in Table 7.
As was the case for Experiment 2, which also used more experienced learners,
there was no significant difference between the groups on the performance and men-
tal load rating measures. These results are in line with our predictions. As might be
expected given these results, there was no significant difference between the groups
in instructional efficiency using the subjective mental load ratings and any of the
performance scores. Means and standard deviations for the normalised (Z) mental
effort ratings and performance scores, as well as the instructional efficiency scores,
are provided in Table 8. It can be concluded that there is no performance benefit
for expert learners from the isolated-interacting elements method of instruction.
Table 7
Test scores and mental load ratings for Experiment 4
                                           Isolated-interacting       Interacting elements
                                           elements group             only group
                                           Mean       SD              Mean       SD
Phase 1
Low-element interactivity (max = 100%)     65.29      20.52           69.44      15.45
High-element interactivity (max = 100%)    32.51      11.97           39.51      20.37
Mental load (max = 9)                       5.44       1.13            4.67       1.87
Phase 2
Low-element interactivity (max = 100%)     70.83      27.95           76.39      17.05
High-element interactivity (max = 100%)    46.50      14.12           58.44      21.26
Mental load (max = 9)                       4.22       1.39            3.44       1.51
Table 8
Normalised (Z) scores and condition efficiency scores for performance and mental load for Experiment
4 (note: the efficiency scores are derived from the performance and mental effort scores given in Table 7
for each instructional group for each learning phase)

                                Isolated-interacting           Interacting elements
Group                           elements group                 only group
                                Z              Efficiency      Z              Efficiency
                                Mean (SD)      Mean (SD)       Mean (SD)      Mean (SD)
Phase 1
Low-element interactivity       −0.26 (1.01)   −0.62 (1.02)    −0.05 (0.76)   −0.13 (1.27)
High-element interactivity      −0.61 (0.62)   −0.87 (0.82)    −0.25 (1.06)   −0.27 (1.41)
Mental load                     0.62 (0.70)    –               0.14 (1.16)    –
Phase 2
Low-element interactivity       0.02 (1.38)    0.11 (1.52)     0.29 (0.84)    0.64 (0.93)
High-element interactivity      0.12 (0.73)    0.18 (1.07)     0.74 (1.10)    0.96 (1.37)
Mental load                     −0.14 (0.87)   –               −0.62 (0.94)   –
16. Discussion
Cognitive load theory is based on the assumption that certain features of the struc-
ture of information and of human cognitive architecture interact, and that this interac-
tion has important implications for how we learn and understand. The paradox of
learning seems to be that material very high in element interactivity cannot be pro-
cessed simultaneously in working memory with understanding until after it has been
stored in schematic form in long-term memory. Understanding of high-element
interactivity material follows schema construction because all relevant elements are
incorporated in the schema and so can be processed in working memory as a single
element. It was not clear how the elements that constitute a complex schema could
be processed in working memory before the schema had been constructed. This
paper investigated an instructional technique designed to resolve this paradoxical
situation.
It was proposed that, initially, the element interactivity of complex material had
to be artificially reduced to enable a schema or partial schema for the information
to be developed. That reduction could be accomplished by presenting the material
as isolated elements that could be processed in working memory. Reducing the
element interactivity of the material may initially decrease the student’s capacity
for understanding it as a whole. Over the longer term, however, it was hypothesised
that the promotion of schema construction would lead to an increase in the learner’s
understanding. The isolated-interacting elements instructional technique had two
stages: in the first, students studied instructions detailing only the individual infor-
mation elements forming a concept; in the second, learners studied a version of the
instruction that expounded all the information elements and how they interacted.
Acknowledgements
The work reported in this paper was supported by a grant from the Australian
Research Council. The authors wish to thank Email Ltd and BHP Ltd for their ongo-
ing support of, and commitment to, our joint research programs. In particular, we
wish to thank Bryan Jones, Richard Winter and the technical training team, trainees
and apprentices of Email Training Services, and Eddy Gosek, Brian Walker, the
training staff and apprentices of the BHP Training Centre, Port Kembla.
Appendix A
The numbers of interacting information elements have been estimated for the
Insulation Resistance electrical safety test. These estimates represent what could
reasonably be the case for novice electricians. The estimates are based on the
instructional material provided in Fig. 1.
References
Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cam-
bridge University Press.
Bloom, B. (Ed.) (1956). Taxonomy of educational objectives: cognitive domain. NY: McKay.
van Bruggen, J., Kirschner, P., & Jochems, W. (2002). External representation of argumentation in CSCL
and the management of cognitive load. Learning and Instruction, 12, 121–138.
Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer program. Applied
Cognitive Psychology, 10, 151–170.
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. Sternberg (Ed.), Advances
in the psychology of human intelligence. Hillsdale, NJ: Erlbaum.
De Groot, A. D. (1965). Thought and choice in chess. The Hague: Mouton.
Gagne, R. (1970). The conditions of learning (2nd ed.). NY: Holt, Rinehart & Winston.
van Gerven, P., Paas, F., van Merriënboer, J., & Schmidt, H. (2002). Cognitive load theory and aging:
effects of worked examples on training efficiency. Learning and Instruction, 12, 87–105.
Halford, G. S., Wilson, W. H., & Phillips, S. (1998). Processing capacity defined by relational complexity:
Implications for comparative, developmental, and cognitive psychology. Behavioural and Brain
Sciences, 21 (6), 803–864.
Hoosain, R. (1983). Memorization of classical Chinese. Psychologia: An International Journal of Psy-
chology in the Orient, 26 (3), 193–197.
King, D. J., & Russell, G. W. (1966). A comparison of rote and meaningful learning of connected mean-
ingful material. Journal of Verbal Learning and Verbal Behaviour, 5, 478–483.
Kotovsky, K., Hayes, J. R., & Simon, H. A. (1985). Why are some problems hard? Evidence from Tower
of Hanoi. Cognitive Psychology, 17, 248–294.
Larkin, J. H., McDermott, J., Simon, D. P., & Simon, H. A. (1980). Models of competence in solving
physics problems. Cognitive Science, 4, 317–345.
Marcus, N., Cooper, M., & Sweller, J. (1996). Understanding instructions. Journal of Educational Psy-
chology, 88 (1), 49–63.
Mayer, R., & Moreno, R. (2002). Aids to computer-based multimedia learning. Learning and Instruction,
12, 107–119.
van Merriënboer, J., Schuurman, J., De Croock, M., & Paas, F. (2002). Redirecting learners’ attention
during training: effects on cognitive load, transfer test performance and training efficiency. Learning
and Instruction, 12, 11–37.
Miller, G. A. (1956). The magical number seven plus or minus two: some limits on our capacity for
processing information. Psychological Review, 63, 81–97.
Paas, F., & van Merrienboer, J. (1993). The efficiency of instructional conditions: an approach to combine
mental-effort and performance measures. Human Factors, 35, 737–743.
Paas, F., & van Merrienboer, J. (1994). Variability of worked examples and transfer of geometric problem-
solving skills: a cognitive load approach. Journal of Educational Psychology, 86, 122–133.
Rumelhart, D. E. (1980) Schemata: the building blocks of cognition. In R. C. Anderson, R. J. Spiro, & W.
E. Montague (Eds.) Theoretical issues in reading comprehension (pp. 33–58). Hillsdale, NJ: Lawrence
Erlbaum Associates.
Schneider, W., & Shiffrin, R. (1977). Controlled and automatic human information processing: I. Detec-
tion, search and attention. Psychological Review, 84, 1–66.
Shiffrin, R., & Schneider, W. (1977). Controlled and automatic human information processing: II. Percep-
tual learning, automatic attending, and a general theory. Psychological Review, 84, 127–190.
Stark, R., Mandl, H., Gruber, H., & Renkl, A. (2002). Conditions and effects of example elaboration.
Learning and Instruction, 12, 39–60.
Sweller, J. (1994). Cognitive load theory, learning difficulty and instructional design. Learning and
Instruction, 4, 295–312.
Sweller, J. (1999). Instructional designs in technical areas. Melbourne: ACER.
Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction,
12 (3), 185–233.
Sweller, J., Mawer, R., & Ward, M. (1983). Development of expertise in mathematical problem solving.
Journal of Experimental Psychology: General, 112 (4), 639–661.
Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional
design. Educational Psychology Review, 10 (3), 251–296.
Thorndyke, P. W., & Hayes-Roth, B. (1979). The use of schemata in the acquisition and transfer of
knowledge. Cognitive Psychology, 11, 82–106.