Ten Steps to Complex Learning: A Systematic Approach to Four-Component Instructional Design (Jeroen J. G. van Merriënboer, Paul A. Kirschner, Jimmy Frèrejean)
Ten Steps to Complex Learning presents a path from an educational problem to a solution
in a way that students, design practitioners, and researchers can understand and easily use.
Students in the fields of instructional design and the learning sciences can use this book
to broaden their knowledge of the design of training programs for complex learning.
Practitioners can use this book as a reference guide to support their design of courses,
curricula, or environments for complex learning.
Driven by the acclaimed Four-Component Instructional Design (4C/ID) model, this
fourth edition of Ten Steps to Complex Learning is fully revised with the latest research,
featuring over 50 new references. The entire book has been updated for clarity, incorporating
new, colorful graphics and diagrams, and the guiding example used throughout the book
is replaced with a training blueprint for the complex skill of ‘producing video content.’ The
closing chapter explores the future development of the Ten Steps, discussing changes in
teacher roles and the influence of artificial intelligence.
Jimmy Frèrejean (1986) is Assistant Professor at the Faculty of Health, Medicine and Life
Sciences at Maastricht University, The Netherlands. Alongside his teaching and research
on instructional design at the School of Health Professions Education, Jimmy actively
contributes to the Maastricht University Medical Center and Academy as an educational
consultant. He specializes in simulation-based education, coordinates a national lifelong
learning program for healthcare professionals, and is part of the TRISIM expert group on
training, research, and innovation in simulation-based education in healthcare.
Preface
More than 30 years ago, the award-winning article Training for Reflective Expertise: A Four-Component Instructional Design Model for Complex Cognitive Skills (van Merriënboer, Jelsma, & Paas, 1992) described the first precursor of the Ten Steps to Complex Learning. Five years later, in 1997, Jeroen J. G. van Merriënboer published his award-winning book, Training Complex Cognitive Skills. That book presented a comprehensive description of a training design system for acquiring complex skills or professional competencies, based on research conducted on the learning and teaching of knowledge needed for jobs and tasks in a technologically advanced and quickly changing society. The basic claim was that educational programs for complex learning must consist of four basic interrelated components: learning tasks, supportive information, procedural information, and part-task practice. Each component was linked to a fundamental category of learning processes. The instructional methods prescribed for each component were derived from a broad body of empirical research.
Whereas Training Complex Cognitive Skills was very well received in the academic field of learning and instruction, practitioners in the field of instructional design frequently complained that they found it difficult to systematically design educational programs based on the four components. The article Blueprints for Complex Learning: The 4C/ID Model (van Merriënboer, Clark, & De Croock, 2002) was the first attempt to provide more guidance on this design process. The systematic approach described in this article was further developed in the first edition of the book Ten Steps to Complex Learning (2007), which can best be seen as a complement to the psychological foundation described in Training Complex Cognitive Skills. The Ten Steps describe a path from a training problem to a training solution in a way that students, practitioners (both instructional designers and teachers), and researchers can understand and use. This book was a great success and has been translated into Korean and Chinese and, in part, into Spanish and Persian. It also spawned a Dutch-language analog, of which a new edition appeared at the time of publication of this book.
Sadly, Jeroen J. G. van Merriënboer got very ill and passed away before
completing this fourth edition. Despite this, Jeroen played a pivotal role
in overseeing and contributing to all the modifications. We want to clarify
that any changes made after his passing were discussed with him before-
hand. In short, the updates to this book come in three categories. First, this
fourth edition has been completely updated with the newest insights into
teaching, training, and instructional design. More than 50 new references
have been added, and, where relevant, the latest insights from the field have
been included. This can be seen in all the chapters in the book. We have
also updated the text about cognitive load theory, especially the concepts of
intrinsic and extraneous cognitive load.
Second, we have significantly changed the example training blueprint that formed the book’s backbone. In the first three editions, we used the moderately complex skill of ‘searching for literature’ by a librarian or documentalist to explain and illustrate most of the Ten Steps. This blueprint is now replaced with a more appealing one for a training program for the complex skill of ‘producing video content’ by a video content producer. The new blueprint is more extensive and detailed, giving a better example of the Ten Steps’ design approach.
Finally, many smaller and larger changes and additions were made throughout the book to improve readability and comprehensibility. Also, where the third edition included some occurrences of (s)he, this is now replaced with they. Concrete examples and cases were added or updated where useful. In addition, several figures and tables have been added or revised, and all figures now appear in color. The final chapter consists of an updated perspective on the future of the Ten Steps, including the role of artificial intelligence in developing instruction and training.
The structure of this book is straightforward. Chapters 1, 2, and 3 con-
cisely introduce the Ten Steps to Complex Learning. Chapter 1 presents a
holistic approach to the design of instruction for achieving the complex
learning required by modern society. Chapter 2 relates complex learning
to the four blueprint components: learning tasks, supportive information,
procedural information, and part-task practice. Chapter 3 describes the use
of the Ten Steps for developing detailed training blueprints. Then, the Ten
Steps are discussed in detail in Chapters 4 through 13. Chapter 14 discusses
the teaching/training of domain-general skills in programs based on the
Ten Steps, and Chapter 15 discusses the design of summative assessment
programs that are fully aligned with the Ten Steps. Finally, Chapter 16 posi-
tions the Ten Steps in the field of the learning sciences and discusses future
directions.
Acknowledgments
This is the fourth revised edition of this book. To begin, we would like to
thank all the teachers, trainers, students, instructional designers, and educa-
tors who have bought and used this book to the extent that the publisher
asked us to bring out a new, highly revised edition.
Our debt to colleagues and graduate students who in some way contrib-
uted to the development of the Ten Steps described in this book is enor-
mous. Without them, this book would not exist. We thank John Sweller,
Paul Ayres, Paul Chandler, and Slava Kalyuga for working together on the
development of Cognitive Load Theory, which heavily affected the formu-
lation of the Ten Steps. We thank Iwan Wopereis, who helps manage the
www.4cid.org website and organize the recurring Dutch 4C/ID User’s
Day. We thank Ameike Janssen-Noordman and Bert Hoogveld for contrib-
uting to the Ten Steps when writing the Dutch books Innovatief Onderwijs
Ontwerpen [Innovative Educational Design, 2002/2009/2017] and Inno-
vatief Onderwijs Ontwerpen in de Praktijk [Innovative Educational Design
in Practice, 2011]. We thank our research groups at the Open University of
the Netherlands, the Thomas More University of Applied Sciences in Ant-
werp, Belgium, and Maastricht University. We especially thank our master’s
and PhD students for the ongoing academic debate and for giving us many
inspiring ideas. We thank the many educational institutes and organizations
that use the Ten Steps and provide us with useful feedback. Last but not
least, we thank all the learners who, in some way, participated in the research
and development projects that offered the basis for writing this book.
Finally, we thank Roel Willems, professional content creator and audio-
visual teacher. Without his expert help and guidance, it would have been
impossible to accurately describe an example 4C/ID blueprint for training
a video content producer.
Jeroen J. G. van Merriënboer, Paul A. Kirschner, Jimmy Frèrejean
Maastricht/Hoensbroek, December 2023
Chapter 1
A New Approach to Instruction
When Rembrandt van Rijn painted The Anatomy Lesson of Dr. Nicolaes
Tulp in 1632, our understanding of human anatomy, physiology, and mor-
phology was extremely limited. The tools of the trade were rudimentary
at best and barbaric at worst. Medicine was dominated by the teachings
of the church, which regarded the human body as a creation of God, and
the ancient Greek view of the four humors (i.e., blood, phlegm, black bile,
and yellow bile) prevailed. Sickness was attributed to an imbalance in these
humors, and treatments, such as bloodletting and inducing vomiting, aimed
to restore this balance. Surgical instruments were basic. A surgeon would
perform operations with the most basic tools: a drill, a saw, forceps, and
pliers for removing teeth. If a trained surgeon was not available, it was usu-
ally the local barber who performed operations and removed teeth. The
trained surgeon was more an ‘artist’ than a ‘scientist.’ For example, because
there were no anesthetics, surgeons took pride in the speed with which they
operated, even amputating a leg in just a few minutes. Progress in anatomy,
physiology, morphology, and medical techniques was virtually nonexistent
or painfully slow. Although microscopes existed, they lacked the power to
reveal bacteria, hindering our understanding of the causes of diseases. This
meant that there was also little improvement in medical treatments.
Compare this to today’s situation, where hardly a day goes by without new
medical discoveries, diseases, drugs and treatments, and medical and surgical
techniques. Just a generation or two ago, medicine, medical knowledge and
skills, and even the attitudes of medical practitioners toward patients and the
way patients approach and think about their doctors were radically different
than they are today. It is no longer enough for surgeons to master the tools
of the trade during their studies and then apply and perfect them through-
out their careers. Competent surgeons today (and tomorrow) must master
complex skills and professional competencies—both technical and social—
during their studies and never stop learning throughout their careers. This
book is about how to design instruction for this complex learning.
The interest in complex learning has grown rapidly since the beginning
of the 21st century. It is an inevitable reaction of education and teaching
to societal and technological developments and students’ and employers’
uncompromising views about the value of education and training for updat-
ing old knowledge and skills to prevent obsolescence (Hennekam, 2015) and
learning new ones. Machines have taken over routine tasks, and the cognitive tasks humans must perform are becoming increasingly complex and important (Frey & Osborne, 2017; Kester & Kirschner, 2012). More-
over, the nature of available jobs is changing, necessitating the acquisition and application of new and different skills while quickly rendering the information relevant to carrying out those jobs obsolete (Thijssen & Walter, 2006). This poses higher demands on the workforce, with employers stressing the importance of problem solving, reasoning, decision making, and creativity to ensure that employees can and will flexibly adjust to rapid changes in their environment.
Two examples might drive this home. Many aspects of the job of an air traffic controller have been technologically automated. But even though this is the case, the complexity of the controller’s responsibilities has grown substantially due to the enormous increase in air traffic, the growing number of safety regulations, and the advances in the technical aids themselves (see Figure 1.1). The same is true for family doctors (General Practitioners/GPs), who need to address the physical, psychological, and social aspects of their patients but also encounter a much more diverse clientele with different cultural backgrounds, a flood of new medicines, tools and treatments, and issues dealing with registration, liability, insurance, and more.
The field of education and training has increasingly recognized these
new demands posed by society, business, and industry. In response to these
demands, there has been a concurrent increase in the attempts to better
prepare graduates for the labor market and help them develop adaptive
expertise (Carbonell et al., 2014). The educational approaches mentioned
earlier, emphasizing complex learning and the development of professional
competencies throughout the curriculum, strive to reach this goal. How-
ever, educational institutes lack proven design approaches. This often results
in implementing innovations that undeniably aim at better preparation of
students for the labor market but that do so with varying degrees of success
(Dolmans et al., 2013).
A complaint often heard from students is that they experience their curriculum as a disconnected set of courses or modules, with only implicit relation-
ships between the courses and an unclear relevance of what they should
learn for their future professions and why. This is often complicated by
‘flexible’ curricula that offer a broad range of possibilities but leave it to
the student to choose what and when they want to study without giving
them any support or guidance. Often, as a compromise with those instruc-
tors who want to ‘teach their subject areas,’ curricula implement a separate
stream in which problems, projects, cases, or other learning tasks are used
for the development of complex skills or competencies, hopefully in a—for
the student—recognizable and relevant situation. However, even in those
curricula, students struggle to link what they are required to do to both
the theoretical coursework, which is typically divided into traditional sub-
jects, and what they perceive to be important for their future professions,
which often lies at the basis of the learning tasks. Not surprisingly, students have difficulties combining everything they learn into an integrated knowledge base and employing it to perform real-life tasks and solve practical
work-related problems once they have graduated. The whole is inevitably
greater than the sum of its parts. In other words, they do not achieve the
envisioned and required ‘transfer of learning’ or ‘transfer of training’ (Blume
et al., 2010).
The fundamental problem facing the field of instructional design is the
inability of education and training to achieve this necessary transfer. Design
theory must support the development of training programs for learners
who need to learn and transfer professional competencies or complex skills
acquired in their study to an increasingly varied set of real-world contexts
and settings. The Ten Steps to Complex Learning (from this point on, referred
to as the Ten Steps) claims that a holistic approach to instructional design is
necessary to reach this goal (cf. Tracey & Boling, 2013). In the next section,
we discuss this holistic design and why it is thought to help improve transfer
of learning. Subsequently, we position the instructional design model dis-
cussed in this book within the field of learning and instruction, describing
its four core components and the ten steps. Finally, we provide an overview
of the book’s structure and contents.
Compartmentalization
A competent surgeon, for example, not only knows how your body functions (i.e., its anatomy and physiology) but is also technically dexterous and has a good bedside manner. This indicates that it makes
little sense to distinguish learning domains for professional competencies.
Many complex surgical skills simply cannot be performed without in-depth
knowledge of the structure and workings of the human body because this
allows for the necessary flexibility in behavior. Many skills cannot be per-
formed acceptably if the performer does not exhibit particular attitudes.
And so forth. Therefore, holistic design models for complex learning aim
to integrate declarative learning, procedural learning (including perceptual
and psychomotor skills), and affective learning (including the predisposition
to keep all of these aspects up-to-date, including patient skills). So, they
facilitate the development of an integrated knowledge base that increases
the chance that transfer of learning occurs (Janssen-Noordman et al., 2006).
Fragmentation
Splitting up a complex skill and teaching its parts separately, without taking their coordination into account, does not work because learners ultimately are
not capable of integrating and coordinating the separate elements in transfer
situations (e.g., Gagné & Merrill, 1990; Lim et al., 2009; Rosenberg-Kima
et al., 2022; Spector & Anderson, 2000). To facilitate transfer of learning,
holistic design models focus on reaching highly integrated sets of objectives
and, especially, the coordinated attainment of those objectives in real-life
task performance.
Although this ‘blocked’ practice schedule will be most efficient for reaching the three objectives, minimizing the required time-on-task and learners’ investment of effort, it also yields low transfer of learning. This is because the chosen instructional method invites learners to construct highly specific knowledge for diagnosing each distinct error, allowing them to perform in the way specified in the objectives but not to perform beyond the given objectives. If a designer aims at transfer and the objective is that learners can correctly diagnose as many errors as possible in a technical system, then it is far better to train them to diagnose the three errors in a random order, leading, for example, to an interleaved training blueprint.
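As a minimal illustration (a hypothetical sketch in which the error names and the number of practice tasks per error are invented, not taken from the text), the two schedules can be generated as follows:

```python
import random

errors = ["error_A", "error_B", "error_C"]  # the three faults to diagnose
TASKS_PER_ERROR = 4                          # illustrative count

# Blocked schedule: all practice tasks for one error, then the next error.
blocked = [e for e in errors for _ in range(TASKS_PER_ERROR)]

# Random (interleaved) schedule: the same tasks in shuffled order, so the
# learner cannot predict which fault the next technical system contains.
interleaved = blocked.copy()
random.shuffle(interleaved)

print(blocked)      # ['error_A', 'error_A', ..., 'error_C', 'error_C']
print(interleaved)  # e.g., ['error_B', 'error_A', 'error_C', 'error_A', ...]
```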
Table 1.1 Four blueprint components of 4C/ID and the Ten Steps.
The term ‘learning task’ is used here in a very generic sense: A learning
task may refer to a case study that the learners must study, a project that
must be carried out, a problem that must be solved, a professional task that
must be performed, and so forth. The supportive information helps learn-
ers perform nonroutine aspects of learning tasks that often involve prob-
lem solving, decision making, and reasoning (e.g., information about the
teeth, mouth, cheeks, tongue, and jaw helps a student in dentistry with
clinical reasoning; Postma & White, 2015, 2016). The procedural informa-
tion enables learners to perform the routine aspects of learning tasks; that
is, those aspects of the learning task that are always performed in the same
way (e.g., how-to instructions for measuring blood pressure help a medical
student with conducting physical examinations). Finally, part-task practice
pertains to additional practice; that is, overlearning of routine aspects to help
learners develop a high level of automaticity of these aspects and improve
et al., 2023) and specific skills (e.g., Costa & Miranda, 2019; Güney, 2019a,
2019b; Linden et al., 2013; Maddens et al., 2020; Zhou et al., 2022).
Finally, it is used in continuous professional development; for example, in
the fields of teacher training (Frèrejean et al., 2021; Kukharuk et al., 2023;
Meutstege et al., 2023) and medical training (Kolcu et al., 2020). The model
will typically be used to develop training programs of substantial duration—
ranging from several weeks to several years. In terms of curriculum design,
the model will typically be used to design a—substantial—part of a curricu-
lum to develop one or more professional competencies or complex skills.
While the Ten Steps is strongly informed by decades of research on learning and cognitive psychology and offers many clear recommendations for
design activities, it is essential to consider the unique needs of each situation
when selecting an instructional design model. When deciding to use the Ten
Steps, the following considerations may help:
• We recommend the Ten Steps if you have sufficient resources for devel-
opment. Designing the four components may require the involvement
of educational specialists, subject-matter experts, practitioners, teach-
ers, students, and multimedia developers (see Chapter 16). While rapid
instructional design approaches exist and can be helpful, this book will
make apparent that many design decisions require a robust analysis of the
situation and an understanding of which methods are effective in that
situation. Working fast or with limited information to inform your deci-
sions can bring the risk of sacrificing effectiveness.
• Finally, we recommend the Ten Steps if you have sufficient autonomy to
make design changes in your current program or decisions about a new
program. Without control over course length, assessment methods and
moments, or structure and planning, your opportunities for adequately
applying the design principles are constrained. Another challenge may
occur in subject-based educational programs with a strong separation of
teams or persons responsible for each subject, where it may require effort
to integrate subject domains into whole tasks.
• Chapters 4 through 13 describe the Ten Steps in detail. You should always
start your design project with Step 1, but you only need to consult the
other chapters if these steps are required for your project. Each chapter
starts with general guidelines that may help you decide whether the step
is relevant to your project.
• The next two chapters are relevant for readers with a specific interest
in training domain-general skills (Chapter 14) or designing programs of
assessment (Chapter 15). Minimum requirements for assessment (per-
formance assessment of learning tasks; Step 2 in Chapter 5) and teaching
domain-general skills (task selection by the learner in on-demand educa-
tion; Step 3 in Chapter 6) are already an integral part of the Ten Steps.
If you are a student in the field of instructional design and want to broaden
your knowledge of the design of training programs for complex learning,
we advise you to study all chapters in the order in which they appear. For all
readers, whether practitioner or student, we tried to make the book as useful
as possible by including the following:
• Each chapter ends with a Summary of its main points and design
guidelines.
• Key concepts are listed at the end of each chapter and included in a Glos-
sary. This glossary contains pure definitions of terms that might not be
familiar and, in certain cases, may be more extensive (in the case of sem-
inal or foundational concepts, theories, or models) and contain back-
ground information. In this way, the glossary can help you organize the
main ideas discussed in this book.
• In several chapters, you will find Boxes in which the psychological foundations for particular design decisions are briefly explained.
• Two Appendices with example materials are included at the end of this
book.
Chapter 2
Four Blueprint Components
Having globally discussed a holistic approach to design and the Ten Steps
in Chapter 1, this chapter proceeds to describe the four main components of
a training blueprint; namely, (a) learning tasks, (b) supportive information, (c)
procedural information, and (d) part-task practice. After a brief description of
a training blueprint built from the four components in Section 1, Sections 2
through 6 explain how well-designed blueprints deal with the three problems
discussed in the previous chapter. Section 2 describes how the blueprint pre-
vents compartmentalization by focusing on integrating skills, knowledge, and
attitudes into one interconnected knowledge base. Section 3 describes how
the blueprint avoids fragmentation by focusing on learning to coordinate con-
stituent skills in real-life task performance. Section 4 describes how the blue-
print deals with the transfer paradox by acknowledging that complex learning
involves qualitatively different learning processes with different requirements
for instructional methods. Section 5 explains how the dynamic selection of
learning tasks by the teacher/system or by the ‘self-directed’ learner makes
individualized instruction possible. Section 6 discusses the use of traditional
and new media for each component. The chapter concludes with a summary.
It should be noted that most of what is discussed in this chapter is fur-
ther elaborated on in Chapters 4–13; in particular, Chapter 4 deals with
designing learning tasks, Chapter 7 with designing supportive information,
Chapter 10 with designing procedural information, and Chapter 13 with
designing part-task practice. The current chapter provides the basic knowl-
edge about the four components and is an advance organizer to help better
understand and integrate what follows.
Figure 2.1 A schematic training blueprint for complex learning and the four
components’ main features.
The next three Sections explain how these four components can help solve
the problems of compartmentalizing knowledge, skills, and attitudes; the
fragmentation of what is learned in small parts; and the transfer paradox.
Learning Tasks
Learners work on tasks that help them develop an integrated knowledge base
through inductive learning, in which they induce new knowledge from con-
crete experiences (see Box 4.1—Induction and Learning Tasks). Therefore,
each learning task should offer whole-task practice that confronts the learner
with a set of constituent skills allowing for real-life task performance, together
with their associated knowledge and attitudes (Van Merriënboer & Kester,
2008). In the video-production example, the first learning task would ideally
confront learners with the creation of a production plan (i.e., preproduction),
production of footage (i.e., the production), and creation of the final product
(i.e., postproduction). All learning tasks are meaningful, authentic, and repre-
sentative of a professional’s tasks in the real world. In this whole-task approach,
learners develop a holistic vision of the task that is gradually embellished during
the training. A sequence of learning tasks provides the backbone of a training
program for complex learning.
Variability of Practice
The first requirement, thus, is that each learning task is a whole task to encourage the development of an integrated knowledge base. In addition to this, all learning tasks must differ from each other on all dimensions on which tasks also differ in the real world, such as the context or situation in which they are performed, how they are presented, the saliency of the defining characteristics, and so forth. Thus, the learning tasks in a training program must be representative of the breadth of the variety of tasks and situations in the real world. The variation allows learners to generalize and abstract away from the details of each single task. For example, learning tasks for the video-production example may differ on the type of video that must be produced (e.g., an ad, an informative clip, a documentary), locations where the video must be recorded (e.g., outdoor, indoor, well-lit, dark), tools required (e.g., different types of cameras and microphones), and type of client (e.g., informal, friendly, corporate, difficult). There is strong evidence that such variability of practice is of utmost importance for achieving transfer of learning—both for relatively simple tasks and highly complex real-life tasks (Paas & van Merriënboer, 1994; Van Merriënboer et al., 2006). In the sequence of learning tasks, variability of practice is indicated by little triangles placed at different positions in the learning tasks.
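A hypothetical sketch shows how such variability might be operationalized when composing a set of tasks: the dimensions follow the video-production example above, but the concrete value lists and the random-sampling approach are illustrative assumptions, not part of the blueprint.

```python
import itertools
import random

# Variation dimensions from the video-production example; the value
# lists themselves are illustrative assumptions.
dimensions = {
    "video_type": ["ad", "informative clip", "documentary"],
    "location": ["outdoor", "indoor", "well-lit", "dark"],
    "tools": ["basic camera", "professional camera", "external microphone"],
    "client": ["informal", "friendly", "corporate", "difficult"],
}

# The full task space contains every combination; a training program draws
# a varied, representative sample rather than near-identical repetitions.
task_space = list(itertools.product(*dimensions.values()))
learning_tasks = random.sample(task_space, k=8)

for task in learning_tasks:
    print(dict(zip(dimensions.keys(), task)))
```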
their related constituent skills and associated knowledge and attitudes. For
this reason, constituent skills are seen as aspects rather than parts of a com-
plex skill, which is also why the term ‘constituent skill’ and not ‘subskill’ is
used here. In a whole-task approach, learners are directly confronted with
many different constituent skills from the start of the training, although
they cannot be expected to coordinate all those aspects independently at
that moment. Thus, it is necessary to simplify the tasks (i.e., make them
less complex) and give learners sufficient support and guidance.
Task Classes
It is not possible to begin a training program with very complex learning tasks
with high demands on the coordination of many constituent skills. All tasks
are made up of separate elements that can be more or less interrelated (i.e.,
interact with each other). The number of elements inherent to a task and the
degree of interaction between those elements determine the task’s complexity. The more elements a task entails and the more interactions between them, the more complex the task. Please note, we are not talking about complex as opposed to difficult, or simple as opposed to easy, because easiness or difficulty is not only determined by task complexity but also by the level of expertise or prior knowledge of the learner (they are subjective terms). In our definition, a learning task with a particular complexity can, thus, be easy for a learner with high prior knowledge but difficult for a learner with low prior knowledge.
Learners, thus, start working on relatively simple but whole learning tasks
that appeal to only a part of all constituent skills (i.e., few elements) and
progress toward more complex, whole tasks that appeal to more constituent
skills and, thus, also usually require more coordination (i.e., more interac-
tion between the elements). Categories of learning tasks, each representing
a version of the task with a particular level of complexity, are called task
classes. For example, the simplest task class in the video-production exam-
ple contains learning tasks that confront learners with situations where they
only have to produce a short recap video summarizing an indoor event with
plenty of time to complete it. The most complex task class contains learn-
ing tasks that confront learners with situations where the desired videos
are long, deal with difficult topics (e.g., documentaries), and potentially
require outdoor recording in challenging weather conditions, with limited
time available. Additional task classes of an intermediate complexity level can
be added between these two extremes.
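One possible representation of such task classes in code, using the element-and-interaction definition of complexity given above; the numbers and the simple additive complexity measure are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class TaskClass:
    level: int
    description: str
    n_elements: int      # number of elements inherent to the task
    n_interactions: int  # number of interactions between those elements

    @property
    def complexity(self) -> int:
        # Crude additive proxy: more elements and more interaction
        # between them means a more complex task.
        return self.n_elements + self.n_interactions

task_classes = [
    TaskClass(1, "short recap video of an indoor event, ample time", 4, 2),
    TaskClass(2, "informative clip, mixed locations, some time pressure", 7, 6),
    TaskClass(3, "long documentary, outdoor recording in challenging weather, "
                 "limited time", 12, 15),
]

# Learners progress through the task classes in simple-to-complex order.
for tc in sorted(task_classes, key=lambda tc: tc.complexity):
    print(f"Task class {tc.level} (complexity {tc.complexity}): {tc.description}")
```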
Learning tasks within a particular task class are always equivalent
because the tasks can be performed based on the same body of knowledge
and already acquired skills. However, a more complex task class requires
more or embellished knowledge for effective performance than the pre-
ceding, simpler task classes. This is also one of the reasons we made a
distinction between simple-complex and easy-difficult. Each new task class
will contain more complex learning tasks than previous ones. However,
because learners will have increasingly more prior knowledge when dealing
with subsequent task classes, they will experience all learning tasks across
task classes as more or less equally difcult. The blueprint organizes the
learning tasks in an ordered sequence of task classes (i.e., the dotted boxes)
representing simple-to-complex versions of the whole task.
When learners begin to work on a new, more complex task class, it is essen-
tial that they also receive the support and guidance needed to coordinate the
different aspects of their performance (Kirschner et al., 2006). Support—
actually, task support—focuses on providing learners with assistance with
the task elements involved in the training; namely, the steps in a solution
that get them from the givens to the goals (i.e., it is product-oriented).
Guidance—actually, solution-process guidance—focuses on assisting
learners with the processes inherent to finding a solution (i.e., it is process-
oriented). These two topics will be discussed in more depth in Chapter 4.
Both the support and the guidance diminish in a process of scaffolding as
learners acquire more expertise (Reiser, 2004). The continuum of learning
tasks with high support to learning tasks without support is exemplified by
the continuum of support techniques ranging from case studies to conven-
tional tasks. The highest level of support in the video-production example
could, for instance, be provided by a case study where learners receive an
interesting documentary and are asked questions about the effectiveness of
the approach taken (i.e., the given solution), possible alternative approaches,
the quality of the final editing, the thought processes of the video producer,
and so on. Intermediate support might take the form of an incomplete case
study where the learners receive the client’s assignment, a script, and a list
of necessary materials for recording, and they produce and edit the fnal
video (i.e., they have to complete a given, partial solution). Finally, no sup-
port is given by a conventional task, for which learners have to perform all
actions by themselves. This type of scaffolding, known as the completion
strategy (Van Merriënboer, 1990; Van Merriënboer & de Croock, 1992)
or fading-guidance strategy (Renkl & Atkinson, 2003), is highly effective.
In the schematic training blueprint, each task class starts with one or more learning tasks with a high level of support and guidance (indicated by the filling in the circles), continues with learning tasks with a lower level of support and guidance, and ends with conventional tasks without any support and guidance.
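The fading of support within one task class can be sketched as follows. The three support levels come from the continuum described above (case study, completion task, conventional task), while the function itself and the number of tasks per level are hypothetical:

```python
SUPPORT_LEVELS = [
    ("case study", "full solution given; learners study and evaluate it"),
    ("completion task", "partial solution given; learners complete it"),
    ("conventional task", "no solution given; learners perform the whole task"),
]

def scaffolded_sequence(task_class: str, tasks_per_level: int = 2):
    """Yield learning tasks with gradually fading support (scaffolding)."""
    for support, meaning in SUPPORT_LEVELS:
        for i in range(tasks_per_level):
            yield {"task_class": task_class, "support": support,
                   "task_no": i + 1, "note": meaning}

for task in scaffolded_sequence("produce a short recap video"):
    print(task)
```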
Part-Task Practice
Limitations of CLT
CLT is fully consistent with the four components, but this is not to
say that CLT alone is sufficient to develop a useful instructional design
model for complex learning at the level of whole educational programs.
Applying CLT prevents cognitive overload and (equally important)
frees up processing resources that can be devoted to germane process-
ing; that is, learning. To ensure that the freed-up resources are actually
Further Reading
Mavilidi, M. F., & Zhong, L. (2019). Exploring the development
and research focus of cognitive load theory, as described by its
founders: Interviewing John Sweller, Fred Paas, and Jeroen
van Merriënboer. Educational Psychology Review, 31, 499–508.
https://doi.org/10.1007/s10648-019-09463-7
Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory:
Methods to manage working memory load in the learning of com-
plex tasks. Current Directions in Psychological Science, 29, 394–398.
https://doi.org/10.1177/0963721420922183
Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory.
Springer. https://doi.org/10.1007/978-1-4419-8126-4
Van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load
theory and complex learning: Recent developments and future
directions. Educational Psychology Review, 17, 147–177.
https://doi.org/10.1007/s10648-005-3951-0
1. Task classes
• If performance on unsupported learning tasks meets all standards for
acceptable performance (e.g., criteria related to accuracy and speed,
attitudes, values), then the learner proceeds to the next task class and
works on a more complex learning task with a high level of support
and/or guidance.
• If performance on unsupported learning tasks does not yet meet all
standards for acceptable performance, then the learner proceeds—at
the current complexity level—to either another unsupported learning
task or a learning task with specific support and/or guidance.
2. Support and guidance
• If performance on supported learning tasks meets all standards for
acceptable performance, then the learner proceeds to a next learning
task with less support and/or guidance.
• If performance on supported learning tasks does not yet meet all stand-
ards for acceptable performance, then the learner proceeds to either a comparable learning task with the same level of support and/or guidance or one with more support and/or guidance (see the code sketch following Figure 2.3).
Figure 2.3 The cycle for dynamic task selection based on continuous assessment
of individual performance on learning tasks.
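The two selection rules above can be condensed into a small decision function. This is a minimal sketch in which a Boolean ‘meets all standards’ judgment and numeric support levels stand in for the rich performance assessments the Ten Steps actually prescribe:

```python
def select_next_task(supported: bool, meets_standards: bool,
                     task_class: int, support_level: int):
    """Return (task_class, support_level) for the next learning task.

    support_level: 0 = conventional task (no support); higher = more support.
    """
    if not supported:  # Rule 1: performance on an unsupported learning task
        if meets_standards:
            return task_class + 1, 2  # next task class, high support/guidance
        # Stay at the current complexity level: another unsupported task or
        # one with specific support (this sketch opts for added support).
        return task_class, 1
    # Rule 2: performance on a supported learning task
    if meets_standards:
        return task_class, max(support_level - 1, 0)  # fade the support
    return task_class, support_level  # comparable task, same support

print(select_next_task(supported=False, meets_standards=True,
                       task_class=1, support_level=0))  # -> (2, 2)
```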
Who Is in Control?
Dynamic task selection is a cyclical process that enables the creation of indi-
vidualized learning trajectories. But who exactly is responsible for select-
ing the proper learning tasks? Simply said, this can be the responsibility
of an intelligent agent such as a teacher or e-learning application or of the
learner, who then acts in a self-directed fashion. With system control, the
teacher or the e-learning application assesses if the standards for acceptable
performance have been met and, based upon this appraisal, selects the next
learning task or task class for a learner. With learner control, the self-directed
learner assesses whether the standards have been met and selects the next
learning task or task class from all available tasks (Corbalan et al., 2011). As
shown in Table 2.1, the distinction between system and learner control is
relevant to all four components of the Ten Steps. In addition, system control and learner control can be combined in a system of ‘shared control,’ as will be further discussed later, in the section on second-order scaffolding for teaching self-directed learning skills.
Table 2.1 Examples of system control and learner control for each of the four blueprint components.
of tasks, and, above all, decreasing support and guidance in a process of scaffolding (Van Merriënboer & Sluijsmans, 2009).
We speak about second-order scaffolding for teaching self-directed learning skills because it pertains not to the complex cognitive skill being taught but to the intertwined self-directed learning skills. Second-order scaffolding involves a gradual transition from teacher/system control to learner control, thus from adaptive learning to on-demand education, from planned information presentation to resource-based learning, from unsolicited to solicited information presentation, and from dependent to independent part-task practice. How we can use second-order scaffolding in a system of ‘shared control’ to teach learners to select their learning tasks is discussed in Step 3, sequencing learning tasks (Section 6.4). How we can use second-order scaffolding to teach learners information literacy and deliberate practice skills is discussed in Chapter 14, which deals with domain-general skills.
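One common way to combine both forms of control is to let the system shortlist suitable tasks and let the learner choose from that shortlist. The following sketch assumes this particular division of labor, which the text itself does not spell out here:

```python
def shared_control(all_tasks, assessed_level, learner_choice):
    # System control: narrow the options to tasks that match the level
    # indicated by continuous assessment of the learner's performance.
    shortlist = [t for t in all_tasks if t["level"] == assessed_level]
    # Learner control: the self-directed learner picks from the shortlist.
    return learner_choice(shortlist)

tasks = [
    {"name": "recap video of an indoor event", "level": 1},
    {"name": "short interview clip", "level": 1},
    {"name": "outdoor documentary", "level": 3},
]
pick_first = lambda options: options[0]  # stand-in for a real learner's choice
print(shared_control(tasks, assessed_level=1, learner_choice=pick_first))
```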
Traditional and new media for each of the four blueprint components:

1. Learning tasks (schema construction through inductive learning; Box 4.1)
   Traditional media: real task environment, role play, project groups, problem-based learning (PBL) groups
   New media: computer-simulated task environments, computerized serious games, computerized high-fidelity simulators (e.g., mannequins), virtual reality

2. Supportive information (schema construction through elaboration; Box 7.1)
   Traditional media: textbooks, teachers giving lectures, physical models, realia (e.g., a skeleton)
   New media: hypermedia (e.g., Internet), multimedia systems (video, animation), social media, computer-supported collaborative learning, microworlds, AI, Large Language Models

3. Procedural information (schema automation through rule formation; Box 10.1)
   Traditional media: teacher acting as ‘assistant looking over your shoulder,’ job and learning aids, quick reference guides, manuals
   New media: online job aids and help systems, mobile technologies (smartphones, tablets), wizards, pedagogical agents, augmented reality

4. Part-task practice (schema automation through strengthening; Box 13.1)
   Traditional media: paper and pencil, skills laboratory, practicals, real task environment
   New media: drill-and-practice computer-based training (CBT), part-task trainers, games for basic skills training
An effective approach might work from low physical fidelity, which is the similarity between the simulated task environment and the real task environment, to high physical fidelity, either in each separate task class or across task classes. For example, physical fidelity might first be low, as with problem-based learning (PBL; Hung et al., 2019; Loyens et al., 2011) where groups of students work on paper-based authentic cases; then be intermediate, as for a project group working on an authentic assignment that a real company brought in; and finally, be very high, as for medical students who role-play with simulated patients played by highly trained actors. The same continuum from low to high physical fidelity can be observed for new media, which may be computer-simulated or virtual-reality task environments. A low-fidelity simulation may take the form of textual case descriptions of patients presented in a web-based course; a moderate-fidelity simulation may take the form of lifelike simulated characters (i.e., avatars or, in a medical context, virtual patients) that can be interviewed in a virtual reality environment; and a high-fidelity simulator may take the form of a full-fledged operating room, where medical residents treat a computerized mannequin who reacts just like a real patient.
Supportive information helps learners construct cognitive schemata
through elaboration; they must actively integrate new information with
prior knowledge already available in long-term memory. Traditional media
for teaching supportive information are textbooks, teachers giving lectures,
and ‘realia’ (i.e., real things). They describe models of a domain and how
to approach tasks in that domain systematically. In addition, they illustrate
the ‘theory’ with case studies and examples of how experts solve prob-
lems in the domain. Computer-based hypermedia and multimedia systems
may take over these functions (Gerjets & Kirschner, 2009). Such systems
present theoretical models and concrete cases that illustrate the models
and cases in a highly interactive way, explain problem-solving approaches,
and illustrate these approaches by showing, for example, expert models
on video or via animated, lifelike avatars. Computer-based simulations of
conceptual domains are a special category of multimedia in that they offer
a highly interactive approach to presenting cases where learners can change
the settings of particular variables and study the effects of those changes on
other variables (De Jong et al., 2013). The main goal of such microworlds
is not to help learners practice a complex skill (as is the case in computer-
simulated task environments) but to help them construct, through active
exploration and experimentation, mental models of how the world is organ-
ized and cognitive strategies of how to systematically explore this world.
Procedural information helps learners automate their cognitive schemata
via rule formation. It is preferably presented precisely when and where the
learners need it for working on the learning tasks. The traditional media
for presenting procedural information are the teacher and all kinds of job
aids and learning aids. The teacher’s role is to walk through the classroom,
laboratory, or workplace, peer over the learner’s shoulder, and give direc-
tions for performing the routine aspects of learning tasks (e.g., “No—you
should hold that instrument like this,” “Watch, you should now select
this option”). Job aids may be the posters with frequently used software
commands that are hung on the walls of computer classes, quick reference
guides adjacent to a piece of machinery, or booklets with instructions on the
house style for interns at a company. In computer-based environments, pro-
cedural information is often presented by online job aids and help systems,
wizards, and (intelligent) pedagogical agents. Smartphones and tablets are
quickly becoming important tools for presenting procedural information.
Such devices are particularly useful for presenting small displays of informa-
tion that tell learners how to perform the routine aspects of the task at hand
correctly during task performance. Nowadays, augmented reality (Limbu
et al., 2019) also makes it possible to present procedural information just in
time, triggered by the learner who is looking at a particular object and then
receives instruction on how to manipulate that object or who is looking at
a tool and then receives instruction on how to use that tool (Figure 2.4).
Figure 2.4 Presenting procedural information just in time with augmented reality.
This criticism misses the point. Critics contrast drill-and-practice CBT with edu-
cational software focusing on rich, authentic learning tasks. According to the
Ten Steps, however, part-task practice never replaces meaningful whole-task
practice. It merely complements work on rich learning tasks. It is applied only
when the learning tasks themselves cannot provide enough practice to reach
the desired level of automaticity for the to-be-automated
recurrent aspects. If such part-task practice is necessary, the computer is a
highly suitable medium because it can make drill-and-practice effective and
appealing through the presentation of procedural support, by compressing
time so that more exercises can be completed than in real time, by giving
knowledge of results and immediate feedback on errors, and by using multi-
ple representations, gaming elements, sound effects, and so forth.
To conclude this chapter, it must be stressed that the Ten Steps does not
provide guidelines for the final selection and production of media. Media
selection is a gradual process in which media choices are narrowed down as
the design process continues (Clark, 2001). The final selection is influenced
not only by the learning processes involved but also by factors such as con-
straints (e.g., available personnel, equipment, time, money), task require-
ments (e.g., media attributes necessary for carrying out learning tasks and
the required response options for learners), and target group characteristics
(group size, computer literacy, handicaps). The reader should consult spe-
cialized models for the final selection of media and their production.
2.7 Summary
• A high-variability sequence of whole, authentic learning tasks provides
the backbone of a training program for complex learning.
• Simple-to-complex sequencing of learning tasks in task classes and sup-
porting learners’ performance through scaffolding are necessary to help
them learn to coordinate all aspects of real-life task performance.
• To facilitate the construction of cognitive schemata, supportive information explains how a domain is organized and how to approach problems
in this domain so that learners can fruitfully work on the nonrecurrent
aspects of learning tasks within the same task class.
Chapter 3
Ten Steps
Painting a room, which, for most of us, is a fairly simple task, involves a
straightforward procedure. First, you empty the room and remove all panel
work, wall sockets and fixtures, floor coverings, hanging lamps, and so on, storing them away. Next, you strip old wallpaper and/or flaking paint from
the walls and repair the walls and ceilings (e.g., plaster, sand, spackle). Then,
you paint the room, window, and/or door frames (often with different
paints and paint colors), and return the panel work, wall sockets, and fixtures to their places. Finally, you rehang lamps, carpet the floors, and return
the furniture to the room. When painting a whole house—a much more
complex task—you could follow these same steps in linear order. However,
in practice, a different approach is typically taken. This involves avoiding the impracticality of first removing and storing all the furniture in the whole house, removing all panel work, fixtures, and wall hangings, covering all of the floors, steaming off all of the wallpaper, and so forth, until, in the reverse
order, all of the furniture can be moved back into the repainted house.
Unfortunately, this is not only unfeasible, but the results will also often not
be very satisfying. It is impracticable because house occupants would have
nowhere to eat, sleep, and live during the entire repainting. It would also
probably not lead to the most satisfying results because those involved could
not profit from lessons learned and new ideas generated along the way.
Instead of following the fixed procedure, a more pragmatic zigzag strategy
through the house is typically employed, involving doing certain parts of
different rooms in a specific order until the whole house is completed.
This third and final introductory chapter describes the instructional design
process in ten steps; thus, the Ten Steps to Complex Learning. But to do
this, Section 1 begins by describing ten design activities rather than steps.
The reason is simple. Though there are theoretically ten steps that could
be followed in a specific order, in real-life instructional design projects, it is
common to switch between activities, resulting in zigzag design behaviors,
as in the house-painting example. Section 2 describes these system dynam-
ics. Nevertheless, a linear description of the activities is needed to present a workable and understandable model of a systematic approach to the
design process. To this end, we use David Merrill’s (2020) pebble-in-the-pond
approach to order the activities described in Section 3. This approach takes a
practical and content-oriented view of design, starting with the key activity at
the heart of our model; namely, the design of whole tasks. The learning task
is the pebble cast in the pond and the first of the Ten Steps. This one pebble starts all the other activities. Section 3 then briefly discusses the nine remain-
ing steps in the order in which they are needed after designing learning tasks.
Section 4 positions the Ten Steps within the Instructional Systems Design
(ISD) process and the ADDIE model (i.e., Analysis, Design, Development,
Implementation, and Evaluation). The chapter concludes with a summary.
The pace of this chapter should not daunt the reader. This chapter merely provides an overview; in Chapters 4–13, each of the Ten Steps is elaborated.
Figure 3.1 A schematic overview of the ten activities in the design process for
complex learning.
The lower part of the figure is identical to the schematic training blue-
print presented in Chapter 2 (Figure 2.1) and contains the four activities
that correspond with the four blueprint components. The design of learning
tasks is the heart of this training blueprint. For each task class, designers cre-
ate learning tasks that provide learners with variable whole-task practice at a
particular complexity level—a task class—until they can independently carry
out these tasks up to prespecified standards, after which they continue to the
next, more complex task class. The design of supportive information pertains
to all information that may help learners carry out the problem-solving,
reasoning, and decision-making (i.e., nonrecurrent) aspects of the learning
tasks within a particular task class. The design of procedural information per-
tains to all information that exactly specifies how to carry out the routine
(i.e., recurrent) aspects of the learning tasks. Finally, the design of part-task
practice may be necessary for developing selected to-be-automated recur-
rent aspects to a very high level of automaticity.
The two activities on the central axis of the figure sustain the design of learning tasks. At the top, the design of performance assessments makes it possible to determine to what degree learners have reached prespecified standards for acceptable performance. It is up to the designer or design team to determine what is acceptable. For some tasks, it can be that something minimally works; for others, mastery; and for still others, perfection. Because complex learning deals with highly integrated sets of learning objectives, the focus is on decomposing a complex skill into a hierarchy describing all aspects or constituent skills relevant to performing real-life tasks. Performance assessments should make it possible to measure performance on these constituent skills and monitor learners’ progress over learning tasks; that is, over time. At the center of the figure, the sequencing of learning tasks describes a simple-to-complex progression of categories of tasks that learners may work on. It organizes the tasks in such a way that learning is optimized. The simplest tasks connect to the entry level of the learners (i.e., what they can already do when they enter the training program), and the final, most complex tasks connect to the final attainment level of the whole training program. In individualized learning trajectories based on frequent performance assessments, each learner receives a unique sequence of learning tasks adapted to their individual learning needs. In on-demand education, learners can select their learning tasks but often receive support and guidance for doing this (i.e., second-order scaffolding).
The two activities on the left side of the figure, analyzing cognitive strategies and mental models, sustain the design of supportive information. They are drawn next to each other because they have a bidirectional relationship: One is not conditional to the other. The analysis of cognitive strategies answers the question: How do proficient task performers systematically approach problems in the task domain? The analysis of mental models answers the question: How do proficient task performers organize the task domain? The resulting systematic approaches to problem solving (SAPs) and domain models are used to design supportive information for a particular task class (see Section 7.2). There is a clear reciprocity between the sequencing of learning tasks in simple-to-complex task classes and the analysis of nonrecurrent task aspects: More complex task classes require more detailed and/or more embellished cognitive strategies and mental models than simpler task classes.
The two activities on the right side of the figure, analyzing cognitive rules and analyzing prerequisite knowledge, sustain the design of procedural information and part-task practice. They are drawn on top of each other because they have a conditional relationship: Cognitive rules require the availability of prerequisite information. The analysis of cognitive rules identifies the condition-action pairs that enable experts to perform routine aspects of tasks without conscious effort (IF condition THEN action). The analysis of prerequisite knowledge identifies what experts need to know to apply those condition-action pairs correctly. Together, the results of these analyses provide the basis for the design of procedural information (see Section 10.2).
In addition, identified cognitive rules are precisely those rules that require
automation through part-task practice.
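Such condition-action pairs can be sketched directly in code. The rules below, for routine aspects of video production, are invented examples of the IF-THEN format rather than results of an actual analysis:

```python
rules = [
    # (IF this condition holds for the situation, THEN perform this action)
    (lambda s: s["audio_level_db"] > -6, "lower the microphone gain"),
    (lambda s: not s["white_balanced"], "set the camera's white balance"),
    (lambda s: s["battery_pct"] < 20, "swap in a charged battery"),
]

def apply_rules(situation):
    """Fire every rule whose IF-condition matches the current situation."""
    return [action for condition, action in rules if condition(situation)]

situation = {"audio_level_db": -3, "white_balanced": False, "battery_pct": 85}
print(apply_rules(situation))
# -> ['lower the microphone gain', "set the camera's white balance"]
```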
As indicated by the arrows in Figure 3.1, some activities provide pre-
liminary input for other activities. This suggests that the best order for
performing the activities would, for example, be to start by designing
the necessary performance assessments, then continue by analyzing the
nonrecurrent and recurrent aspects and sequencing the learning tasks,
and end with designing the four blueprint components. Indeed, the ten
activities have previously been described in this analytical order (e.g.,
Van Merriënboer & de Croock, 2002), but in real-life design projects,
each activity affects and is affected by all other activities. This makes it an
open question as to which order for carrying out the ten activities is most
fruitful.
Iteration
Figure 3.2 Breadboard with a prototype of the Tube Screamer and the final
product.
Layers of Necessity
In real-life design projects, designers often will not perform all design activities,
or at least, not all at the same level of detail. Based upon allotted development
time and available resources, they choose what activities they incorporate into
the project and what level of detail is necessary for those activities. In other
words, they flexibly adapt their professional knowledge. Wedman and Tess-
mer (1991) describe this process in terms of layers of necessity. The instruc-
tional design model is then described as a nested structure of submodels,
ranging from a minimalist model for situations with severe time and resource
limitations to a highly sophisticated model for ideal situations with generous
time and resources. A minimalist version of the Ten Steps (i.e., the first layer
of necessity) might, for example, only contain developing a set of learning
tasks, as this is at the heart of the training blueprint. The most sophisticated
version might contain the development of a highly detailed training blueprint
where the descriptions of supportive information, procedural information,
and part-task practice are based on comprehensive task- and content analy-
ses. The key to using layers of necessity is a realistic appraisal of the time and
resource constraints associated with the goals of a particular design project.
A related issue is reusing instructional materials (Van Merriënboer & Boot,
2005; Wiley et al., 2012). Many instructional design projects do not design
training programs from scratch but redesign existing training programs. This
reduces the need to carry out certain analysis and design activities and almost
certainly reduces the need to do this at a very high level of detail. The redesign
of an existing training program according to the Ten Steps, for example, will
always start with specifying a series of learning tasks, which are then organized
into task classes. Concerning designing the information that learners will need
for working productively on those tasks, it might be sufficient to reorganize
already-available instructional materials to better connect them to the relevant
task classes and learning tasks. There is no need for an in-depth task and
content analysis. Furthermore, reuse of materials is also increasingly popular
for the design of new courses. Mash-ups, for example, recombine and modify existing digital media files into new instructional materials. This approach is
becoming increasingly popular and often employs what is known as Open
Educational Resources (OERs): freely accessible, openly licensed documents
and media for teaching, learning, and assessing (Littlejohn & Buckingham
Shum, 2003; Wiley et al., 2014). Again, the analysis necessary to determine
which existing media files are needed to create a mash-up is probably much less detailed than the analysis necessary to develop those materials from scratch.
Zigzag Design
In addition, there is no fixed order for carrying out some of the activities. For example, in Figure 3.1, there is no preferred order for analyzing the nonrecurrent and the recurrent aspects of the skill, no preferred order for the design of supportive information, procedural information, and part-task practice, and, finally, no necessity of completing
one step before beginning on another. Iterations, layers of necessity, and
switches between independent activities result in highly dynamic, nonlinear
forms of zigzag design. Nevertheless, it is important to prescribe the basic
execution of the ten activities in an order that gives optimal guidance to
designers. We will discuss this in the next section.
The first three steps aim at the development of a series of whole tasks that serve as the backbone for the educational blueprint:
The first step, throwing the pebble in the pond, specifies a set of typical learning tasks representing the whole complex skill the learner should be able to perform after following the instruction. In this way, it becomes clear from the beginning, and at a very concrete level, what the training program must achieve. The first ripple in the design pond, Step 2, pertains to the articulation of standards that learners must reach if we are to conclude that they can carry out the tasks in an acceptable fashion. The design of performance assessments for the whole task makes it possible to (a) determine whether learners met the standards and (b) give learners the necessary feedback on the quality of their performance. By first specifying the learning tasks and, only after having done this, the standards and performance assessments, the pebble-in-the-pond approach avoids the common design problem of having to abandon or revise learning objectives that were specified early in the process so that they correspond more closely to the content that has finally been developed (Merrill, 2020). The
next ripple in the design pond, Step 3, pertains to the sequencing (i.e.,
the progression; cf. Figure 3.3) of learning tasks. When there is a set of
learning tasks and an instrument to assess performance, it is important to
order the tasks to optimize the learning process. One approach to achieve
this is by defining a sequence of tasks that gradually increases in complexity. It is important to note here that making the task more complex is not the same thing as making the task more difficult. Complexity is a feature of the task, while difficulty is also determined by the knowledge or skill level of the learner. A task with a given level of complexity can, thus, be easy for an advanced learner but difficult for a novice learner. When we increase complexity over a sequence of learning tasks, the difficulty of these tasks should remain more or less the same for the individual learner! At each level of complexity, the level of support and guidance decreases. In this way, if learners can successfully carry out all tasks, they are considered to have mastered the prespecified knowledge, skills, and attitudes.
Alternatively, this can be accomplished by using performance assessments
to dynamically choose learning tasks tailored to the learning needs of
individual learners.
Further ripples in the design pond identify the knowledge, skills, and atti-
tudes necessary to perform each learning task in the progression of tasks.
This results in the remaining blueprint components (cf. Figure 3.3), which
are subsequently connected to the backbone of learning tasks. We distinguish
supportive information, procedural information, and part-task practice. The steps followed for designing and developing supportive information are Step 4, designing the supportive information itself; Step 5, analyzing cognitive strategies; and Step 6, analyzing mental models.
3.5 Summary
• The instructional design process consists of ten activities: the design of
learning tasks, the design of supportive information, the design of pro-
cedural information, the design of part-task practice, the design of per-
formance assessments, the sequencing of learning tasks, the analysis of
cognitive strategies, the analysis of mental models, the analysis of cogni-
tive rules, and the analysis of prerequisite knowledge.
• System dynamics indicate that the output of each activity affects all other
activities. In real-life design projects, iterations, skipping activities (layers
of necessity), and switching between activities are common, resulting in
zigzag design behaviors.
• The pebble-in-the-pond approach is a content-centered modification of traditional instructional design in which one or more learning tasks are first specified rather than abstract learning objectives. Thus, the process is
initiated by casting a pebble—one or more whole learning tasks—in the
instructional design pond. The process unfolds as a series of expanding activities, or ripples, initiated by this pebble.
Glossary Terms
Step 1
Design Learning Tasks
4.1 Necessity
Learning tasks are the first design component and provide the backbone for
your training blueprint. You should always perform this step.
Traditional school tasks are highly constructed, well defined, short, oriented toward the individual, and designed to best fit the content (i.e., the curriculum) instead of reality. An archetypical problem of this type is: “Two
Ill-Structured Problems
School tasks are usually well-structured: They present all problem elements
to the learner, require applying a limited number of rules and/or proce-
dures, and have knowable, understandable solutions. They are also often
Multidisciplinary Tasks
The most effective approach for identifying real-life tasks for designing learning tasks involves interviewing professionals working in the field along with trainers with experience teaching in that domain. To prepare for this process, it is essential to study documentation materials such as technical handbooks, on-the-job documentation, and function descriptions—as well as existing educational programs and OERs—to avoid duplicate work. The document study should provide an instructional designer with enough background information to interview professionals effectively and efficiently. In later phases of the design process, it will be helpful to include subject-matter experts from different disciplines to do justice to the multidisciplinary nature of the tasks.
Using real-life tasks as the basis for learning tasks ensures that they engage learners in activities that directly involve the constituent skills—as opposed to activities in which they merely study general information about those skills or information related to them. Such a careful design should also guarantee that the tasks put learning before performance. In other words,
the tasks should stimulate learners to focus on the cognitive processes for
learning rather than solely on the outcomes of executing the tasks. This can
be achieved by changing the real-life or simulated environment in which the
tasks are performed, ensuring variability of practice, and providing proper
support and guidance to the learners carrying out the tasks (Kirschner et al.,
2006).
Especially in the earlier phases of the learning process (i.e., learning tasks at the beginning of a task class or task classes at the beginning of the educational program), simulated task environments may offer more favorable opportunities for learning than real task environments. Table 4.1 lists the major reasons for using a simulation. As can be seen, not only can simulated task environments improve learning, but real task environments may sometimes even hamper learning. They may make it difficult or even impossible to control the (sequence of) learning tasks, with the risk that learners must practice with tasks that are either much too difficult or much too easy for them (e.g., training air traffic controllers in a hectic airport) or that show
Table 4.1 Reasons to offer learning tasks in a simulated rather than real task
environment.
insufficient variability (e.g., training teachers with only one group of students). They may also make it difficult to provide the necessary support or guidance to learners (e.g., training fighter pilots in a single-person aircraft if no qualified people are available at the time and/or place needed to provide support and guidance). They may lead to dangerous, life-threatening situations or loss of materials (e.g., if novice medical students were to practice surgery on real patients). They may lead to inefficient training situations that take much longer than necessary (e.g., a chemical titration where the chemical reaction is slow). They may make the educational program extremely expensive (e.g., training firefighters to extinguish burning aircraft). They may make it impossible to present the needed tasks (e.g., training how to deal with situations such as calamities that rarely occur or technical problems
Figure 4.1 Virtual reality (VR) parachute trainer with high physical fidelity.
that are irrelevant in the current stage of the learning process but may nevertheless attract learners’ attention and disrupt their learning). As explained,
one might, for example, present business students with a paper-based case
study of a company with financial problems with the assignment to develop a business strategy to increase profit or present paper-based descriptions of
patients to students in medicine with the assignment to reach a diagnosis
and develop a treatment plan. Figure 4.2 provides an example of a medical
case generated by ChatGPT. Artificial intelligence in large language models such as ChatGPT can help develop case materials, although a final check by
an expert is still crucial before using them in teaching materials. The pulmo-
nologist who checked the case in Figure 4.2 did not observe errors but had
valuable suggestions for improving it.
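For designers who want to try this, the sketch below shows one possible way to structure such an LLM-assisted workflow. It is our illustration, not a procedure from the book: the function names and prompt fields are hypothetical, the actual call to a language model is left out, and the expert check the text recommends is enforced as an explicit gate.

```python
# A minimal sketch, not the book's procedure: drafting a paper-based
# patient case with a large language model and enforcing an expert check
# before use. Function names and prompt fields are hypothetical.

def build_case_prompt(domain: str, complexity: str, learning_goal: str) -> str:
    """Assemble a structured prompt for drafting a teaching case."""
    return (
        f"Write a paper-based {domain} patient case of {complexity} complexity.\n"
        f"Learning goal: {learning_goal}.\n"
        "Include presenting complaint, history, examination findings, and "
        "laboratory results, but not the diagnosis: learners must reach it."
    )

def release_case(draft: str, expert_approved: bool) -> str:
    """Release a draft into teaching materials only after expert review."""
    if not expert_approved:
        raise ValueError("A domain expert must check the draft before use.")
    return draft

prompt = build_case_prompt(
    "pulmonology", "intermediate",
    "reach a diagnosis and develop a treatment plan",
)
# draft = call_your_llm(prompt)                  # send to an LLM of choice
# case = release_case(draft, expert_approved=True)
```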
In a second stage, learners continue practicing in a task environment with
higher functional fidelity; that is, an interactive environment that reacts in
response to the actions executed by the learner. For example, medical stu-
dents may be exposed to so-called ‘virtual patients,’ computer-based patient
simulations that enable them to interrogate the patient, request laboratory
tests, and carry out other related actions (Huwendiek et al., 2009; Janesar-
vatan & van Rosmalen, 2023; Marei et al., 2017). Alternatively, role playing
can be used, where peer students take on the roles of simulated patients. For
business students, management games allow them not only to develop but
also to test business strategies, while so-called ‘virtual companies’ (Westera
et al., 2000) make it possible to work on real projects in a Web-based envi-
ronment that more or less resembles reality. Finally, for the acquisition of
presentation skills, there are environments such as The Presentation Trainer
(Schneider et al., 2016), an augmented reality toolkit for learning and prac-
ticing nonverbal public speaking skills (see Figure 4.3). The program tracks
and analyzes the user’s body posture, body movements, speaking cadence,
and voice volume to give instructional feedback on nonverbal communica-
tion skills (sensor-based learning) on screen both during and after practice.
In a third stage and with increasingly advanced learners, more details of
the real task environment become relevant. This, in turn, may make it neces-
sary to carry out the learning tasks in a high-fidelity simulated-task environment. For example, medical students may engage with role-playing professional actors who simulate real accident victims or patients. Or patients may take
the form of computer-controlled mannequins that react as real patients to
practice resuscitation skills in emergency teams (see Figure 4.4; McGraw
et al., 2023). For business students, high-fidelity simulation may occur in a simulated office space where project teams work on a real task brought in by a commercial client. These kinds of simulations smoothly flow into real-task
environments, where medical students work with real patients in the hospital
and business students work with real clients in companies. There are even
some situations where the high-fidelity simulated and real-task environments
are indistinguishable. This, for example, is the case for satellite radar data
imaging, where the only difference between the two might be that the simulated-task environment uses a database of stored satellite data and images,
while the real-task environment uses real-time satellite data and images.
The principles just outlined also apply to computer-based simulated task environments, including serious games: simulation-based games designed not primarily for entertainment but, rather, to learn or change complex skills in fields such as science and engineering, environmental policy, health care, emergency management, and so forth (e.g., Akkaya & Akpinar, 2022; Faber et al., 2021; Hummel et al., 2021). Table 4.2 provides examples of computer-based simulated-task environments ordered from low to high functional and physical fidelity. Low-fidelity environments are often Web-based and present realistic learning tasks and problems to learners but offer either no or very limited interactivity (e.g., Holtslander et al., 2012). Medium-fidelity environments typically react to the learners’ actions (i.e., high functional fidelity) and, for team tasks, allow learners to interact with each other. Many serious games are good examples of computer-based simulated-task environments with high functional but low physical fidelity. They are attractive because they are less expensive to design and use than high-fidelity simulators. Furthermore, they include gaming elements that may enhance learners’ motivation and can be used across a range of users in a variety of locations (Faber et al., 2021; Lukosch et al., 2013). High physical fidelity simulation, finally, is typically used only in situations where the ‘see, feel, hear, smell, and taste’ of the task environment is relatively easy to implement or where practicing in the real environment is out of the question.
In general, although learning tasks may not be performed in the real-task environment from the start of the training program, they are eventually performed in the real environment at the end of the program or the end of each task class. Thus, medical students will treat real patients in a hospital, learners in a flight training program will fly real aircraft, and trainees in accountancy will deal with real clients and conduct real financial audits, all
under supervision. The simple reason is that even high-fidelity simulation usu-
ally cannot compete with the real world. There can be exceptions for learning
tasks that rarely occur in the real world (e.g., disaster management such as an earthquake or flood, dealing with failures in complex technical systems such as a nuclear meltdown, conducting rare surgical operations), tasks linked to
high expenses (e.g., launching a missile, shutting down a large plant), and
tasks where the simulated environment is virtually identical to the real environment (e.g., satellite image processing, robotic surgery). For such tasks, high-fidelity simulation using virtual reality with advanced input-output facilities
(e.g., VR helmets, data gloves) and complex software models running in the
background may help limit the gap between the real world and its simulations
(Mulders, 2022).
Generalization
When learners generalize or abstract away from concrete experiences, they construct schemata that leave out the details so that they apply to a wider range of, or less tangible, events. Learners may construct a more general or abstract schema if they create successful solutions for a class of related learning tasks or problems. Then, the schema describes the common features of successful solutions. For instance, a child may discover that 2 + 3 and 3 + 2 both add up to 5. The child may induce the simple schema or principle ‘if you add two digits, the sequence in which you add them is of no consequence for the outcome’—the law of commutativity. The child may also induce another, even more general schema: ‘if you add a list of digits, the sequence in which you add the digits is of no consequence for the outcome.’
Discrimination
In a sense, discrimination is the opposite of generalization. Suppose the child makes the overgeneralization ‘if you perform a computational operation on two digits, the performance sequence is of no consequence for the outcome.’ In this case, discrimination is necessary to arrive at a more effective schema. The child may construct such a more effective schema if failed solutions are created for a class of related problems. Then, particular conditions may be added to the schema to restrict its range of use. For instance, if the child finds out that 9 − 4 = 5 but that 4 − 9 = −5 (minus 5), the more specific schema or principle induced is ‘if you perform a computational operation on two digits, and this operation is not subtraction (added condition), the sequence in which you perform it is of no consequence for the outcome.’ While this schema is still overgeneralized (i.e., it is true for multiplication but not for division), discrimination has made it more effective than the original schema.
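The two processes can be illustrated with a small program. The sketch below is hypothetical, not from the book: it tests, over the child’s concrete experiences, for which arithmetic operations the order of the two digits is irrelevant, mirroring how generalization induces a schema and discrimination later restricts it.

```python
# A minimal sketch, not from the book: generalization induces a schema
# from successful solutions, and discrimination restricts it after failed
# ones. The 'experiences' below mirror the child example in the text.
import operator

ops = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def order_irrelevant(op: str, pairs) -> bool:
    """Does op(a, b) == op(b, a) hold for every experienced pair?"""
    return all(ops[op](a, b) == ops[op](b, a) for a, b in pairs)

experiences = [(2, 3), (9, 4), (6, 2)]

# Generalization: 2 + 3 and 3 + 2 both add up to 5, so the child induces
# 'the sequence in which you add two digits does not matter.'
print(order_irrelevant("+", experiences))   # True

# Discrimination: 9 - 4 = 5 but 4 - 9 = -5, so a condition is added that
# restricts the overgeneralized schema's range of use.
schema = {op: order_irrelevant(op, experiences) for op in ops}
print(schema)   # {'+': True, '-': False, '*': True, '/': False}
```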
Mindful Abstraction
Inductive learning is typically a strategic and controlled cognitive pro-
cess requiring conscious processing from the learner to generate plau-
sible alternative conceptualizations and/or solution paths when faced
with novel or unfamiliar tasks or task situations. Mindful abstraction can
originate from one single learning task but is greatly facilitated by incor-
porating multiple learning tasks that vary across the same dimensions as
real-world tasks. Such variability of practice should encourage inductive
processing because it increases the chances that learners can identify
similar features and distinguish relevant ones from irrelevant ones.
This variability of practice comes in two ‘flavors.’
Implicit Learning
Some tasks lacking clear decision algorithms involve integrating large
amounts of information. In this case, implicit learning is sometimes
more effective than mindful abstraction to induce the construction of
cognitive schemata. Implicit learning is more or less unconscious and
occurs when learners work on learning tasks that confront them with a
wide range of positive and negative examples. For example, if air traffic
controllers must learn to recognize dangerous air traffic situations on
Further Reading
Battig, W. F. (1966). Facilitation and interference. In E. A. Bilodeau
(Ed.), Acquisition of skill (pp. 215–244). Academic Press.
Bjork, R. A. (1994). Memory and metamemory considerations in the
training of human beings. In J. Metcalfe & A. Shimamura (Eds.),
Metacognition: Knowing about knowing (pp. 185–205). MIT Press.
Hall, K. G., & Magill, R. A. (1995). Variability of practice and contextual
interference in motor skill learning. Journal of Motor Behavior, 27(4),
299–309. https://doi.org/10.1080/00222895.1995.9941719
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (Eds.).
(1989). Induction: Processes of inference, learning, and discovery.
MIT Press.
Kirschner, P. A., Hendrick, C., & Heal, J. (2022). How teaching happens: Seminal works in teaching and teacher effectiveness and what they mean in practice. Routledge.
Reber, A. S. (1996). Implicit learning and tacit knowledge: An essay on the cognitive unconscious. Oxford University Press.
Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176–199. https://doi.org/10.1177/1745691615569000
One may ensure that learning tasks differ from each other on dimensions such as the conditions under which they are performed (e.g., a time-constrained video-production task, as is the case for commissioned work, versus a free creative work without a deadline), the way of presenting them (e.g., a video-production task with detailed criteria for the final product or an open-ended task leaving room for interpretation and creativity), the saliency of their defining characteristics (e.g., a video-production task to stimulate product sales or one to influence audience attitudes), and their familiarity (e.g., a video-production task in a familiar domain or one in a domain foreign to the learner). Furthermore, one must ensure that learning tasks differ from each other on both surface features and structural features:
• Tasks that differ from each other on surface features may look different from each other but can nevertheless be carried out in the same way.
A special kind of variability deals with the issue of how to order learning tasks
at the same level of complexity (i.e., in the same task class). If adjacent learn-
ing tasks cause learners to practice the same constituent skills, then so-called
contextual interference (Bjork, 1994) is low. Conversely, when adjacent learn-
ing tasks require learners to practice different versions of constituent skills,
contextual interference is high (also called ‘interleaving’; Birnbaum et al.,
2013). This, in turn, helps learners develop a more integrated knowledge
base. For example, if a medical student learns to diagnose the three diseases—
d1, d2, and d3—with 12 different patients, it is better to use a random prac-
tice schedule (d3-d3-d2-d1, d1-d3-d2-d1, d2-d2-d1-d3) than a blocked one
(d1-d1-d1-d1, d2-d2-d2-d2, d3-d3-d3-d3). Studies on contextual interfer-
ence and interleaving thus show that the variability and its structure across
learning tasks determine the extent of learning transfer. Practice under high
contextual interference stimulates learners to compare and contrast adjacent
tasks and mindfully abstract away from them, resulting in higher transfer of
learning than practice under low contextual interference (for examples, see
De Croock & van Merriënboer, 2007; Helsdingen et al., 2011a, 2011b).
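The difference between the two schedules is easy to express in code. The following is a minimal sketch (our illustration; names are hypothetical) of the blocked versus random practice schedules for the medical example of three diseases and 12 patients.

```python
# A minimal sketch, not from the book: a blocked schedule (low contextual
# interference) versus a random, interleaved one (high contextual
# interference) for the example of 3 diseases practiced with 12 patients.
import random

diseases = ["d1", "d2", "d3"]
cases_per_disease = 4

# Blocked: d1-d1-d1-d1, d2-d2-d2-d2, d3-d3-d3-d3
blocked = [d for d in diseases for _ in range(cases_per_disease)]

# Random: the same 12 cases in shuffled order, e.g., d3-d3-d2-d1-...
interleaved = blocked[:]
random.shuffle(interleaved)

print(blocked)
print(interleaved)
```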
In summary, it is wise to use a varied set of learning tasks in which tasks differ on both surface and structural features and to sequence them randomly. Although variability and random sequencing may lead to a longer training time, a higher number of learning tasks needed to reach a prespecified performance level, and/or more mistakes during the learning process, every bit of it pays itself back in higher transfer of learning. This is an example of what was called the transfer paradox in Chapter 1. If you want learners to reach transfer of learning, they need to work a bit harder and longer. As the saying goes: “No pain, no gain.” Bjork (1994) calls this a desirable difficulty: making it harder, but in a good way. Therefore, it may be helpful to tell learners that variability is applied and explain to them in some detail why, to increase their awareness and willingness to invest effort in mindful abstraction.
For complex learning, the nonrecurrent aspects of learning tasks are ill-
structured; there is no one optimal solution but many acceptable solutions.
Also, there may be several intermediate solutions besides the final solution (e.g., for our video-production example, production plans, scripts, storyboards, and produced footage may be seen as intermediate solutions). There may also be several acceptable solution paths to arrive at similar or different solutions. Finally, the goal states and given states may also be ill-defined, requiring problem analysis on the part of the learner (which is then one of the constituent skills involved). These characteristics sometimes make it difficult to analyze a real-life task in terms of its given state, goal state, acceptable solution, and problem-solving process for generating a solution. Table 4.3 provides some examples.
The distinction between a solution and the problem-solving process for finding an acceptable solution is characteristic of complex learning. This type of learning primarily deals with nonrecurrent skills; both the whole skill and a
number of its constituent skills are nonrecurrent. To perform such skills, learn-
ers tentatively apply operators to problem states, searching for a sequence that
transforms the initial state into a new state, meeting the criteria for an accept-
able goal state. It is like playing chess or checkers, where the executed moves
represent the solution, while the player’s mental simulations of potential moves
represent the problem-solving process. In the worst case, the player randomly
selects operators or employs trial-and-error strategies, making behavior look
aimless to an outside observer. But normally, players do not use trial-and-error
because cognitive schemata and/or instructional measures guide their search
process. This can manifest as cognitive strategies in the learner’s head or SAPs
in the instructional materials that allow the learner to approach the problem
systematically and as mental models in the learner’s head or domain models in
the instructional materials to reason about the domain.
In the next sections, we use the framework to distinguish between built-
in task support and guidance. Task support does not pay attention to the
problem-solving process itself but only involves given states, goal states, and
solutions (e.g., if you are going to go on a vacation trip, this would be where
the trip begins, where the trip ends, and the given routes). Guidance, on the
other hand, also takes the problem-solving process itself into account and
typically provides useful approaches and heuristics to guide the problem-
solving process and help find a solution (for that same vacation, this would be a set of guidelines for how to find interesting routes yourself; for example, with historical or picturesque things to see along the way).
Table 4.3 Examples of real-life tasks analyzed in terms of given(s), goal(s), acceptable solution, and problem-solving process.
Conventional tasks confront the learner with only a given state and a set of criteria for an acceptable goal state (e.g., starting with a certain mixture, distill alcohol with a purity of 98%). These conventional tasks provide the learner with no support (i.e., no part of the solution), and the learner must generate a proper solution themselves. As shown at the top of Figure 4.6, conventional tasks come in different forms, depending, on the one hand, on their structure and the equivocality of their solution and, on the other, on the solver. A conventional task may be well structured and have only one acceptable solution (e.g., compute the sum of 15 + 34 in base 10). But, as explained, the Ten Steps will typically use learning tasks based on real-life tasks, yielding conventional tasks that take the form of ill-structured problems.
Table 4.4 Examples of different types of learning tasks for the complex skill ‘producing video content,’ ordered from high task support (i.e., worked-out example) to no task support (i.e., conventional task). A plus-sign (+) in the columns ‘Given,’ ‘Goal,’ and ‘Solution’ means that this aspect is presented to the learner; a designation (e.g., predict) denotes a required action from the learner.
Nonspecific goal tasks present a given state and ask learners to generate possible solutions together with the different goals that those solutions can reach. Usually, learners receive goal-specific problems, such as “A car with a mass of 950 kg accelerating in a straight line from rest for 10 seconds travels 100 meters. What is the final velocity of the car?” This problem could easily be made goal nonspecific by replacing the last line with “Calculate the value of as many of the variables involved here
as you can.” Here, the learner would not only calculate the final velocity but also the acceleration and the force exerted by the car at top acceleration. And if the word ‘calculate’ were replaced by ‘represent,’ the learner could also include graphs and the like. Nonspecific goal problems invite learners to move forward from the givens and to explore the problem space, which may help them construct cognitive schemata, in contrast to conventional goal-specific problems that force learners to work backward from the goal. Working backward is a cumbersome process for novice learners that may hinder schema construction (Sweller et al., 1998, 2019).
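To see what the goal-nonspecific version of this problem invites, here is a worked sketch (our addition, not from the book) that moves forward from the givens and calculates as many variables as possible, assuming constant acceleration.

```python
# A worked sketch of the goal-nonspecific physics task: from the givens
# (m = 950 kg, starting from rest, s = 100 m in t = 10 s, constant
# acceleration assumed), calculate as many variables as you can.

m = 950.0   # mass (kg)
s = 100.0   # distance traveled (m)
t = 10.0    # time (s)

a = 2 * s / t**2   # from s = (1/2) * a * t^2  ->  a = 2.0 m/s^2
v = a * t          # final velocity            ->  v = 20.0 m/s
F = m * a          # net force on the car      ->  F = 1900.0 N

print(f"a = {a} m/s^2, v = {v} m/s, F = {F} N")
```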
Completion tasks give learners a given state, criteria for an acceptable goal
state, and a partial solution. The learners must then complete the partial
solution by determining the missing steps and then adding them, either at
the end or at one or more places in the middle of the solution. A particu-
larly strong aspect of completion tasks is that learners must carefully study
the partial solution provided. They cannot develop a complete solution if
they do not do this. Completion tasks seem to be especially useful in design-
oriented task domains and were originally developed in the domain of soft-
ware engineering (Van Merriënboer, 1990; Van Merriënboer & de Croock,
1992), where learners had to fill in missing command lines in computer
programs. Well-designed completion tasks ensure learners understand the
partial solution and still have to perform a nontrivial completion.
The learning tasks’ common element is directing the learners’ attention
to problem states, acceptable solutions, and useful solution steps. This helps
them mindfully abstract information from good solutions or use inductive processes to construct cognitive schemata that reflect generalized solutions
for particular types of tasks. Research on learning tasks other than unguided
conventional tasks has provided strong evidence that they facilitate schema
construction and transfer of learning for novice learners (for an overview,
see Van Merriënboer & Sweller, 2005, 2010). The bottom line is that hav-
ing learners solve many problems independently is often not the best way to
teach them problem solving (Kirschner et al., 2006; Sweller et al., 2007)!
For novice learners, studying useful solutions and the relationships between
the characteristics of a given situation and the solution steps applied is much
more important for developing problem solving, reasoning, and decision-
making skills than solving equivalent problems. Only more experienced
learners, who have already developed most of the cognitive schemata neces-
sary to guide their problem solving, should use conventional tasks.
Consider, for example, the learning task of creating a promotional video for a local bakery (cf. Table 4.5):
• The client’s briefing presents the given situation: Create a 3-minute promotional video for a local bakery called ‘Breaducation’ specializing in pastries and desserts. The promotional video will showcase its products and create a connection with the local community.
• The goal situation consists of a promotional video highlighting the bak-
ery’s products and character that can be shared on the website and social
media channels, attracting new customers and generating buzz for the
business.
• The problem-solving process leading to the solution consists of preproduc-
tion (e.g., creating a production plan including a storyboard outlining the
video’s shots and sequences), production (e.g., recording footage of the
bakers creating pastries and desserts, the variety of available wares, cus-
tomers enjoying their treats, the bakery’s interior, as well as conducting
brief interviews with the owner and satisfied customers to add a personal
touch), and postproduction (e.g., editing the video to create a seamless
and engaging narrative including the interviews and footage of pastries
and desserts, text overlays or graphics highlighting key information such
as the bakery’s location, contact details, and opening hours, accompanied
by cheerful, uplifting background music).
In the upper left cell of Table 4.5 are learning tasks that provide maximum
support and guidance. Along with worked-out examples describing the solu-
tion for a given state and goal state, they make the expert’s problem-solving
process visible (i.e., modeling examples). In the bottom right cell of Table 4.5
are learning tasks that provide minimal support and guidance. They provide
conventional, unsupported tasks without any guidance. The next sections
describe the different types of guidance and provide further examples of how to combine guidance with different types of built-in support.
Table 4.5 Combinations of built-in task support and guidance for the learning task “Create a promotional video (i.e., promo) for a local bakery”. Each type of built-in task support (rows) is combined with five levels of guidance: modeling example, process worksheet, performance constraints, tutor guidance, and no guidance.

Nonspecific goal task (asks for possible solutions for different goals for a given state):
• Modeling example: Learners observe an expert who is given the bakery’s briefing and thinks aloud while brainstorming and scripting as many different promos that meet the requirements.
• Process worksheet: Learners receive the bakery’s briefing and must brainstorm and script as many different promos that meet the requirements, following a process worksheet.
• Performance constraints: Learners receive the bakery’s briefing. They brainstorm as many ideas for scripts as possible, requiring approval before writing the scripts.
• Tutor guidance: Learners receive the bakery’s briefing and must brainstorm and script as many different promos that meet the requirements under the guidance of a tutor.
• No guidance: Learners receive the bakery’s briefing and must brainstorm and script as many different promos that meet the requirements.

Completion task (gives a partial solution to be completed for a given state and goal state):
• Modeling example: Learners observe an expert who receives the bakery’s briefing, a partial script, and unfinished footage and thinks aloud while completing the promo to meet the requirements.
• Process worksheet: Learners receive the bakery’s briefing, a partial script, and unfinished footage with the goal of completing the promo to meet the requirements, following a process worksheet.
• Performance constraints: Learners receive the bakery’s briefing, a partial script, and incomplete footage. They complete preproduction, production, and postproduction.
• Tutor guidance: Learners receive the bakery’s briefing, a partial script, and unfinished footage with the goal of completing the promo to meet the requirements under the guidance of a tutor.
• No guidance: Learners receive the bakery’s briefing, a partial script, and unfinished footage with the goal of completing the promo that meets the requirements.
Modeling
Process Worksheets
A process worksheet (Van Merriënboer, 1997; Van Gog et al., 2004) pro-
vides learners with the phases they need to go through to solve a problem
and guides them through the problem-solving process. In other words, it
provides them with a SAP, including rules-of-thumb for carrying out the
learning task.
A process worksheet may be as simple as a sheet of paper indicating the
problem-solving phases (and, if applicable, subphases) that might help carry
out the learning task. The learner uses it as a guide for solving the problem.
The worksheet provides rules-of-thumb that may help complete each phase.
These rules-of-thumb may take the form of statements (e.g., when prepar-
ing a presentation, consider the audience’s prior knowledge and take it into
account) or guiding questions (e.g., what aspect(s) of your audience should
you take into account when preparing a presentation and why?). An advan-
tage of using the interrogatory form (i.e., epistemic questions) is that learners
are provoked to think about the rules-of-thumb. Furthermore, if they write
down their answers to these questions on the process worksheet, a teacher
can observe their work and provide feedback on the applied problem-solving
strategy. It should be clear that both the phases and the rules-of-thumb are
heuristic: They may help the learner to solve the problem, but they do not
necessarily do so. This distinguishes them from algorithmic rules or proce-
dures. Table 4.6 provides an example of phases in problem solving and rules-
of-thumb used in a course for law students being trained to plead a case in
court.
Table 4.6 Phases and rules-of-thumb for the complex skill ‘preparing a plea’.
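To make the idea of a process worksheet more tangible, the sketch below represents one as a simple data structure. It is our illustration with hypothetical content (it does not reproduce the actual phases and rules-of-thumb of Table 4.6): each phase carries epistemic questions, and the printed worksheet leaves space for the learner’s answers so a teacher can review the applied strategy.

```python
# A minimal sketch, with hypothetical content: a process worksheet as a
# list of problem-solving phases, each with heuristic rules-of-thumb
# phrased as epistemic questions and space for the learner's answers.

process_worksheet = [
    {"phase": "Analyze the case file",
     "questions": ["Which facts in the file are most likely to be "
                   "contested, and why?"]},
    {"phase": "Structure the argument",
     "questions": ["Which points must the judge remember, and in what "
                   "order should you present them?"]},
]

# Learners write an answer under each question; a teacher can then review
# the worksheet and give feedback on the problem-solving strategy applied.
for step in process_worksheet:
    print(step["phase"])
    for question in step["questions"]:
        print("  -", question)
        print("    Answer: ____________________")
```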
Performance Constraints
For example, in the plea-preparation course, learners might be prevented from completing an electronic form for performing a situational analysis (a cognitive tool for Phase 4) before thoroughly studying the whole file (Phase 3; Nadolski et al., 2006).
For the video-production example, performance constraints for a con-
ventional task (bottom cell in the column ‘performance constraints’ in
Table 4.5) may, for instance, require approval of the preproduction phase
by a teacher or supervisor before learners start the production phase and
require approval of the production phase before they start the postproduc-
tion phase. Performance constraints that guide the cognitive process will be
different for learning tasks with built-in support than for conventional tasks. For a nonspecific goal task, learners must generate as many—intermediate—solutions as possible (see the cell ‘performance constraints/nonspecific goal task’ in Table 4.5). Performance constraints may then require the learners to give an accurate summary of the given state and define the goal state before they can start their brainstorming. Well-designed performance constraints might also reduce the number of phases or decrease the specificity of each phase if learners acquire more expertise (Nadolski et al., 2005). Because performance constraints are more directive than process worksheets, they may be particularly useful for early phases in the learning process.
Tutor Guidance
Fading
solution steps. Thus, for novices, learning to carry out unguided conventional tasks is different from and incompatible with how they are ‘supposed
to be’ carried out; that is, how experts carry them out. Giving novices proper
support and guidance is necessary for learning (van Merriënboer et al., 2003).
For learners with more expertise, support and guidance may not be nec-
essary or may even be detrimental to learning because they have already
acquired the cognitive schemata that guide their problem solving, reasoning,
and decision-making processes. They have their own proven personal and/
or idiosyncratic ways of working. These cognitive schemata may interfere
with the examples, process worksheets, or other means of support and guid-
ance provided. Rather than risking conflict between the experts’ available
cognitive schemata and the support and guidance provided by the instruc-
tion, it is preferable to greatly reduce or even eliminate the support and
guidance. This means providing large amounts of support and guidance for
learning tasks early in a task class (when the learners are novices for the tasks
at a particular level of complexity). In contrast, no support and guidance
should be given for the final learning tasks in this task class (when these same
learners have gained the necessary expertise at this level of complexity).
• If you design a sequence of learning tasks, you must ensure that learners
start with learning tasks with a high level of support and guidance but
end with tasks without support and guidance (this is called ‘scaffolding’).
Glossary Terms
Step 2
Design Performance Assessments
5.1 Necessity
An integrated set of performance objectives provides standards for accept-
able performance. Assessment instruments use these standards for perfor-
mance assessment. We strongly recommend carrying out this step.
Consider, for example, the Dutch swimming diploma for children. The requirements include having to, fully clothed, tread water and then swim 12.5 meters; swimming 4 × 25 meters with two different strokes (alternating between breaststroke and backstroke); and swimming underwater for 3 meters. Each of these ‘performances’ is further defined, for example, concerning what constitutes ‘fully clothed’ (i.e., pants, shirt, and shoes), that they have to swim underwater through a ring, how deep the ring is placed underwater (3 meters), that they have to turn along the axis of their body between the different strokes, that there is a specific minimum time for treading water (15 seconds) and a maximum time for completing the two strokes, et cetera. Children get this diploma only when they have met all these standards for acceptable performance.
This chapter discusses identifying, formulating, and classifying perfor-
mance objectives and their use for developing performance assessments. As
specified in the previous chapter, learning tasks already give a good impression of what learners will do during and after the training. But performance objectives give more detailed descriptions of the desired ‘exit behaviors,’ including the conditions under which the complex skill needs to be performed (e.g., fully clothed), the tools and objects that can or should be used during performance (e.g., through a ring), and, last but not least, the standards for acceptable performance (e.g., 25 meters, turning along their body axis).
According to the Ten Steps, learners learn from and practice almost exclu-
sively on whole tasks to help them reach an integrated set of performance
objectives representing many aspects of the complex skill. Performance
objectives help designers differentiate the many aspects of whole-task per-
formance and connect the front end of training design (i.e., what do learners
need to learn?) to its back end (i.e., did they learn what they were supposed
to?). Performance assessments make it possible to determine whether stand-
ards have been met and to provide informative feedback to learners.
The structure of this chapter is as follows. Section 2 describes skill decom-
position as identifying relevant constituent skills and their interrelationships.
The result of this decomposition is a skill hierarchy. Section 3 discusses for-
mulating a performance objective for each constituent skill in this hierarchy.
The whole set of objectives provides a concise description of the contents
of the training program and sets the standards for acceptable performance.
Section 4 delves into categorizing performance objectives related to specific
constituent skills, classifying them as nonrecurrent or recurrent. Nonrecur-
rent constituent skills always involve problem solving, reasoning, or deci-
sion making and require presenting supportive information. In contrast,
recurrent constituent skills involve applying rules or procedures and require
presenting procedural information. The classification further encompasses to-be-automated recurrent constituent skills requiring part-task practice and double-classified constituent skills. Moreover, certain constituent skills will not be taught because learners have already mastered them. The performance objectives for acquiring skills serve multiple purposes. They form
the basis for discussing the training program’s content with different stake-
holders, give input to further analysis and design activities, and, importantly,
provide the standards for performance assessments. Section 5 discusses the
design of these performance assessments. This includes the specification of scoring rubrics for assessing learners’ performance and their progress when
implemented in development portfolios. The chapter concludes with a
summary.
Skill Hierarchy
Developing a skill hierarchy starts with the overall learning goal (i.e., the
top-level skill), which provides the basis for identifying the more specific
constituent skills that enable the performance of the whole skill. The idea
behind this is that constituent skills lower in the hierarchy enable the learn-
ing and performance of skills higher in the hierarchy. This vertical, enabling
relationship is also called a prerequisite relationship (Gagné, 1968). Fig-
ure 2.2 in Chapter 2 presented a skill hierarchy for producing video content.
It is repeated here in Figure 5.1 with a classification of constituent skills.
Figure 5.1 A hierarchy of constituent skills for the complex skill ‘producing video content,’ with a classification of its
constituent skills.
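A skill hierarchy is naturally represented as a tree. The sketch below is partial and partly hypothetical: the skill names come from the video-production example in the text, but the grouping under ‘creating compositions’ is our assumption rather than a reproduction of Figure 5.1. Children of a skill are the constituent skills that enable it.

```python
# A minimal sketch, partial and partly hypothetical: a skill hierarchy as
# a tree in which each skill's children are the constituent skills that
# enable it (the vertical, prerequisite relationship).

skill_hierarchy = {
    "producing video content": {
        "creating the production plan": {
            "developing the story": {
                "writing a script or synopsis": {},
            },
        },
        "creating compositions": {
            "selecting camera lenses": {},
            "lighting the scene": {},
            "capturing audio": {},
        },
    },
}

def print_hierarchy(tree: dict, depth: int = 0) -> None:
    """Print the hierarchy top-down; lower skills enable higher ones."""
    for skill, enabling_skills in tree.items():
        print("  " * depth + skill)
        print_hierarchy(enabling_skills, depth + 1)

print_hierarchy(skill_hierarchy)
```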
Data Gathering
Professionals are necessary to identify the constituent skills and verify the skill hierarchy in several validation cycles. As a nonexpert in the domain being taught but with expertise in instruction and its design, the designer has a key role. The designer tries to help the expert overcome what is known as the curse of knowledge (or curse of expertise): a cognitive bias that occurs when someone communicating with others—often, the expert—assumes that the others have the knowledge that they need to understand what is being communicated. In other words, they assume they all share a background and understanding. A second role is to help decompose what the expert says that they do. Many skills that the expert carries out are so automated that the expert does not realize the underlying constituent skills involved. In these validation cycles, the designer, thus, checks whether the hierarchy contains all the constituent skills necessary to learn and perform the complex cognitive skill and whether lower-level skills facilitate the learning and performance of those higher in the hierarchy. If this is not the case, the hierarchy needs to be refined or reorganized. Skill decomposition is a difficult and time-consuming process that typically requires several validation cycles and is frequently updated after working on other steps.
Three guidelines that help build a skill hierarchy are to focus on (a) simple versions of the task before analyzing more complex versions of it, (b) objects and tools used by the task performer, in addition to looking at overall performance, and (c) deficiencies in novice performance, and thus, not only on desired task performance. When it comes to task complexity, which, as stated earlier, is a function of the number of elements in a task and the interactivity between those elements, it is best to first confront analysts with relatively simple real-life tasks. Only introduce more complex tasks after all constituent skills for the simpler tasks have been identified. Step 3 (described in the next chapter) provides approaches for distinguishing between simple and complex versions of the whole task.
Objects refer to those things that are changed or attended to while suc-
cessfully carrying out a task. As a task performer switches their attention
from one object to another, this often indicates that different constituent
skills are involved. If, for example, a video producer switches their attention
from the camera lenses to a light meter, this may indicate that at least two
constituent skills are involved; namely, ‘selecting camera lenses’ and ‘light-
ing the scene.’ If a surgeon switches attention from the patient to a donor
organ waiting to be transplanted, it may indicate the constituent skills ‘pre-
paring the patient’ and ‘getting the donor organ ready for the transplant.’
Tools refer to things used to change objects. As was the case for objects,
tool switches often indicate that different constituent skills are involved (see also Figure 5.3). For example, if a video-content producer switches between a camera lens, a light reflector, and a microphone, this may indicate that there are three constituent skills involved; namely, ‘selecting lenses’ (where the lens is used), ‘lighting the scene’ (where the light reflector is used), and ‘capturing audio’ (where the microphone is used). If a surgeon switches
Figure 5.3 Observing the use of objects and tools by proficient task performers
may help identify relevant constituent skills.
from using a scalpel to using a forceps, it may indicate the constituent skills
‘making an incision’ and ‘removing an organ.’
Finally, it is often worthwhile not only to focus on gathering data on the desired performance but also to look at the performance deficiencies of the target group. This change of focus is particularly useful if the target group consists of persons already involved in carrying out the task or parts of it (e.g., employees who will be trained). This is usually the case when performance problems on the work floor lead to the development of a training program. Performance deficiencies indicate discrepancies between the expected or desired performance and the actual task performance. The most common method used to assess such deficiencies is interviewing the trainers (i.e., asking them what typical problems their learners encounter), the target learners (i.e., asking employees what problems they encounter), and their managers or supervisors (i.e., asking the target learners’ superiors which undesired effects of the observed problems are most important for them or are most harmful to the organization). The to-be-developed training program will focus on constituent skills in which learners exhibit performance deficiencies.
A traditional approach to instructional design would link instructional methods to each separate objective and create one or more corresponding test items for each objective. This is absolutely not the case for the Ten Steps! The Ten Steps emphasize that, in complex learning, the training program must actively support integrating and coordinating the constituent skills outlined in the performance objectives (i.e., integrative goals; Gagné & Merrill, 1990).
Thus, instructional methods cannot be linked to one specific objective but
must always link to interrelated sets of objectives that can be hierarchical,
heterarchical, or retiary and have a temporal, simultaneous, or transposable
relationship. This means that design decisions in the Ten Steps are based
directly on the characteristics of learning tasks (Step 1) and task analysis
results (Steps 5–6 and 8–9), not on the separate objectives. Nevertheless, a
performance objective is specified for each constituent skill identified in the skill hierarchy because, in the performance of the whole skill, these aspects will also become visible—often in the form of points of improvement—and will give the designer or trainer a foothold for solving the learning problems, filling the deficiencies, or designing task support and feedback.
The integrated set of objectives describes the different aspects of effective whole-task performance. Well-formulated performance objectives (Mager, 1997) contain an action verb that clearly reflects the desired performance after the training, the conditions under which the skill is carried out, the tools and objects required (most discussions of performance objectives do not include this aspect), and—last but not least—the standards for acceptable performance, including criteria, values, and attitudes (see Figure 5.4).
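These four ingredients can be captured in a small data structure. The following is a minimal sketch (our illustration; the field names are hypothetical), instantiated with the treading-water standards from the swimming-diploma example.

```python
# A minimal sketch, with hypothetical field names: the ingredients of a
# well-formulated performance objective as a small data structure.

from dataclasses import dataclass

@dataclass
class PerformanceObjective:
    action_verb: str        # observable, attainable, measurable behavior
    conditions: list[str]   # circumstances under which the skill is shown
    objects: list[str]      # things changed or attended to
    tools: list[str]        # things used to change the objects
    standards: list[str]    # criteria, values, and attitudes

# Example based on the swimming-diploma standards discussed above.
treading_water = PerformanceObjective(
    action_verb="tread water",
    conditions=["fully clothed (pants, shirt, and shoes)"],
    objects=[],
    tools=[],
    standards=["for a minimum of 15 seconds"],
)
print(treading_water)
```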
Action Verbs
An action verb clearly states what learners can do after completing the train-
ing or learning experience. It should indicate observable, attainable, and
measurable behaviors. The most common mistake is to use verbs like ‘com-
prehend,’ ‘understand,’ ‘be aware of,’ ‘be familiar with,’ or ‘know.’ These
verbs should be avoided in performance objectives because they do not
describe what learners can do after the training but what they need to know
to do this. See Table 5.1 for an example of the types of action verbs you
can use for the two highest levels (create and evaluate) of Bloom’s Revised
Taxonomy (Anderson & Krathwohl, 2001). Although this taxonomy talks
about learning goals, we prefer the term ‘performance objectives’ for the
highest levels because they are not about the things that must be learned
Table 5.1 Action verbs for the two highest levels in Bloom’s revised taxonomy
in the cognitive domain.
but about the things the learner must be able to do after the educational
program. However, the term ‘learning goals’ is appropriate for lower lev-
els in Bloom’s taxonomy. Learning goals for supportive information can be
described on the levels of analyzing and understanding; learning goals for
procedural information can be described on the level of remembering, and
learning goals for part-task practice can be described on the level of applying.
It is important to note that these analyses are specific to Steps 5–6 and 8–9.
Performance Conditions
The performance conditions specify the circumstances under which the con-
stituent skill must be carried out. For instance, in the context of the swimming
diploma, one condition was ‘fully clothed’ for the skill of ‘treading water,’ as
that is often the case in real life where a person—in the Netherlands—might
fall into a canal or other waterway. They may include safety risks (e.g., if the
skill is needed in a critical working system), time stress (e.g., where delays
could cause major problems), workload (e.g., in addition to tasks that cannot
be delegated to others), environmental factors (e.g., amount of noise, light,
or the weather conditions), time-sharing requirements (e.g., when the skill
has to be performed alongside other skills), social factors (e.g., in a hostile or
friendly group/environment), and more. It is crucial to define these conditions in a way that minimizes the transfer-of-training problem.
Consider surgeons trained under optimal conditions in a sterile operat-
ing room with good lighting, modern tools, and a full staff of well-trained colleagues. If they are military surgeons also sent to combat zones, they will have to perform that same surgery under less-than-optimal conditions, in a battlefield hospital with poor lighting, limited tools, and minimal and possibly poorly trained local staff. Often, relevant conditions already appear during the design of learning tasks when the designer determines dimensions on which real-world tasks differ (see Section 4.4 on variability of practice).
One dimension relates to the conditions under which to perform the task,
which are also relevant when formulating performance objectives.
Tools and Objects
The performance objective for a specific constituent skill outlines the tools
and objects necessary for its performance. This documentation is crucial for
creating a suitable learning environment for practicing and learning to carry
out the tasks. All objects and tools, or (low- or high-fidelity) simulations
or imitations of them, must be available in this task environment. On the
other hand, because some tools and objects may quickly change from year to
year (such as computer hardware, input devices, software programs, medi-
cal diagnostic and treatment equipment, tax laws, codes, and regulations,
etc.), it is equally important to document which performance objectives
and related constituent skills are affected by the introduction of new objects
and tools. This will greatly simplify updating existing training programs and
designing programs for retraining.
After the training program, learners are able to plan, produce, and edit
high-quality video content for a variety of purposes, including promotional,
informative, and documentary content [conditions], and handle all aspects
of the video production process, including creating scripts and storyboards
[object], operating the camera [object], lighting [object], coaching people
being filmed [object], and editing the video [object], using relevant equipment such as lenses, light reflectors, microphone mounts, and video editing
software [tools], to meet the creative and technical needs [value] of clients.

After the training program, learners are able to create compositions by operating and placing cameras [object] with appropriate lenses [object] and
correctly arranged lighting [object] so that conventional framing conventions (e.g., rule of thirds) are followed [value], rough shapes, colors, and
brightness of primary objects complement each other [value], visual elements
feel balanced [value], and foreground is clearly distinguished from the
background [criterion] to achieve the desired artistic outcome [value].
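For design teams that keep their objectives in a structured, machine-readable form, the bookkeeping described above (knowing which objectives are affected when a tool or object changes) can be automated. The following Python sketch is purely illustrative; the class and field names are our own, not part of the Ten Steps:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceObjective:
    """A performance objective with its parts tagged, as in the examples above."""
    skill: str                                     # constituent skill it belongs to
    action: str                                    # observable behavior after training
    conditions: list = field(default_factory=list)
    objects: list = field(default_factory=list)
    tools: list = field(default_factory=list)
    standards: list = field(default_factory=list)  # criteria, values, attitudes

objectives = [
    PerformanceObjective(
        skill="creating the composition",
        action="create compositions by operating and placing cameras",
        objects=["camera", "lenses", "lighting"],
        tools=["light reflectors", "video editing software"],
        standards=["framing conventions followed (value)",
                   "foreground distinguished from background (criterion)"],
    ),
]

def affected_by(tool: str) -> list:
    """Objectives to review when a tool changes (e.g., new editing software)."""
    return [o.skill for o in objectives if tool in o.tools]

print(affected_by("video editing software"))  # ['creating the composition']
```

A query such as affected_by then turns updating existing training programs after a tool change into a simple lookup rather than a manual search.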
For each constituent skill in the hierarchy, the designer indicates whether it:

1. Will or will not be taught. By default, classify constituent skills as skills that
will be taught.
2. Is treated as nonrecurrent, recurrent, or both. By default, classify con-
stituent skills as nonrecurrent, involving schema-based problem solving,
reasoning, and decision making after the training and requiring the avail-
ability of supportive information during the training.
3. Needs to be automated or not. By default, classify recurrent constituent
skills as skills that do not need full automation. Nonautomated recurrent
skills involve applying rules after the training and require presenting pro-
cedural information during the training. If you classify recurrent constit-
uent skills as skills that need to be fully automated, they may also require
additional part-task practice during the training program (see Step 10 in
Chapter 13).
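Read as a decision procedure, these three classifications determine which design components a constituent skill calls for. The function below is a hypothetical sketch of that logic, with defaults mirroring those above; the function name and return values are ours:

```python
def supporting_components(taught: bool = True,
                          recurrent: bool = False,
                          to_be_automated: bool = False) -> list:
    """Map the three classification decisions to the design components they imply.

    Defaults mirror the text: skills are taught, nonrecurrent, not automated.
    """
    if not taught:
        return []                                  # excluded from the program
    if not recurrent:
        return ["supportive information"]          # schema-based problem solving
    components = ["procedural information"]        # rule-based performance
    if to_be_automated:
        components.append("part-task practice")    # see Step 10 in Chapter 13
    return components

print(supporting_components())                                   # default: nonrecurrent
print(supporting_components(recurrent=True, to_be_automated=True))
```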
The top-level skill can never be recurrent when using the Ten Steps. By default,
its constituent skills are also considered nonrecurrent. A constituent skill is
classified as recurrent only if it will be performed based on specific cognitive rules after training. Performance objectives for nonrecurrent constituent skills describe exit behaviors that vary from one problem situation to
another. However, this behavior remains effective because it is guided by
cognitive schemata that steer the problem-solving behavior (using cognitive
strategies) and allow for reasoning about the domain (using mental models).
After training, learners should possess the necessary schemata to find a solution whereby their behavior is effective, efficient, and flexibly adaptable to
new and often unfamiliar situations.
For example, in the video-production example, the constituent skill 'writing a script or synopsis' (see Figure 5.1) is classified as nonrecurrent because
this process differs for each new project (i.e., each new project requires a
new script or synopsis). Skills enabled by this particular nonrecurrent con-
stituent skill, like ‘developing the story’ and ‘creating the production plan,’
must also be nonrecurrent since nonrecurrent skills can never enable recur-
rent ones. In the training blueprint, nonrecurrent constituent skills require
the availability of supportive information for their development (Step 4).
Finally, there may also be reasons not to teach specific constituent skills.
An instructional designer may focus on performance objectives for which
learners have shown performance deficiencies, thereby excluding all other
objectives from the training program.
Scoring Rubrics
Table 5.3 Example of a partial scoring rubric for the complex skill 'producing
video content.' Each entry pairs a performance aspect (a standard from the
performance objective) with its scale of values.

Coaching people being filmed (nonrecurrent)
Attitude: Coaching is done with an empathetic attitude, actively listening to
participants' concerns and adjusting coaching strategies to alleviate discomfort.
Scale of values: Exemplary empathetic coaching / Shows active listening and
attempts to address concerns / Attentive but with room for improvement /
Failure to actively listen and address concerns. Please explain your answer.
Value: The coaching adheres to ethical and cultural sensitivity guidelines by
respecting personal boundaries and (cultural) sensitivities and avoiding
language or gestures that may be offensive or inappropriate.
Scale of values: Exceptional adherence / Strong adherence / Basic adherence /
Limited adherence. Please explain your answer.
Value: Communication is clear and effective, without jargon or technical terms
that may be confusing.
Scale of values: Effective and engaging communication / Consistently effective
communication / Mostly effective communication / Ineffective communication.
Please explain your answer.
Criterion: The participant briefing is completed at least 15 minutes before the
scheduled start time of the filming session.
Scale of values: Yes/No

Capturing audio (recurrent, not-to-be automated)
Criterion: Audio levels are set correctly, preventing clipping or distortion.
Scale of values: Yes/No
Value: Back-up and safeguarding protocols are followed to prevent data loss.
Scale of values: Sufficient / Almost sufficient / Insufficient

Operating camera and equipment (recurrent, to-be-automated)
Criterion: Switching between camera settings is faultless and very fast.
Scale of values: Yes/No

To be continued for all other relevant aspects of performance . . .
Monitoring Progress
Thus, it is not the level of the standards that changes throughout the
educational program but, in contrast, the complexity of the learning tasks
and the support and guidance provided for carrying out those tasks. Learn-
ers must first reach particular standards for a simple/completely guided and
supported task and later for increasingly more complex/less guided and
supported tasks. The standards remain the same. This has two important
advantages. First, learners can be introduced to all standards from the begin-
ning of the program, providing a good impression of the final attainment
level they must reach at the end of the program. In this way, learners know
what to aim for. Second, it ensures that assessments are based on a broad
and varied sample of learning tasks, which is important for reaching reliable
assessments. ‘One measure of performance is no measure of performance,’
and most studies conclude that reliable performance assessments require at
least 8–10 tasks (Kogan et al., 2009).
Figure 5.5 provides the standards-tasks matrix as a schematic representa-
tion of assessments gathered. In the columns are the learning tasks, and an
X indicates on which standards the tasks are assessed. Many of the tasks will
contain support and guidance, but at the end of each task class or level of
complexity, there will also be unsupported tasks (in Figure 5.5, T3, T7, and
T10). In the rows are the standards for performance. The hierarchical order-
ing of the standards follows the skill hierarchy (in Figure 5.5, there are four
hierarchical levels, but there can be more). The standards-tasks matrix illus-
trates three principles: (a) the distinction between standard-centered and
task-centered assessment, (b) the increasing number of relevant standards
when tasks become more complex, and (c) the opportunity to counterbalance the increasing number of relevant standards later in the teaching or
training program by assessing on a more global level.
Standard-centered assessment focuses on one particular standard: It gives
information about how a learner is doing and developing over tasks on one
particular performance aspect. It reflects the learner's mastery of distinct
aspects of performance and yields particularly important information for
identifying points for improvement. In contrast, task-centered assessment
considers all standards: It gives information about how a learner is doing
overall and how overall performance is developing over tasks (Sluijsmans
et al., 2008). It thus reflects the learner's mastery of the whole complex
skill and is more appropriate for making progress decisions, such as whether
a learner is ready to continue to a next, more complex, task class. The use of
task-centered and standard-centered assessments for selecting new learning
tasks is part of Step 3 of the Ten Steps and will be further discussed in the
next chapter (Section 6.5, Individualized Learning Trajectories).
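In computational terms, the two perspectives amount to reading the standards-tasks matrix row-wise versus column-wise. The sketch below uses a toy matrix with invented standards and scores (not taken from Figure 5.5) to make the distinction concrete:

```python
# Rows are standards (performance aspects); columns are learning tasks.
# A standard that is not relevant for a task simply has no entry (no X).
matrix = {
    "fitting composition":  {"T1": 3, "T2": 4, "T3": 5},
    "clear communication":  {"T2": 2, "T3": 3},
    "correct audio levels": {"T3": 4},
}

def standard_centered(standard: str) -> dict:
    """Row-wise view: development on one performance aspect across tasks."""
    return matrix[standard]

def task_centered(task: str) -> float:
    """Column-wise view: overall performance on one task, over all relevant standards."""
    scores = [row[task] for row in matrix.values() if task in row]
    return sum(scores) / len(scores)

print(standard_centered("fitting composition"))  # {'T1': 3, 'T2': 4, 'T3': 5}
print(task_centered("T3"))                       # 4.0 -> basis for progress decisions
```

The toy matrix also illustrates the principle discussed next: later, more complex tasks have more relevant standards (T1 has one entry, T3 has three).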
Second, Figure 5.5 shows that the number of relevant standards increases
when tasks become more complex. There are more Xs in the later columns than
in the early columns. Although there is a constant set of standards, this does
not imply that each separate learning task can be assessed based on the same
Figure 5.5 Standards-tasks matrix for monitoring learner progress.
set of standards. Not all standards are relevant for all tasks; more standards will
often become relevant for more complex tasks. For example, constituent skills
of the complex skill ‘creating the composition’ include ‘selecting lenses’, ‘light-
ing the scene’, and ‘operating camera and equipment’ (cf. Figure 5.1). Each
constituent skill has its performance objectives and standards. However, sup-
pose learners only produce videos with smartphones or camcorders in the first
task class. In that case, the standards for ‘selecting lenses’ are irrelevant here
because these cameras have no interchangeable lenses. More and more stand-
ards will become relevant as learners progress and tasks become more complex.
Finally, performance objectives and related standards are hierarchically
ordered according to the skill hierarchy, making it possible to vary the level
of detail of the assessments. Figure 5.6, for example, depicts a part of the
standards for ‘producing video content’ ordered according to its corre-
sponding skill hierarchy. Assessors use standards associated with constituent
skills high in the hierarchy to assess performance at a global level, standards
lower in the hierarchy to assess performance at an intermediate level, and
standards at the lowest level of the hierarchy to assess performance at a
highly detailed level (cf. the hierarchical levels of standards in the left side of
Figure 5.5). For example, only the standards at the level of the constituent
skills ‘creating the production plan,’ ‘producing footage,’ and ‘creating the
fnal product’ are considered for global assessment. Medium-level assess-
ment takes into account standards that are part of the performance objec-
tives for constituent skills lower in the hierarchy: In addition to standards
for ‘producing footage,’ for example, standards for ‘interacting with people
being flmed,’ ‘collaborating with crew,’ and ‘shooting video’ are also taken
into account (e.g., creating a friendly atmosphere [value], clear communica-
tion with crew [criterion], ftting composition [value], etc.). For a highly
detailed assessment, standards that are part of the performance objectives at
the lowest levels of the hierarchy are also taken into account: In addition to
standards for ‘shooting video,’ for example, standards for ‘creating the com-
position’ and ‘selecting lenses’ are also taken into account (e.g., following
conventional framing conventions such as the rule of thirds [value], and the
properties of the selected lens, such as aperture or shutter speed, align with
the artistic requirements [criterion]). In general, highly detailed assessments
will be limited to those constituent skills that are not yet mastered by the
learners or for which learners still have to improve their performance. This
makes it possible to counterbalance the fact that more and more standards
become relevant later as the training progresses: Standards learners have
already reached are then only assessed globally.
Figure 5.6 Standards for 'producing video content,' ordered according to the skill hierarchy. Standards at the top of the hierarchy are used for global assessment; standards for the preproduction and production skills (briefing with the client, developing the story, creating a schedule, interacting with people, collaborating with crew, shooting video) are used for medium-level assessment; and standards at the lowest levels (doing background research, writing a script or synopsis, creating a storyboard, coaching people, interviewing people, creating the composition) are used for detailed assessment.
Development Portfolios
The same development portfolio with the same scoring rubrics and the same
standards should be used throughout so that learners are exposed to all rel-
evant standards from the start. Although learners will not be assessed on all
standards immediately, the fnal attainment levels of the program are com-
municated right from the beginning, helping learners work towards them.
For example, Figure 5.7 presents a screenshot of an electronic devel-
opment portfolio used for student hairstylists (STEPP—Structured Task
Evaluation and Planning Portfolio; Kicken et al., 2009a, 2009b). After each
learning task—typically, styling a client’s or a lifelike model’s hair—the asses-
sor updates the development portfolio using an electronic device. If pre-
ferred, they update the portfolio after several tasks. The portfolio describes
each completed task in terms of its complexity, determined by the task class
it belongs to, and its level of support and guidance. This may range from
independently performed tasks, tasks performed partly by the employer or
trainer and partly by the learner, or even tasks performed by a more expe-
rienced peer student or trainer and only observed by the learner (i.e., a
modeling example). The assessor can also upload digital photographs of the
client’s hairstyle before and after the treatment. To assess performance on
a completed task, the assessor selects the constituent skills relevant to the
performed task from the left side of the screen (see left part of Figure 5.7).
The constituent skills of the complex skill ‘styling hair’ are hierarchically
ordered and include, for example, washing and shampooing, haircutting,
permanent waving, and coloring, but also communicating with clients, sell-
ing hair care products, operating the cash register, making appointments,
and so forth. Clicking the right mouse button on a selected constituent
skill shows the performance objective for this skill, including standards for
acceptable performance (i.e., criteria, values, attitudes). The assessor selects
a relevant aspect at the desired level of detail by clicking the left mouse
button, revealing a scoring rubric for this aspect on the right side of the
screen. This also makes it possible to judge the global aspects frst and, only
if desired, unfold them to judge the more specifc aspects until the highest
level of detail (i.e., the lowest level of the hierarchy) is reached. This ofers
the opportunity to use a high level of detail for judging new or problematic
aspects of the completed task and, simultaneously, use a more global level
for judging the already-mastered aspects of the task. At the chosen level of
detail, the assessor flls out the scoring rubric, the time spent on this aspect
of performance, and, in a separate text box, points for improvement. This
process repeats for all aspects relevant to the learning task being assessed,
and if more than one task is assessed, it is repeated for all tasks. The next
chapter will discuss using a development portfolio for individualizing the
sequence of learning tasks. In that case, several assessors may update the
portfolio: the teacher, the learner themselves (self-assessments), peers (peer
assessments), but also the employer, clients, and other stakeholders.
Glossary Terms
Step 3
Sequence Learning Tasks
6.1 Necessity
The Ten Steps approach organizes learning tasks in simple-to-complex cat-
egories or task classes. In exceptional cases, such as when designing short,
rigid training programs, you might skip this step and treat all learning tasks
as belonging to a single task class.
Although the Ten Steps uses authentic whole tasks, it is not the case
that the learner is “thrown into the deep end of the swimming pool to
either sink or swim”. When learning to swim—in the Netherlands, in any
event—children begin in very shallow water with simple techniques and dif-
ferent flotation devices. They progress through a series of eight stages until
they are capable of doing all of the things necessary for getting their first
swimming diploma (see Chapter 5), such as treading water for 15 seconds
fully clothed, alternately swimming two different strokes (breaststroke and
backstroke) twice for 25 meters each, and swimming underwater through
a ring for 3 meters. After getting this diploma, they can go on to their next
diplomas in the same way. The techniques they must learn and the steps
they must carry out are mostly organized from simple to complex through
the stages.
This chapter discusses methods for sequencing task classes that form sim-
ple-to-complex groups of learning tasks. For each task class, the designer
needs to specify the characteristics or features of its learning tasks. This
allows them to correctly categorize already developed learning tasks and
select and develop additional tasks. The progression of task classes—filled
with learning tasks—provides a global outline of the training program.
The structure of this chapter is as follows. Section 2 discusses whole-
task sequencing for defining task classes, including simplifying conditions,
emphasis manipulation, and knowledge progression methods. All these
methods start the training program with a task class containing whole
tasks representative of the simplest (that is to say, least complex) real-world
tasks. Section 3 discusses the issue of learner support for tasks within that
task class. Whereas learning tasks within the same task class are equally
complex, a high level of support is typically available for the first learning
task or tasks. This support gradually and systematically decreases during
the following tasks until no support is available for the final learning task or
tasks within the task class. Section 4 discusses part-task sequencing. Find-
ing a task class simple enough to start the training may be impossible in
rare cases. Then, it may be necessary to break the complex skill into mean-
ingfully interrelated clusters of constituent skills (i.e., the ‘parts’) that can
be addressed in the training program. Section 5 discusses the realization of
individualized learning trajectories. A structure of task classes, each con-
taining learning tasks with diferent levels of support and guidance, serves
as a task database. Based on performance assessments, tasks that best fit the
learning needs of individual learners are then selected from this database.
It is also possible and may be desirable to teach learners the self-directed
learning skills necessary for selecting suitable learning tasks (through what
is known as 'second-order scaffolding'). The chapter concludes with a
summary.
The next subsections discuss three whole-task methods for sequencing simple-
to-complex task classes. The simplifying conditions method identifies conditions
that simplify task performance and describes each task class in terms of whether
those conditions are present or not. The emphasis manipulation method identifies sets of constituent skills that may be emphasized or de-emphasized during
training and includes an increasing number of sets of emphasized skills for later
task classes. Finally, knowledge progression methods base a sequence of task classes
on the results of in-depth task and knowledge analyses.
Simplifying Conditions
Video length
Shorter videos are generally, but not always, easier to produce, requiring
less content and editing. Longer videos require more planning and content
creation. They may involve more detailed scripting and editing. Producing
longer videos, such as documentaries, is complex due to extensive planning,
production, and postproduction work.
Project goal
Location
Controlled, indoor settings are generally easier to work with, as they usually
offer more predictable conditions. Outdoor shoots can be challenging due
to weather, lighting, and environmental variables that are harder to control.
Available time
When there is plenty of time, there is room for multiple takes, adjustments,
and successive ‘tweaking,’ resulting in a smoother and less stressful produc-
tion process. A tight schedule limits the number of takes and adjustments,
potentially affecting the final quality of the production.
Participant dynamics
These simplifying conditions indicate the lower and upper limits of the pro-
gression of simple-to-complex task classes. The simplest task class would
contain learning tasks requiring learners to produce a short recap or after-
movie summarizing the atmosphere at an event, shot in indoor locations
with plenty of time and without the need for interviews or interaction with
participants. The most complex task class includes learning tasks requiring
learners to produce long videos such as documentaries in unpredictable loca-
tions and circumstances, in limited time, and with challenging participant
dynamics (e.g., inexperienced individuals, high-profile figures, animals). We
can then define a limited number of task classes within these limits by vary-
ing one or more simplifying conditions. For an example of task classes, see
the preliminary training blueprint in Table 6.2 later in this chapter.
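Because a task class is fully characterized by its values on the simplifying conditions, the progression can be written down as structured data. The sketch below encodes the two extremes of the video example; the field names are ours, and the values paraphrase the conditions above and in Table 6.2:

```python
from dataclasses import dataclass

@dataclass
class TaskClass:
    """A task class, described by its values on the simplifying conditions."""
    video_length: str
    project_goal: str
    location: str
    available_time: str
    participant_dynamics: str

simplest = TaskClass(
    video_length="1-3 minutes",
    project_goal="aftermovie or event recap",
    location="indoors",
    available_time="plenty of time",
    participant_dynamics="no interaction with participants",
)
most_complex = TaskClass(
    video_length="longer than 10 minutes",
    project_goal="documentary or interview video",
    location="unpredictable, possibly outdoors in bad weather",
    available_time="limited time",
    participant_dynamics="challenging (novices, high-profile figures, animals)",
)
# Intermediate task classes are defined by varying one or more conditions at a
# time, e.g., keeping locations indoors while lengthening the video.
```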
Consider the highly complex skill of patent examination as another exam-
ple. Patent examiners follow a two-year training program before they can
deal with all common types of patent applications. For new patent applica-
tions, they must first prepare a 'search report.' They carefully analyze the
application, search vast databases for already granted related patents, and
enter the results of this examination in a search report. If the report reveals
the presence of similar patents, the examiner advises the applicant to end the
application procedure. Otherwise, they conduct a ‘substantive examination’
and discuss necessary changes to the patent application with the applicant.
This process eventually leads to either granting or rejecting the application.
Several factors, including the following simplifying conditions, influence the
complexity of patent examination:
Given these simplifying conditions, the first task class might consist of learn-
ing tasks that require learners to handle a clear application involving a single,
independent claim with one clear and complete reply from the applicant and
no need for intermediate revision during the examination process. The final
task class would consist of learning tasks that require learners to handle an
unclear application involving several claims with multiple interdependen-
cies and many unclear and incomplete replies from the applicant and with a
need for intermediate revisions during the examination process. As shown
in Table 6.1, additional task classes with intermediate complexity levels may
be added between these two extremes by varying one or more of the sim-
plifying conditions (for constituent skills involved, see also Figure 6.4 later
in this chapter).
Emphasis Manipulation
In emphasis manipulation, learners perform the whole task from the begin-
ning to the end, but different task classes emphasize different sets of con-
stituent skills. This is also called ‘changing priorities’; for an example, see
Frèrejean et al. (2016). This approach allows learners to focus on the
emphasized aspects of the task without losing sight of the whole task. Learn-
ers also experience and/or learn the costs of carrying out the de-emphasized
aspects of the task. For example, when teaching medical students to examine
a patient, particular phases of the program may emphasize specifc diag-
nostic skills. This not only helps them further develop their diagnostic
skills but also allows them to experience the costs of developing other skills
because their interpersonal and communication skills may suffer from the
emphasis on diagnostic skills. The emphasis manipulation approach involves
emphasizing and de-emphasizing different (sets of) constituent skills during training, requiring learners to manage their priorities and change the focus of their attention accordingly.
Table 6.1 Example of task classes for training the highly complex skill of patent
examination.
Task Class 1
Learning tasks that require learners to handle a clear application involving
a single independent claim with one clear and complete reply from the
applicant and no need for intermediate revision during the examination
process. Learners must prepare the search report and carry out the
substantive examination.
Task Class 2
Learning tasks that require learners to handle a clear application involving a
single, independent claim with many unclear and incomplete replies from
the applicant and a need for intermediate revisions during the examination
process. Learners must prepare the search report and carry out the
substantive examination.
Task Class 3
Learning tasks that require learners to handle an unclear application involving
several claims with multiple dependencies and many unclear and incomplete
replies from the applicant but no need for intermediate revisions during the
examination process. Learners must prepare the search report and carry
out the substantive examination.
More task classes may be inserted, as necessary
Task Class n
Learning tasks that require learners to handle an unclear application involving
several claims with multiple dependencies and many unclear and incomplete
replies from the applicant and with a need for intermediate revisions during
the examination process. Learners must prepare the search report and
carry out the substantive examination.
When student teachers learn to prepare and give lessons (i.e., the whole task), relevant constituent
skills are, for example, presenting subject matter, questioning learners, lead-
ing group discussions, and so forth. Four possible task classes based upon
the emphasis manipulation approach would be to:
Note that the emphasis manipulation approach typically assumes that the
subsequent task class is inclusive, meaning there is a logical order or pro-
gression of tasks. In other words, once learners have learned to prepare and
present a lesson with a subject matter focus, they will continue to do this in
the next task class where the focus is on questioning (i.e., it is impossible to
prepare and present a lesson without content!).
Another example, but this time without such a logical order, is the replace-
ment of jet engines (see Figure 6.2). This is a complex but straightforward
process with few simplifying conditions. One can hardly ask a maintenance
person to replace a jet engine but leave out some of its parts! In this case,
emphasis manipulation may be a good alternative. The main phases in the
process are removing the old engines, testing electrical, fuel, and mechanical
connections to ensure that they can properly supply the new engines, install-
ing the new engines, and running a test program. A training program for
aircraft maintenance could use the following task classes:
Knowledge Progression
Learning tasks within one task class are always equivalent in that they all draw
upon the same body of knowledge for carrying them out. More complex
task classes require more detailed or enriched knowledge than simpler ones.
Consequently, each task class can be characterized by its underlying body of
knowledge, or conversely, a progression of ‘bodies of knowledge’ can be used
to define or refine task classes. However, this requires an in-depth task and
knowledge analysis. One approach involves analyzing a progression of cogni-
tive strategies. These strategies specify how to effectively approach tasks in the
domain. The progression starts with simple, straightforward strategies and
ends with complex, more elaborate ones. Each cognitive strategy in the pro-
gression then defines a task class, containing learning tasks that can be carried out by applying that cognitive strategy at the given level of specification.
See Chapter 8 for further discussion and examples. Another approach entails
analyzing a progression of mental models that specify the organization of the
domain. Again, the progression starts with rudimentary/basic models and
proceeds toward highly detailed ones. Each mental model is used to define
a task class containing learning tasks that can be solved by reasoning based
on the associated mental model. More details and examples are provided in
Chapter 9.
In conclusion, it is important to highlight that different methods for
whole-task sequencing can be easily combined. Usually, simplifying condi-
tions is tried out first. If it is hard to find sufficient simplifying conditions,
either emphasis manipulation or a combination of simplifying conditions
with emphasis manipulation may be applicable. For instance, the frst task
class for training patent examination (see the top row in Table 6.1) may
be further divided into simpler task classes by emphasizing the analysis
of applications, carrying out searches, recording pre-examination results,
performance of substantive examinations, and finally, all those aspects at
the same time (and in that order). Knowledge progression is suitable if in-depth task and knowledge analyses have been conducted in Steps 5 and 6.
If these analysis results are available, they are particularly useful for refning
an already existing global description of task classes.
Table 6.2 A preliminary training blueprint: Task classes and learning tasks with
decreasing support in each task class for the complex skill ‘producing
video content’.
Task Class 1: Learners produce videos for fictional clients under the
following conditions.
• The video length is 1–3 minutes
• The clients desire aftermovies or event recaps, summarizing the
atmosphere at an event
• Locations are indoors
• There is plenty of time for the recording
• No interaction with other on-camera participants
Learning Task 1.1
Support: Worked-out example
Guidance: Performance constraints
Learners receive a production plan, intermediate footage, and the final video
of an existing aftermovie. They evaluate the quality of each aspect, but
their evaluations must be approved before they can continue with the next
aspect.
Learning Task 1.2
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and intermediate footage. They must
select the footage and edit the video into the final product. A tutor guides
learners in studying the given materials and using the postproduction
software.
Learning Task 1.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert created a recap
video for an (indoor) automotive show. In groups, students imitate this but
for a local exposition.
Learning Task 1.4
Support: Conventional task
Guidance: None
Learners create an individual recap video for an indoor event of their
choosing.
Task Class 2: Learners produce videos for fictional clients under the
following conditions:
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to
work with)
Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video
promoting a new coffee machine. They follow a process worksheet to
record footage and create the final video.
Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of
artificial intelligence. A tutor helps them work backward to explain
critical decisions in the production phase and develop a storyboard that
fits the video and meets the client’s requirements.
Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a
short social media advertisement video for a small online clothing store.
Learners remake the ad for a small online art store.
Learning Task 2.4
Support: Conventional task
Guidance: Tutoring
Under guidance from a tutor, learners create a promotional video
highlighting the products or services of a local store.
Task Class 3: Learners produce videos for fictional clients under the
following conditions.
• The video length is increased to 5–10 minutes
• The clients desire informational or educational videos
• Locations are indoor or outdoor
• There is plenty of time for the recording
• Participant dynamics are more challenging (e.g., inexperienced/nervous
participants)
This task class employs the completion strategy.
Learning Task 3.1: Modeling example
Support: Worked-out example
Guidance: Modeling
Learners observe an expert thinking aloud while working outdoors with
experienced and inexperienced cyclists to create an informational video
about safe cycling.
Learning Task 3.2
Support: Completion task
Guidance: Process worksheet
Learners receive a production plan and footage with bad takes and good
takes. They must select good takes and edit them into a video informing
patients about a medical product. A process worksheet provides guidance.
Learning Task 3.3
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and footage of an expert (i.e., actor)
explaining content but showing nervousness and making mistakes. Learners
must reshoot the footage with the actor, coaching them to arrive at the
desired result and finish the final video.
Learning Task 3.4
Support: Completion task
Guidance: Performance constraints
Learners receive a synopsis for a training video on a construction site and
must write the script, record the footage, and create the final video. Each
step requires approval from a teacher before they can continue.
Learning Task 3.5
Support: Conventional task
Guidance: None
An expert in home organization and decluttering with no on-camera
experience wants an explainer video. Learners carry out all phases to
create the final video for the client.
Task Class 4: Learners produce videos for fictional clients under the
following conditions.
• The video length is longer than 10 minutes
• The clients desire documentaries or interview videos
• Locations can be outdoors in bad weather
• There is limited time for the recording
• Participant dynamics are challenging (e.g., interviewing people, working
with animals)
Learning Task 4.1
Support: Worked-out example
Guidance: Tutoring
Learners study the production plans and completed videos of three
documentaries. A tutor facilitates a group discussion about recording
outdoors, storytelling, interviewing techniques, etc.
Learning Task 4.2
Support: Nonspecific goal task
Guidance: Tutoring
Learners receive a script for a documentary about a historic outdoor
location. They visit the site and simulate various challenging situations,
such as unexpected weather conditions, breaking equipment, etc. Learners
must develop approaches for dealing with such challenges.
Learning Task 4.3
Support: Conventional task
Guidance: Process worksheet
Learners create a 15-minute documentary about a farmer, requiring them to
interview, work with animals, and record outdoors. They receive a process
worksheet for guidance.
Learning Task 4.4
Support: Conventional task
Guidance: None
Learners create a 30-minute documentary on a topic of their choosing.
Figure 6.4 Three skill clusters for the highly complex skill 'examining patent applications.' Part A consists of 'preparing search reports' and lower-level skills; part B consists of 'issuing communications or votes'; and part C consists of 're-examining applications' and lower-level skills.
reports’ (called, for generalization, part A instead of branch A), ‘issuing com-
munications or votes’ (part B), and ‘re-examining applications’ (part C).
Several studies have shown that backward chaining with snowballing can
be a very effective sequencing strategy. In an early study, Gropper (1973)
used it to teach instructional systems design. Initially, learners learn to try
out and revise instructional materials (i.e., traditionally, the last part of the
instructional design process). They are given model outputs for design
tasks ranging from task descriptions to materials development. In subse-
quent stages, students learn to design and develop instructional materials.
The strategy proved very effective because learners had the opportunity to
inspect several model products before being required to perform tasks such
as strategy formulation, sequencing, or task analysis themselves. Along the
same lines, Van Merriënboer and Krammer (1987) described a backward
chaining approach with snowballing for teaching computer programming.
Initially, learners evaluated existing software designs and computer programs
through testing, reading, and hand tracing (i.e., traditionally, the last part of
the development process). In the second phase, they modifed, completed,
and scaled up existing software designs and computer programs. Only in the
third phase did they design and develop new software and computer pro-
grams from scratch. The strategy was much more effective than traditional
forward chaining, probably because learners could better base their perfor-
mance on the many models and example programs they had encountered in
the earlier phases.
[Table, layout lost in extraction: a matrix contrasting a whole-part sequence (left: task classes as wholes, subdivided into skill clusters) with a part-whole sequence (right: skill clusters as parts, subdivided into task classes). In both, each cell contains learning tasks progressing from high support, through low support, to no support, and the parts follow backward chaining with snowballing: first C-AB, then BC-A, then ABC.
Notes: Superscripts refer to the complexity of task classes or learning tasks: ss = simple task class and simple skill cluster; sm = simple task class and medium skill cluster or vice versa; mc = medium task class and complex skill cluster or vice versa; and so forth. Subscripts refer to the output of previous skills that is given to the learner: C-AB = perform C based on given output from A and B, and BC-A = perform B and C based on given output from A.]
for intermediate revisions during the examination process. Each task class
is now further divided into three subclasses representing parts of the task
that are carried out in the following order: Learners (a) re-examine applica-
tions based on given search reports and communications/votes; (b) issue
communications/votes and re-examine applications based on given search
reports, and (c) prepare search reports, issue communications/votes, and
re-examine applications. In this approach, each task class ends with the
whole task at an increasingly higher level of complexity.
Table 6.5 Two task classes (wholes) with three skill clusters (parts) each for
the examination of patents, based on a whole-part sequence and
backward chaining with snowballing for the parts.
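The ordering just described, first part C with given output from A and B, then parts B and C with given output from A, then the whole task, can be generated mechanically for any list of parts. A minimal sketch (the function name is ours):

```python
def snowball(parts):
    """Backward chaining with snowballing: yield (parts performed, outputs given)."""
    for i in range(len(parts) - 1, -1, -1):
        yield parts[i:], parts[:i]  # perform the tail; the head is given as output

for perform, given in snowball(["A", "B", "C"]):
    print(f"perform {'+'.join(perform)} given output from {'+'.join(given) or 'nothing'}")
# perform C given output from A+B
# perform B+C given output from A
# perform A+B+C given output from nothing
```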
The development portfolio gathers all assessment results (see Step 2 in the previous chapter). Based on the information in the development portfolio, a new task is selected from the task
database that best fulfills the learner's learning needs; that is, a task that
offers the best opportunity to work on identified points of improvement
and/or that offers further practice on an optimal level of complexity, support, and guidance. After receiving the learning task, the learner works on
this new task, and the cycle begins again.
The following subsections elaborate on three main elements of Fig-
ure 6.6: (1) performance assessment, (2) the development portfolio, and
(3) task selection. First, for assessing performance on relevant standards,
we will argue that using different assessment formats and assessors benefits
the cyclical process. Second, for the development portfolio, the protocol
portfolio scoring will be discussed as a systematic approach to gathering
assessment results, storing them, and using them for task selection. Third,
second-order scaffolding is discussed as a systematic way to help learners
develop self-directed learning skills necessary for fruitful task selection.
Self-assessments and peer assessments can replace some of the assessments made by teachers and others, provided that
the balance does not tip in the direction of self and peer assessment (Könings et al., 2019; Sluijsmans et al., 2002b, 2004).
Table 6.6 Example of protocol portfolio scoring. Each row identifies the learning task, its format, and the assessor, followed by the scores per aspect (standard-centered standards for eight aspects; maximum score = 6 per aspect), the running averages, the per-aspect decisions, and finally the horizontal standard (task-centered minimum score), the mean score, and the task-centered decision.

Task class 1
Task 1.1 (WOE-MCT, assessor A-HK): scores 3, 3, 4, 2, 3; running averages 3.0, 3.0, 4.0, 2.0, 3.0; aspect decisions −, +, +, −, −; horizontal standard 3.74; mean score 3.0; decision −.
Task 1.2 (COM-SJT, assessor SA): scores 5, 2, 1, 3, 3, 4, 2; running averages 4.0, 2.0, 2.0, 3.0, 3.5, 3.0, 2.0, 3.0; aspect decisions −, −, −, −, +, −, −, −; horizontal standard 3.71; mean score 2.8; decision −.
Task 1.3 (WOE-WST, assessor A-GS): scores 6, 5, 5, 5, 6, 6, 5; running averages 4.7, 3.5, 3.0, 3.0, 4.0, 4.0, 4.0, 4.0; aspect decisions +, +, +, −, +, +, +, −; horizontal standard 3.71; mean score 3.8; decision +.
Task 1.4 (CON-POJ, assessor PA): scores 3, 4, 2, 4, 3, 4; running averages 4.3, 3.7, 3.0, 2.5, 4.0, 4.0, 3.7, 4.0; aspect decisions −, +, +, −, +, +, −, −; horizontal standard 3.71; mean score 3.7; decision −.
Task 1.5 (COM-WST, assessor A-HK): scores 5, 4, 5, 6, 5, 6, 5; running averages 4.3, 4.0, 3.3, 3.3, 4.4, 4.3, 4.3, 4.3; aspect decisions −, +, +, −, +, +, +, −; horizontal standard 3.71; mean score 4.0; decision +.
Task 1.6 (CON-POJ, assessor A-AH): scores 5, 6, 6, 5, 6, 6; running averages 4.4, 4.0, 3.8, 4.0, 4.5, 4.3, 4.6, 4.6; aspect decisions −, +, +, +, +, +, +, +; horizontal standard 3.71; mean score 4.3; decision +.
Task class 2
Task 2.1 (WOP-MCT, assessor PA): scores 2, 5, 3, 3, 5, 4; running averages 2.0, 5.0, 3.0, 3.0, 5.0, 4.0; aspect decisions −, +, −, −, +, −; horizontal standard 3.9; mean score 3.7; decision −.
Etcetera.

Note:
Format: WOE-MCT = worked-out example with multiple-choice test; COM-SJT = completion assignment with situational judgment test; WOE-WST = worked-out example with work-sample test; CON-POJ = conventional task with performance on-the-job assessment; COM-WST = completion assignment with work-sample test; WOP-MCT = worked-out example plus process support with multiple-choice test.
Assessor: SA = self-assessment; PA = peer assessment; A = assessment by others (suffix is the initials of the other assessor).
The final three columns in Table 6.6 concern the task-centered assessments.
Here, we read the table horizontally. The task-centered minimum score is the
average of the individual minimum scores for all standards relevant to that
learning task. The mean score is the average of all measured assessment scores
on that particular learning task. Continuing with the previous example, the
mean assessment score of assessor HK for the first learning task is 3.0 (see
Table 6.6), calculated by summing the individual scores (15.0) and dividing
by the number of scores (5). This score falls below the minimum score of
3.74 for that learning task (also calculated over five measured aspects). This
results in a negative decision, meaning the next task should again include
learner support. The learner self-assesses the second learning task with an
average score of 2.8, which remains below the task-centered minimum score
of 3.71 (the average of the minimum scores for all eight standards). This
suggests a decline in performance, and as a result, the next task will provide
additional support: It will be a worked-out example instead of a completion
assignment. Assessor GS gives an average assessment score of 3.8 for the
third learning task, above the task-centered minimum score. Therefore, the
next learning task will be a conventional task without support or guidance.
A peer gives an average assessment score of 3.7 for the fourth, unsupported
task, which is still a little below the task-centered minimum score. Therefore,
the next task will again provide support and guidance. Assessor HK gives an
average assessment score of 4.0 for the fifth task, above the task-centered
minimum score. Consequently, the next task is a conventional task without
support. Assessor AH gives an average assessment of 4.3 for the sixth, unsup-
ported task, well above the task-centered minimum score. Consequently, the
next learning task will be more complex and part of a second task class.
This example shows that task-centered assessments are critical for select-
ing learning tasks with the right amount of support/guidance and at the
right level of complexity. Support decreases when task-centered assessment
results improve, support increases when task-centered assessment results
decline, and support is stopped when task-centered assessment results
exceed the standard (cf. Figure 2.3 in Chapter 2). Task-centered assess-
ment of unsupported tasks is critical for determining the desired level of
complexity. The learner progresses to a next task class or complexity level
only when assessment results for unsupported tasks are above the task-cen-
tered minimum score. This process repeats itself until the learner success-
fully performs the conventional, unsupported tasks in the most complex
task class. Thus, it yields a unique learning trajectory optimized for an indi-
vidual learner.
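The decision logic of this walk-through fits in a few lines of code. The sketch below restates the rules as described in this section; the function names are ours, and the example numbers come from Table 6.6:

```python
def passes(scores, minimum):
    """Task-centered decision: does the mean score reach the horizontal standard?"""
    return sum(scores) / len(scores) >= minimum

def next_task(passed, was_unsupported):
    """Character of the next learning task, following the rules in Section 6.5."""
    if not passed:
        return "same complexity, with (more) support and guidance"
    if was_unsupported:
        return "next, more complex task class"
    return "same complexity, with less support"

# Task 1.1 (assessor HK): mean 15.0 / 5 = 3.0, below the minimum of 3.74.
print(passes([3, 3, 4, 2, 3], 3.74))                  # False -> keep support
# Task 1.6 was unsupported and passed (mean 4.3 > 3.71).
print(next_task(passed=True, was_unsupported=True))   # next task class
```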
Glossary Terms
Step 4
Design Supportive Information
7.1 Necessity
Supportive information is one of the four principal design components,
helping learners perform the nonrecurrent aspects of the learning tasks. We
strongly recommend performing this step.
After designing learning tasks, the next step is designing supportive infor-
mation for carrying out those tasks. This chapter presents guidelines for
this. It concerns the second design component and bridges the gap between
what learners already know and what they should know to fruitfully work
on the nonrecurrent aspects of learning tasks within a particular task class.
Supportive information refers to (a) information on the organization of the
task domain and how to solve problems within that domain, (b) examples
illustrating this domain-specifc information, and (c) cognitive feedback on
the quality of the task performance. All instructional methods for present-
ing supportive information promote schema construction through elabora-
tion. In other words, they help learners establish meaningful relationships
between newly presented information elements and connect this to what
they already know (i.e., their prior knowledge; Wetzels et al., 2011). This
elaboration process yields rich cognitive schemata that relate many elements
to many others. Such schemata allow for deep understanding and increase
the availability and accessibility of task-related knowledge in long-term
memory.
The structure of this chapter is as follows. Section 2 discusses the nature
of general information about how to solve problems in the task domain
(i.e., Systematic Approaches to Problem solving, or SAPs) and the organization
of this domain (i.e., domain models). Section 3 describes how to illustrate
SAPs with modeling examples and illustrate domain models with case stud-
ies. Section 4 discusses deductive and inductive presentation strategies for
combining general SAPs and domain models with specifc modeling exam-
ples and case studies. It also describes inquisitory methods such as guided
discovery learning and resource-based learning. Section 5 presents guide-
lines for providing cognitive feedback on the quality of nonrecurrent aspects
of task performance. Section 6 discusses suitable media for presenting sup-
portive information, including multimedia, hypermedia, microworlds,
epistemic games, and social media. Section 7 explains how to position sup-
portive information in the training blueprint. The chapter concludes with a
summary.
SAPs tell learners how to best solve problems in a particular task domain. To
this end, they provide an overview of the phases and, where necessary, the
subphases needed to reach particular goals and subgoals. They depict the
temporal order of these phases and subphases and indicate how particular
phases and subphases may depend on the outcomes of prior phases. In addi-
tion, provided rules-of-thumb or heuristics help the learner reach the goals
for each phase or subphase.
Figure 7.1 gives an example of a SAP for a video content producer. This
SAP for ‘developing a story’ comes as a fowchart (also called a SAP chart)
in which particular phases depend on the success or failure of one or more
preceding phases. According to this SAP, ‘creating a storyboard’ happens
only if needed (e.g., for communicating complex narratives with many visual
elements and transitions to crew or clients); otherwise, the video-content
producer only ‘refnes the script.’ The SAP on the right side of Figure 7.1
is a further specifcation of the frst phase of the SAP on the left (‘defne the
purpose of the video’) and depicts two subphases. Usually, you will present
global SAPs for early task classes and increasingly more detailed SAPs, with
more specific subphases, for later task classes.
The SAP on the right side of Figure 7.1 also provides some helpful
rules-of-thumb for identifying the client’s objectives (i.e., goal of Phase 1)
and the key message (i.e., goal of Phase 2). It is best to provide a prescrip-
tive formulation of the rules-of-thumb such as ‘to reach . . . , you should
try to do . . .’—and to discuss why to use these rules, when to use them,
and how to use them. Furthermore, it may be helpful to give SAPs a name
(Systematic Search, Split Half Method, Methodical Medical Acting, etc.)
to easily refer to them in later instructional materials and discussions with
learners.
Instructional methods for presenting SAPs must help learners establish,
in a process of elaboration (see Box 7.1), meaningful relationships between
newly presented information elements (e.g., phases, goals, rules) and mean-
ingful relationships between those new information elements and already
available prior knowledge. For the phases in a SAP, the chosen method or
Meaningful Learning
The best way to increase learners’ memory for new information is
to have them elaborate on the instructional material. This involves
having them enrich or embellish the new information with what
they already know. When learners elaborate, they first search their
memory for general cognitive schemata that may provide a cogni-
tive structure for understanding the information in general terms
and for concrete memories that may provide a useful analogy (“Oh,
I came across something like this before”). These schemata con-
nect to the new information. Elements from the retrieved schemata
that are not part of the new information are now linked to it. It is a
form of meaningful learning because the learners consciously estab-
lish connections between the new material and one or more exist-
ing schemata in their memory. Thus, learners use what they already
know about a topic to help them structure and understand the new
information.
Structural Understanding
The main result of elaboration is a cognitive schema that enriches the
new information, with many interconnections within and extending from
that schema to other schemata. This network of connections will facili-
tate retrieving and using the schema because it offers multiple retrieval
routes to access specifc information. In short, elaboration results in a rich
knowledge base that provides a structural understanding of the subject
matter. The knowledge base is well-suited for manipulation by controlled
processes. In other words, learners may employ it to guide problem-solv-
ing behavior, reason about the domain, and make decisions.
Elaboration Strategies
Like induction, elaboration is a strategic and controlled cognitive
process requiring conscious processing from the learners. It can be
learned and includes subprocesses such as exploring how new infor-
mation relates to things learned in other contexts, explaining how
new information fits in with things learned before ('self-explana-
tion’), or asking how to apply the information in other contexts. Col-
laboration between learners and group discussion might stimulate
elaboration. In a collaborative setting, learners often must articulate
or clarify their ideas to the other group member(s), helping them
deepen their understanding of the domain. Group discussions may
also benefit the activation of relevant prior knowledge and so facilitate
elaboration.
Tacit Knowledge
Cognitive schemata resulting from elaboration (or induction) can
guide problem-solving behavior, making informed decisions, and rea-
soning about a domain. However, with consistent and repeated prac-
tice, cognitive rules may develop. These rules eventually produce the
effect of using the cognitive schema directly without referring to this
schema anymore (the formation of such cognitive rules is discussed in
Box 10.1). For many schemata, this transition to cognitive rules will
never occur. For instance, a troubleshooter might construct a schema
for dealing with a system malfunction based on one or two experi-
ences. When faced with a new problem situation, they might use this
(concrete) schema to reason about the system (“Oh yes, I encountered
something like this seven or eight years ago”). This schema will never
become automated because it is not used frequently enough. How-
ever, people may also construct schemata that are repeatedly applied
afterward. This can lead to tacit knowledge (literally: silent or not
spoken; also called implicit knowledge, 'tricks-of-the-trade,' or 'Fingerspitzengefühl'), which is characterized by its heuristic nature and the
difficulty of articulating it; you 'feel' that something is the way it is. The explana-
tion for this phenomenon is that people initially construct advanced
schemata through elaboration or induction and subsequently form
cognitive rules based on direct experience. Afterwards, the schemata
are no longer used as such and quickly become difficult to articulate.
The cognitive rules directly drive performance but are not open to
conscious inspection.
Further Reading
Kalyuga, S. (2009). Knowledge elaboration: A cognitive load perspec-
tive. Learning and Instruction, 19, 402–410.
https://doi.org/10.1016/j.learninstruc.2009.02.003
Van Boxtel, C., van der Linden, J., & Kanselaar, G. (2000). Collabo-
rative learning tasks and the elaboration of conceptual knowledge.
Learning and Instruction, 10, 311–330.
https://doi.org/10.1016/S0959-4752(00)00002-5
methods should stress the temporal organization of the goals and subgoals
the task performer must reach. For instance, when learners study a SAP,
the instruction should explain why particular phases need to be performed
before performing other phases (e.g., “an aqueous solution must be heated
to a certain temperature before the reagent can be added because the chem-
ical reaction that you want to occur is temperature dependent”) or indicate
the effects and problems of rearranging phases (e.g., "if you first add the
reagent to the aqueous solution and then heat it, it will cause the reagent
to bind to a specific molecule and lose its function"). When presenting
rules-of-thumb, instructional methods should stress the change relationship
between an ‘efect’—the goal that must be reached—and its ‘cause’—what
must be done to reach this goal. For instance, the instruction may explain
how particular rules-of-thumb bring about particular desired states of affairs
(e.g., “if you add a certain reagent to an aqueous solution, then the calcium
will precipitate out of the solution") or they may predict the effects of the
use—or lack of use—of particular rules-of-thumb (e.g., “if you don’t heat
the solution, then the reagent will not function, because the working of the
reagent is temperature dependent”).
Domain models indicate how things are organized in a specific world (i.e.,
the relevant domain), specifying the relevant elements in a domain as well as
the relationships between those elements. We can distinguish three types of
models: conceptual, structural, and causal.
Conceptual models are the most common type of model encountered.
They have concepts as their elements, allowing for classifying and describing
objects, events, and activities. Conceptual models help learners answer the
question: What is this? For instance, knowledge about several types of medi-
cines, treatments, side effects, and contraindications, along with how these differ from each other, helps a doctor determine the possibilities and risks associated with different courses of action. Knowledge of different types of story
arcs helps a video-content producer organize content and engage the viewer
when writing a script. Analogous to instructional methods for presenting
SAPs, methods for presenting domain models should help learners establish
meaningful relationships between newly presented elements. Table 7.1 sum-
marizes some popular methods for establishing such relationships.
Causal models focus on how objects, events, or activities affect each other and help learners interpret processes, give explanations, and make predictions. Such models help learners answer the question: How does this work? A principle is the simplest causal model that relates an action or event to an effect. A principle allows learners to draw implications by predicting a certain phenomenon that is the effect of a particular change (e.g., if A, then B) or to make inferences by explaining a phenomenon as the effect of a particular
change (e.g., B has not happened, so A was probably not the case). Principles
may refer to very general change relationships, in which case they often take
the form of laws (e.g., the law of supply and demand, law of conservation of
energy) or to highly specific relationships in one particular technical system
(e.g., opening valve C leads to an increase of steam supply to component X).
Causal models that explain natural phenomena through an interrelated
set of principles are called theories; causal models that explain the working
of engineered systems are called functional models. For instance, knowledge
about how components of a chemical plant function and how each compo-
nent affects all other components helps process operators with their trouble-
shooting tasks. For the presentation of causal models, Methods 7 and 8 in Table 7.1 describe additional ways to stress relationships.
Note that the expository methods in Table 7.1 do not provide any ‘practice’
for the learners because they do not explicitly stimulate learners to process
the new information actively. The methods are typically used in expository
texts and traditional one-way lectures. An enormous amount of educational
literature on writing instructional texts and preparing instructional presen-
tations discusses many more explanatory methods than the ones presented
in Table 7.1 (e.g., Hartley, 1994). We want to emphasize that, while the
Ten Steps recognizes the importance of knowledge acquisition strategies
such as desirable difficulties (Bjork, 1994) and generative learning strategies
(Fiorella & Mayer, 2016; Wittrock, 1989), it does not prioritize them over
whole-task practice and inductive learning. The Ten Steps is written from
the perspective that education traditionally focused too much on knowledge
acquisition instead of developing complex skills to carry out tasks. While
books are filled with methods for stimulating memorization and comprehension, the Ten Steps does not consider these methods sufficient for
developing complex skills. Rather than starting the design by describing the
supportive information and selecting the instructional methods to acquire
the necessary knowledge, the Ten Steps considers these methods supportive
to developing complex skills, considering them only after designing learning
tasks. Therefore, we do not extensively discuss methods such as retrieval or
spaced practice here. However, to accommodate interested readers, we pre-
sent Table 7.2, which describes eight generative learning activities presented
by Logan Fiorella and Richard Mayer (2016) that may foster elaboration by
requiring learners to actively transform—cognitively and also, sometimes,
physically—new information into something else; for example, to express
the meaning of the newly presented information ‘in their own words’ or
transform verbal information (e.g., a text or a lecture) into a visual represen-
tation (e.g., a concept map or a picture).
Table 7.2 Eight generative learning strategies by Fiorella and Mayer (2016).
SAPs and domain models and memories of concrete cases exemplifying this
knowledge. In real-life tasks, people draw upon their general knowledge of
how to approach problems in that task domain, how the domain is organ-
ized, and their specifc memories of concrete cases related to similar tasks
they previously carried out. In case-based reasoning, the memories serve
as an analogy for solving the problem. In inductive learning (refer back to
Box 4.1 in Chapter 4 for a short description of this basic learning process),
they may serve to refine the general knowledge.
In instructional materials, modeling examples and case studies (i.e.,
worked-out examples with equivocal solutions) are the external counter-
parts of internal memories, providing a bridge between the supportive
information and the learning tasks. When providing supportive informa-
tion, modeling examples illustrate SAPs, and case studies illustrate domain
models. At the same time, these same two approaches may be seen as learn-
ing tasks with maximum task support (see Figure 7.2). They are important
for learners at all levels of expertise, ranging from beginners to true experts.
For instance, it is known that Tiger Woods, when he was the best golfer in the world, still extensively studied videotapes of himself and his opponents (even though he had probably played against them many times) to refine his cognitive strategies for approaching problems and situations during a match. He also meticulously studied the layout of golf courses around the world to refine his mental models, allowing him to figure out how he could best play them (even though he had probably played these courses numerous times). In other words, even extremely expert task-performers fine-tune their cognitive strategies and mental models by extensively studying concrete examples.
Figure 7.2 Modeling examples and case studies as a bridge between learning
tasks and supportive information.
Modeling Examples
Case Studies
Diferent kinds of case studies can be used, depending on the domain model
they exemplify. Case studies that illustrate conceptual models will typically
describe a concrete object, event, or activity exemplifying the model. Stu-
dents learning to produce video content may study a variety of example
videos to develop a sense of composition, different shot transitions, types
of background music, etc. Architecture students may study successful (or
particularly unsuccessful) building designs to develop mental models of con-
cepts such as sight lines, ventilation obstacles, environmental friendliness, etc.
Case studies that illustrate structural models may be artifacts or descrip-
tions of those artifacts designed to reach particular goals. Students learning
to produce video content may study a deconstructed camera to determine
how diferent parts, such as lenses and sensors, are organized and related.
A more elaborated model of a camera’s internal organization may help them
record better quality footage. Architecture students may visit office build-
ings to study how particular goals have or have not been met using certain—
often, prefabricated—templates or elements in particular ways. An improved
model of possible design and construction techniques may help them design
better buildings.
Case studies that illustrate causal models may be real-life processes or
technical systems. Students learning to produce video content may study example videos with different exposures, shutter speeds, and apertures and how these settings affect the footage. A more detailed mental model of how these camera settings affect the recording may help them improve their content. Architecture students may study a detailed description of the events that led to a disaster or near disaster in an office building. A better mental model of possible fault trees (see Section 9.2) may help them identify weaknesses in building processes or even design safer buildings.
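As a minimal illustration (ours, not part of the guiding example) of how these camera settings interact: at a fixed ISO, exposure can be summarized by the exposure value EV = log2(N²/t), where N is the f-number and t is the shutter time in seconds. Setting pairs with equal EV admit the same amount of light but differ in depth of field and motion blur:

import math

def exposure_value(f_number, shutter_time_s):
    # Exposure value at a fixed ISO: EV = log2(N^2 / t).
    return math.log2(f_number ** 2 / shutter_time_s)

# Equal EV means equal overall exposure; the second pair trades a
# smaller aperture (deeper focus) for a slower shutter (more blur).
print(round(exposure_value(2.0, 1 / 100), 2))  # f/2 at 1/100 s -> 8.64
print(round(exposure_value(4.0, 1 / 25), 2))   # f/4 at 1/25 s  -> 8.64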
Table 7.3 Inquisitory methods that help learners activate their prior knowl-
edge and establish meaningful relationships in presented supportive
information.
Figure 7.3 depicts the four basic strategies for presenting supportive information. The deductive-expository strategy (left upper box in Figure 7.3) represents a specific type of 'direct instruction'—pure expository instruction—and is very time-effective. But it has some serious drawbacks. Learners with little or no relevant prior knowledge may have difficulties understanding the general information. And learners are neither invited nor required to elaborate on the presented information nor are they stimulated to connect it to what they already know. Therefore, it is best to use a deductive-expository strategy if instructional time is severely limited, if learners already have ample relevant prior knowledge, and/or if a deep level of understanding is not strictly necessary.
Promoting Reflection
Comparing one’s models and solutions with the models and solutions of
experts may be especially useful in the early phases of the learning process
because it helps to construct a basic understanding of the task domain and
how to approach problems in this domain. Comparisons with models and
solutions provided by peer learners may be especially useful in later phases of
the learning process. Group presentations and discussions typically confront
learners with various alternative approaches and solutions, offering a degree of variability that can help them fine-tune their understanding of the task domain.
Collins and Ferguson (1993) proposed additional methods that promote reflection through 'feedback by discovery,' such as selecting counter-
examples, generating hypothetical cases, and entrapping learners (cf. the
inquisitory methods presented in Table 7.3). For example, if a student of
patent examination has applied a particular method to classify patent applica-
tions, they could be made aware of a counter-example of a situation in which
this method will not work well. Alternatively, they could receive a task in
which their strategy leads to a wrong decision. In another example, if a medi-
cal student decides that a patient has a particular disease because the patient
has particular symptoms, the instructor might present a hypothetical patient
with the same symptoms that have arisen as a side effect of medication and
not because of the diagnosed disease.
Sometimes, a learner may receive regular notifications that the standards for a particular nonrecurrent aspect of performance were not met and desired improvements did not materialize, despite repeated cognitive feedback on relevant cognitive strategies and mental models. This situation indicates persistent problems with one or more aspects of performance over a prolonged period. In such a case, it becomes essential to conduct an in-depth diagnostic process to reveal possible intuitive cognitive strategies (see Section 8.3 for their analysis) or naïve mental models (see Section 9.3 for their analysis) that might explain the lack of progress. It is crucial to invite the learner to critically compare their problem-solving process and mental models with those intuitive strategies and naïve models—and to work towards more effective strategies and models in an effortful process of conceptual change. Chapters 8 and 9 offer more suggestions for dealing with intuitive strategies and naïve models. Artificial intelligence and advanced computer-based systems capable of learning are now developing capabilities to perform in-depth analyses of suboptimal problem-solving, reasoning, and decision-making processes to provide cognitive feedback. Until they become more sophisticated (at the time of this writing, that is not the case, but who knows where we will be in a few years), teachers or instructors must actively engage with the learner and their learning process to provide this diagnostic feedback.
Multimedia
Hypermedia
Hypermedia extends the concept of hypertext, where text-based links can lead to other text documents, by including diverse multimedia elements. This interconnected environment enhances user engagement and provides flexible pathways for information retrieval and exploration. This has become the basis of most online learning systems in use today. By far, the most extensive hyper-
media system is the World Wide Web. Most online learning environments
make use of hypermedia, and the most modern try to model the learner’s
dynamic neural network (Baillifard et al., 2023).
Some authors argue that hypermedia may help learners elaborate and
deeply process presented information because their structure, to a certain
extent, reflects how human knowledge is organized in elements and non-arbitrary, meaningful relationships between them. But the opposite is also true. Salomon (1998) describes the butterfly defect: learners flutter across the information on the computer screen, clicking or not clicking on nodes or pieces of information, quickly fluttering on to the next piece of information, never knowing its value and without a plan. Learners often click on links, forgetting what they are looking for. This fluttering leads—at best—to a very
fragile network of knowledge and, at worst, to a tangle of charming but irrel-
evant pieces of information (and not knowledge). A second problem, also
signaled by Salomon, is that learners see hypermedia as being ‘easy’ and are
inclined to hop from one information element to another without putting effort into deep processing and elaboration of the information. They may see
hypermedia as an opportunity to relax, similar to watching television.
One approach to designing hypermedia that stimulates elaboration of the given information can be found in cognitive flexibility theory (Jonassen, 1992; Lowrey & Kim, 2009). This theory starts from the assumption that ideas are linked to others with many different relationships, enabling one to take multiple viewpoints on a particular idea. For instance, if a case study describes a particular piece of machinery, the description may be given from the viewpoint of its designer, a user, the worker who must maintain it, the salesperson who has to sell it, and so on. Comparing and contrasting the different viewpoints helps the learner better process and understand the supportive information.
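As a toy sketch (ours, not from the theory's authors) of how one idea with viewpoint-specific links might be encoded in a hypermedia node:

# One idea linked to resources under different viewpoints; the names
# and resources are invented for this illustration.
machine_node = {
    "idea": "industrial packaging machine",
    "viewpoints": {
        "designer": "case text on design trade-offs",
        "user": "video of day-to-day operation",
        "maintenance worker": "annotated maintenance log",
        "salesperson": "cost-benefit sheet for clients",
    },
}

for viewpoint, resource in machine_node["viewpoints"].items():
    print(viewpoint + ": " + resource)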
Microworlds
Figure 7.4 Wind energy simulator for exploring relations between wind speed,
the number of turbines, and current output (for this and other
examples, see www.golabz.eu).
Epistemic Games
Social Media
The position of general information within a task class depends on the cho-
sen presentation strategy. In a deductive strategy, learners first study the
general information (in SAPs and domain models) by reading textbooks,
attending lectures, studying multimedia materials, and so forth. This infor-
mation is then illustrated in the first learning tasks, which will usually take
the form of modeling examples and case studies (i.e., deductive-expository
strategy), or the learners are tasked with generating examples themselves
(i.e., deductive-inquisitory strategy). Typically, the general information
will be available to the learners before they start working on the learning
tasks and while they work on those tasks. Thus, if questions arise during
practice, they may consult their textbooks, teachers, Internet sites, or any
other background materials containing relevant information for perform-
ing the tasks.
In an inductive strategy, learners begin by studying modeling examples
and case studies that lead them to the general information relevant to per-
forming the remaining learning tasks. Then, there are three options. First,
Table 7.5 Preliminary training blueprint for the complex skill ‘producing video
content’. A specification of the supportive information has been
added to one task class.
Task Class 2: Learners produce videos for fictional clients under the
following conditions:
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to
work with)
Supportive Information (inductive strategy): Case study
Learners study three worked-out examples (i.e., case studies) of promotional
videos for a backpack with integrated solar panels, a virtual fitness
platform, and an urban art festival. In groups, a tutor guides them in
comparing and evaluating each example’s goals, scripts, camera use,
lighting, etc.
Supportive Information: Presentation of cognitive strategies
• SAP for developing a story for promotional videos
• SAPs for interacting with people and collaborating with the crew
• SAPs for shooting video (detailed strategies for creating compositions and
capturing audio)
Supportive Information: Inquiry for mental models: learners are asked to
identify examples of:
• Different types of cameras, microphones, and lights (conceptual models)
• Story arcs (structural models)
Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video
promoting a new coffee machine. They follow a process worksheet to
record footage and create the final video.
Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of
artificial intelligence. A tutor helps them work backward to explain critical
decisions in the production phase and develop a storyboard that fits the
video and meets the client’s requirements.
Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a
short social media advertisement video for a small online clothing store.
Learners remake the ad for a small online art store.
Glossary Terms
Step 5
Analyze Cognitive Strategies
8.1 Necessity
Analysis of cognitive strategies provides the basis for the design of sup-
portive information, particularly systematic approaches to problem solving
(SAPs). Only perform this step if this information is not yet available in
existing materials.
We have all previously encountered a problem or had to carry out a task that looks familiar to us and for which we think we are sufficiently experienced. Unfortunately, while solving the problem or carrying out the task, we encounter some
aspect we have never encountered before. At this point, our available routines are insufficient, and we must use a different type of knowledge—strategic knowledge—to solve the problem or carry out the task. Such strategic knowledge helps us systematically approach new problems and efficiently marshal
the necessary resources to solve them. This chapter focuses on analyzing cog-
nitive strategies for dealing with unfamiliar aspects of new tasks. The results
of the analyses take the form of SAPs that specify how expert task-performers
organize their behaviors; that is, which phases they go through while solving
problems and which rules-of-thumb they use to complete each phase suc-
cessfully. Systematic descriptions of how to approach particular problems in a
subject matter domain are sometimes already available in the form of existing
job descriptions, instructional materials, or other documents. If this is the
case, there is no need to carry out the activities described in this chapter. In all
other cases, the analysis of cognitive strategies may be important for design-
ing problem-solving support for learning tasks (e.g., process worksheets), for
refining a chosen sequence of task classes, and, last but not least, for designing
an important part of the supportive information.
The structure of this chapter is as follows. Section 2 discusses the specification of SAPs, including identifying phases in problem solving and the rules-of-thumb that may help complete each phase successfully. Section 3 discusses the analysis of intuitive cognitive strategies because the existence of such strategies may interfere with the acquisition of more effective strategies. Section 4 describes the use of SAPs in the design process because SAPs help design problem-solving guidance, refine a sequence of task classes, and design supportive information. For each activity, intuitive strategies may affect the selection of instructional methods. The chapter concludes with a summary of the main guidelines.
1. Provide the basis for developing task and problem-solving guidance such
as process worksheets or performance constraints (see Section 4.7).
2. Help refine a sequence of task classes, such as identifying a progression from simple to more complicated cognitive strategies (see Section 6.2).
3. Provide the basis for developing an important part of the supportive
information (see Section 7.2 in the previous chapter).
tasks), it is best to start with tasks from the first task class. The expert's high-level approach is described in terms of phases, with their related goals and the applied rules-of-thumb that may help reach the goals identified for each phase. Then, each phase may be further specified into subphases, and again, rules-of-thumb may be identified for each subphase. After completing the analysis for tasks at a particular level of complexity, the analyst confronts the task-performer with progressively more complex tasks. These tasks typically require additional phases and/or rules-of-thumb and, thus, relate to nonrecurrent constituent skills and associated performance objectives that were not dealt with before. This process repeats until the analysis of the most complex tasks has been completed—by then, the analyst should have dealt with all nonrecurrent constituent skills and associated performance objectives. At each iteration, the analyst identifies the phases in task completion and the rules-of-thumb.
These phases may be further specified into subphases. For instance, subphases for the first phase include analyzing (a) the context, (b) the target group, and (c) the task or content domain. Sometimes, the subphases need to be further specified into sub-subphases, and so forth.
Figure 8.1 gives an example of a SAP for a training program for patent examiners. This kind of SAP takes the form of a flowchart (see Section 11.2) and may also be called a SAP chart (refer back to Figure 7.1 for another example). The left part describes the problem-solving phases that can be distinguished for one of the task classes in the skill cluster 'preparing search reports' (refer back to Figure 6.4). This SAP shows that 'writing a draft communication' occurs only if defects have been found in the patent application; otherwise, the patent examiner has to 'write a draft vote' (i.e., a proposal to the examining division to grant the patent). The right side of Figure 8.1 shows a further specification of the first phase ('read application') of the SAP on the left, divided into two subphases with rules-of-thumb.
Figure 8.1 SAP for examining patent applications. It describes phases in prob-
lem solving (see left part) as well as subphases and rules-of-thumb
that may help to complete each phase or subphase (see right part).
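For readers who think in terms of data structures, the following Python sketch shows one possible way to encode a fragment of such a SAP chart. The phases, decision, and rules-of-thumb are simplified paraphrases of Figure 8.1, and the encoding itself is our own illustration, not part of the Ten Steps:

# Hypothetical, simplified encoding of a SAP chart fragment. Each
# phase has optional subphases and rules-of-thumb of the form:
# "If you want to reach X, you may try to do Y."
sap_chart = {
    "phase": "Read application",
    "subphases": [
        {"phase": "Scan claims and description",
         "rules_of_thumb": [
             "If you want a quick overview, try reading the "
             "independent claims before the description."]},
        {"phase": "Determine the technical field",
         "rules_of_thumb": [
             "If the field is unclear, try matching the application "
             "against known patent classes."]},
    ],
    "next": {
        "decision": "Defects found in the application?",
        "yes": {"phase": "Write a draft communication"},
        "no": {"phase": "Write a draft vote"},
    },
}

def print_sap(node, indent=0):
    # Walk the chart, printing phases, rules-of-thumb, and decisions.
    pad = "  " * indent
    if "decision" in node:
        print(pad + "? " + node["decision"])
        print_sap(node["yes"], indent + 1)
        print_sap(node["no"], indent + 1)
        return
    print(pad + "- " + node["phase"])
    for rule in node.get("rules_of_thumb", []):
        print(pad + "  * " + rule)
    for subphase in node.get("subphases", []):
        print_sap(subphase, indent + 1)
    if "next" in node:
        print_sap(node["next"], indent)

print_sap(sap_chart)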
Identifying Rules-of-Thumb
Rules-of-thumb take the general form: ‘If you want to reach X, you may try
to do Y.’ Expert task-performers often use these rules to generate a solution
to a particular problem tailored to the situation and its particular circum-
stances. Rules-of-thumb are also called heuristics or prescriptive principles.
The basic idea is that some principles that apply in a particular domain may be formulated in a prescriptive way to yield useful rules-of-thumb. For example, one well-known principle in the field of learning and instruction is that instruction should take the learners' prior knowledge into account; formulated prescriptively, it becomes the last of the following rules-of-thumb:
• If you are driving a car and have difficulties negotiating curves, try to estimate the angle of the curve and turn the steering wheel appropriately to that angle.
• If you are defending in a soccer game, try to look at the ball rather than
the person with the ball.
• If you are controlling air traffic at an airport and are overwhelmed by the task, try to control the aircraft by making as few corrections as possible.
• If you are presenting at a conference, try to adapt the amount of informa-
tion you will present to your audience’s prior knowledge.
As part of a SAP analysis, rules-of-thumb are analyzed for each phase and subphase. Each specified goal may then define a category of rules-of-thumb dealing with similar causes and effects. The 'IF-sides' of the rules-of-thumb typically refer to the general goal of the phase under consideration but may include additional conditions. For instance, the grey rectangles in Figure 8.2 list rules-of-thumb that may help learners understand the problem (the first main phase) and help introduce alternate processes (one of the subphases within the main phase, 'reformulate the problem'). Three guidelines for the specification of such rules-of-thumb are:
1. Provide only rules-of-thumb not yet known by the learners and limit
them to those rules-of-thumb necessary for performing the most impor-
tant tasks.
2. Formulate rules-of-thumb in a readily understandable way for learners,
and use the imperative to make clear that they are directions for desired
actions.
3. Make the text as specific as possible, but at the same time, remain general enough to ensure the appropriateness of the rules-of-thumb for all situations to which they apply.
affects performance because feints easily set the defender on the wrong foot. The intuitive counterpart of the rule-of-thumb to control air traffic by making corrections only if this is necessary to prevent dangerous situations is to make many small corrections, a strategy that may work when only a few aircraft are on the radar screen (e.g., at a small regional airport) but which is not suitable for handling large numbers of aircraft at busy airports. Finally, the
intuitive counterpart of the rule-of-thumb to adapt the amount of presented
information to the audience’s prior knowledge is to provide the audience with
as much information as possible in the available time. This strategy does not
work because the audience becomes cognitively overloaded or bored.
SAPs can help design problem-solving guidance in the form of process work-
sheets (see Section 4.7 and Table 4.5 for examples). Such worksheets indicate
the phases to go through when carrying out the task. The rules-of-thumb
can be presented to the learners as statements (e.g., “If you want to improve
your understanding of the problem, then you might try to draw a figure
representing the problem situation”) or as guiding questions (“What activi-
ties could you undertake to improve your understanding of the problem?”).
Moreover, worksheets can help determine performance constraints that can
be applied to ensure that learners cannot perform actions irrelevant to or
detrimental to the phase they are working on and/or cannot continue to
the next phase before successfully completing the current one. For example,
based on the SAP presented in Figure 8.2, a performance constraint might
be that learners must first submit their scheme of characteristics of the prob-
lem and its system boundaries to the teacher and are only allowed to con-
tinue solving the problem after receiving approval of this scheme.
SAPs can help refine an existing sequence of task classes using a method
known as knowledge progression (see Section 6.2). The simplest version of
the task (i.e., the first task class) will usually be equivalent to the simplest or shortest path through a SAP chart. Subsequently, during a process known as path analysis (Merrill, 1987), more intricate paths can be identified, corresponding to more complex task classes. More complex paths contain more decisions and/or goals than simpler ones. In addition, more complex paths usually contain the steps of simpler paths, allowing for the organization of a hierarchy of paths to refine a sequence of task classes. For example, the SAP chart shown in Figure 8.2 encompasses three paths:
1. The shortest path occurs for a standard problem. Thus, the question "Is this a standard problem?" is answered with "Yes". Consequently, the first task class contains standard problems in thermodynamics.
2. The next-shortest path occurs for a nonstandard problem that can be transformed into a standard problem if identifying key relations yields a solvable set of equations. Thus, the question "Is this a standard problem?" is answered with "No," and the question "Is this a solvable set of equations?" is answered with "Yes". Consequently, the second task class contains nonstandard problems that can easily be converted to standard problems.
3. The longest path occurs for a nonstandard problem that cannot be transformed into a standard problem by identifying key relations. In this situation, the question "Is this a standard problem?" is answered with "No," and the question "Is this a solvable set of equations?" is also answered with "No". Consequently, the third and final task class contains nonstandard problems that can only be transformed into standard problems via reformulations, special cases, or analogies.
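To make path analysis concrete, here is a small Python sketch of our own; the phase names are hypothetical paraphrases of Figure 8.2, used only to show how ordering paths by the phases they contain reproduces the simple-to-complex sequence of task classes:

# Paths through the SAP chart, with invented phase labels.
paths = {
    "standard problem": [
        "understand the problem",
        "apply the standard solution scheme",
        "check the solution",
    ],
    "nonstandard, solvable set of equations": [
        "understand the problem",
        "identify key relations",
        "solve the set of equations",
        "check the solution",
    ],
    "nonstandard, no solvable set of equations": [
        "understand the problem",
        "identify key relations",
        "reformulate the problem (special cases, analogies)",
        "identify key relations again",
        "solve the set of equations",
        "check the solution",
    ],
}

# The shortest path defines the first task class; longer paths,
# which contain the steps of simpler ones, define later classes.
ordered = sorted(paths.items(), key=lambda item: len(item[1]))
for number, (label, phases) in enumerate(ordered, start=1):
    print(f"Task class {number} ({label}): {len(phases)} phases")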
SAPs also provide the basis for designing the part of the supportive informa-
tion related to cognitive strategies. First, SAPs can explicitly be presented
to learners because they tell them how to best approach problems in a par-
ticular task domain (see Section 7.2). An instructional specification is often needed after the analysis to ensure that the phases and the rules-of-thumb
are understandable for the learners. Second, SAPs can drive the search for
or design of modeling examples that give concrete examples of their appli-
cation (see Section 7.3). Such modeling examples may be seen as learning
tasks with maximum process-oriented support or illustrations of cognitive
strategies (see Figure 7.2). Finally, SAPs can help provide cognitive feedback
to learners (see Section 7.5). For instance, learners can be asked to compare
their problem-solving process with a presented SAP or with modeling exam-
ples illustrating this SAP.
Identifying intuitive strategies may affect decision making for the three design activities discussed. Concerning designing problem-solving guidance, intuitive strategies may be a reason to provide extra guidance to learners working on the learning tasks. This extra guidance prevents unproductive behaviors and mistakes that result from using the usually far-from-optimal intuitive strategy. Providing particular aids such as process worksheets and structured answer forms can help learners stay on track and apply useful rules-of-thumb. Performance constraints can block the use of intuitive strategies and force learners to apply a more effective systematic approach. In a course on scientific writing, for example, learners could be required to fully decompose their manuscript into ideas and subideas with associated headings and subheadings (i.e., to use a top-down breadth-first approach) before they are allowed to start writing the text. Most modern word processors have this function (i.e., the outline function or view) built into them (see, for example, De Smet et al., 2011).
Concerning refining task classes, intuitive strategies can be a reason for providing additional task classes; that is, to slowly work from simpler learning tasks toward more complex learning tasks. This allows learners to carefully compare and contrast their intuitive approach to problem solving at each level of complexity with more effective approaches. Ideally, the intuitive approach will gradually be replaced, although some intuitive strategies (often in the form of misconceptions) are highly resistant to change.
Concerning designing supportive information, intuitive strategies can influence the choice of instructional methods. For the coupling of SAPs and
modeling examples, an inductive-expository strategy, which involves study-
ing modeling examples before discussing the general SAP, is recommended
as the default strategy for novice learners (see Figure 7.3). Extra modeling
examples could be provided to address intuitive strategies, and the inductive-
expository strategy might be replaced with an inductive-inquisitory strategy
(i.e., guided discovery). This approach allows learners to connect the newly
presented phases and rules-of-thumb to their existing intuitive ideas. For
using cognitive feedback (see Section 7.5), intuitive strategies underscore
the importance of learners carefully comparing and contrasting their intui-
tive approaches and the resulting solutions with the provided SAPs, mod-
eling examples, and expert solutions.
• If you identify phases and subphases in a SAP, then specify an ordered set
of (sub)goals that the task performer should reach and, if necessary, the
decisions they must make because particular goals depend on the success
or failure of reaching previous goals.
• If you identify rules-of-thumb that may help complete one problem-
solving phase successfully, then list the conditions under which the
rule-of-thumb may help solve the problem and the action or actions the
learner could try out.
• If you analyze intuitive cognitive strategies, then focus on the discrepan-
cies between the problem-solving phases and rules-of-thumb applied by
expert task-performers and those by a naïve learner.
• If you use SAPs to design problem-solving guidance, then design pro-
cess worksheets or performance constraints so that the learner is guided
through all relevant problem-solving phases and prompted to apply use-
ful rules-of-thumb.
• If you use a SAP to refine a sequence of task classes, then identify simple to increasingly more complex paths in the SAP charts and define associated task classes.
• If you use SAPs to design supportive information, then formulate the
phases and rules-of-thumb so that they are understandable for your target
group, and select or design modeling examples to illustrate them.
• If you teach a cognitive strategy to learners inclined to use an ineffective intuitive strategy, then provide extra problem-solving guidance, let task complexity progress slowly, and let the learners critically compare and contrast their intuitive problem-solving strategy with more effective strategies.
Glossary Terms
Step 6
Analyze Mental Models
9.1 Necessity
Analyzing mental models into conceptual, structural, and causal models
provides the basis for designing supportive information, particularly domain
models. Only carry out this step if this information is not yet available in
existing materials.
What we know determines what we see and not the other way around
(Kirschner, 1992, 2009). A geologist walking in the mountains of France will
see geological periods and rock formations. A bicyclist in those same mountains
will see gear ratios and climbing percentages. Each of them sees the same thing
(in terms of their sensory perception) but interprets what they see in very dif-
ferent ways (in terms of how they understand what they see). In this respect,
these two people have very diferent mental models of the mountains of France.
Mental models help task performers understand a task domain, reason
in this domain, give explanations, and make predictions (Van Merriën-
boer et al., 2002). This chapter focuses on analyzing mental models that
represent the organization of a domain. The result of such an analysis is a
domain model, which can take the form of a conceptual model (What is
this?), a causal model (How does this work?), and a structural model (How
is this built?). Mental models specify how expert task-performers mentally
organize a domain to reason about it and support their problem solving
and decision making. Extensive descriptions of relevant domain models are
often available in existing instructional materials, study books, and other
documents. If this is the case, then there is no need to carry out the activi-
ties described in this chapter. In all other cases, analyzing mental models is
important to refine a chosen sequence of task classes and, in particular, to
design an important part of the supportive information.
The structure of this chapter is as follows. Section 2 discusses the specification of domain models, including the identification of conceptual, structural, and causal models. Section 3 briefly discusses the empirical analysis of intuitive mental models because the existence of such models may interfere with the learner's construction of more effective and scientific models, as was the case in the previous chapter on analyzing cognitive strategies. Section 4 discusses the use of domain models for the design process. Domain models help refine a sequence of task classes and design an important part of the supportive information. For both activities, intuitive mental models may affect the selection of instructional methods. To conclude, we briefly compare the analysis of mental models with the analysis of cognitive strategies and present the main guidelines.
more or less related domains, the more likely it is that you will be able to
effectively solve problems or carry out tasks in this domain". A great risk
here is to proceed with the associative analysis process for too long. In a
sense, everything is related to everything, and thus, an analyst can build
seemingly endless networks of interrelated pieces of knowledge. Therefore,
it is essential not to introduce new relationships if an expert task-performer
cannot clearly explain why newly associated facts or concepts improve their
performance.
However, this is certainly not an easy decision to make. For instance,
should students in information science know how a computer works to pro-
gram a computer, and if so, then to what extent? Should art students know
about the chemistry of oil paints to be able to paint, and if so, how much
should they know? Should students in educational technology know how
people learn to produce an effective and efficient instructional design, and if so, to what level of specificity should they know this? When is enough
enough? The number of relationships in domain models is theoretically
unlimited. Depending on the prominent types of relationships, the three basic kinds of models discussed earlier may be distinguished: conceptual, structural, and causal.
in turn, results from a basic lamp failure and the absence of a spare lamp.
It should be clear that fault trees for large technical systems might become
extremely complex.
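As a toy illustration of how such a tree can be represented and evaluated, consider the lamp example written as a small Boolean tree. The encoding is ours and covers only this single branch, not the full tree of Figure 9.4:

# A fault tree expresses a top-level failure as AND/OR combinations
# of basic events. Toy fragment: 'no light' occurs when the lamp has
# failed AND no spare lamp is available (other branches omitted).
fault_tree = ("AND",
              ("basic", "lamp failure"),
              ("basic", "no spare lamp available"))

def failure_occurs(node, observed_events):
    # Evaluate the (sub)tree against a set of observed basic events.
    kind = node[0]
    if kind == "basic":
        return node[1] in observed_events
    children = node[1:]
    if kind == "AND":
        return all(failure_occurs(c, observed_events) for c in children)
    if kind == "OR":
        return any(failure_occurs(c, observed_events) for c in children)
    raise ValueError("unknown node type: " + kind)

print(failure_occurs(fault_tree, {"lamp failure"}))  # False: spare at hand
print(failure_occurs(fault_tree,
                     {"lamp failure", "no spare lamp available"}))  # True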
Structural and causal models are special kinds of conceptual models that
provide a particular perspective on a task domain. Complex domain mod-
els may combine them into semantic networks that try to represent the
whole mental model, enabling the performance of a complex cognitive
skill. However, focusing first on only one type of model might be worth-
while to keep the analysis process manageable. Different task domains have
different dominant structures: Structural models are particularly impor-
tant for domains that focus on analysis and design, such as mechanical
engineering, instructional design, or architecture. Causal models, on the
other hand, are particularly important for domains that focus on explana-
tion, prediction, and diagnosis, such as the natural sciences or medicine.
Finally, general conceptual models are particularly important for domains
that focus on description, classification, and qualitative reasoning, such as
history or law. The analyst (i.e., the instructional designer) should start
with analyzing the dominant type of model or ‘organizing content’ in
the domain of interest (Reigeluth, 1992). In later stages of the analysis
process, other models forming part of the mental model may be linked to
this organizing content.
Intuitive mental models are often very hard to change. One approach to achieving this change is beginning instruction with the existing intuitive models (i.e., using inductive teaching methods) and slowly progressing toward increasingly more effective models in a process of conceptual change (Mills et al., 2016). Another approach is using instructional methods that help learners question the effectiveness of their intuitive model, such as contrasting it with more accurate models or taking multiple perspectives on it. The next section briefly discusses these approaches.
Domain models can help refine an existing sequence of task classes via mental model progression (see Section 6.2). The first task class is a category of learning tasks that can be correctly carried out based on the simplest domain model. This model already contains the most representative, fundamental, and concrete concepts and should be powerful enough to formulate nontrivial learning tasks that learners may work on. Increasingly more complex task classes correspond one-to-one with increasingly more complex domain models. In general, more complex models contain more—different types of—elements and/or more relationships between those elements than earlier models that are less complex (Mulder et al., 2011). They either add complexity or detail to a part or aspect of the previous models and become elaborations or embellishments of them or provide alternative perspectives on solving problems in the domain. In mental model progression, all models, thus, share the same essential characteristics: Each more complex model builds upon the previous models. This process continues until a level of elaboration and a set of models offering different perspectives are reached that underlie the final exit behavior.
Table 9.2 provides an example of a mental model progression in trouble-
shooting electrical circuits (White & Frederiksen, 1990). Each model ena-
bles learners to carry out tasks that may also occur in the post-instructional
environment. In this domain, causal models describe the principles govern-
ing the behavior of electrical circuits and their components, such as batter-
ies, resistors, capacitors, etc. Table 9.2 lists the three simple-to-complex causal models.
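As a minimal sketch of what the simplest causal model in such a progression might look like when made executable; the all-or-nothing, zero-resistance behavior is our simplifying assumption, not White and Frederiksen's actual model:

def bulb_lights(battery_ok, switch_closed, bulb_ok):
    # Qualitative prediction for a single series circuit: the bulb
    # lights only when every element in the path works.
    return battery_ok and switch_closed and bulb_ok

# Troubleshooting with the model: which single faults are consistent
# with the observation that the bulb does not light?
candidates = []
for fault in ("battery", "switch", "bulb"):
    state = {
        "battery_ok": fault != "battery",
        "switch_closed": fault != "switch",
        "bulb_ok": fault != "bulb",
    }
    if not bulb_lights(**state):
        candidates.append(fault)
print(candidates)  # ['battery', 'switch', 'bulb']

Because this simplest model cannot discriminate between the three candidate faults, a more complex model in the progression would add, for example, voltage levels or resistance, allowing the learner to rule out candidates with targeted measurements.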
Domain models provide the basis for designing the part of the supportive information related to mental models. First, they may be explicitly presented to learners because they tell them how things are labeled, built, and work in a particular domain (see Section 7.2). In many cases, an educational specification is necessary to ensure that the domain model is presented in a manner that learners can readily understand. Second, domain models may help the instructional designer find case studies that give concrete examples of the classification, organization, and working of things (see Section 7.3). Those case studies may be seen as learning tasks with maximum product-oriented support or as illustrations of mental models (refer back to Figure 7.2). Third, domain models may help provide cognitive feedback, offering learners opportunities to compare their solutions with a presented domain model (see Section 7.5). For example, if learners have written a scientific article, they may compare the structure of their article with a given structural model (cf. Figure 9.3) or with a specific article from a scientific journal in that domain. If a learner has reached a diagnosis of a particular error, they can then check the credibility of their diagnosis with a given fault tree (cf. Figure 9.4). Such reflective activities help learners elaborate on the supportive information and construct more accurate mental models.
Identifying intuitive mental models that novice learners possess may affect decision making for the design activities discussed. Concerning refining task classes, the existence of intuitive models may be a reason to start from those existing models and to provide a relatively large number of task classes; that is, to work slowly from the intuitive, ineffective models via more effective but still incomplete and fragmented models, toward more effective, more complete, and more integrated models. This allows learners to carefully compare and contrast the new and more powerful models at each level of complexity with previous, less powerful models. Ideally, expert scientific models then gradually replace the novice intuitive models (cf. the naïve models of the Earth in Figure 9.5 that gradually become more accurate from left to right). However, intuitive mental models may be highly resistant to change (e.g., Steinberg et al., 1990).
Concerning designing supportive information, strong intuitive mental models might be a reason to select instructional methods that explicitly focus on elaborating new information. First, concerning case studies, it is desirable to present a relatively large number of case studies to illustrate the domain model and present them from multiple viewpoints. For instance, when teaching a model of the Earth, the supportive information may be in the form of satellite images from different perspectives (always showing a sphere but with different continents). When teaching the workings of the tides, case studies may show that the tides are stronger in the Atlantic Ocean than in the Mediterranean Sea and are, thus, not only affected by the relative location of the moon and the sun to the Earth but also by the shape, size, and boundaries of the body of water and, for the Mediterranean Sea, by the fact that the Strait of Gibraltar hinders the flow of water. Second, inductive-expository and guided discovery methods should take priority over deductive methods. The former two methods help learners refine their existing models and construct more effective models. Leading questions such as "Why do ships disappear beyond the horizon?" and "What happens if you start walking in a straight line and never stop?" may help refine a model of the Earth. Questions such as "When is it spring tide?" and "When is it neap tide?" may help build a model of how the tides work. Third, feedback by discovery should stimulate learners to critically compare and contrast their models with more effective or scientific models. Here, learners could compare their model of the Earth with a globe, and they could compare their predictions of the tides with the actual measurements provided by the coast guard.
Mental models describe how the world is organized, while cognitive strategies (Chapter 8) describe how task performers' actions in this world are organized.
Glossary Terms
Concept map; Fault tree; Intuitive mental model; Mental model; Product-
oriented support; Semantic network
Chapter 10
Step 7
Design Procedural Information
10.1 Necessity
Procedural information is one of the four principal design components and
enables learners to perform the recurrent aspects of learning tasks and part-
task practice items. We strongly recommend carrying out this step.
out those task aspects or perform those acts that are the same every time. In
other words, we follow fxed procedures.
This chapter presents guidelines for designing procedural information. It
concerns the third blueprint component, which specifies how to carry out
the recurrent aspects of learning tasks (Step 1) or how to carry out part-task
practice items (Step 10), which are always recurrent. Procedural information
refers to (a) just-in-time (JIT) information displays providing learners with
the rules or procedures that describe the performance of recurrent aspects
of a complex skill as well as information prerequisite for correctly carrying
out those rules or procedures, (b) demonstrations of the application of those
rules and procedures as well as instances of the prerequisite knowledge, and
(c) corrective feedback on errors. All instructional methods for presenting
procedural information promote rule formation, a process of converting
new knowledge into task-specific cognitive rules. Cognitive rules can drive
the recurrent aspects of performance without the need to interpret cognitive
schemata. After extensive training, which may sometimes take the form of
part-task practice (see Step 10 in Chapter 13), the rules can even become
fully automated (i.e., routines) and drive the recurrent aspects of an expert
task-performer’s performance without the need for conscious control.
The structure of this chapter is as follows. Section 2 discusses the design
of JIT information displays. These displays should be modular, use simple
language, and prevent split-attention effects. Section 3 describes the use of
demonstrations and instances. The presented rules and procedures are best
demonstrated in the context of whole learning tasks. Section 4 discusses
three presentation strategies for procedural information; namely, unsolicited
JIT information presentation by an instructor or other intelligent pedagogi-
cal agent during task performance, unsolicited information presentation in
advance so that learners can memorize the information before they start
working on the learning tasks, and solicited JIT information presentation
where learners consult checklists, manuals, or other resources during task
performance. Section 5 gives guidelines for providing corrective feedback on
errors in the recurrent aspects of performance. Section 6 discusses suitable
media for presenting procedural information, including the teacher acting
as an ‘assistant looking over your shoulder,’ job aids, and various electronic
tools. Section 7 discusses the positioning of procedural information in the
training blueprint. The chapter concludes with a summary of guidelines.
Weak Methods
In the early stages of learning a complex skill, the learner may receive
information about the skill by reading textbooks, listening to lec-
tures, studying examples, etc. The general idea is that this information
may be encoded in declarative memory and be interpreted by weak
methods to generate behavior. Weak methods are problem-solving
strategies independent of the particular problem; they are generally
applicable and include methods such as means-ends analysis, forward-
chaining search, subgoaling, hill climbing, analogy, trial-and-error,
and so forth. According to ACT-R, weak methods are innate (i.e.,
they are biologically primary; Geary, 2008) and can be used to solve
problems in any domain. However, this process is very slow, takes up
many cognitive resources, and is prone to errors. Learning, on the
one hand, involves the construction of cognitive schemata through
induction (see Box 4.1) and elaboration (see Box 7.1), which makes
performance much more efficient and effective because acquired cog-
nitive strategies and mental models may be interpreted to guide the
problem-solving process. On the other hand, it also involves the for-
mation of rules that eventually directly steer behavior—without the
need for interpretation. This process often starts by following how-to
instructions, watching demonstrations, and/or studying worked-out
examples, after which cognitive rules are further refined in a process of
production compilation.
Production Compilation
Production compilation creates new, task-specific cognitive rules
by combining more general rules and eliminating conditions so
that one rule never has more than one retrieval from declarative
memory. Taatgen and Lee (2003) provide an example where there
are three rules (also called ‘productions’—that is why this process
is called production compilation) for computing the sum of three
numbers:
Rule 1:
IF
The goal is to add three numbers
THEN
Send a retrieval request to declarative memory for the sum of the first two numbers
Rule 2:
IF
The goal is to add three numbers AND the sum of the first two numbers is retrieved
THEN
Send a retrieval request to declarative memory for the sum that has
just been retrieved and the third number
Rule 3:
IF
The goal is to add three numbers AND the sum of the first two and the third number is retrieved
THEN
The answer is the retrieved sum
Rule 1&2:
IF The goal is to add 1, 2, and a third number
THEN Send a retrieval request to declarative memory for the sum of
3 and the third number
Rule 2&3:
IF The goal is to add three numbers and the third number is 3 AND
the sum of the first two numbers is retrieved and is equal to 3
THEN The answer is 6
Rule 1&2&3:
IF The goal is to add 1, 2, and 3
THEN The answer is 6
Compared with the original three rules, the new rule is highly task-specific—it will only work for the numbers 1, 2, and 3. With production compilation, people learn new rules only if the new rules are more specific than the rules they already possess. Thus, they will learn new, task-specific rules only for frequently occurring situations (as is typical for recurrent constituent skills!). For example, people probably learn the rule that allows them to automatically answer the question of what the sum of 1, 2, and 3 is but not the rule that allows them to automatically answer the question of what the sum of 5, 9, and 7 is.
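To make the mechanism tangible, the following Python sketch mimics production compilation for the addition example. The representation of rules and the compile step are our own drastic simplification, not ACT-R code:

facts = {(a, b): a + b for a in range(20) for b in range(20)}
retrievals = 0

def retrieve(a, b):
    # Simulate a slow retrieval from declarative memory.
    global retrievals
    retrievals += 1
    return facts[(a, b)]

def add_three_interpreted(x, y, z):
    # Rules 1-3: every rule firing needs one declarative retrieval.
    partial_sum = retrieve(x, y)     # Rule 1
    return retrieve(partial_sum, z)  # Rules 2 and 3

def compile_rule(x, y, z):
    # Production compilation: do the retrievals once and bake the
    # result into a new rule that matches only this exact case.
    answer = add_three_interpreted(x, y, z)
    def compiled(a, b, c):
        if (a, b, c) == (x, y, z):
            return answer            # Rule 1&2&3: no retrievals left
        return None                  # the specific rule does not apply
    return compiled

rule_1_2_3 = compile_rule(1, 2, 3)
retrievals = 0
print(rule_1_2_3(1, 2, 3), retrievals)  # 6 0 -> answered without retrieval
print(rule_1_2_3(5, 9, 7))              # None -> back to the general rules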
Further Reading
Anderson, J. R. (2007). How can the human mind occur in the physical
universe? Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780195324259.001.0001
Taatgen, N. A., & Lee, F. J. (2003). Production compilation: A simple
mechanism to model complex skill acquisition. Human Factors, 45,
61–76. https://doi.org/10.1518/hfes.45.1.61.27224
Figure 10.1 Example of a JIT information display presenting the procedure for
changing document orientation.
Source: Based on OpenOffice.org Writer.
The most important requirement for formulating JIT information is that each
rule or procedural step is specified at the entry level of the learners. Ideally,
even the lowest-level ability learners must be able to apply the presented rules
or carry out the presented actions without making errors—under the very
important assumption that they already possess the prerequisite knowledge.
This requirement is directly related to the main distinguishing characteristic
between a systematic approach to problem solving (SAP) and a procedure:
In a SAP, success is not guaranteed because the phases merely guide the
learner through a heuristic problem-solving process, but in a procedure, suc-
cess is guaranteed because the steps provide an algorithmic description of
how to reach the goal. However, note that it is always possible to apply a rule
or carry out an action incorrectly and have the procedure fail. Knowing the
rule does not mean that it is carried out correctly! Because each step or rule
is directly understandable for each learner, the learner does not have to make
a particular reference to related knowledge structures in long-term memory
during the presentation. Elaboration is important for understanding sup-
portive information, but it is superfluous if the information is immediately
understandable, as should be the case for procedural information.
It is important to use an action-oriented writing style for JIT informa-
tion displays, which entails using the active voice, writing in short sentences,
and not spelling everything out. If a procedure is long and/or has many
different branches, it is often better presented graphically. Moreover, when
procedures specify the operation of complex machinery, it may be helpful to
depict physical models of those devices in exploded views or via other graphi-
cal representations. Section 12.2 will briefly discuss analyzing tools and
objects into physical models. There are many more guidelines for micro-
level message design and technical writing that will not be discussed in this
book (e.g., Alred et al., 2012; Carroll, 2003).
When making use of JIT information, learners must divide their attention between the learning task that they are working on and the JIT information presented to specify how to carry out the recurrent aspects of the task. In this situation, learners must continuously switch their attention between the JIT information and the learning task to mentally integrate the two. This continuous switching between mental activities (i.e., carrying out the task and processing the JIT information) may increase the extraneous cognitive load (see Box 2.1) and hamper learning. Compare this with trying to hit a golf ball while, at the same time, the golf pro is giving you all types of pointers on your stance, your grip, the angle of your arms, and so forth. This split-attention effect has been well documented in the literature (for a review, see Ginns, 2006). To prevent the split-attention effect, it is of utmost importance to fully integrate the JIT information into the task environment and to replace multiple information sources with a single, integrated information source. This physical integration removes the need for mental integration, reduces extraneous cognitive load, and positively affects learning.
Figure 10.2 shows a task environment where students learn to troubleshoot electrical circuits with a computer simulation. In the upper figure, the diagram of the electrical circuit on the left side of the screen is separated from the JIT information on the right side, causing split attention because the learner must constantly move back and forth between the circuit and its components on the left-hand side of the screen and the information about the components on the right. In the lower figure, the same JIT information is fully integrated into the diagram of the electrical circuit such that the learner does not have to split their attention between a component in the circuit and the information about that component. The integrated format positively affects learning (Kester et al., 2004, 2005).
A special type of split attention occurs with paper-based or mobile
device-based manuals (or checklists, quick reference guides, etc.) contain-
ing procedural information that relates to a real-life task environment. In
such a situation, it may be impossible to integrate the information into the
task environment. For instance, if a medical student is diagnosing a patient
or a process operator is starting a distiller, it is impossible to integrate the
relevant procedural information into the task environment. One option to
prevent learners from dividing their attention between the task environ-
ment and the procedural information here is to include representations of
the task environment in the manual. For instance, if the task environment is
computer-based, you can include screen captures in the manual. If the task
environment is an operating task for a particular machine, you can include
pictures of the relevant parts of the machine in the manual (Chandler &
Sweller, 1996). After all, splitting the learner’s attention between the task
environment and the manual creates the problem, not the use of the manual
per se. This split-attention problem, thus, can be solved by either integrat-
ing the JIT information in the task environment or, vice versa, by integrat-
ing relevant parts of the task environment in the manual. New modes of
presentation, such as augmented reality, are making this problem smaller
(you might refer back to Figure 2.4). To use the example previously men-
tioned, the process operator could wear a pair of augmented reality glasses
that present the JIT information at the time and place that the operator is
looking.
Figure 10.2 An electrical circuit with nonintegrated JIT information (above) and
integrated JIT information (below).
Demonstrations
The rules and procedures presented in JIT information displays can be demonstrated to the learners. For example, it is not uncommon that online displays like the one presented in Figure 10.1 contain a ‘show me’ link, which animates the corresponding cursor movements and menu selections, allowing the learner or user to observe how the rules are applied or how the steps are carried out in a concrete situation. This is the equivalent of a colleague or teacher saying, “Let me show you how to do it!” The Ten Steps strongly suggests providing such demonstrations in the context of whole, meaningful tasks. Thus, demonstrations of recurrent aspects of a skill ideally coincide with modeling examples or other suitable types of learning tasks. This allows learners to see how a particular recurrent aspect of a task fits within meaningful whole-task performance.
Going back to students learning to produce video content, at some point in their training, they will receive JIT information on how to operate the video editing software. A particular JIT information display might present the step-by-step procedure for ‘color correcting’: adjusting and enhancing things like exposure, contrast, and saturation of the video clips. This procedure is preferably demonstrated in the context of the whole learning task, showing how to correct colors relevant to the video at hand instead of dreaming up a demonstration for an imaginary situation. Another situation is where a complex troubleshooting skill requires executing a standard procedure to detect when a specific value is out of range. It is best to demonstrate this standard procedure as part of a modeling example for the troubleshooting skill and focus the learner’s attention on those recurrent aspects spotlighted in the demonstration.
Instances
of the concept ‘page style’ (see the box in Figure 10.1), this should be a set of specifications that is as relevant as possible for the specific task of ‘changing the page orientation.’ In the case where students learn to produce video content, a JIT information display on how to correct colors would provide concept definitions of exposure, contrast, saturation, etc. Again, when showing a concrete instance of one particular type of color correction, this is preferably a manipulation relevant to the video at hand.
Van Merriënboer and Luursema (1996) describe CASCO (Completion ASsignment COnstructor), an intelligent tutoring system for teaching computer programming in which all procedural information is presented and demonstrated just-in-time. The system uses completion tasks in which learners must complete partially written computer programs. When using a particular piece of programming code for the first time in part of a to-be-completed program, an online JIT information display presents how-to instructions and prerequisite knowledge for using this particular pattern of code. At the same time, the instantiated code is highlighted in the given part of the computer program, offering a concrete instance exemplifying the use of the code in a realistic computer program.
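As a hypothetical illustration (the example below is invented, not taken from CASCO itself), such a completion task pairs a given, highlighted code pattern with a part the learner must write:

```python
# A hypothetical completion task in the spirit of CASCO: the loop-
# accumulation pattern is given, and a JIT display would present how-to
# instructions the first time this pattern appears. Deliberately incomplete.

def average(grades):
    total = 0
    for g in grades:        # given: first use of the accumulation pattern,
        total = total + g   # highlighted as a concrete instance of its use
    count = len(grades)
    # To be completed by the learner: compute and return the average
    # from total and count.
    ...
```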
Demonstrations and instances must be divergent for all situations the JIT information applies to. A demonstration will typically show the application of only one version of a procedure. For instance, the procedure for changing the page orientation (Figure 10.1) is useful for changing the orientation of one page or the whole document. Therefore, the procedure may be demonstrated by changing the layout of one page, but it may also be demonstrated by changing the layout of the whole document. Ideally, the whole set of demonstrations given to the learner is divergent and representative of all situations that can be handled with the procedure. Likewise, an instance only concerns one example of a concept, plan, or principle. When presenting an instance to exemplify the concept of ‘page style,’ the instance may show different footers, headers, and orientations. As for demonstrations, the whole set of instances should ideally represent all entities covered by the concept, plan, or principle.
This writing is uninviting because no words prompt the learner to act, stim-
ulating the user to read rather than act. The word ‘note’ refers to a remark
on the side—an addendum. The learner cannot really act or explore because
there are no alternatives to try out. In an action-oriented style, the words
clearly prompt the learner to act, and the invitation comes just at the right
moment. There are no side notes, but rather, true alternatives that can be
tried out, as in the following display inviting learners to browse a text:
cognitive load on the learners and may reduce learning. Consulting addi-
tional information when cognitive load is already high due to the character-
istics of the learning task easily becomes a burden for learners, as indicated
by the saying, “When all else fails, consult the manual”.
In the field of minimalism (Carroll, 2003), additional guidelines have
been developed for the design of minimal manuals that provide proce-
dural information on demand. Minimalism explicitly focuses on supporting
learners—or users, in general—who are working on meaningful tasks. The
three main guidelines for this approach pertain to:
1. Goal directedness. Use an index system allowing learners to search for rec-
ognizable goals they may be trying to reach rather than functions of the
task environment. In other words, design in a task-oriented rather than
system-oriented fashion. Learners’ goals, thus, provide the most impor-
tant entrance point to the JIT information displays (see Table 10.1).
2. Active learning and exploration. Allow learners to continue their work
on whole, meaningful learning tasks to explore things. Let them try out
different recurrent task aspects for themselves.
3. Error recovery. When learners try out the recurrent task aspects, things
may go wrong. Error recognition and error recovery must be supported
on the spot by including a section—‘What to do if things go wrong?’—in
the JIT information display (Lazonder & van der Meij, 1995).
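A minimal sketch of how these three guidelines might shape a goal-indexed help-system entry (the structure, field names, and display texts below are hypothetical):

```python
# Hypothetical minimal-manual entry following the three guidelines:
# indexed by the learner's goal (1), inviting action on the learner's own
# task (2), and supporting error recovery on the spot (3).

MINIMAL_MANUAL = {
    "change the page orientation": {                 # goal, not system function
        "try_it": "Switch your own document between portrait and landscape.",
        "if_things_go_wrong": {
            "symptom": "Every page changed orientation, not just one.",
            "cause": "A page style applies to all pages that use it.",
            "recovery": "Undo the change and assign a separate page style "
                        "to the page you want to rotate.",
        },
    },
}

def lookup(goal):
    """Goal-directed access: learners search by what they want to achieve."""
    return MINIMAL_MANUAL.get(goal.lower())

print(lookup("Change the page orientation")["try_it"])
```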
them, and form accurate rules and procedures. Thus, in contrast to cognitive feedback, the main function of corrective feedback is not to foster reflection, but rather, to detect and correct errors and to help learners form correct cognitive rules.
In the Ten Steps, corrective feedback is seen as one type of procedural information because it consists of information that helps learners automate their cognitive schemata in a process of rule formation (refer back to Box 10.1)—it shares this aim with the presentation of all procedural information. If the rules or procedures that algorithmically describe effective performance are not correctly applied, the learner is said to make an error. The following subsections describe the design of effective corrective feedback and the diagnosis of malrules and misconceptions that may be necessary when learners fail to improve on particular recurrent constituent skills for a prolonged period.
Well-designed feedback should inform the learner that there was an error and
why there was an error but without simply saying what the correct action is
(Wiliam, 2011). It should consider the goals that learners may be trying to
reach. If the learner makes an error that conveys an incorrect goal, the feed-
back should explain why the action leads to the incorrect goal and provide a
suggestion or hint on how to reach the correct goal. Such a hint will often
take the form of an example or demonstration. An example of such a sugges-
tion or hint is: “When trying to solve that acceleration problem, applying this
formula does not help you to compute the acceleration. Try using the same
formula used in the previous example”. If the learner makes an error that
conveys a correct goal, the feedback should only provide a hint for the cor-
rect step or action and not simply give away the correct step or action because
learning-by-doing is critical to forming cognitive rules. An example of such a
suggestion or hint is: “When trying to solve that acceleration problem, you
might want to consider substituting certain quantities for others”. Simply tell-
ing the learner what to do is ineffective. The learner must execute the correct
action while the critical conditions for performing this action are active in
working memory.
If a learner has made an error, it may be necessary to give information on
how to recover from the results of this error before giving any other infor-
mation. This will be relatively straightforward if an instructor gives the feed-
back, but including this type of error information in manuals or help systems
is more difficult. One aspect of minimalist instruction (Carroll, 2003) is to
support learners’ error recovery by giving error information on the spot by
combining the JIT information displays with a section on ‘what to do if
things go wrong?’ To include such a section, it is necessary to analyze the
typical errors made by the target learners to support the recognition and
recovery of the most frequent errors (see Section 11.3 in the next chapter
for the analysis of typical errors). The error recovery information should
contain the following:
• a description of the situation that results from the error so that the learner
can recognize it;
• information on the nature and the likely cause or causes of the error so
that it can be avoided in the future; and
• action statements for correcting the error.
A learner may receive frequent notifications about not meeting the standards for a particular recurrent aspect of performance and not showing improvements in performance accuracy, despite repeated corrective feedback on errors. This situation requires an in-depth diagnosis to reveal possible malrules, which are incorrect cognitive rules leading to persistent errors (see Section 11.3 for their analysis), or misconceptions, which are pieces of misinformed prerequisite knowledge leading to incorrect application of rules (see Section 12.3 for their analysis). It is important to note that not all errors signify the existence of malrules: Most errors based on learners’ prior experience and intuition can be easily corrected and will never lead to the formation of malrules. We only talk about malrules when learners continue to make the same type of error; thus, when they form incorrect or suboptimal cognitive rules. For example, some people may always switch off their desktop computer by pressing its power button, which may be seen as the result of applying a malrule rather than applying the correct rule: “If you want to shut down the computer in Windows 11, then click <Windows icon>, then click <Power icon> and then click <Shut Down>”. A related misconception is that <Windows icon> only applies to starting up new programs, whereas it also applies to changing settings, searching documents, and shutting down applications. The existence of malrules and/or misconceptions may affect the presentation of procedural information in the following ways:
and/or instances fit into the context of the whole task. This requires the use
of learning tasks that present a part of the problem-solving process or of the
solution (e.g., modeling examples, case studies, completion tasks) and indi-
cates the use of a deductive-expository strategy where the JIT information
displays are illustrated by simultaneously presented examples. Corrective
feedback on recurrent aspects of task performance is best given immediately
after the misapplication of a rule or procedural step. Whereas JIT informa-
tion displays and related examples can often be designed before learners
participate in an educational program, this may be impossible for corrective
feedback because it depends on the behaviors of each learner. Nevertheless,
some preplanning might be possible if typical errors of the target group are
analyzed beforehand.
If procedural information is coupled to the first learning task for which it is relevant and subsequent tasks, this is done via a process of fading. Fading ensures that the procedural information repeats, in ever-decreasing amounts, until learners no longer need it. For instance, a help system may first systematically present relevant JIT information displays, including a description of procedural steps and a demonstration thereof, as well as prerequisite information and instances exemplifying this information. In a second stage, the help system may only present the procedural steps and prerequisite information (i.e., leaving out the examples). In a third stage, it may only present the procedural steps (i.e., also leaving out the prerequisite information). And in a final stage, no information might be provided at all. As another example, a particular task environment may first explain why there is an error, then present only right/wrong feedback, and finally, provide no corrective feedback at all. Fading ensures that learners receive procedural information as long as they need it and provides better opportunities to present a divergent set of demonstrations and/or instances. This is an example of one of the five desirable difficulties discussed by Bjork (Kirschner et al., 2022). Ideally, the whole set of examples should represent all situations the presented rules or procedures can handle.
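A minimal sketch of such a fading scheme for a help system (hypothetical stage definitions and display parts):

```python
# Hypothetical fading scheme: each time a JIT display recurs, fewer of its
# parts are presented, until no information is provided at all.

FADING_STAGES = [
    {"steps", "demonstration", "prerequisite_info", "instances"},  # 1st time
    {"steps", "prerequisite_info"},   # examples are left out
    {"steps"},                        # prerequisite information is left out
    set(),                            # no information at all
]

def jit_display(times_seen, display):
    """Return only the parts of the display the learner still receives."""
    stage = min(times_seen, len(FADING_STAGES) - 1)
    return {part: text for part, text in display.items()
            if part in FADING_STAGES[stage]}

display = {"steps": "1. Open the page-style dialog ...",
           "demonstration": "'show me' animation",
           "prerequisite_info": "A page style is ...",
           "instances": "e.g., a landscape-oriented page"}
for n in range(4):
    print(n, sorted(jit_display(n, display)))
```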
Concluding this chapter, Table 10.3 presents one task class out of the training blueprint for the complex skill ‘producing video content’ (you may refer back to Table 6.2 for a description of the other task classes). A specification of the procedural information has been added to the specification of the task class, the learning tasks, and the supportive information. As you can see, procedural information now appears when learners work on the first learning task for which it is relevant. See Appendix 2 for the complete training blueprint. One final remark: Procedural information is not only relevant to performing the learning tasks but may also be relevant to part-task practice. Chapter 13 discusses special considerations for connecting procedural information to part-task practice.
Table 10.3 Preliminary training blueprint for the complex skill ‘producing video content’. For one task class, a specification of the procedural information has been added to the learning tasks.
Task Class 1: Learners produce videos for fictional clients under the
following conditions.
• The video length is 1–3 minutes
• The clients desire aftermovies or event recaps, summarizing the
atmosphere at an event
• Locations are indoors
• There is plenty of time for the recording
• No interaction with other on-camera participants
Supportive Information (inductive strategy): Modeling example
Learners shadow a professional video team while they produce an aftermovie
of the yearly local cultural festival. Learners can interview the video team
during and after the project.
Supportive Information: Presentation of cognitive strategies
• Global SAP for preproduction, production, and postproduction phases
• SAP for shooting video (e.g., basic strategies for creating compositions and
capturing audio)
• SAPs for basic video editing (e.g., selecting footage and editing the video)
Supportive Information: Presentation of mental models
• Conceptual models of basic cinematography, such as composition and lighting
• Structural models of cameras
• Causal models of how camera settings affect the image and how audio (music, effects) affects mood
Learning Task 1.1
Support: Worked-out example
Guidance: Performance constraints
Learners receive a production
plan, intermediate footage, and
the final video of an existing
aftermovie. They evaluate the
quality of each aspect, but their
evaluations must be approved
before they can continue with the
next aspect.
Learning Task 1.2
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and intermediate footage. They must select the footage and edit the video into the final product. A tutor guides learners in studying the given materials and using the postproduction software.

Procedural Information
Unsolicited
• How-to instructions for using postproduction software
• How-to instructions for exporting the video
Glossary Terms
Step 8
Analyze Cognitive Rules
11.1 Necessity
Analysis of cognitive rules provides the basis for the design of procedural
information and, if applicable, part-task practice. Only perform this step if
this information is not yet available in existing materials.
DOI: 10.4324/9781003322481-11
performing the task and reaching its goal. The rules and procedures discussed in this chapter are thus examples of ‘strong methods.’ Their strength, however, is counterbalanced by their limited flexibility: Highly domain-specific rules ensure that familiar task aspects can be correctly performed, but they are not at all useful for unfamiliar task aspects in new problem situations. For the design of instruction, the analysis of cognitive rules into IF-THEN rules or procedures serves three goals:
Rule-Based Analysis
IF condition(s)
THEN action(s)
A simple set of rules will illustrate the working of rule-based analysis. The rules describe the recurrent skill of stacking buckets so that smaller buckets always go into larger ones. The first IF-THEN rule specifies when the task is finished:
1.
IF there is only one visible bucket
THEN the task is finished.
2.
IF there are at least two buckets
THEN use the two leftmost buckets and put the smaller one into the
larger one.
In this rule, the term bucket might refer to a single bucket or a stack of
buckets. As shown in the upper part of Figure 11.1, these two rules already
describe the performance of this task for a great deal of all possible situations.
However, the middle part shows a situation where another rule is needed.
An impasse occurs here because a larger bucket is placed on a smaller one.
The following rule may help to overcome this impasse:
3.
IF a smaller bucket blocks a larger bucket
THEN put the larger bucket on the leftmost side and the smallest bucket
on the left.
As shown in the bottom part of Figure 11.1, this additional rule helps over-
come the impasse. The three identified IF-THEN rules are actually sufficient
to stack up buckets for all situations one might think of. Moreover, the rules
are independent of one another, meaning that their order is unimportant.
This makes it possible to add or delete IF-THEN rules without upsetting the
behavior of the whole set of rules. For example, stacking up buckets can be
made more efficient by adding the following rule:
4.
IF the largest and the smallest bucket are both on the leftmost side
THEN put the smallest bucket to the rightmost side.
If you try out the rule set with this new rule, it becomes clear that fewer
cycles are necessary to stack up the buckets in many situations. Moreover,
this has been reached by simply adding one rule to the set without having to
bother about the position of this new rule in the whole set.
Figure 11.1 Working of a set of IF-THEN rules describing the task of stacking
buckets.
Performing a specific task may be a function of not only the identified rules but also how so-called higher-order rules handle those rules. For instance, it may be true that more than one rule has an IF-side that matches the given state (actually, this is also the case for rules 2, 3, and 4!). In this situation, a conflict must be solved by selecting precisely one rule from among the candidates. This process is called conflict resolution. Common approaches to conflict resolution are to prioritize more specific rules over more general rules (e.g., prioritize rule 3 over rule 2), to prioritize rules that match more recent states over rules that match older states (e.g., prioritize rules 3 and 4 over rule 2), and to prioritize a rule that has not been selected in the previous cycle above a rule that has been selected before.
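A runnable sketch of this rule set with specificity-based conflict resolution is given below. The state representation (tuples of bucket sizes, outermost first) and the exact impasse-handling rule are simplifications under stated assumptions, not the book’s literal formulation:

```python
# Sketch of a production-system cycle for the bucket-stacking rules.
# State: a left-to-right list of nested stacks; each stack is a tuple of
# distinct bucket sizes from outermost to innermost.

def rule_1(state):
    """IF there is only one visible bucket THEN the task is finished."""
    if len(state) == 1:
        return "finished"

def rule_2(state):
    """IF there are at least two buckets THEN put the smaller of the two
    leftmost buckets (or stacks) into the larger one."""
    if len(state) >= 2:
        a, b = state[0], state[1]
        small, large = (a, b) if a[0] < b[0] else (b, a)
        if small[0] < large[-1]:            # it actually fits inside
            return [large + small] + state[2:]

def rule_3(state):
    """IF a smaller bucket blocks a larger bucket THEN take the blocking
    bucket out and set it aside on the rightmost side (one plausible
    reading of rules 3 and 4 combined)."""
    if len(state) >= 2 and rule_2(state) is None:
        blocked = state[0] if len(state[0]) > 1 else state[1]
        rest = [s for s in state if s is not blocked]
        return [blocked[:-1]] + rest + [blocked[-1:]]

RULES = [(3, rule_3), (2, rule_2), (1, rule_1)]   # most specific first

def stack_buckets(state):
    while True:
        # Conflict resolution: fire the most specific matching rule.
        for _, rule in RULES:
            result = rule(state)
            if result == "finished":
                return state
            if result is not None:
                state = result
                break

print(stack_buckets([(3,), (1,), (2,)]))   # -> [(3, 2, 1)]
```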
Information-Processing Analysis
An information-processing analysis focuses on the overt and/or covert decisions and actions made by the task performer and yields a procedure typically represented as a flowchart. A typical flowchart uses the following symbols:
In this flowchart, some actions are covert (e.g., add, decrease), and some are overt (e.g., write result). To make it easier to refer to the different elements in the flowchart, the actions are indicated with the numbers 1 through 6 and the decisions, with the letters A and B.
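Although the flowchart itself is not reproduced in this text, a rough sketch of the kind of procedure such a flowchart captures (plausibly column-wise subtraction with borrowing, given the covert actions ‘add’ and ‘decrease’ and the overt action ‘write result’) might look like this:

```python
# Hypothetical rendering of a flowchart-style procedure: column-wise
# subtraction with borrowing. Assumes top >= bottom and equal digit counts.

def subtract_with_borrowing(top, bottom):
    top, bottom = top[::-1], bottom[::-1]   # work from the rightmost column
    result = []
    for i in range(len(top)):
        if top[i] < bottom[i]:        # decision: is borrowing needed?
            top[i] += 10              # covert action: add 10 to this column
            top[i + 1] -= 1           # covert action: decrease next left column by 1
        result.append(top[i] - bottom[i])   # overt action: write column result
    return result[::-1]

print(subtract_with_borrowing([4, 2], [1, 7]))   # 42 - 17 -> [2, 5]
```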
Before ending the analysis process, the analyst should validate and verify the flowchart to ensure that it includes all actions and decisions (i.e., mental/covert and physical/overt) and all possible branches from decisions. Professional task performers and instructors with experience teaching the task can
Table 11.1 Examples of typical errors, which might evolve into malrules if not
properly corrected.
Malrule: If you want to switch the values of variables A and B, then state A = B and B = A
Correct rule: If you want to switch the values of variables A and B, then state C = A, A = B, and B = C

Malrule: If you want to switch off the computer, then press the power button
Correct rule: If you want to shut down the computer, then click <Windows icon>, then click <Power> and then click <Shut Down>

Malrule: If you want to expand the expression (x + y)², then just square each term (i.e., x² + y²)
Correct rule: If you want to expand the expression (x + y)², then multiply x + y by x + y (i.e., x² + 2xy + y²)
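Running the first malrule in the table makes the error concrete:

```python
# The first malrule from Table 11.1, executed: sequential assignment loses
# the original value of A, so the naive swap fails.

a, b = 1, 2
a = b           # malrule: a becomes 2 ...
b = a           # ... and b stays 2; the original value of a is gone
print(a, b)     # -> 2 2 (not swapped)

a, b = 1, 2
c = a           # correct rule: save a in a temporary variable first
a = b
b = c
print(a, b)     # -> 2 1 (swapped)
```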
Typical errors often reflect the intention to apply the correct rule but still have things go wrong. It is especially important to identify errors related to rules or procedural steps that are difficult to apply, dangerous to perform, or easily omitted by the learners in the target group. For instance, for the rule “If you want to select a word, then place the cursor on it and double-click the mouse,” it is important to stress that the two clicks quickly follow each other because especially young children and elderly learners have difficulties with rapid double-clicking. For the rule “If you need to release the fishing net, cut the cord by moving the knife away from you,” it is important to stress the movement of the knife because of the risk of injuries. For the step “Decrease the next left column by 1,” which is part of the procedure for subtracting two-digit numbers with borrowing, it is important to stress this step because novice learners who borrowed ten often omit it.
be discussed in the next chapter. Together, the analysis results for cognitive rules and prerequisite knowledge provide the main input for designing procedural information, particularly JIT information displays (Chapter 10). Designers should include part-task practice for one or more recurrent constituent skills if and only if those skills require a very high level of automaticity (i.e., they are classified as to-be-automated recurrent constituent skills; see Table 5.2) through a learning process called strengthening (see Box 13.1). Part-task practice increases the fluency of final whole-task performance and helps learners pay more attention to the problem-solving, reasoning, and decision-making aspects of learning tasks because it frees up processing resources, as fully automated processes no longer require conscious processing (for example, see Hopkins & O’Donovan, 2021). Facilitating strengthening requires providing many practice items that require learners to repeatedly apply the identified rules or perform the procedures. For long and/or multibranched algorithms, the identified rule-sets or procedures play a crucial role in sequencing practice items, progressing from
Identifying typical errors and malrules can impact decision making in the
discussed design activities. Concerning the design of procedural informa-
tion, the presence of typical errors or malrules may necessitate particular
instructional methods (see Section 10.4). First, learners should focus on
rules or procedural steps susceptible to errors or mistakes. This may involve
providing unsolicited JIT information, a slow fading of presented informa-
tion, and using many divergent demonstrations for rules and steps that are
error-prone. Second, the JIT information should include error recovery
information to help learners undo or repair the undesired or unintended
consequences of errors once they have occurred. Finally, especially when
malrules are in play, learners should be stimulated to critically compare and
contrast the malrules they use with (demonstrations of) correct rules and
procedural steps.
Glossary Terms
Step 9
Analyze Prerequisite Knowledge
12.1 Necessity
Analyzing prerequisite knowledge provides input for designing procedural
information. Only carry out this step if the procedural information is not
yet specified in existing materials and if you have already analyzed cognitive
rules in Step 8.
DOI: 10.4324/9781003322481-12
involved. These concepts may be defined at the lowest level by stating the facts or propositions that apply to them. Such propositions are typically seen as the smallest building blocks of cognition and, thus, further analysis is impossible.
Concepts allow the description and classification of objects, events, and processes (Tennyson & Cocchiarella, 1986). They enable us to give the same name to different instances that share common characteristics (e.g., poodles, terriers, and Chihuahuas are all dogs; dogs, cats, and humans are all mammals). Concepts are important for all kinds of tasks because they allow
task performers to talk about a domain using appropriate terminology and
classify the elements within this domain. For the analysis of prerequisite
knowledge, the relevant question is: Are there any concepts the learners
have not yet mastered but need to understand to correctly apply a specifc
rule or carry out a particular procedural step? For example, in photography,
a procedural step for repairing cameras might be “Remove the lens from the
camera”. Presenting the concept ‘lens’ might be a prerequisite for correctly
applying this step. Whether this really is the case depends on the learners’
prior knowledge. It is only prerequisite for the program if the least proficient learner does not yet know what a lens is. Another example in database management is the rule: “If you want to delete a field permanently, then choose Clear Field from the Edit Menu”. Here, the concept of ‘field’ might be unfamiliar to the learners and thus could be a prerequisite to correctly using the rule.
Plans relate concepts to each other in space to form templates or build-
ing blocks or in time to form scripts. Plans are often important prerequisite
knowledge for tasks involving the understanding, designing, and repairing
of artifacts such as texts, electronic circuits, machinery, and so forth. For
analyzing prerequisite knowledge, the relevant question is: Are there any
plans the learners have not yet mastered but need to understand to correctly
apply a specific rule or to correctly carry out a particular procedural step? For example, a rule in the domain of statistics might state, “If you present descriptive statistics for normally distributed data sets, then report means and standard deviations”. A simple template prerequisite to the correct application of this rule may describe how scientific texts typically present means and standard deviations; namely, as “M = x.xx; SD = y.yy,” where x.xx is the computed mean and y.yy is the computed standard deviation, with M and SD capitalized and italic. As another example, a rule in text processing might be “If you want to change a text from a Roman typeface to italic, then open the Context Menu, click Style, and click italic” (see Figure 12.2). A simple script that is prerequisite to correctly applying this rule may describe that, for formatting a text, first, the text needs to be selected, and only then can the formatting option be keyed in or selected from the toolbar. This script
might also contain one or more new concepts that the learners do not yet
know. For instance, the concept of ‘Context Menu’ might be new to them
and, thus, require further analysis.
Italic
Makes the selected text italic. If the cursor is in a word, the entire word is made italic. If
the selection or word is already italic, the formatting is removed.
If the cursor is not inside a word, and no text is selected, then the font style is applied to
the text that you type.
To access this command...
Open context menu - choose Style - Italic
Figure 12.2 Example of a JIT information display presenting the procedure for
making text italic.
Source: Taken from OpenOffice.org Writer.
The analyst may deconstruct plans and principles into their constituting
concepts. Concepts, in turn, may be further analyzed into facts and/or
physical models that apply to instances of the concept. One common way
to specify a concept is to list all facts that apply to its instances in a feature
list. The features or facts that apply to instances of the concept take the form
of propositions. A proposition consists of a predicate or relationship and at
least one argument. Examples of propositions or facts that characterize the
concept ‘column’ are, for instance:
should not contain irrelevant details for carrying out the identified procedural steps and the IF-THEN rules. Thus, there should be a one-to-one
mapping between the analysis of rules and procedures and the analysis of
their related physical models.
In conclusion, analyzing prerequisite knowledge results in feature lists and definitions characterizing concepts, plans, and principles made up of those concepts and, if appropriate, physical models that may help to classify things as belonging to a particular concept. Thus, the concept ‘resistor’ specifies the main attributes of a resistor in some kind of feature list (it impedes the flow of electric current, it looks like a rod with colored stripes, etc.), thereby enabling a task performer to classify something as either being a resistor or not. In addition, a physical model (see Figure 12.4) might help a task performer recognize something as being a resistor or not. This example may also illustrate the difference between analyzing prerequisite knowledge and analyzing mental models into domain models. In analyzing mental models, a conceptual model of ‘resistors’ (cf. top of Figure 12.1) would not only focus on the resistor itself but would also include comparisons with other components (e.g., transistors, capacitors, etc.); their function in relation to voltage, current, and resistance (including Ohm’s law); a description of different kinds (e.g., thermistors, metal oxide varistors, rheostats) and parts of resistors; and so on. The physical model would not only depict one or more isolated resistors but would be replaced by a functional model illustrating the working of resistors in larger circuits.
lowest-ability learners in the target group before the instruction. The previ-
ous examples already introduced this hierarchical approach:
The analyst should not stop this iterative, hierarchical process too early. Learners’ prior knowledge is often overestimated by professionals in a task domain and, to a lesser degree, by teachers of that domain who to a certain extent are also experts. This is because experienced task performers are very familiar with their domain, which makes it difficult to put themselves in the position of novices (i.e., the curse of knowledge). Therefore, the analysis process should typically go one or two levels beyond the entry level indicated by proficient task performers or teachers.
For instance, whereas people in the United States and Great Britain speak approximately the same language, some words have different meanings. In the United States, the concept ‘tip’ refers to the amount of money that one adds to a restaurant bill for good service, whereas, in Great Britain, it refers to a garbage dump. In other words, the concept of ‘refuse tip’ can have two completely different meanings. For a designer, the concept of ‘elegant’ refers to how attractive something is. In contrast, for a computer programmer, ‘elegant’ refers to how parsimonious the program is (i.e., how few lines are needed for the program). In other words, the concept of ‘elegant application’ can have two completely different meanings. One should be aware of such differences in teaching to prevent misconceptions that hinder learning.
Concerning plans, an example of a common naïve plan in scientific writing concerns the misuse of ‘et al.’ for all citations in the body text with more than two authors. For three, four, or five authors, the correct plan (according to the Publication Manual of the American Psychological Association, 7th edition) is to cite only the first author followed by ‘et al.’ in every citation, even the first, unless doing so would create ambiguity. For example, sometimes multiple works with three or more authors and the same publication year shorten to the same in-text citation. To avoid ambiguity, when the in-text citations of this kind shorten to the same form, write out as many names as needed to distinguish the references, and abbreviate the rest of the names to ‘et al.’ in every citation.
Concerning principles, an example of a common misunderstanding (also
referred to as a misconception) is that heavy objects fall faster than light
objects. This misunderstanding might easily interfere with correctly applying
the procedures for computing forces, masses, and acceleration. Actually, in the
absence of air resistance (i.e., in a vacuum), all objects fall at the same rate (i.e.,
they accelerate equally), regardless of their mass. In other words, a feather and
a bowling ball fall at the same rate in a vacuum. The analysis of misconceptions
best starts with describing all prerequisite knowledge for one particular recur-
rent task aspect. Then, the question for each concept, plan, or principle is: Are
there any misconceptions or misunderstandings for my target group that may
interfere with acquiring this concept, plan, or principle? Experienced teachers
are often the best sources of information to answer this question.
Glossary Terms
Step 10
Design Part-Task Practice
13.1 Necessity
Part-task practice is the last of the four design components. The other three
design components are always necessary, but part-task practice is not. You
should only carry out this step if the additional practice of recurrent task
aspects is strictly necessary to reach a high level of automaticity of these
aspects.
DOI: 10.4324/9781003322481-13
Table 13.1 Examples of conventional, edit, and recognize practice items for
adding two-digit numbers.
Edit and recognize practice items provide learners with task support (refer back to Figure 4.4) because they give them (a part of) the solution. If part-task practice cannot immediately start with conventional items because they are too difficult for the learner, the best option is to start with items that provide high support and work as quickly as possible toward items without support. A well-known fading strategy is the recognize-edit-produce sequence (Gropper, 1983), which starts with items that require learners to recognize which steps or IF-THEN rules to apply, continues with items where learners have to edit incorrect steps or incorrect IF-THEN rules, and ends with conventional items for which learners have to apply the steps or rules on their own to produce the solution.
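As a minimal sketch (hypothetical items in the spirit of Table 13.1, which is not reproduced here), the three item types for adding two-digit numbers might look like this:

```python
# Hypothetical recognize-edit-produce items for adding two-digit numbers
# (Gropper, 1983); support fades from recognizing a worked solution,
# to editing an incorrect one, to producing the solution unaided.

items = [
    {"type": "recognize",
     "prompt": "23 + 48. Which solution is correct: (a) 61, (b) 71?",
     "answer": "b"},
    {"type": "edit",
     "prompt": "23 + 48 = 61. The carry from 3 + 8 was forgotten; correct it.",
     "answer": "71"},
    {"type": "produce",
     "prompt": "23 + 48 = ?",
     "answer": "71"},
]

for item in items:            # fading order: recognize, then edit, then produce
    print(item["type"], "->", item["prompt"])
```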
Problem-solving guidance is irrelevant for practice items because carrying out the procedure correctly always yields the right solution. This process is algorithmic rather than heuristic, so the learner does not need to try different mental operations to find an acceptable solution. This makes providing modeling examples, process worksheets, or other heuristic aids superfluous. For part-task practice, the procedural information should specify a straightforward way to perform the procedure or apply the rules (see Section 13.4 later in this chapter).
Performance constraints, however, may be useful to support the learner
in carrying out long procedures or applying large sets of rules because such
constraints impede or prevent ineffective behaviors. Performance constraints
for part-task practice often take the form of what is known as training wheels
interfaces (Carroll & Carrithers, 1984), a term indicating a resemblance to
using training wheels on a child’s bicycle. At the beginning of learning to
ride a bicycle, the wheels are on the same plane as the rear wheel, which
makes the bicycle very stable and prevents it from falling over. As the child’s
sense of balance increases, the wheels are moved above the plane so that the
bicycle still cannot fall over, but if the child is in balance, the wheels will not
touch the ground. The child is riding on two wheels (the front and back
wheels), except when negotiating curves, where the child has to slow down,
tends to lose balance, and is prone to falling. Ultimately, the child can ride
the bicycle, and the training wheels are removed, often replaced by a parent
running behind the child to ensure they do not fall.
Thus, the basic idea behind using training wheels interfaces is to ensure that actions related to ineffective procedural steps or rules are unreachable for learners. For instance, a training wheels interface in a word-processing course would first present only the minimum number of toolbars and menu options necessary for creating a document (i.e., the basics such as create, save, delete, et cetera). All other options and toolbars would be unavailable to the learner. New options and toolbars (e.g., for advanced formatting, drawing, and making tables) only become available to the learners after they have mastered using the previous, basic ones. Another example is a touch-typing course, where it is common to cover the keyboard keys so the learner cannot see the key symbols. The ineffective or even detrimental behavior of ‘looking at the keys’ to find and use the correct key is blocked. The key covers are removed later in the training program. A final example is the teaching of motor skills, where an instructor may hold or steer the learner in such a way (see Figure 13.1) that it forces the learner to make a particular body movement. Again, holding the learner prevents them from making ineffective, undesired, or dangerous body movements.
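A minimal sketch of the word-processing example in code (hypothetical command names and unlock order):

```python
# Hypothetical training-wheels interface: commands unlock in mastery order,
# so ineffective or premature actions are simply unreachable
# (cf. Carroll & Carrithers, 1984).

UNLOCK_ORDER = ["create", "save", "delete",          # basics first
                "format", "draw", "insert_table"]    # advanced later

def available_commands(mastered_count):
    """Only commands up to the learner's mastery level are reachable."""
    return UNLOCK_ORDER[:max(3, mastered_count)]     # basics always available

def invoke(command, mastered_count):
    if command not in available_commands(mastered_count):
        return f"'{command}' is not available yet"   # the training wheel
    return f"running '{command}'"

print(invoke("insert_table", mastered_count=3))      # blocked
print(invoke("insert_table", mastered_count=6))      # unlocked after mastery
```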
learning tasks must be representative of all situations that may occur in the real world, including unfamiliar situations without known approaches. In other words, divergence helps learners use rules or procedures; variability helps learners find rules or procedures.
Table 13.2 Techniques for sequencing practice items for procedures with many
steps/decisions or large sets of rules
Accumulating Strength
It is usually assumed that each cognitive rule has a strength associated with it that determines the chance it applies under the specified conditions and how rapidly it then applies. While rule formation leads to
Further Reading
Crossman, E. R. F. W. (1959). A theory of the acquisition of speed-
skill. Ergonomics, 2, 153–166.
https://doi.org/10.1080/00140135908930419
Palmeri, T. J. (1999). Theories of automaticity and the power law
of practice. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 25, 543–551.
https://doi.org/10.1037/0278-7393.25.2.543
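The power law of practice referenced in these readings is commonly stated as

$$T_N = a + b \cdot N^{-c}$$

where $T_N$ is the time needed to perform the task on the $N$th practice trial, $a$ is the asymptotic (fastest attainable) time, $b$ scales performance time on the first trial, and $c > 0$ is the learning rate: gains are large early in practice and ever smaller thereafter.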
• Subgoaling forces learners to identify the goals and subgoals reached with
particular procedural steps or rules.
• Attention focusing ensures that learners pay more attention to the difficult aspects than the easy ones.
• Multiple representations help learners process the given information in
more than one way.
• Matching allows learners to critically compare and contrast correct and
incorrect task performance. In this case, it is critical to clearly point out
which of the two demonstrations is incorrect and why.
Contingent Tutoring
Table 13.3 Four techniques for dealing with difficult aspects of a recurrent
constituent skill.
Figure 13.2 Step-by-step instructions for how to tie a bow tie; ideally, each
step is provided or highlighted at the moment the learner has to
carry out that step.
13.5 Overlearning
The instructional methods discussed in the previous sections are sufficient to teach a recurrent skill to a level where the learner can accurately carry it out. However, the goal of part-task practice typically extends beyond mere accurate performance, focusing instead on reaching a very high level of automaticity and speed. Accurate performance is only the first step. To attain full automaticity, overlearning is essential. This involves extensive amounts of divergent, conventional practice items representing all situations where the performer can apply the procedure or set of rules. The slow learning process underlying overlearning is strengthening (refer back to Box 13.1). Three instructional strategies that explicitly aim at strengthening through overlearning are changing performance criteria, compressing simulated time, and distributing practice.
systems, the time required for practicing the skill under normal conditions
becomes enormous. Compressing simulated time by a factor of 10 to 100
can drastically reduce the necessary training time and, at the same time, facil-
itate automation due to increased speed stress and latency time of feedback.
In an old study, Schneider (1985) provides an example of air traffic control. Making judgments about where an aircraft should turn and seeing the
results of this decision normally takes about 5 minutes, but the simulated
time for this maneuver was compressed by a factor of 100 to enable com-
pleting a practice item in a few seconds. Consequently, practicing more
items in 1 day than in months of normal training becomes possible, while
the associated speed stress promotes overlearning.
they want to. For a teacher, independent part-task practice is relatively easy
to implement because it:
Figure 13.3 Pocket guide ( JIT information display) for setting a camera’s
aperture, shutter speed, and ISO.
Figure 13.4 Intermixing practice on learning tasks and part-task practice for
two selected recurrent task aspects.
To conclude this chapter, Table 13.4 presents one of the task classes from a training blueprint for the complex skill ‘producing video content’ (you may refer to Table 6.2 for a description of the other task classes). A specification of part-task practice has been added to this task class, indicating that part-task practice of sketching scenes for a storyboard starts parallel to a learning task for which it is relevant. Part-task practice continues until learners reach the standards for acceptable performance. See Appendix 2 for the complete training blueprint.
Table 13.4 Preliminary training blueprint for the complex skill ‘producing video
content.’ For one task class, a specification of part-task practice has
been added to the blueprint.
Task Class 2: Learners produce videos for fictional clients under the
following conditions:
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to
work with)
Supportive Information (inductive strategy): Case study
Learners study three worked-out examples (i.e., case studies) of promotional
videos for a backpack with integrated solar panels, a virtual fitness platform,
and an urban art festival. In groups, a tutor guides them in comparing and
evaluating each example’s goals, scripts, camera use, lighting, etc.
Supportive Information: Presentation of cognitive strategies
• SAP for developing a story for promotional videos
• SAPs for interacting with people and collaborating with the crew
• SAPs for shooting video (detailed strategies for creating compositions and
capturing audio)
Supportive Information: Inquiry for mental models: learners are asked to
identify examples of:
• Different types of cameras, microphones, and lights (conceptual models)
• Story arcs (structural models)
Part-task practice
• Sketching a storyboard
Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of artificial intelligence. A tutor helps them work backward to explain critical decisions in the production phase and develop a storyboard that fits the video and meets the client’s requirements.

Procedural Information
Unsolicited
• How-to instructions for sketching a storyboard
Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a short social media advertisement video for a small online clothing store. Learners remake the ad for a small online art store.

Procedural Information
Solicited
• How-to instructions for lighting and selecting lenses and microphones
• Platform with how-to videos for using postproduction software
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.3.
Learning Task 2.4
Support: Conventional task
Guidance: Tutoring
Under guidance from a tutor, learners create a promotional video highlighting the products or services of a local store.

Procedural Information
Solicited
• Platform with how-to videos for using postproduction software
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.4.
Glossary Terms
Domain-General Skills
DOI: 10.4324/9781003322481-14
The Ten Steps focuses, in the first place, on training domain-specific complex skills or professional competencies. Yet, training programs based on the Ten Steps also provide good opportunities for training domain-general skills; that is, skills not bound to one particular domain. We encountered one example in Step 3—the sequencing of learning tasks. In on-demand education, self-directed learners can select learning tasks that best fit their needs, yielding individualized learning trajectories based on the results of frequent self-assessments. Thus, task selection is a domain-general self-directed learning skill that can be well trained in programs based on the Ten Steps, provided that the learner is given a certain amount of freedom to select learning tasks and is also capable of effectively using that freedom. This chapter discusses the training of domain-general skills in programs based on the Ten Steps, including self-regulated learning skills, self-directed learning skills (in addition to task-selection skills, also information literacy skills and deliberate practice skills), and other domain-general skills (also mistakenly called 21st century skills).
Although domain-general skills are not ‘bound’ to one particular domain,
they must always be learned and trained in one or more domains. When learn-
ers select learning tasks, these tasks concern learning in a particular domain.
When learners regulate their learning, they regulate their acquisition of
knowledge, skills, and attitudes in a particular domain. When learners search
for necessary learning resources, these resources contain information about
a particular domain. Thus, domain-general skills must always be learned and
trained in a learning program directed at developing domain-specific com-
plex skills, but the design should also allow for the acquisition and practic-
ing of the domain-general skills. For example, learners can only practice task
selection skills in a program in which they can choose their learning tasks.
Likewise, they can only practice information literacy skills in a program in
which they can search for their learning resources. They can only practice
collaboration skills in a program in which they can collaborate with others,
and so forth. This is a strict requirement because attempts to teach domain-
general skills outside of domains consistently fail (Tricot & Sweller, 2014).
The structure of this chapter is as follows. Section 1 describes the mecha-
nisms underlying self-regulated and self-directed learning and the implica-
tions for their teaching. The following sections discuss the training of two
important self-directed learning skills: information literacy skills in Section 2
and deliberate practice skills in Section 3. The focus is on intertwining the
training of domain-specific skills with the training of self-directed learning
skills. Section 4 discusses the popular but mistaken term 21st-century skills.
They include learning and literacy skills, which are the focus of this chapter,
but also thinking and social skills. In the Ten Steps, intertwining the train-
ing of domain-specific skills and the training of domain-general skills always
follows the same approach. The chapter ends with a summary.
When students monitor their learning and estimate how well they learned
something (often called judgments of learning or JOLs), their metacognitive thoughts are typically based on cues that more or less predict their future performance (Koriat, 1997). Unfortunately, learners are not good at this, either overestimating their knowledge and skills or, especially for low-performing students, having an overly optimistic view of what they know—a phenomenon known as ‘unskilled and unaware,’ or the Dunning-Kruger effect (Kruger & Dunning, 1999). They often base their JOLs on invalid
cues. One striking example of an invalid cue that learners often use, one not predictive of future performance, is the ease of recall of information immediately after study. The information is then easily recallable because it
is still active in working memory, but this does not mean it is readily retriev-
able from long-term memory. Thus, a much better cue is whether the infor-
mation is easily recallable a few hours after study (Van Loon et al., 2013).
Unfortunately, there is an overall tendency for learners to use invalid and/or superficial cues, which may also explain their overconfidence when predicting future performance. When learners use invalid cues and are overconfident, this has negative consequences for their control decisions; for example, they use surface rather than deep study strategies, they terminate practice or study too soon, or they skip particular elements during practice or study—all of which have negative effects on learning outcomes (Bjork et al., 2013).
Accurate monitoring must, thus, use valid cues. In the Ten Steps, what
valid cues are will differ for learning tasks, supportive information, proce-
dural information, and part-task practice. When learners work on learning
tasks and are involved in schema construction through inductive learning,
they should monitor whether their learning activities help construct sche-
mata in long-term memory that allow for transfer of learning. Valid cues
are whether they can carry out alternative approaches to the task or explain
how their approach differs from others' approaches. Unfortunately, learn-
ers often use invalid cues. For example, they may solely monitor the accu-
racy and fluency of their current performance. Yet, being able to perform a
task smoothly does not predict future performance on transfer tasks (cf. the
‘transfer paradox,’ described in Section 2.4). Instruction that helps learn-
ers use more valid cues may take the form of metacognitive prompts that
explicitly help them focus on more valid cues (i.e., improve monitoring)
and undertake learning activities that promote schema construction (i.e.,
improve control; see the first row of Table 14.1 for examples).
Similarly, giving learners metacognitive prompts may help them use bet-
ter cues for monitoring and controlling their learning of supportive infor-
mation, procedural information, and part-task routines. For supportive
information, the ease of immediate recall or the ease of studying the infor-
mation are not valid cues for the desired construction of schemata through
elaboration (they yield an ‘illusion of understanding’; Paik & Schraw, 2013).
Instead, learners should ask themselves whether they can generate keywords,
summaries, or diagrams of the studied information or answer test ques-
tions about it (see second row in Table 14.1 for examples). For procedural
information, the ability to carry out the current task (i.e., learning task or
part-task practice item) with procedural information and corrective feedback
at hand is not a valid cue for the desired automation of schemata through
rule formation. Instead, learners should ask themselves whether they can
carry out the same task without consulting the procedural information or
without receiving immediate feedback on errors (see third row of Table 14.1
for examples). Finally, for part-task practice, the ability to carry out the task
accurately and without errors is not a valid cue for the desired automation
of schemata through strengthening. Instead, learners should ask themselves
whether they can carry out the task faster and/or together with other tasks
(see bottom row of Table 14.1 for examples).
Learning is always self-regulated: It is impossible for learners to work on
learning tasks without monitoring their approach and adapting it accord-
ingly, studying supportive information without monitoring comprehension
and adapting reading or viewing strategies accordingly, etc. The Ten Steps
fully acknowledges the importance of (teaching) SRL skills on the task and
content level, but a full discussion falls beyond the scope of this book (see
De Bruin & van Merriënboer, 2017). Instead, we focus on SDL skills on the
instructional-sequence level because the Ten Steps provides unique oppor-
tunities for teaching domain-general skills at this level.
One important SDL skill, the selection of new learning tasks, was discussed
in Step 3 because it sequences learning tasks into individualized learning
trajectories. Moreover, we described how we can support learners in devel-
oping their task-selection skills (Section 6.4). As shown in Table 14.2, teach-
ing task-selection skills takes place in the context of on-demand education,
where learners can select their learning tasks from a set of available tasks (cf.
Figure 6.6). There needs to be a form of shared control where the teacher or
other intelligent agent provides support and/or guidance to the learner for
assessing progress, identifying learning needs, and selecting learning tasks
that can fulfll these needs. Support and guidance decrease in a process of
second-order scaffolding, meaning that there is a gradual transition from a
situation where a teacher/system decides on which learning tasks the learner
should work on to a situation where the learner decides on the next task or
tasks to work on. Thus, the learner gains increasing control over the task-
selection as their task-selection skills develop. In Chapter 6, an electronic
development portfolio was described as a useful tool to help learners develop
both domain-specifc skills and domain-general task-selection skills because
it keeps track of all performed tasks, gathers assessments of those tasks, and
provides overviews that indicate points of improvement or learning needs.
In coaching meetings, learners and teachers can then use the information
from the portfolio to reflect on progress and points of improvement and
plan future learning tasks (see Van Meeuwen et al., 2018).
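To make this concrete, the sketch below shows one way such a development portfolio could be represented in software. It is a minimal illustration under stated assumptions, not a description of an existing system; all class, field, and task names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class TaskAssessment:
    """One performance assessment of a completed learning task."""
    task_id: str
    # Scores per standard on a 1-5 scale, e.g. {"composition": 4, "audio": 2}.
    scores: dict[str, int] = field(default_factory=dict)

@dataclass
class DevelopmentPortfolio:
    """Keeps track of performed tasks and derives points of improvement."""
    assessments: list[TaskAssessment] = field(default_factory=list)

    def add(self, assessment: TaskAssessment) -> None:
        self.assessments.append(assessment)

    def averages_per_standard(self) -> dict[str, float]:
        """Average score per standard across all performed tasks."""
        totals: dict[str, list[int]] = {}
        for a in self.assessments:
            for standard, score in a.scores.items():
                totals.setdefault(standard, []).append(score)
        return {s: sum(v) / len(v) for s, v in totals.items()}

    def points_of_improvement(self, threshold: float = 3.0) -> list[str]:
        """Standards whose average falls below the threshold (a learning need)."""
        return [s for s, avg in self.averages_per_standard().items()
                if avg < threshold]

# In a coaching meeting, the flagged standards feed the discussion of
# progress and the planning of future learning tasks.
portfolio = DevelopmentPortfolio()
portfolio.add(TaskAssessment("task-1.1", {"composition": 4, "audio": 2}))
portfolio.add(TaskAssessment("task-1.2", {"composition": 5, "audio": 3}))
print(portfolio.points_of_improvement())  # ['audio']
```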
Table 14.2 describes two other types of SDL skills highly relevant for
educational programs based on the Ten Steps. First, information literacy
skills enable a learner to search, scan, process, and organize supportive infor-
mation from various learning resources to fulfill information needs resulting
from the work on learning tasks. Learners can develop such information lit-
eracy skills in the context of resource-based learning (Section 7.4). Second,
deliberate practice skills enable a learner to identify recurrent aspects of a
skill that can be successively refined and automated through part-task prac-
tice to improve whole-task performance and to use procedural information
from various learning resources to support this part-task practice. Learn-
ers can develop such deliberate practice skills in the context of independ-
ent part-task practice (Section 13.6) and solicited information presentation
(Section 10.4). Information literacy skills and deliberate practice skills will
be further discussed in the next sections.
Information needs arise from learners' work on learning tasks; while working on a task, learners become aware
that they need to study domain models, case studies, SAPs, or modeling
examples to successfully complete the task. This is similar to the self-study
phase in problem-based learning (PBL; Loyens et al., 2011). There, small
groups of learners work on learning tasks called ‘problems.’ Their main aim
is to come up with a solution, which often takes the form of a general expla-
nation for the particular phenomena described in the problem. To do so,
they cycle through three phases (Wood, 2003). In the first orientation phase,
learners come together in a tutor-led small-group meeting to discuss the
problem. They clarify unknown concepts, define the problem, offer tentative
explanations, draw up an inventory of explanations, and formulate learning
issues. In the second self-study phase, learners attempt to reach the learning
objectives by finding relevant learning resources and collecting supportive
information in the ‘study landscape,’ which includes the library (e.g., books,
articles) and other learning resources (multimedia, Internet, human experts,
etc.). In the third evaluation phase, the learners come together again in a
second small-group meeting in which they report their findings, synthe-
size the collected information, and evaluate and test it against the original
problem. In this way, PBL not only helps learners acquire knowledge about
the learning domain but, if well structured, can also help them
develop information literacy or problem-solving skills related to systemati-
cally searching for relevant learning resources.
The teaching of information literacy skills, and, actually, the teaching of
all domain-general skills, should follow exactly the same design principles as
the Ten Steps prescribes for domain-specific skills or professional competen-
cies (Argelagós et al., 2022). Thus, for the design of information literacy
learning tasks, critical principles to take into account are variability of prac-
tice, providing problem-solving guidance, providing task support, and scaf-
folding support and guidance as learners’ information literacy skills develop
(cf. Step 1 in Chapter 4). The ‘primary’ training blueprint for the complex
skill or professional competency (i.e., domain-specific skill) taught should
enable the learners to practice both the domain-specifc and the domain-
general skills. For example, if learners must develop task-selection skills, the
primary training blueprint should allow them to practice selecting learning
tasks (i.e., use on-demand education). If learners must develop information
literacy skills, the primary training blueprint must permit them to practice
searching for relevant learning resources (i.e., use resource-based learn-
ing). The designer can then develop a ‘secondary’ training blueprint for the
domain-general skill in a recursive fashion, using the same design principles
they used for the primary training blueprint, and finally, intertwine the pri-
mary and secondary training blueprint so that the learners can simultane-
ously develop domain-specific and domain-general skills (Frèrejean et al.,
2019). This will be explained in the next subsections.
Variability
Task Support
For task support, Step 1 of the Ten Steps distinguished between the given
situation, the desired goal situation, and the solution, transforming the
given situation into the goal situation (refer back to Figure 4.4). For infor-
mation literacy learning tasks, the given situation refers to an information
problem that arises from working on one or more domain-specific learn-
ing tasks, the desired goal situation refers to the availability of learning
resources that contain the supportive information necessary to carry out
these domain-specific learning tasks, and the solution refers to an organized
set of relevant learning resources. In traditional instruction with planned
information provision, the teacher/system guarantees that the learning
resources are available for the learners when needed and that their quality
is high. However, when information literacy skills are taught in resource-
based learning, learners must learn to find relevant learning resources (or,
if basic resources are already provided, additional or alternative resources)
themselves.
Second-order scaffolding should then be used to gradually decrease
the given support. PBL can provide an example of second-order scaffold-
ing of given task support. During an educational program, support can be
decreased by:
• First, giving the learners a limited list of relevant resources (e.g., books,
articles, video lectures, websites, educational multimedia, etc.) that they
should consult to explain the phenomenon introduced in a particular
problem but asking them to expand the list in an analogous way.
Guidance
• In a first phase, the tutor gives the learners explicit advice on how to define an
information problem arising from the domain-specific learning task (in each
first group meeting), on how to search, scan, and process relevant learning
resources (in the self-study phase), and on how to organize the information
for others and for their later use (in each second group meeting).
• In a second phase, the tutor might no longer give explicit advice but,
at the end of each first group meeting, ask the learners how they plan
to search for relevant resources and provide them with cognitive feed-
back on their intended search strategies and, at the end of each second
group meeting, give learners feedback on processing and organizing the
information.
• In a third phase, the tutor may not guide at all or even be absent because
the group should, at that point, be ready and able to function as a self-
managed group.
information literacy skills, see Frèrejean et al., 2016; Wopereis et al., 2015).
Note that what is (unspecified) supportive information in the primary blue-
print is a learning task in the secondary blueprint! After all, the main goal
of the information literacy learning tasks in the secondary blueprint is to
identify the supportive information needed to carry out the domain-specific
learning tasks in the primary blueprint.
Intertwining training blueprints for domain-specific and domain-general
skills is largely uncharted territory in instructional design. For information
literacy skills, the basic principle is to replace supportive information in
the primary blueprint with learning tasks in the secondary blueprint, thus
requiring learners to search for precisely this supportive information. Then,
supportive information, procedural information, and part-task practice
can be added to this secondary blueprint. Table 14.3 includes supportive
information at three levels of complexity for the domain-specific skills. This
means there are three task classes for patent-examination learning tasks, but
the three information-literacy-learning tasks connected to this supportive
information are treated as being all on the same level of complexity, leading
to only one task class for information-literacy-learning tasks.
Does Table 14.3 reflect the optimal way of intertwining the two blue-
prints? We have no definitive answer to this question. It might be better to
distinguish task classes for both the patent-examination and information lit-
eracy tasks. This would have made the development of information literacy
skills ‘smoother’ (due to a more gradual increase in task complexity), but
the intertwining of the two blueprints would have been more complex. In
the combined blueprint in Table 14.3, learners must also search for the sup-
portive information needed to carry out the first patent-examination learn-
ing tasks (tasks 1.1–1.3 in the primary blueprint). Thus, patent-examination
and information literacy skills develop in parallel right from the start of the
training program. However, this approach might lead to a great deal of cog-
nitive load and thus impede the learning of the primary task, the secondary
task, or both. If this is the case, it might be preferable to start developing
information literacy skills only after learners master the patent-examination
skills on a basic level of complexity. In short, there are many open questions
about intertwining blueprints that can only be answered by future research.
In the Ten Steps, deliberate practice is primarily concerned with the automa-
tion of recurrent aspects of performance, which relates to the presentation
of procedural information and the provision of part-task practice. Further-
more, it aims to help learners monitor and control their performance, which
relates to second-order scaffolding so that learners gradually take over the
regulation of their learning. This will be further explained in the following
subsections.
Intertwining Blueprints
but also disreputable information sources requires skills that were not really
needed before. There are many frameworks describing different types of
these skills. A common distinction can be made between (a) learning skills,
(b) literacy skills, (c) thinking skills, and (d) social skills (see Figure 14.2).
This chapter focused on learning skills, which relate to self-regulation, select-
ing one’s learning tasks, and deliberate practice; and literacy skills, which
relate to skills such as searching for learning resources and using ICT and
new technologies. The Ten Steps sees information literacy skills as a type of
self-directed learning skill, not a distinct category.
The Ten Steps also provides excellent opportunities for developing learn-
ers’ thinking and social skills. The use of learning tasks based on real-life
tasks and the focus on transfer of learning naturally stresses the develop-
ment of problem-solving, reasoning, and decision-making skills. Tasks are
typically ill-structured or even wicked (Rittel & Webber, 1973), asking for
innovation and creativity. Moreover, learning tasks will often require team-
work and interprofessional work, providing good opportunities for practic-
ing communication, cooperation, and interpersonal and cross-cultural skills
(e.g., Claramita & Susilo, 2014; Susilo et al., 2013). Instructional methods
for having learners study supportive information will also often be collabo-
rative (e.g., group discussion, brainstorming, peer learning), giving even
more opportunities for developing social skills.
To summarize, the Ten Steps provides three general guidelines for devel-
oping domain-general skills: teach domain-general skills only in the context
of acquiring domain-specific skills; train them according to exactly the same
design principles as domain-specific skills; and intertwine the primary and
secondary training blueprints so that both types of skills develop simultaneously.
14.5 Summary
• Domain-general skills are not bound to one particular domain but can
only be taught in domains. They include self-regulated and self-directed
learning skills (i.e., task selection, information literacy, deliberate practice).
• Self-regulated learning (SRL) and self-directed learning (SDL) include
the metacognitive processes of monitoring and control. Monitoring
refers to learners’ thoughts about their learning (Am I able to carry out
this task? Do I understand this text?). Control or regulation refers to
what learners do to improve their performance (e.g., continue practicing)
or understanding (e.g., restudy a text).
• In the Ten Steps, SRL relates to the task or topic level, while SDL relates
to the instructional-sequence level.
• Metacognitive prompts may help learners use better cues to monitor and
control their learning. The prompts are different for each of the four
blueprint components.
• Training information literacy skills requires a form of second-order scaf-
folding, from planned information provision to resource-based learning,
so that learners become more and more responsible for searching and
using their learning resources.
• Training deliberate practice skills requires a form of second-order scaf-
folding, from unsolicited information presentation to solicited informa-
tion presentation and from dependent part-task practice to independent
part-task practice, so that learners become more and more responsible for
successively refining and automating recurrent aspects of their whole-task
performance.
• The design principles for training domain-general skills are identical to
those for training domain-specific skills. In the Ten Steps, a 'secondary'
training blueprint for the domain-general skill is developed that can then
be intertwined with the ‘primary’ training blueprint for the domain-
specific skill or professional competency.
Programs of Assessment
Whole tasks as intended in the Ten Steps are virtually absent in many educa-
tional programs. For example, in the traditional lecture-based curriculum in
higher education, courses strongly focus on transmitting supportive infor-
mation and mainly provide procedural information in the context of part-
task practice, which takes place in practicals or skills labs. In some settings,
the only whole task provided to students is their final project. Consequently,
learners are expected to integrate the entirety of their acquired knowledge
and skills into this final whole task at the end of the program. Unsurpris-
ingly, then, transfer of learning is often low. Except for the final project,
student assessments in such an educational program predominantly focus on
acquired knowledge and part-task performance.
This is completely opposite to assessment in an educational program
based on the Ten Steps. In the Ten Steps, the program’s backbone consists
of learning tasks, and performance assessments are gathered in a develop-
ment portfolio (see Step 2) to measure learners’ whole-task performance
at particular points in time as well as their gradual progression toward the
program's final attainment levels. The Ten Steps assumes that, when learners
demonstrate they can carry out tasks in a way that meets all of the standards,
they must also have mastered the underlying—supportive and procedural—
knowledge and routine skills. Thus, performance-based whole-task assess-
ment is the only type of assessment that is an integral and required part of
the Ten Steps. This is sufficient for most situations!
However, there might be reasons for assessing learners not only on the
level of whole-task performance but also on the levels of acquired knowledge
(i.e., remembering and understanding) and part-task performance. External
authorities may require that learners are not only assessed on reaching per-
formance objectives (i.e., describing the acceptable performance of tasks; Step
2) but also on reaching learning objectives (i.e., describing what learners must
learn to be able to perform those tasks). Especially when learners are not fre-
quently assessed on whole-task performance, being assessed on part-tasks and
acquired knowledge might stimulate them to invest time and effort in learning
(Reeves, 2006). In that case, a good match between these assessments and the
organization of the curriculum, also called constructive alignment, is necessary
to effectively promote learning (Carr & Harris, 2001). This chapter discusses
what a complete program of assessment might look like in a whole-task cur-
riculum based on the Ten Steps (Torre et al., 2020). The focus is on summa-
tive assessment; that is, assessment to make pass/fail and certification decisions.
The structure of this chapter is as follows. Section 1 describes Miller’s
pyramid as a framework to distinguish four assessment levels related to the
four blueprint components. Section 2 revisits the assessment of learning
tasks, now focusing on summative assessment. Section 3 discusses the assess-
ment of supportive information, distinguishing between assessing cognitive
• Learning tasks are located on the ‘shows-how’ and ‘does’ levels. Here,
the distinction between recurrent and nonrecurrent task aspects is no
longer relevant because learning tasks appeal, by definition, to both of
them.
• In addition, supported/guided learning tasks performed in a simulated
task environment are on the ‘shows how’ level, while unsupported/
unguided learning tasks performed in a real-life task environment (the
workplace or daily life) are on the ‘does’ level.
The Ten Steps, as described in this book, worked from the top to the bot-
tom of Miller's pyramid. It began in Step 1 with the identification of real-life
tasks as a basis for the design of learning tasks, followed, in Step 2, by the
formulation of performance objectives, including standards of acceptable
performance for both unsupported/unguided tasks (does) and supported/
guided tasks (shows how). Thus, assessment was limited to performance
assessment of whole tasks and was only formative, aiming to improve learn-
ing. In most educational programs, this is the only thing needed: When
learners can demonstrate they can carry out the learning tasks, including
On the does-level, learners will often carry out tasks in a professional setting
(e.g., internships, placements, clerkships), which are then professional tasks
Figure 15.3 shows how EPAs and levels of supervision can be placed in a
training blueprint that is developed according to the Ten Steps and imple-
mented in the workplace. The advantage of this approach is that a training
program has ‘milestones,’ making progress at different levels of complexity
visible: At a particular point in time (i.e., the vertical line in Figure 15.3),
the learner can be fully responsible for carrying out tasks at one particular
level of complexity while still being supervised when carrying out tasks at
a higher level of complexity and only observing others carrying out tasks
at even higher levels of complexity. Thus, the educational blueprint is used
flexibly: The learner can be working on tasks from more than one task class
at the same time but with different levels of workplace supervision.
Essays and open-ended questions are most commonly used for assessing
mental models. The open questions here do not ask the learner how to
approach a particular situation, as was the case for cognitive strategies, but
rather, to describe phenomena or ideas (conceptual models); how things
are organized, structured, or built (structural models); or how processes
or machines work or function (causal models). The focus is, thus, not on
assessing factual knowledge but on knowledge of how things are interre-
lated; the methods described in Tables 7.1 and 7.2 highlighted the kinds of
relationships the learner must be able to explain. We can distinguish short-
answer questions, long-answer questions, short-essay questions, and full
essays. An advantage of full essays and short-essay questions, as opposed
to short- and long-answer, open-ended questions, is that they can easily be
combined with information sources so that not only domain knowledge but
also domain-general academic skills can be assessed (e.g., writing, analyz-
ing, synthesizing, reflective thinking, etc.). Yet another method for assessing
mental models is asking the learner to draw a concept map of a knowledge
domain. This is a time-efficient alternative to writing essays or answering
short-essay questions, and the quality of concept maps may also be easier to
score (see, e.g., Turns et al., 2000).
When assessing mental models, using closed rather than open questions
may help simplify the scoring, but developing good closed questions is far
from easy. The most common format is the multiple-choice test, where learn-
ers must choose one correct answer from several possible answers (typically
three or four); another is the extended-matching-questions test, where learn-
ers must choose more correct answers from several possible answers (typically
10–25). The two main problems with developing closed questions for meas-
uring conceptual knowledge are formulating questions that truly measure
understanding and comprehension rather than factual knowledge and formu-
lating incorrect answer alternatives (distractors) that are still credible. This is also why
using true/false questions is not recommended for assessing mental models.
Progress Testing
When providing part-task practice in the Ten Steps, the standards for to-
be-automated, recurrent constituent skills will focus mainly on accuracy,
speed, and time-sharing capabilities. Thus, when summatively assess-
ing part-tasks, not only accuracy counts but also speed and ability to per-
form the skill together with other skills (i.e., time-sharing). This guideline
is often neglected in educational practice. For example, when children in
primary school learn the multiplication tables, assignments like 3 × 4 and
7 × 5 should primarily be assessed on speed rather than only on accuracy
because multiplication of numbers smaller than 10 is typically classified as
a to-be-automated recurrent constituent skill; assignments like 23 × 64 and
573 × 12, in contrast, should be assessed on accuracy because multiplica-
tion of numbers greater than 10 is typically classified as a normal, recurrent
constituent skill. Another example is when nurses- and doctors-in-training
learn cardiopulmonary resuscitation (CPR), which will typically be classi-
fied as a to-be-automated recurrent skill, they should primarily be assessed
on speed and time-sharing capabilities (assuming that they do it properly),
such as being able to keep an eye on the environment and give directions to
bystanders while doing the CPR.
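As a concrete illustration of this guideline, the following sketch checks a to-be-automated recurrent skill against accuracy, speed, and time-sharing standards. The thresholds, parameter names, and function are invented for the example, not prescribed by the Ten Steps.

```python
def meets_automation_standards(
    accuracy: float,           # proportion of correct responses, 0.0-1.0
    mean_time_s: float,        # mean response time in seconds
    dual_task_accuracy: float, # accuracy while also performing another task
) -> bool:
    """Illustrative standards for a to-be-automated recurrent skill:
    the learner must be accurate AND fast AND able to time-share."""
    ACCURACY_STANDARD = 0.95   # near-errorless performance
    SPEED_STANDARD_S = 3.0     # e.g., answer 3 x 4 within three seconds
    DUAL_TASK_STANDARD = 0.90  # accuracy must survive a secondary task
    return (accuracy >= ACCURACY_STANDARD
            and mean_time_s <= SPEED_STANDARD_S
            and dual_task_accuracy >= DUAL_TASK_STANDARD)

# A learner who is accurate but still slow has not yet automated the skill:
print(meets_automation_standards(accuracy=1.0, mean_time_s=8.0,
                                 dual_task_accuracy=0.95))  # False
```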
The separate assessment of part-tasks, which always appeal to
to-be-automated recurrent constituent skills in the Ten Steps, has some
added value over assessing only whole-task performance because it might indi-
cate when a learner has reached the standards and may stop practicing the
part-task. In other words, summative assessment of part-tasks might be used
as an entry requirement for whole-task practice. For example, the nurses-
and doctors-in-training learning CPR might not be allowed to work in the
emergency department of a hospital, where they are confronted with criti-
cal whole tasks before they have successfully reached the speed and time-
sharing standards of the part-task CPR training. Yet, this does not negate
the need for assessing to-be-automated recurrent aspects of whole-task per-
formance! Carrying out a part-task in isolation differs from carrying it out
in the context of a whole task because the latter requires coordination of
the part-task with the other aspects of the whole task. Thus, an assessment
of whole-task performance on the does-level, including its to-be-automated
recurrent aspects, will also be required.
Figure 15.4 The objective, structured clinical examination (OSCE) with a series
of test stations.
can communicate with the patient and can explain to the patient what they
are doing while they are doing it during the examination of the abdomen?
The simple answer is no. Doing both tasks at the same time requires coordi-
nation and increases complexity; thus, cognitive load. If this is what doctors
must be able to do (i.e., if this is the whole task), then the tasks should be
practiced together and assessed together. If the curriculum does not prop-
erly realize this, students might do well in communication skills courses
and assessments in school while still failing to properly communicate with
patients or clients when they have to combine it with other tasks during
internships.
Third, concerning variability, a troubling finding is that a learner's per-
formance on only one assessment task is a very poor predictor of their
performance on another similar assessment task. This is called the ‘con-
tent specificity' problem, the dominant source of unreliability of OSCEs
(Petrusa, 2002). The original motivation for implementing OSCEs—namely,
to increase assessment reliability by objectifying and standardizing (the O
and the S in the acronym) the assessment—thus did not pan out. To increase
the assessment reliability, one should increase variability, thus assessing a
range of tasks differing from each other on all dimensions on which real-life tasks
also differ. Think of examining the abdomen and, if applicable, simultane-
ously explaining what you are doing (stations 1 and 2) to the parents of a
newborn or baby, a toddler, an adolescent, an elderly person, a pregnant
woman, someone with edema, etc. Just as variability of practice in a set of
learning tasks is essential to develop competence, variability in a set of assess-
ment tasks is essential to measure this competence reliably. In more sim-
plistic terms, ‘one measure is no measure.’ We should always be extremely
careful with single-point assessments—not only because they are unreliable
but also because learners will quickly learn about the assessments and start to
memorize checklists, making the assessments trivial (Van Luijk et al., 1990).
The three problems connected with assessing nonrecurrent part-tasks (as
opposed to to-be-automated recurrent ones) are solved in a whole-task cur-
riculum in which summative assessments are coupled to the unsupported/
unguided learning tasks in a development portfolio, as described in Step 2.
Figure 15.5 Skill hierarchy for information literacy skills in resource-based learning, which can serve as the basis for developing an assessment instrument.
Source: Brand-Gruwel et al., 2005.
Due to the recursivity involved, this chapter must end where it began:
The Ten Steps assumes that, when learners demonstrate that they can carry
out domain-general tasks (task selection, information literacy, deliberate
practice, teamwork, etc.) in a way that meets all of the standards, they must
also have mastered the underlying (supportive and procedural) knowledge
and routine skills. Thus, for both domain-specific and domain-general skills,
performance-based, whole-task assessment is the only type of assessment
that is an integral and required part of the Ten Steps. There might, however,
for the domain-general skills, also be reasons for assessing learners not only
on their level of whole-task performance but also on the levels of acquired
knowledge (i.e., remembering and understanding) and part-task perfor-
mance. Then, you might develop a whole program of assessment for the
domain-general skills in the same way described for domain-specific skills in
this chapter.
15.6 Summary
• Miller’s pyramid can be used to distinguish between assessments on the
‘knows,’ ‘knows how,’ ‘shows how,’ and ‘does’ levels.
• For summative assessment of learning tasks, one should only use unsup-
ported/unguided tasks (the ‘empty’ circles in the schematic training
blueprint).
• For summative assessment of tasks in a professional workplace setting,
include narrative expert judgments and give the assessments a strong,
formative function.
• Entrustable professional activities (EPAs) are responsibilities learners are
allowed to perform without supervision after being summatively assessed
on unsupported/unguided tasks at a particular level of complexity (i.e.,
task class).
• A combination of case descriptions and open-ended what-to-do ques-
tions can be used to assess cognitive strategies; essays, open questions,
and assignments to draw up concept maps can be used to assess mental
models.
• Progress testing can be used for assessing supportive information.
It measures how a student’s multidisciplinary knowledge develops
throughout the educational program and, therefore, nicely fits the Ten
Steps.
• Summative assessment of part-tasks should only be used for to-be-
automated recurrent skills and focus not only on accuracy but also on
speed and time-sharing capabilities. Training and assessing other types of
part-tasks is discouraged by the Ten Steps.
Closing Remarks
This chapter concludes the Ten Steps to Complex Learning. The introduc-
tory Chapters 1–3 discussed the main aims of the model, the four blue-
print components, and the Ten Steps for developing educational blueprints
based on the four components. Chapters 4–13 discussed each of the steps
in detail. Chapters 14 and 15 discussed, in order, the teaching of domain-
general skills and programmatic assessment in educational programs based
on the Ten Steps. This final chapter briefly discusses the position of the Ten
Steps in the current field of instructional design and education and sketches
some directions for the model’s further development.
Task-Centered Learning
the credibility and validity of the claims made. Moreover, there is increas-
ing empirical evidence that applying the five principles helps improve
transfer of learning; that is, the ability to apply what has been learned to
new tasks and in real-life contexts (Francom, 2017; Van Merriënboer &
Kester, 2008).
When teachers or novice designers use the Ten Steps for the first time, they
often find the model difficult to apply. We think this is mainly caused by
their prior experiences in education—either as students or as teachers—
where teaching typically starts from presenting theoretical information, and
then, practice tasks are coupled to the information presented. The Ten Steps
replaces this knowledge-first approach with a task-first approach, which
might initially feel counter-intuitive. It reflects a toppling, where practice
tasks are no longer coupled to the information presented but, in contrast,
where helpful information is coupled to learning tasks that are specified first
(see Figure 16.1). Together, these changes may appear to reflect a (mod-
erate) constructivist view of learning and instruction because the learning
tasks primarily drive a process of active knowledge construction by the learners.
In educational programs based on the Ten Steps, the roles of teachers will
change—both with regard to preparing and realizing lessons or educational
programs. Concerning preparing lessons, the flipped classroom, as described
earlier, makes clear that teachers need to fulfill their role as instructional
designer—the ‘teacher as designer’ (Kali et al., 2015). Traditionally, the con-
tents of study books used for a particular course largely defined the contents
of the lessons. But in a flipped classroom or a program based on the Ten
Steps, the lessons are based on learning tasks that students work on. Often,
these learning tasks will not be available in existing instructional materials
but must be designed and developed. Important considerations for teachers
who act as designers are the following:
After the development of the program, for realizing lessons, teacher roles in
a program based on the Ten Steps will typically include those of (a) tutor,
(b) presenter, (c) assistant looking over your shoulder (ALOYS), (d) instruc-
tor, and (e) coach. As a tutor, the teacher’s main role is guiding learners on
carrying out the learning tasks and giving them cognitive feedback (Wolter-
inck et al., 2022). As a presenter, teachers will stick to their traditional role of
explaining how a learning domain is organized but will also increasingly fulfill
the role of expert model, showing how to approach real-life tasks systemati-
cally and how the application of rules-of-thumb may help overcome difficul-
ties (Van Gog et al., 2004, 2005). As ALOYS, the teacher’s role is to present
JIT how-to information on routine aspects of learning tasks or part-task prac-
tice to learners and give them corrective feedback (Kester et al., 2001). As
instructor, the teacher will provide part-task practice to learners and apply
principles such as changing performance criteria and distributing practice.
Finally, teachers will increasingly fulfill the role of coach, helping learners
develop domain-general skills related to task selection, deliberate practice,
information literacy, creativity, and/or teamwork (i.e., provide second-order
scaffolding). All of these new teacher roles pose new requirements for the
form and content of future teacher training programs, with more intensive
use of new technologies (Kirschner & Selinger, 2003; Yan et al., 2012), more
attention to the complex skill of providing differentiated instruction (Frère-
jean et al., 2021; Van Geel et al., 2019), new physical learning spaces (Van
Merriënboer et al., 2017), and knowledge communities that allow for the
exchange of learning experiences (Kirschner & Wopereis, 2003).
inevitably start from research questions because they will drive the further
development of the Ten Steps as a research-based model (Table 16.1). The
five main questions relate to different 'blends' of learning, mass customiza-
tion, and big data, intertwining the training of domain-general and domain-
specific skills, the role of motivation and emotion, and computer-based tools
to support the instructional design process.
Table 16.1 Five research questions driving the further development of the Ten
Steps.
for each learner to monitor their performance and progress, identify points
of improvement, and advise on points to improve and new tasks to select.
The teacher/coach will give the learner increasing control and responsibil-
ity as they progress. This approach works well for relatively small groups of
learners, but the question arises: How can we deal with (very) large groups
of learners?
The first part of the answer to this question may be mass customization,
which uses computer-aided systems that combine the low unit costs of mass
production with the flexibility of individual customization (Schellekens
et al., 2010a, 2010b). The basic principle is that, in a group of ten learn-
ers, it may be very costly to ofer one particular task to only one of the ten
students, but in a group of 1,000 learners, there will be many more students
needing this one task, which makes it much more economically feasible to
provide it. Thus, mass customization identifies subgroups of students to
group together because they have the same needs and, thus, can work on
the same learning task or receive the same information (i.e., it follows a
service-oriented approach). A second part of the answer lies in big data and
learning analytics (Edwards & Fenwick, 2016). If big data are available
on learners’ background characteristics and performance assessments on a
series of selected learning tasks (cf. development portfolio), learning analyt-
ics could make it possible to give individual learners advice on the best tasks
to select or even to decide on the level of control that can be safely given to
them. Future research should aim to develop the algorithms that make this
possible and relieve the human coach of these tasks.
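A minimal sketch of the grouping idea behind mass customization follows. It assumes that learning analytics has already produced, for each learner, a set of identified learning needs; all names and data are invented for illustration.

```python
from collections import defaultdict

# Hypothetical output of learning analytics: identified needs per learner.
learner_needs = {
    "ann":  {"lighting", "audio"},
    "ben":  {"audio"},
    "carl": {"lighting"},
    "dina": {"audio", "storyboarding"},
}

def group_by_need(needs: dict[str, set[str]]) -> dict[str, list[str]]:
    """Mass customization: learners sharing a need can work on the same
    learning task, making individualized offerings economically feasible."""
    groups: dict[str, list[str]] = defaultdict(list)
    for learner, their_needs in needs.items():
        for need in sorted(their_needs):
            groups[need].append(learner)
    return dict(groups)

print(group_by_need(learner_needs))
# {'audio': ['ann', 'ben', 'dina'], 'lighting': ['ann', 'carl'],
#  'storyboarding': ['dina']}
```

In a group of 1,000 learners rather than four, the same grouping makes even rare needs economical to serve; performance data from the development portfolio would additionally inform which tasks, and how much control over task selection, each subgroup receives.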
learners to select their learning tasks and provide them with tasks that are
much too easy or much too difficult, or courses that ask learners to search
for their learning resources, only for them to end up with essays on Baconian
science with texts about the 20th-century British artist Francis Bacon and
about the problems that Martin Luther King Jr. had with Pope Leo X and
Holy Roman Emperor Charles V (Kirschner & van Merriënboer, 2013).
The Ten Steps assumes that teaching domain-general skills can only be
effective when meeting three requirements. First, domain-general skills are
only taught in the context of acquiring domain-specifc skills. Thus, the ‘pri-
mary' blueprint for training the domain-specific skill or competency should
enable learners to practice the domain-general skill: If learners must develop
information literacy skills, they must be allowed to search for their learn-
ing resources; if learners must develop team skills, they must be required
to work in teams on complex team tasks; if learners must develop delib-
erate practice skills, they must be able to decide which routine behaviors
they want to automate further; and so forth (‘recursivity’; see Chapter 14).
Second, domain-general skills must be trained according to exactly the
same principles as domain-specifc skills. Thus, the ‘secondary’ blueprint
for training domain-general skills contains learning tasks based on real-life
tasks, shows variability of practice, and applies second-order scaffolding of
support and guidance—if necessary—on increasingly higher levels of com-
plexity. In addition, learners must receive the necessary supportive informa-
tion (including cognitive feedback), procedural information, and part-task
practice. Third, the ‘primary training blueprint’ and the ‘secondary training
blueprint’ need to be intertwined so learners can simultaneously develop
domain-specifc and domain-general skills. These three requirements have
the status of hypotheses and not proven principles at the time of this writing;
future research needs to establish how successful this approach to teaching
domain-general skills really is. One piece of research on this by Noroozi
et al. (2017) is on the presentation, scaffolding, support, and guidance for
achieving the domain-general skill of argumentation. Furthermore, there
are many ways to intertwine blueprints, and research is also needed to inves-
tigate the effects of different options.
Relatedness is feeling the need for close relationships with others, includ-
ing teachers and peer learners. It stresses the importance of using learn-
ing tasks that require group work and instructional methods for the study
of supportive information that use collaborative learning. For educational
programs largely realized online, it also stresses the importance of learning
networks: “online social networks through which users share knowledge
and jointly develop new knowledge. This way, learning networks may enrich
the experience of formal, school-based learning and form a viable setting
for professional development” (Sloep & Berlanga, 2011, p. 55). They do
this through tools and processes such as instant messaging, email and list-
servs, blogs, wikis, feeds (RSS, Atom), podcasting and vodcasting, open
educational learning resources, tags and social bookmarking, etc.
Compared to other design fields, few computer-based design tools are avail-
able for instructional designers (Van Merriënboer & Martens, 2002). The
available tools are either very general (e.g., for making flowcharts or concept
maps) or focus on the ADDIE phases of development (e.g., producing slide
shows, authoring e-learning applications) and implementation (e.g., learn-
ing management systems, MOOC platforms). For the analysis and design
phases, De Croock et al. (2002) describe a prototype design tool that sup-
ports the flexible application of the Ten Steps, allowing for zigzag design
approaches. Its main function is to support the construction of a training
blueprint consisting of the four components. The design tool provides func-
tions for entering, editing, storing, maintaining, and reusing analysis and
design products, providing templates for easily entering information in a
way consistent with the Ten Steps. Furthermore, it provides functions to
check whether the analysis and design products are complete, internally con-
sistent, and in line with the Ten Steps.
Future research is needed to develop more powerful computer-based
tools that offer functionalities additional to the ones already mentioned.
First, such tools should support the construction of a training blueprint
along with standards and scoring rubrics necessary for assessing learner
performance and combine both in some kind of tasks-standards matrix (cf.
Figure 5.4). Second, tools should support the realization of individualized
learning trajectories; thus, they must specify how information on learner
performance and learner progress is either used to select tasks or to gener-
ate advice to learners on how to select their tasks. Finally, the tools should
support intertwining blueprints for domain-specifc and domain-general
skills. This complex task would greatly benefit from computer support and
will, likely, become more important given the current focus in education on
metacognitive skills.
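As a sketch of the first functionality, a tasks-standards matrix could be represented as below. The data model is an assumption made for illustration, not a description of an existing design tool; the task and standard names are invented.

```python
# Rows are learning tasks, columns are performance standards; a cell marks
# whether a standard is assessed in a task (cf. a tasks-standards matrix).
tasks = ["task-1.1", "task-1.2", "task-2.1"]
standards = ["story", "camera work", "editing", "lighting"]
matrix = {
    ("task-1.1", "story"): True,
    ("task-1.2", "camera work"): True,
    ("task-2.1", "editing"): True,
}

def uncovered_standards(tasks: list[str], standards: list[str],
                        matrix: dict[tuple[str, str], bool]) -> list[str]:
    """Completeness check: every standard should be assessed by at least
    one learning task somewhere in the training blueprint."""
    return [s for s in standards
            if not any(matrix.get((t, s), False) for t in tasks)]

# A design tool could flag 'lighting' as a standard no task yet assesses:
print(uncovered_standards(tasks, standards, matrix))  # ['lighting']
```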
Artificial Intelligence
When we wrote the third edition of the Ten Steps, artificial intelligence was
like some exotic animal. A few people had heard about it, fewer had seen it,
and only a privileged few had ever experienced it. It was something people
dreamed of: a computer application that could think. Now, when writing
this fourth edition, there have been several rapid technological advances in
this field, not the least of which is the rise of large language models used
in tools such as ChatGPT. These models are trained to learn patterns and
relationships in language data to capture grammar rules, vocabulary, and
contextual understanding from the texts that they have been exposed to.
Once trained, they can perform various language-related tasks like gener-
ating answers to questions or carrying out tasks based on their ‘learned’
knowledge. As a consequence, we now have a new generation of students
surrounded by AI tools, creating headaches for teachers and educational
institutions who rely on assessments of essays, open-ended questions, and
term papers for grading and certification. It is clear that education will have
to deal with this and must be prepared.
We briefly sketch some opportunities these AI tools offer instructional
designers applying the Ten Steps:
than those who did not. Besides using AI for producing text or gener-
ating tests to stimulate elaboration, it can also potentially take on the
role of study coach, helping learners monitor and control their learning
process.
• For designing procedural information, AI might help provide JIT infor-
mation displays. For example, when the learner is wearing augmented
reality glasses, AI could help the learner carry out procedures by high-
lighting real-life objects (e.g., tools, parts, equipment), indicating where
to look, or displaying step-by-step instructions directly in the field of
vision. Other applications might involve recording and recognizing a
learner’s posture, location, or actions and providing immediate corrective
feedback. Its verbal capabilities could help narrate instructions or respond
to the learner's questions, avoiding the split-attention effect often created
by manuals. In addition, AI’s generative capabilities might be useful for
creating the numerous practice items required for part-task practice.
these are precisely the central issues in task-centered models such as the Ten
Steps: a focus on complex learning, a strong basis in learning theory, and
a highly flexible design approach (Francom, 2017; Van Merriënboer et al.,
2018). We hope that the Ten Steps, as well as other task-centered models for
whole-task education, contribute to a further revival of the field of instruc-
tional design. Such a revival is badly needed to cope with the educational
requirements of a fast-changing and increasingly more complex world.
Step 6—Analyze Mental Models: Analyze the mental models that experts use to reason about their tasks. Mental models are representations of how the world is organized for a particular domain. They help to carry out nonrecurrent constituent skills. The analysis results typically take the form of conceptual models (What is this?), structural models (How is this organized?), and causal models (How does this work?). The different types of models can also be combined when necessary. Make a (graphic) representation of each model by identifying and describing simple schemata (i.e., concepts, plans, and principles) and how these relate to each other.
Step 7—Design Procedural Information: Design procedural information for each learning task in the form of JIT information displays, specifying how to perform the recurrent aspects for this task. Give complete information at first, and then fade it. Teach at the level of the least experienced learner. Do not require memorization; instead, have the procedure available during practice. Provide all steps in the procedure and all facts, concepts, and principles necessary to carry it out. Give demonstrations of a procedure, instances of facts/concepts, et cetera, coinciding with case studies. Give immediate, corrective feedback about what is wrong, why it is wrong, and corrective hints. Use solicited information presentation if learners need to develop deliberate practice skills.
Step 8—Analyze Cognitive Rules: Analyze expert task performance to identify cognitive rules or procedures that algorithmically describe correct performance of recurrent constituent skills. A cognitive rule describes the exact condition under which a certain action has to be carried out (IF condition THEN action). A procedure is a set of steps and decisions always applied in a prescribed order. Perform a procedural analysis (e.g., information processing analysis) for recurrent skills that show a temporal order of steps. Perform a rule-based analysis for recurrent skills that show no temporal order of steps.
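The IF-THEN format of a cognitive rule maps naturally onto a production rule in software. The sketch below is a minimal, hypothetical illustration; the rules and state fields are invented examples, not products of an actual rule-based analysis.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CognitiveRule:
    """IF condition THEN action, as identified in a rule-based analysis."""
    condition: Callable[[dict], bool]
    action: str

# Invented example rules for a video-recording part-task:
rules = [
    CognitiveRule(lambda s: s["audio_level_db"] > -6, "lower the gain"),
    CognitiveRule(lambda s: s["subject"] == "backlit", "add fill light"),
]

def applicable_actions(state: dict) -> list[str]:
    """Fire every rule whose condition matches the current task state."""
    return [r.action for r in rules if r.condition(state)]

print(applicable_actions({"audio_level_db": -3, "subject": "backlit"}))
# ['lower the gain', 'add fill light']
```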
Task Class 1: Learners produce videos for fictional clients under the
following conditions.
• The video length is 1–3 minutes
• The clients desire aftermovies or event recaps, summarizing the
atmosphere at an event
• Locations are indoors
• There is plenty of time for the recording
• No interaction with other on-camera participants
Supportive Information (inductive strategy): Modeling example
Learners shadow a professional video team while they produce an aftermovie
of the yearly local cultural festival. Learners can interview the video team
during and after the project.
Supportive Information: Presentation of cognitive strategies
• Global SAP for preproduction, production, and postproduction phases
• SAP for shooting video (e.g., basic strategies for creating compositions and
capturing audio)
• SAPs for basic video editing (e.g., selecting footage and editing the video)
Supportive Information: Presentation of mental models
• Conceptual models of basic cinematography, such as composition and
lighting
• Structural models of cameras
• Causal models of how camera settings affect the image and how audio
(music, effects) affects mood
Learning Task 1.1
Support: Worked-out example
Guidance: Performance constraints
Learners receive a production plan,
intermediate footage, and the final video
of an existing aftermovie. They evaluate
the quality of each aspect, but their
evaluations must be approved before
they can continue with the next aspect.
Task Class 2: Learners produce videos for fictional clients under the
following conditions:
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to
work with)
Supportive Information (inductive strategy): Case study
Learners study three worked-out examples (i.e., case studies) of promotional
videos for a backpack with integrated solar panels, a virtual fitness
platform, and an urban art festival. In groups, a tutor guides them in
comparing and evaluating each example’s goals, scripts, camera use,
lighting, etc.
Supportive Information: Presentation of cognitive strategies
• SAP for developing a story for promotional videos
• SAPs for interacting with people and collaborating with the crew
• SAPs for shooting video (detailed strategies for creating compositions and
capturing audio)
Supportive Information: Inquiry for mental models. Learners are asked to
identify examples of:
• Different types of cameras, microphones, and lights (conceptual models)
• Story arcs (structural models)
Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video
promoting a new coffee machine. They follow a process worksheet to record
footage and create the final video.
Procedural Information (unsolicited)
• How-to instructions for lighting and selecting lenses and microphones
Part-task practice
• Sketching a storyboard
Task Class 3: Learners produce videos for fictional clients under the
following conditions.
• The video length is increased to 5–10 minutes
• The clients desire informational or educational videos
• Locations are indoor or outdoor
• There is plenty of time for the recording
• Participant dynamics are more challenging (e.g., inexperienced/nervous
participants)
This task class employs the completion strategy.
Supportive Information: Presentation of cognitive strategies
• SAP for coaching people being filmed
• SAP for developing an informative or educational story
• SAPs for advanced video editing (e.g., animations, visualizing complex
ideas)
Supportive Information: Presentation of mental models
• Conceptual models of outdoor equipment, such as filters, reflectors, and
deadcats
• Causal models of how people learn from multimedia materials
Learning Task 3.1: Modeling example
Support: Worked-out example
Guidance: Modeling
Learners observe an expert thinking aloud while working outdoors with
experienced and inexperienced cyclists to create an informational video
about safe cycling.
Procedural Information
• Demonstrations of using outdoor equipment
• Demonstrations of how to add effects, titles, and graphics
Learning Task 3.2
Support: Completion task
Guidance: Process worksheet
Learners receive a production plan and footage with bad takes and good
takes. They must select good takes and edit them into a video informing
patients about a medical product. A process worksheet provides guidance.
Procedural Information (unsolicited)
• How-to instructions for adding effects, titles, and graphics
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 3.2.
Bagley, E., & Shaffer, D. W. (2009). When people get in the way: Promoting
civic thinking through epistemic gameplay. International Journal of Gaming
and Computer-Mediated Simulations, 1(1), 36–52. https://doi.org/10.4018/
jgcms.2009010103
Bagley, E., & Shaffer, D. W. (2011). Promoting civic thinking through epistemic
game play. In R. Ferdig (Ed.), Discoveries in gaming and computer-mediated simu-
lations: New interdisciplinary applications (pp. 111–127). IGI Global.
Baillifard, A., Gabella, M., Banta Lavenex, P., & Martarelli, C. S. (2023). Implement-
ing learning principles with a personal AI tutor: A case study. https://arxiv.org/
abs/2309.13060
Barbazette, J. (2006). Training needs assessment: Methods, tools, and techniques.
Pfeiffer.
Barnes, L. B., Christensen, C. R., & Hansen, A. J. (1994). Teaching and the case
method: Text, cases, and readings (3rd ed.). Harvard Business Review Press.
Bastiaens, E., van Tilburg, J., & van Merriënboer, J. J. G. (Eds.). (2017). Research-
based learning: Case studies from Maastricht University. Springer. https://doi.
org/10.1007/978-3-319-50993-8
Battig, W. F. (1966). Facilitation and interference. In E. A. Bilodeau (Ed.), Acquisi-
tion of skill (pp. 215–244). Academic Press.
Beckers, J., Dolmans, D. H. J. M., Knapen, M. M. H., & van Merriënboer, J. J. G.
(2019). Walking the tightrope with an e-portfolio: Imbalance between support
and autonomy hampers self-directed learning. Journal of Vocational Education
and Training, 71(2), 260–288. https://doi.org/10.1080/13636820.2018.14
81448
Beckers, J., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2016). e-Portfolios enhancing students’ self-directed learning: A systematic review of influencing factors. Australasian Journal of Educational Technology, 32(2), 32–46. https://doi.org/10.14742/ajet.2528
Beckers, J., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2019). Perfect:
Design and evaluation of an electronic development portfolio aimed at supporting
self-directed learning. TechTrends, 63(4), 420–427. https://doi.org/10.1007/
s11528-018-0354-x
Beers, P. J., Boshuizen, H. P. A., Kirschner, P. A., & Gijselaers, W. H. (2007). The
analysis of negotiation of common ground in CSCL. Learning and Instruction,
17(4), 427–435. https://doi.org/10.1016/j.learninstruc.2007.04.002
Benjamin, A. S., & Tullis, J. (2010). What makes distributed practice effective? Cognitive Psychology, 61(3), 228–247. https://doi.org/10.1016/j.cogpsych.2010.05.004
Birnbaum, M. S., Kornell, N., Bjork, E. L., & Bjork, R. A. (2013). Why interleaving
enhances inductive learning: The roles of discrimination and retrieval. Memory &
Cognition, 41(3), 392–402. https://doi.org/10.3758/s13421-012-0272-7
Bjork, R. A. (1994). Memory and metamemory considerations in the training of
human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing
about knowing (pp. 185–205). MIT Press.
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, tech-
niques, and illusions. Annual Review of Psychology, 64, 417–444. https://doi.org/
10.1146/annurev-psych-113011-143823
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010). Transfer of train-
ing: A meta-analytic review. Journal of Management, 36(4), 1065–1105. https://
doi.org/10.1177/0149206309352880
Blumenfeld, P. C., Soloway, E., Marx, R. W., Krajcik, J. S., Guzdial, M., & Palincsar,
A. (1991). Motivating project-based learning: Sustaining the doing, supporting
the learning. Educational Psychologist, 26(3–4), 369–398. https://doi.org/10.10
80/00461520.1991.9653139
Bohle Carbonell, K., Stalmeijer, R. E., Könings, K. D., Segers, M., & van Merriën-
boer, J. J. G. (2014). How experts deal with novel situations: A review of adaptive
expertise. Educational Research Review, 12, 14–29. https://doi.org/10.1016/
j.edurev.2014.03.001
Boud, D. (1995). Enhancing learning through self assessment. RoutledgeFalmer.
Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solv-
ing by experts and novices: Analysis of a complex cognitive skill. Computers in
Human Behavior, 21(3), 487–508. https://doi.org/10.1016/j.chb.2004.10.005
Bray, C. W. (1948). Psychology and military proficiency. Princeton University Press.
Briggs, G. E., & Naylor, J. C. (1962). The relative efficiency of several training methods as a function of transfer task complexity. Journal of Experimental Psychology, 64(5), 505–512. https://doi.org/10.1037/h0042476
Brinkman, W., Tjiam, I. M., Schout, B. M. A., Hendrikx, A. J. M., Witjes, J. A.,
Scherpbier, A. J. J. A., & van Merriënboer, J. J. G. (2011). Designing simulator-
based training for nephrostomy procedure: An integrated approach of cognitive
task analysis (CTA) and 4-component instructional design (4C/ID). Journal
of Endourology, 25(Supplement 1), 29–29. https://doi.org/10.3109/01421
59X.2012.687480
Bruner, J. S. (1960). The process of education. Harvard University Press.
Bullock, A. D., Hassell, A., Markham, W. A., Wall, D. W., & Whitehouse, A. B. (2009). How ratings vary by staff group in multi-source feedback assessment of junior doctors. Medical Education, 43(6), 516–520. https://doi.org/10.1111/j.1365-2923.2009.03333.x
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theo-
retical synthesis. Review of Educational Research, 65(3), 245–281. https://doi.
org/10.3102/00346543065003245
Camp, G., Paas, F., Rikers, R., & van Merriënboer, J. J. G. (2001). Dynamic problem selection in air traffic control training: A comparison between performance, mental effort and mental efficiency. Computers in Human Behavior, 17(5–6), 575–595. https://doi.org/10.1016/S0747-5632(01)00028-0
Carlson, R. A., Khoo, B. H., & Elliott, R. G. (1990). Component practice and
exposure to a problem-solving context. Human Factors: The Journal of the Human
Factors and Ergonomics Society, 32(3), 267–286. https://doi.org/10.1177/
001872089003200302
Carlson, R. A., Sullivan, M. A., & Schneider, W. (1989). Component fluency in a problem-solving context. Human Factors: The Journal of the Human Factors and Ergonomics Society, 31(5), 489–502. https://doi.org/10.1177/001872088903100501
Carr, J. F., & Harris, D. E. (2001). Succeeding with standards: Linking curricu-
lum, assessment, and action planning. Association for Supervision and Curriculum
Development.
Carroll, J. M. (Ed.). (2003). Minimalism beyond the Nurnberg Funnel. MIT Press.
Carroll, J. M., & Carrithers, C. (1984). Blocking learner error states in a training
wheels system. Human Factors: The Journal of the Human Factors and Ergonomics
Society, 26(4), 377–389. https://doi.org/10.1177/001872088402600402
Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer pro-
gram. Applied Cognitive Psychology, 10(2), 151–170. https://doi.org/10.1002/
(SICI)1099-0720(199604)10:2<151::AID-ACP380>3.0.CO;2-U
Charlin, B., Roy, L., Brailovsky, C., Goulet, F., & van der Vleuten, C. (2000). The script concordance test: A tool to assess the reflective clinician. Teaching and Learning in Medicine, 12(4), 189–195. https://doi.org/10.1207/S15328015TLM1204_5
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology,
4(1), 55–81. https://doi.org/10.1016/0010-0285(73)90004-2
Chiu, J. L., & Chi, M. T. H. (2014). Supporting self-explanation in the classroom.
In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learn-
ing in education: Infusing psychological science into the curriculum (pp. 91–103).
American Psychological Association.
Choi, H.-H., van Merriënboer, J. J. G., & Paas, F. (2014). Effects of the physical environment on cognitive load and learning: Towards a new model of cognitive load. Educational Psychology Review, 26(2), 225–244. https://doi.org/10.1007/s10648-014-9262-6
Chu, Y. S., Yang, H. C., Tseng, S. S., & Yang, C. C. (2014). Implementation
of a model-tracing-based learning diagnosis system to promote elementary
students’ learning in mathematics. Educational Technology and Society, 17(2),
347–357.
Claramita, M., & Susilo, A. P. (2014). Improving communication skills in the South-
east Asian health care context. Perspectives on Medical Education, 3(6), 474–479.
https://doi.org/10.1007/s40037-014-0121-4
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Edu-
cational Research, 53, 445–459.
Clark, R. E. (Ed.). (2001). Learning from media: Arguments, analysis, and evidence.
Information Age Publishing.
Clark, R. E., Feldon, D. F., van Merriënboer, J. J. G., Yates, K. A., & Early, S.
(2008). Cognitive task analysis. In J. M. Spector, M. D. Merrill, J. J. G. van Mer-
riënboer, & M. P. Driscoll (Eds.), Handbook of research on educational commu-
nications and technology (3rd ed., pp. 577–594). Lawrence Erlbaum Associates/
Routledge.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teach-
ing the craft of reading, writing, and mathematics. In L. B. Resnick (Ed.), Know-
ing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–493).
Lawrence Erlbaum Associates.
Collins, A., & Ferguson, W. (1993). Epistemic forms and epistemic games: Struc-
tures and strategies to guide inquiry. Educational Psychologist, 28(1), 25–42.
https://doi.org/10.1207/s15326985ep2801_3
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2008). Selecting learning tasks: Effects of adaptation and shared control on learning efficiency and task involvement. Contemporary Educational Psychology, 33(4), 733–756. https://doi.org/10.1016/j.cedpsych.2008.02.003
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2009a). Combining shared control with variability over surface features: Effects on transfer test performance and task involvement. Computers in Human Behavior, 25(2), 290–298. https://doi.org/10.1016/j.chb.2008.12.009
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2009b). Dynamic task selection: Effects of feedback and learner control on efficiency and motivation. Learning and Instruction, 19(6), 455–465. https://doi.org/10.1016/j.learninstruc.2008.07.002
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2011). Learner-controlled selection of tasks with different surface and structural features: Effects on transfer and efficiency. Computers in Human Behavior, 27(1), 76–81. https://doi.org/10.1016/j.chb.2010.05.026
Costa, J. M., & Miranda, G. L. (2019). Using Alice software with 4C/ID model: Effects in programming knowledge and logical reasoning. Informatics in Education, 18(1), 1–15. https://doi.org/10.15388/infedu.2019.01
Costa, J. M., Miranda, G. L., & Melo, M. (2021). Four-component instructional design (4C/ID) model: A meta-analysis on use and effect. Learning Environments Research, 25, 445–463. https://doi.org/10.1007/s10984-021-09373-y
Crossman, E. R. F. W. (1959). A theory of the acquisition of speed-skill. Ergonomics,
2(2), 153–166. https://doi.org/10.1080/00140135908930419
Custers, E. J. F. M. (2015). Thirty years of illness scripts: Theoretical origins and
practical applications. Medical Teacher, 37(5), 457–462. https://doi.org/10.310
9/0142159X.2014.956052
Daniel, M., Stojan, J., Wolf, M., Taqui, B., Glasgow, T., Forster, S., & Cassese, T.
(2018). Applying four-component instructional design to develop a case presentation
curriculum. Perspectives on Medical Education, 7(4), 276–280. https://doi.org/
10.1007/s40037-018-0443-8
Davis, D. A., Mazmanian, P. E., Fordis, M., van Harrison, R., Thorpe, K. E., &
Perrier, L. (2006). Accuracy of physician self-assessment compared with observed
measures of competence: A systematic review. Journal of the American Medical
Association, 296, 1094–1102. https://doi.org/10.1001/jama.296.9.1094
De Bruin, A. B. H., & van Merriënboer, J. J. G. (Eds.). (2017). Bridging cognitive
load and self-regulated learning research [Special issue]. Learning and Instruction,
51, 1–98.
De Croock, M. B. M., Paas, F., Schlanbusch, H., & van Merriënboer, J. J. G.
(2002). ADAPTit: Instructional design tools for training design and evaluation.
Educational Technology Research and Development, 50(4), 47–58. https://doi.
org/10.1007/BF02504984
De Croock, M. B. M., & van Merriënboer, J. J. G. (2007). Paradoxical effects of information presentation formats and contextual interference on transfer of a complex cognitive skill. Computers in Human Behavior, 23(4), 1740–1761. https://doi.org/10.1016/j.chb.2005.10.003
De Groot, A. D. (1966). Perception and memory versus thought. In B. Kleinmuntz
(Ed.), Problem solving: Research, method, and theory. Wiley & Sons.
De Jong, T., Linn, M. C., & Zacharia, Z. C. (2013). Physical and virtual laboratories
in science and engineering education. Science, 340, 305–308. https://doi.org/
10.1126/science.1230579
De Jong, T., Sotiriou, S., & Gillet, D. (2014). Innovations in STEM education:
The Go-Lab federation of online labs. Smart Learning Environments, 1(3), 1–16.
https://doi.org/10.1186/s40561-014-0003-6
De Smet, M. J. R., Broekkamp, H., Brand-Gruwel, S., & Kirschner, P. A. (2011). Effects of electronic outlining on students’ argumentative writing performance. Journal of Computer Assisted Learning, 27(6), 557–574. https://doi.org/10.1111/j.1365-2729.2011.00418.x
Dick, W., Carey, L., & Carey, J. O. (2014). The systematic design of instruction (8th
ed.). Pearson.
Dolmans, D. H. J. M., Wolfhagen, I. H. A. P., & van Merriënboer, J. J. G. (2013).
Twelve tips for implementing whole-task curricula: How to make it work. Medical
Teacher, 35(10), 801–805. https://doi.org/10.3109/0142159X.2013.799640
Dory, V., Gagnon, R., Vanpee, D., & Charlin, B. (2012). How to construct and im-
plement script concordance tests: Insights from a systematic review. Medical Edu-
cation, 46(6), 552–563. https://doi.org/10.1111/j.1365-2923.2011.04211.x
Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment: Implications
for health, education, and the workplace. Psychological Science in the Public Inter-
est, 5(3), 69–106. https://doi.org/10.1111/j.1529-1006.2004.00018.x
Edwards, R., & Fenwick, T. (2016). Digital analytics in professional work and learn-
ing. Studies in Continuing Education, 38, 213–227. https://doi.org/10.1080/0
158037X.2015.1074894
Ericsson, K. A. (2015). Acquisition and maintenance of medical expertise: A perspec-
tive from the expert-performance approach with deliberate practice. Academic Medi-
cine, 90(11), 1471–1486. https://doi.org/10.1097/ACM.0000000000000939
Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance:
Evidence of maximal adaptation to task constraints. Annual Review of Psychology,
47(1), 273–305. https://doi.org/10.1146/annurev.psych.47.1.273
Ertmer, P. A., & Russell, J. D. (1995). Using case studies to enhance instructional
design education. Educational Technology, 35(4), 23–31.
Eva, K. W., & Regehr, G. (2007). Knowing when to look it up: A new conception
of self-assessment ability. Academic Medicine, 82(Suppl), S81–S84. https://doi.
org/10.1097/ACM.0b013e31813e6755
Faber, T. J. E., Dankbaar, M. E. W., & van Merriënboer, J. J. G. (2021). Four-
component instructional design applied to a game for emergency medicine. In
A. L. Brooks, S. Brahman, B. Kapralos, A. Nakajima, J. Tyerman, & L. C. Jain
(Eds.), Recent advances in technologies for inclusive well-being. Intelligent systems
reference library (Vol. 196, pp. 65–82). Springer. https://doi.org/10.1007/
978-3-030-59608-8_5
Fassier, T., Rapp, A., Rethans, J.-J., Nendaz, M., & Bochatay, N. (2021). Train-
ing residents in advance care planning: A task-based needs assessment using the
4-component instructional design. Journal of Graduate Medical Education, 13(4),
534–547. https://doi.org/10.4300/JGME-D-20-01263
Fastré, G. M. J., van der Klink, M. R., Amsing-Smit, P., & van Merriënboer, J. J.
G. (2014). Assessment criteria for competency-based education: A study in nurs-
ing education. Instructional Science, 42(6), 971–994. https://doi.org/10.1007/
s11251-014-9326-5
Fastré, G. M. J., van der Klink, M. R., Sluijsmans, D., & van Merriënboer, J. J. G.
(2013). Towards an integrated model for developing sustainable assessment skills.
Assessment & Evaluation in Higher Education, 38(5), 611–630. https://doi.org/
10.1080/02602938.2012.674484
Fastré, G. M. J., van der Klink, M. R., & van Merriënboer, J. J. G. (2010). The effects of performance-based assessment criteria on student performance and self-assessment skills. Advances in Health Sciences Education, 15(4), 517–532. https://doi.org/10.1007/s10459-009-9215-x
Feinauer, S., Voskort, S., Groh, I., & Petzoldt, T. (2023). First encounters with the automated vehicle: Development and evaluation of a tutorial concept to support users of partial and conditional driving automation. Transportation Research Part F: Traffic Psychology and Behaviour, 97, 1–16. https://doi.org/10.1016/j.trf.2023.06.002
Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning.
Educational Psychology Review, 28(4), 717–741. https://doi.org/10.1007/
s10648-015-9348-9
Fisk, A. D., & Gallini, J. K. (1989). Training consistent components of tasks:
Developing an instructional system based on automatic/controlled processing
principles. Human Factors: The Journal of the Human Factors and Ergonomics
Society, 31(4), 453–463. https://doi.org/10.1177/001872088903100408
Francom, G. M. (2017). Principles for task-centered instruction. In C. M. Reige-
luth, B. J. Beatty, & R. D. Myers (Eds.), Instructional design theories and models:
The learner-centered paradigm of education (Vol. 4, pp. 65–91). Routledge.
Francom, G. M., & Gardner, J. (2014). What is task-centered learning? TechTrends,
58(5), 27–35. https://doi.org/10.1007/s11528-014-0784-z
Fraser, K., Huffman, J., Ma, I., Sobczak, M., McIlwrick, J., Wright, B., & McLaughlin, K. (2014). The emotional and cognitive impact of unexpected simulated patient death. Chest, 145(5), 958–963. https://doi.org/10.1378/chest.13-0987
Frederiksen, N. (1984). Implications of cognitive theory for instruction in problem solv-
ing. Review of Educational Research, 54(3), 363–407. https://doi.org/10.3102/
00346543054003363
Frèrejean, J., Dolmans, D. H. J. M., & Van Merriënboer, J. J. G. (2022). Research
on instructional design in the health professions: From taxonomies of learning
to whole-task models. In J. Cleland & S. J. Durning (Eds.), Researching medical
education (2nd ed., pp. 291–302). Wiley. https://doi.org/10.1002/978111983
9446.ch26
Frèrejean, J., van Geel, M., Keuning, T., Dolmans, D., van Merriënboer, J. J. G., & Visscher, A. (2021). Ten steps to 4C/ID: Training differentiation skills in a professional development program for teachers. Instructional Science, 49, 395–418. https://doi.org/10.1007/s11251-021-09540-x
Frèrejean, J., van Merriënboer, J. J. G., Condron, C., Strauch, U., & Eppich, W.
(2023). Critical design choices in healthcare simulation education: A 4C/ID
perspective on design that leads to transfer. Advances in Simulation, 8(5), 1–11.
https://doi.org/10.1186/s41077-023-00242-7
Frèrejean, J., van Merriënboer, J. J. G., Kirschner, P. A., Roex, A., Aertgeerts, B., &
Marcellis, M. (2019). Designing instruction for complex learning: 4C/ID in
higher education. European Journal of Education, 54, 513–524. https://doi.
org/10.1111/ejed.12363
Frèrejean, J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2016).
Completion strategy or emphasis manipulation? Task support for teaching infor-
mation problem solving. Computers in Human Behavior, 62, 90–104. https://
doi.org/10.1016/j.chb.2016.03.048
Frèrejean, J., Velthorst, G. J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2019). Embedded instruction to learn information problem solving: Effects of a whole task approach. Computers in Human Behavior, 90, 117–130. https://doi.org/10.1016/j.chb.2018.08.043
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible
are jobs to computerisation? Technological Forecasting and Social Change, 114,
254–280. https://doi.org/10/gc3fzf
Gagné, R. M. (1968). Learning hierarchies. Educational Psychologist, 6(1), 1–9.
https://doi.org/10.1080/00461526809528968
Gagné, R. M., & Merrill, M. D. (1990). Integrative goals for instructional design.
Educational Technology Research and Development, 38(1), 23–30. https://doi.
org/10.1007/BF02298245
Garon-Carrier, G., Boivin, M., Guay, F., Kovas, Y., Dionne, G., Lemelin, J. P., Séguin,
J., Vitaro, F., & Tremblay, R. (2016). Intrinsic motivation and achievement in
mathematics in elementary school: A longitudinal investigation of their associa-
tion. Child Development, 87(1), 165–175. https://doi.org/10.1111/cdev.12458
Geary, D. C. (2008). An evolutionarily informed education science. Educational
Psychologist, 43(4), 179–195. https://doi.org/10.1080/00461520802392133
Gerjets, P., & Kirschner, P. A. (2009). Learning from multimedia and hypermedia. In N. Balacheff, S. Ludvigsen, A. J. M. de Jong, A. Lazonder, & S. Barnes (Eds.), Technology-enhanced learning: Principles and products (pp. 251–272). Springer.
Gerjets, P., Walter, C., Rosenstiel, W., Bogdan, M., & Zander, T. O. (2014). Cog-
nitive state monitoring and the design of adaptive instruction in digital envi-
ronments: Lessons learned from cognitive workload assessment using a passive
brain-computer interface approach. Frontiers in Neuroscience, 8, 1–21. https://
doi.org/10.3389/fnins.2014.00385
Gessler, M. (2009). Situated learning and cognitive apprenticeship. In R. Maclean &
D. Wilson (Eds.), International handbook of education for the changing world of
work (pp. 1611–1625). Springer.
Ginns, P. (2005). Meta-analysis of the modality effect. Learning and Instruction, 15(4), 313–331. https://doi.org/10.1016/j.learninstruc.2005.07.001
Ginns, P. (2006). Integrating information: A meta-analysis of the spatial contiguity and temporal contiguity effects. Learning and Instruction, 16(6), 511–525. https://doi.org/10.1016/j.learninstruc.2006.10.001
Göksu, I., Özcan, K. V., Çakir, R., & Yuksel, G. (2017). Content analysis of research
trends in instructional design models: 1999–2014. Journal of Learning Design,
10(2), 85–109. http://dx.doi.org/10.5204/jld.v10i2.288
Gopher, D., Weil, M., & Siegel, D. (1989). Practice under changing priorities: An
approach to the training of complex skills. Acta Psychologica, 71(1–3), 147–177.
https://doi.org/10.1016/0001-6918(89)90007-3
Gorbunova, A., Van Merriënboer, J. J. G., & Costley, J. (2023). Are inductive
teaching methods compatible with cognitive load theory? Educational Psychology
Review, 35(4), 111 (1–26). https://doi.org/10.1007/s10648-023-09828-z
Gordon, J., & Zemke, R. (2000). The attack on ISD. Training, 37(4), 42–53.
Govaerts, M. J. B., van der Vleuten, C. P. M., Schuwirth, L. W. T., & Muijtjens, A.
M. M. (2005). The use of observational diaries in in-training evaluation: Student
perceptions. Advances in Health Sciences Education, 10(3), 171–188. https://doi.
org/10.1007/s10459-005-0398-5
Gropper, G. L. (1973). A technology for developing instructional materials. American
Institutes for Research.
Gropper, G. L. (1983). A behavioral approach to instructional prescription. In C. M.
Reigeluth (Ed.), Instructional design theories and models (Vol. 1, pp. 101–161).
Lawrence Erlbaum Associates.
Guasch, T., Espasa, A., Alvarez, I. M., & Kirschner, P. A. (2013). Effects of feedback on collaborative writing in an online learning environment: Type of feedback and the feedback-giver. Distance Education, 34(3), 324–338. https://doi.org/10.1080/01587919.2013.835772
Gulikers, J. T. M., Bastiaens, T. J., & Martens, R. L. (2005). The surplus value of an
authentic learning environment. Computers in Human Behavior, 21(3), 509–521.
https://doi.org/10.1016/j.chb.2004.10.028
Güney, Z. (2019a). A sample design in programming with four-component instruc-
tional design (4C/ID) model. Malaysian Online Journal of Educational Technol-
ogy, 7(4), 1–14. https://doi.org/10.17220/mojet.2019.04.001
Güney, Z. (2019b). Four-component instructional design (4C/ID) model approach
for teaching programming skills. International Journal of Progressive Education,
15(4). https://doi.org/10.29329/ijpe.2019.203.11
Haji, F. A., Khan, R., Regehr, G., Ng, G., de Ribaupierre, S., & Dubrowski, A.
(2015). Operationalising elaboration theory for simulation instruction design:
A Delphi study. Medical Education, 49(6), 576–588. https://doi.org/10.1111/
medu.12726
Halff, H. M. (1993). Supporting scenario- and simulation-based instruction: Issues from the maintenance domain. In J. M. Spector, M. C. Polson, & D. J. Muraida (Eds.), Automating instructional design: Concepts and issues (pp. 231–248). Educational Technology Publications.
Hall, K. G., & Magill, R. A. (1995). Variability of practice and contextual interfer-
ence in motor skill learning. Journal of Motor Behavior, 27(4), 299–309. https://
doi.org/10.1080/00222895.1995.9941719
Hambleton, R. K., Jaeger, R. M., Plake, B. S., & Mills, C. (2000). Setting perfor-
mance standards on complex educational assessments. Applied Psychological Meas-
urement, 24(4), 355–366. https://doi.org/10.1177/01466210022031804
Hammick, M., Freeth, D., Koppel, I., Reeves, S., & Barr, H. (2007). A best evidence
systematic review of interprofessional education: BEME Guide no. 9. Medical
Teacher, 29(8), 735–751. https://doi.org/10.1080/01421590701682576
Harden, R. M., Stevenson, M., Downie, W. W., & Wilson, G. M. (1975). Assessment
of clinical competence using objective structured examination. BMJ, 1, 447–451.
https://doi.org/10.1136/bmj.1.5955.447
Hartley, J. (1994). Designing instructional text (3rd ed.). Kogan Page.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational
Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Hays, R. T., & Singer, M. J. (1989). Simulation fidelity in training system design: Bridging the gap between reality and training. Springer.
Helsdingen, A. S., van Gog, T., & van Merriënboer, J. J. G. (2011a). The effects of practice schedule on learning a complex judgment task. Learning and Instruction, 21(1), 126–136. https://doi.org/10.1016/j.learninstruc.2009.12.001
Helsdingen, A., van Gog, T., & van Merriënboer, J. J. G. (2011b). The effects of practice schedule and critical thinking prompts on learning and transfer of a complex judgment task. Journal of Educational Psychology, 103(2), 383–398. https://doi.org/10.1037/a0022370
Hennekam, S. (2015). Career success of older workers: The influence of social skills and continuous learning ability. Journal of Management Development, 34(9), 1113–1133. https://doi.org/10.1108/JMD-05-2014-0047
Herrington, J., & Parker, J. (2013). Emerging technologies as cognitive tools for
authentic learning. British Journal of Educational Technology, 44(4), 607–615.
https://doi.org/10.1111/bjet.12048
Hill, J. R., & Hannafin, M. J. (2001). Teaching and learning in digital environments: The resurgence of resource-based learning. Educational Technology Research and Development, 49(3), 37–52. https://doi.org/10.1007/BF02504914
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (Eds.). (1989).
Induction: Processes of inference, learning, and discovery. MIT Press.
Holsbrink-Engels, G. A. (1997). The effects of the use of a conversational model and opportunities for reflection in computer-based role-playing. Computers in Human Behavior, 13(3), 409–436. https://doi.org/10.1016/S0747-5632(97)00017-4
Holtslander, L. F., Racine, L., Furniss, S., Burles, M., & Turner, H. (2012).
Developing and piloting an online graduate nursing course focused on experien-
tial learning of qualitative research methods. Journal of Nursing Education, 51(6),
345–348. https://doi.org/10.3928/01484834-20120427-03
Hoogerheide, V., van Wermeskerken, M., Loyens, S. M. M., & van Gog, T. (2016). Learning from video modeling examples: Content kept equal, adults are more effective models than peers. Learning and Instruction, 44, 22–30. https://doi.org/10.1016/j.learninstruc.2016.02.004
Hoogveld, A. W. M., Paas, F., & Jochems, W. M. G. (2005). Training higher edu-
cation teachers for instructional design of competency-based education: Product-
oriented versus process-oriented worked examples. Teaching and Teacher Education,
21(3), 287–297. https://doi.org/10.1016/j.tate.2005.01.002
Hopkins, S., & O’Donovan, R. (2021). Using complex learning tasks to build procedural fluency and financial literacy for young people with intellectual disability. Mathematics Education Research Journal, 33, 163–181. https://doi.org/10.1007/s13394-019-00279-w
Hummel, H. G. K., Slootmaker, A., & Storm, J. (2021). Mini-games for entrepreneurship in construction: Instructional design and effects of the TYCON game. Interactive Learning Environments. https://doi.org/10.1080/10494820.2021.1995759
Hung, W. E., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2019). A review
to identify key perspectives in PBL meta-analyses and reviews: Trends, gaps
and future research directions. Advances in Health Sciences Education, 24(5),
943–957. https://doi.org/10.1007/s10459-019-09945-x
Husnin, H. (2017). Design and development of learning material with the ten steps to
complex learning: A multiple case study. PhD thesis, University of Warwick, UK.
http://webcat.warwick.ac.uk/record=b3228128~S15
Huwendiek, S., De Leng, B. A., Zary, N., Fischer, M. R., Ruiz, J. G., & Ellaway, R. (2009). Towards a typology of virtual patients. Medical Teacher, 31(8), 743–748. https://doi.org/10.1080/01421590903124708
Janesarvatan, F., & van Rosmalen, P. (2023). Instructional design of virtual patients
in dental education through a 4C/ID lens: A narrative review. Journal of Comput-
ers in Education. https://doi.org/10.1007/s40692-023-00268-w
Janssen-Noordman, A. M. B., van Merriënboer, J. J. G., van der Vleuten, C. P.
M., & Scherpbier, A. J. J. A. (2006). Design of integrated practice for learn-
ing professional competences. Medical Teacher, 28(5), 447–452. https://doi.
org/10.1080/01421590600825276
Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., &
Eika, B. (2012). Conveying clinical reasoning based on visual observation via eye-
movement modelling examples. Instructional Science, 40(5), 813–827. https://
doi.org/10.1007/s11251-012-9218-5
Jelley, R. B., Goffin, R. D., Powell, D. M., & Heneman, R. L. (2012). Incentives and alternative rating approaches: Roads to greater accuracy in job performance assessment? Journal of Personnel Psychology, 11(4), 159–168. https://doi.org/10.1027/1866-5888/a000068
Jonassen, D. H. (1992). Cognitive flexibility theory and its implications for designing CBI. In S. Dijkstra, H. P. M. Krammer, & J. J. G. van Merriënboer (Eds.), Instructional models in computer-based learning environments (NATO ASI Series F) (Vol. 104, pp. 385–403). Springer.
Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology Research and Development, 45(1), 65–94. https://doi.org/10.1007/BF02299613
Jonassen, D. H. (1999). Designing constructivist learning environments. In C.
M. Reigeluth (Ed.), Instructional design theories and models: A new paradigm of
instructional theory (Vol. 2, pp. 215–239). Lawrence Erlbaum Associates.
Jonassen, D. H. (2000). Computers as mindtools for schools: Engaging critical think-
ing (2nd ed.). Prentice Hall.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for
instructional design. Routledge.
Jüttner, M., & Neuhaus, B. J. (2012). Development of items for a pedagogical
content knowledge test based on empirical analysis of pupils’ errors. International
Journal of Science Education, 34(7), 1125–1143. https://doi.org/10.1080/095
00693.2011.606511
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kali, Y., McKenney, S., & Sagy, O. (2015). Teachers as designers of technology
enhanced learning. Instructional Science, 43(2), 173–179. https://doi.org/
10.1007/s11251-014-9343-4
Kalyuga, S. (2009). Knowledge elaboration: A cognitive load perspective. Learning and
Instruction, 19(5), 402–410. https://doi.org/10.1016/j.learninstruc.2009.02.003
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31. https://doi.org/10.1207/S15326985EP3801_4
Kalyuga, S., Rikers, R., & Paas, F. (2012). Educational implications of expertise reversal effects in learning and performance of complex cognitive and sensorimotor skills. Educational Psychology Review, 24(2), 313–337. https://doi.org/10.1007/s10648-012-9195-x
Kester, L., & Kirschner, P. A. (2012). Cognitive tasks and learning. In N. Seel (Ed.),
Encyclopedia of the sciences of learning (pp. 619–622). Springer.
Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2004). Information presentation and troubleshooting in electrical circuits. International Journal of Science Education, 26(2), 239–256. https://doi.org/10.1080/0950069032000072809
Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2005). The management
of cognitive load during complex cognitive skill acquisition by means of com-
puter-simulated problem solving. British Journal of Educational Psychology, 75(1),
71–85. https://doi.org/10.1348/000709904X19254
Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2006). Just-in-time
information presentation: Improving learning a troubleshooting skill. Contem-
porary Educational Psychology, 31(2), 167–185. https://doi.org/10.1016/
j.cedpsych.2005.04.002
Kester, L., Kirschner, P. A., van Merriënboer, J. J. G., & Baumer, A. (2001). Just-
in-time information presentation and the acquisition of complex cognitive skills.
Computers in Human Behavior, 17(4), 373–391. https://doi.org/10.1016/
S0747-5632(01)00011-5
Kicken, W., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2008). Scaffolding advice on task selection: A safe path toward self-directed learning in on-demand education. Journal of Vocational Education & Training, 60(3), 223–239. https://doi.org/10.1080/13636820802305561
Kicken, W., Brand-Gruwel, S., van Merriënboer, J. J. G., & Slot, W. (2009a). The effects of portfolio-based advice on the development of self-directed learning skills in secondary vocational education. Educational Technology Research and Development, 57(4), 439–460. https://doi.org/10.1007/s11423-009-9111-3
Kicken, W., Brand-Gruwel, S., van Merriënboer, J. J. G., & Slot, W. (2009b). Design
and evaluation of a development portfolio: How to improve students’ self-directed
learning skills. Instructional Science, 37(5), 453–473. https://doi.org/10.1007/
s11251-008-9058-5
Kirschner, F., Paas, F., & Kirschner, P. A. (2009). Individual and group-based learning from complex cognitive tasks: Effects on retention and transfer efficiency. Computers in Human Behavior, 25(2), 306–314. https://doi.org/10.1016/j.chb.2008.12.008
Kirschner, P. A. (1992). Epistemology, practical work and academic skills in science
education. Science and Education, 1(3), 273–299. https://doi.org/10.1007/
BF00430277
Kirschner, P. A. (2009). Epistemology or pedagogy, that is the question. In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 144–157). Routledge.
Kirschner, P. A. (2015). Facebook as learning platform: Argumentation superhigh-
way or dead-end street? Computers in Human Behavior, 53, 621–625. https://
doi.org/10.1016/j.chb.2015.03.011
Kirschner, P. A., Ayres, P., & Chandler, P. (2011). Contemporary cognitive load
theory research: The good, the bad and the ugly. Computers in Human Behavior,
27(1), 99–105. https://doi.org/10.1016/j.chb.2010.06.025
Kirschner, P. A., & Davis, N. (2003). Pedagogic benchmarks for information and
communications technology in teacher education. Technology, Pedagogy and Edu-
cation, 12(1), 125–147. https://doi.org/10.1080/14759390300200149
Kirschner, P. A., Hendrick, C., & Heal, J. (2022). How teaching happens: Seminal works in teaching and teacher effectiveness and what they mean in practice. Routledge.
Kirschner, P. A., & Kirschner, F. (2012). Mental effort. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 2182–2184). Springer.
Kirschner, P. A., Martens, R. L., & Strijbos, J. W. (2004). CSCL in higher edu-
cation? A framework for designing multiple collaborative environments. In
J. W. Strijbos, P. A. Kirschner, & R. L. Martens (Eds.), What we know about
CSCL, and implementing it in higher education (pp. 3–30). Kluwer Academic
Publishers.
Kirschner, P. A., & Selinger, M. (2003). The state of affairs of teacher education with respect to information and communications technology. Technology, Pedagogy and Education, 12(1), 5–17. https://doi.org/10.1080/14759390300200143
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during
instruction does not work: An analysis of the failure of constructivist, discovery,
problem-based, experiential, and inquiry-based teaching. Educational Psychologist,
41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best?
Urban legends in education. Educational Psychologist, 48(3), 169–183. https://
doi.org/10.1080/00461520.2013.804395
Kirschner, P., & Wopereis, I. G. J. H. (2003). Mindtools for teacher communities:
A European perspective. Technology, Pedagogy and Education, 12(1), 105–124.
https://doi.org/10.1080/14759390300200148
Kogan, J. R., Holmboe, E. S., & Hauer, K. E. (2009). Tools for direct observation
and assessment of clinical skills of medical trainees: A systematic review. JAMA,
302(12), 1316. https://doi.org/10.1001/jama.2009.1365
Kok, E. M., de Bruin, A. B. H., Leppink, J., van Merriënboer, J. J. G., & Robben, S. G. F. (2015). Case comparisons: An efficient way of learning radiology. Academic Radiology, 22(10), 1226–1235. https://doi.org/10.1016/j.acra.2015.04.012
Kok, E. M., de Bruin, A. B. H., Robben, S. G. F., & van Merriënboer, J. J. G. (2013).
Learning radiological appearances of diseases: Does comparison help? Learning and
Instruction, 23, 90–97. https://doi.org/10.1016/j.learninstruc.2012.07.004
Kok, E. M., & Jarodzka, H. (2017). Before your very eyes: The value and limita-
tions of eye tracking in medical education. Medical Education, 51(1), 114–122.
https://doi.org/10.1111/medu.13066
Kolcu, M. İ. B., Öztürkçü, Ö. S. K., & Kaki, G. D. (2020). Evaluation of a dis-
tance education course using the 4C-ID model for continuing endodontics
education. Journal of Dental Education, 84, 62–71. https://doi.org/10.21815/
JDE.019.138
Linden, M. A., Whyatt, C., Craig, C., & Kerr, C. (2013). Efficacy of a powered wheelchair simulator for school aged children: A randomized controlled trial. Rehabilitation Psychology, 58(4), 405–411. https://doi.org/10.1037/a0034088
Littlejohn, A., & Buckingham Shum, S. (Eds.). (2003). Reusing online resources:
A sustainable approach to elearning [Special issue]. Journal of Interactive Media in
Education, 2003(1). https://doi.org/10.5334/2003-1-reuse-01
Long, Y., Aman, Z., & Aleven, V. (2015). Motivational design in an intelligent tutoring system that helps students make good task selection decisions. In C. Conati, N. Heffernan, A. Mitrovic, & M. F. Verdejo (Eds.), Artificial Intelligence in education (pp. 226–236). Springer.
Louis, M. R., & Sutton, R. I. (1991). Switching cognitive gears: From hab-
its of mind to active thinking. Human Relations, 44(1), 55–76. https://doi.
org/10.1177/001872679104400104
Lowrey, W., & Kim, K. S. (2009). Online news media and advanced learning: A test of cognitive flexibility theory. Journal of Broadcasting & Electronic Media, 53(4), 547–566. https://doi.org/10.1080/08838150903323388
Loyens, S., Kirschner, P. A., & Paas, F. (2011). Problem-based learning. In S.
Graham, A. Bus, S. Major, & L. Swanson (Eds.), APA educational psychology
handbook-application to learning and teaching (Vol. 30, pp. 403–425). American
Psychological Association.
Lukosch, H., Bussel, R., & Meijer, S. (2013). Hybrid instructional design for
serious gaming. Journal of Communication and Computer, 10, 1–8. https://doi.
org/10.17265/1548-7709/2013.01001
Maddens, L., Depaepe, F., Raes, A., & Elen, J. (2020). The instructional design
of a 4C/ID-inspired learning environment for upper secondary school students’
research skills. International Journal of Designs for Learning, 11(3), 126–147.
https://doi.org/10.14434/ijdl.v11i3.29012
Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction (3rd ed.). The Center for Effective Performance.
Maggio, L. A., Ten Cate, O., Irby, D. M., & O’Brien, B. C. (2015). Designing
evidence-based medicine training to optimize the transfer of skills from the class-
room to clinical practice: Applying the four component instructional design
model. Academic Medicine, 90(11), 1457–1461. https://doi.org/10.1097/
ACM.0000000000000769
Maran, N. J., & Glavin, R. J. (2003). Low- to high-fidelity simulation – a continuum of medical education? Medical Education, 37(S1), 22–28. https://doi.org/10.1046/j.1365-2923.37.s1.9.x
Marcellis, M., Barendsen, E., & van Merriënboer, J. J. G. (2018). Designing a
blended course in Android app development using 4C/ID. In Proceedings of the
18th Koli International Conference on Computing Education Research (article nr.
19). New York: ACM. https://doi.org/10.1145/3279720.3279739
Marei, H. F., Donkers, J., Al-Eraky, M. M., & van Merriënboer, J. J. G. (2017). The effectiveness of sequencing virtual patients with lectures in a deductive or inductive learning approach. Medical Teacher, 39, 1268–1274. https://doi.org/10.1080/0142159X.2017.1372563
Mavilidi, M. F., & Zhong, L. (2019). Exploring the development and research focus
of cognitive load theory, as described by its founders: Interviewing John Sweller,
Fred Paas, and Jeroen van Merriënboer. Educational Psychology Review, 31, 499–
508. https://doi.org/10.1007/s10648-019-09463-7
Mayer, R. E. (Ed.). (2014). The Cambridge handbook of multimedia learning (2nd
rev. ed.). Cambridge University Press.
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia
learning: When presenting more material results in less understanding. Journal of
Educational Psychology, 93(1), 187–198. https://doi.org/10.1037/0022-0663.
93.1.187
McDaniel, M. A., & Schlager, M. S. (1990). Discovery learning and transfer of problem-
solving skill. Cognition and Instruction, 7(2), 129–159. https://doi.org/10.1207/
s1532690xci0702_3
McGaghie, W. C., Issenberg, S. B., Petrusa, E. R., & Scalese, R. J. (2010). A criti-
cal review of simulation-based medical education research: 2003–2009. Medical
Education, 44(1), 50–63. https://doi.org/10.1111/j.1365-2923.2009.03547.x
McGraw, R., Newbigging, J., Blackmore, E., Stacey, M., Mercer, C., Lam, W.,
Braund, H., & Gilic, F. (2023). Using cognitive load theory to develop an emer-
gency airway management curriculum: The Queen’s University Mastery Airway
Course (QUMAC). Canadian Journal of Emergency Medicine, 25, 378–381.
https://doi.org/10.1007/s43678-023-00495-1
Meguerdichian, M. J., Bajaj, K., & Walker, K. (2021). Fundamental underpinnings of
simulation education: Describing a four-component instructional design approach to
healthcare simulation fellowships. Advances in Simulation, 6, 18. https://doi.org/
10.1186/s41077-021-00171-3
Melo, M. (2018). The 4C/ID-model in physics education: Instructional design of
a digital learning environment to teach electrical circuits. International Journal of
Instruction, 11(1), 103–122. https://doi.org/10.12973/iji.2018.1118a
Melo, M., & Miranda, G. L. (2015). Learning electrical circuits: The effects of the 4C-ID instructional approach in the acquisition and transfer of knowledge. Journal of Information Technology Education: Research, 14, 313–337. https://doi.org/10.28945/2281
Merrill, M. D. (2002). A pebble-in-the-pond model for instructional design. Performance Improvement, 41(7), 41–46. https://doi.org/10.1002/pfi.4140410709
Merrill, M. D. (2020). First principles of instruction (Revised). Association for Edu-
cational Communications and Technology.
Merrill, P. (1987). Job and task analysis. In R. M. Gagné (Ed.), Instructional technol-
ogy: Foundations (pp. 141–173). Lawrence Erlbaum Associates.
Mettes, C. T. C. W., Pilot, A., & Roossink, H. J. (1981). Linking factual and procedural
knowledge in solving science problems: A case study in a thermodynamics course.
Instructional Science, 10(4), 333–361. https://doi.org/10.1007/BF00162732
Meutstege, K., Van Geel, M., & Visscher, A. (2023). Evidence-based design of a teacher professional development program for differentiated instruction: A whole-task approach. Education Sciences, 13(985), 1–24. https://doi.org/10.3390/educsci13100985
Miller, G. E. (1990). The assessment of clinical skills/competence/performance.
Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-
199009000-00045
Mills, R., Tomas, L., & Lewthwaite, B. (2016). Learning in earth and space science: A
review of conceptual change instructional approaches. International Journal of Science
Education, 38(5), 767–790. https://doi.org/10.1080/09500693.2016.1154227
Miranda, G., Rafael, M., Melo, M., Pardal, C., De Almeida, J., & Pontes, T.
(2020). 4C-ID model and cognitive approaches to instructional design and
technology: Emerging research and opportunities. IGI Global. https://doi.
org/10.4018/978-1-7998-4096-1
Moulton, C., Regehr, G., Lingard, L., Merritt, C., & MacRae, H. (2010). Slow-
ing down to stay out of trouble in the operating room: Remaining attentive in
automaticity. Academic Medicine, 85(10), 1571–1577. https://doi.org/10.1097/
ACM.0b013e3181f073dd
Moust, J. H. C., van Berkel, H. J. M., & Schmidt, H. G. (2005). Signs of erosion: Reflections on three decades of problem-based learning at Maastricht University. Higher Education, 50(4), 665–683. https://doi.org/10.1007/s10734-004-6371-z
Mulder, Y. G., Lazonder, A. W., & de Jong, T. (2011). Comparing two types of
model progression in an inquiry learning environment with modelling facili-
ties. Learning and Instruction, 21(5), 614–624. https://doi.org/10.1016/
j.learninstruc.2011.01.003
Mulders, M. (2022). Vocational training in virtual reality: A case study using the
4C/ID model. Multimodal Technologies and Interaction, 6, 49. https://doi.org/
10.3390/mti6070049
Musharyanti, L., Haryanti, F., & Claramita, M. (2021). Improving nursing students’
medication safety knowledge and skills on using the 4C/ID learning model. Jour-
nal of Multidisciplinary Healthcare, 14, 287–295. https://doi.org/10.2147/
JMDH.S293917
Nadolski, R. J., Kirschner, P. A., & van Merriënboer, J. J. G. (2006). Process
support in learning tasks for acquiring complex cognitive skills in the domain
of law. Learning and Instruction, 16(3), 266–278. https://doi.org/10.1016/
j.learninstruc.2006.03.004
Nadolski, R. J., Kirschner, P. A., van Merriënboer, J. J. G., & Hummel, H. G. K. (2001).
A model for optimizing step size of learning tasks in competency-based multime-
dia practicals. Educational Technology Research and Development, 49(3), 87–101.
https://doi.org/10.1007/BF02504917
Nadolski, R. J., Kirschner, P. A., van Merriënboer, J. J. G., & Wöretshofer, J.
(2005). Development of an instrument for measuring the complexity of learn-
ing tasks. Educational Research and Evaluation, 11(1), 1–27. https://doi.
org/10.1080/13803610500110125
Naylor, J. C., & Briggs, G. E. (1963). Effects of task complexity and task organization on the relative efficiency of part and whole training methods. Journal of Experimental Psychology, 65(3), 217–224. https://doi.org/10.1037/h0041060
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), Psychology of learning and motivation (Vol. 26, pp. 125–173). Elsevier. https://doi.org/10.1016/S0079-7421(08)60053-5
Newell, A., & Simon, H. A. (1972). Human problem solving. Prentice-Hall.
Nixon, E. K., & Lee, D. (2001). Rapid prototyping in the instructional design process.
Performance Improvement Quarterly, 14(3), 95–116. https://doi.org/10.1111/
j.1937-8327.2001.tb00220.x
Nkambou, R., Bourdeau, J., & Mizoguchi, R. (Eds.). (2010). Advances in intelligent tutoring systems. Springer.
Norman, G. R., & Schmidt, H. G. (2000). Effectiveness of problem-based learning curricula: Theory, practice and paper darts. Medical Education, 34(9), 721–728. https://doi.org/10.1046/j.1365-2923.2000.00749.x
Norman, G. R., van der Vleuten, C. P. M., & van der Graaf, E. (1991). Pitfalls in the pursuit of objectivity: Issues of validity, efficiency and acceptability. Medical Education, 25(2), 119–126. https://doi.org/10.1111/j.1365-2923.1991.tb00037.x
Noroozi, O., Kirschner, P. A., Biemans, H. J. A., & Mulder, M. (2017). Promoting argumentation competence: Extending from first- to second-order scaffolding through adaptive fading. Educational Psychology Review, 30(1), 153–176. https://doi.org/10.1007/s10648-017-9400-z
Nückles, M., Hübner, S., Dümer, S., & Renkl, A. (2010). Expertise reversal effects in writing-to-learn. Instructional Science, 38(3), 237–258. https://doi.org/10.1007/s11251-009-9106-9
O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25, 85–95. https://doi.org/10.1016/j.iheduc.2015.02.002
Paas, F., Tuovinen, J. E., van Merriënboer, J. J. G., & Aubteen Darabi, A. (2005). A motivational perspective on the relation between mental effort and performance: Optimizing learner involvement in instruction. Educational Technology Research and Development, 53(3), 25–34. https://doi.org/10.1007/BF02504795
Paas, F., van Gog, T., & Sweller, J. (2010). Cognitive load theory: New conceptualizations, specifications, and integrated research perspectives. Educational Psychology Review, 22(2), 115–121. https://doi.org/10.1007/s10648-010-9133-8
Paas, F., & van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer
of geometrical problem-solving skills: A cognitive-load approach. Journal of Educa-
tional Psychology, 86(1), 122–133. https://doi.org/10.1037/0022-0663.86.1.122
Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory: Methods to man-
age working memory load in the learning of complex tasks. Current Directions in Psy-
chological Science, 29, 394–398. https://doi.org/10.1177/0963721420922183
Paik, E. S., & Schraw, G. (2013). Learning with animation and illusions of under-
standing. Journal of Educational Psychology, 105(2), 278–289. https://doi.org/
10.1037/a0030281
Paivio, A. (1971). Imagery and verbal processes. Holt, Rinehart, and Winston.
Paivio, A. (1986). Mental representations. Oxford University Press.
Palmeri, T. J. (1999). Theories of automaticity and the power law of practice. Jour-
nal of Experimental Psychology: Learning, Memory, and Cognition, 25(2), 543–
551. https://doi.org/10.1037/0278-7393.25.2.543
Peng, J., Wang, M. H., Sampson, D., & van Merriënboer, J. J. G. (2019). Using a
visualisation-based and progressive learning environment as a cognitive tool for
learning computer programming. Australasian Journal of Educational Technology,
35(2), 52–68. https://doi.org/10.14742/ajet.4676
Schellekens, A., Paas, F., Verbraeck, A., & van Merriënboer, J. J. G. (2010a). Designing a flexible approach for higher professional education by means of simulation modelling. Journal of the Operational Research Society, 61(2), 202–210. https://doi.org/10.1057/jors.2008.133
Schellekens, A., Paas, F., Verbraeck, A., & van Merriënboer, J. J. G. (2010b). Flexible programmes in higher professional education: Expert validation of a flexible educational model. Innovations in Education and Teaching International, 47(3), 283–294. https://doi.org/10.1080/14703297.2010.498179
Schneider, J., Börner, D., van Rosmalen, P., & Specht, M. (2016). Enhancing public speaking skills: An evaluation of the presentation trainer in the wild. In K. Verbert, M. Sharples, & T. Klobučar (Eds.), Adaptive and adaptable learning (pp. 263–276). Springer.
Schneider, W. (1985). Training high-performance skills: Fallacies and guidelines.
Human Factors: The Journal of the Human Factors and Ergonomics Society, 27(3),
285–300. https://doi.org/10.1177/001872088502700305
Schneider, W., & Detweiler, M. (1988). The role of practice in dual-task performance: Toward workload modeling in a connectionist/control architecture. Human Factors: The Journal of the Human Factors and Ergonomics Society, 30(5), 539–566. https://doi.org/10.1177/001872088803000502
Schubert, S., Ortwein, H., Dumitsch, A., Schwantes, U., Wilhelm, O., & Kiessling,
C. (2008). A situational judgement test of professional behaviour: Development
and validation. Medical Teacher, 30(5), 528–533. https://doi.org/10.1080/
01421590801952994
Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2004). Diferent written assessment
methods: What can be said about their strengths and weaknesses? Medical Educa-
tion, 38(9), 974–979. https://doi.org/10.1111/j.1365-2929.2004.01916.x
Shumway, J. M., & Harden, R. M. (2003). AMEE Guide No. 25: The assessment of learning outcomes for the competent and reflective physician. Medical Teacher, 25(6), 569–584. https://doi.org/10.1080/0142159032000151907
Si, J., & Kim, D. (2011). How do instructional sequencing methods affect cognitive load, learning transfer, and learning time? Educational Research, 2(8), 1362–1372.
Sloep, P., & Berlanga, A. (2011). Learning networks, networked learning [Redes
de aprendizaje, aprendizaje en red]. Comunicar, 19(37), 55–64. https://doi.
org/10.3916/C37-2011-02-05
Sluijsmans, D. M. A., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002a). Peer assessment training in teacher education: Effects on performance and perceptions. Assessment & Evaluation in Higher Education, 27(5), 443–454. https://doi.org/10.1080/0260293022000009311
Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Bastiaens, T. J. (2002b). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29(1), 23–42. https://doi.org/10.1016/S0191-491X(03)90003-4
Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Martens, R. L. (2004). Training teachers in peer-assessment skills: Effects on performance and perceptions. Innovations in Education and Teaching International, 41(1), 59–78. https://doi.org/10.1080/1470329032000172720
Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive archi-
tecture and instructional design. Educational Psychology Review, 10(3), 251–296.
https://doi.org/10.1023/A:1022193728205
Taatgen, N. A., & Lee, F. J. (2003). Production compilation: A simple mechanism to
model complex skill acquisition. Human Factors: The Journal of the Human Factors and
Ergonomics Society, 45(1), 61–76. https://doi.org/10.1518/hfes.45.1.61.27224
Taminiau, E. M. C., Kester, L., Corbalan, G., Alessi, S. M., Moxnes, E., Gijselaers,
W. H., Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Why advice on task
selection may hamper learning in on-demand education. Computers in Human
Behavior, 29(1), 145–154. https://doi.org/10.1016/j.chb.2012.07.028
Taminiau, E. M. C., Kester, L., Corbalan, G., Spector, J. M., Kirschner, P. A., & van Merriënboer, J. J. G. (2015). Designing on-demand education for simultaneous development of domain-specific and self-directed learning skills. Journal of Computer Assisted Learning, 31(5), 405–421. https://doi.org/10.1111/jcal.12076
Tawfik, A. A., Graesser, A., Gatewood, J., & Gishbauger, J. (2020). Role of questions in inquiry-based instruction: Towards a design taxonomy for question-asking and implications for design. Educational Technology Research and Development, 68, 653–678. https://doi.org/10.1007/s11423-020-09738-9
Ten Cate, O. (2013). Nuts and bolts of entrustable professional activities. Jour-
nal of Graduate Medical Education, 5(1), 157–158. https://doi.org/10.4300/
JGME-D-12-00380.1
Tennyson, R. D., & Cocchiarella, M. J. (1986). An empirically based instructional
design theory for teaching concepts. Review of Educational Research, 56(1),
40–71. https://doi.org/10.3102/00346543056001040
Thijssen, J. G. L., & Walter, E. M. (2006). Identifying obsolescence and related factors
among elderly employees (Conference paper). www.ufhrd.co.uk/wordpress
Thornton, G. C., & Kedharnath, U. (2013). Work sample tests. In K. F. Geisinger,
B. A. Bracken, J. F. Carlson, J.-I. C. Hansen, N. R. Kuncel, S. P. Reise, & M. C.
Rodriguez (Eds.), APA handbook of testing and assessment in psychology: Test the-
ory and testing and assessment in industrial and organizational psychology (Vol. 1,
pp. 533–550). American Psychological Association. https://doi.org/10.1037/
14047-029
Tjiam, I. M., Schout, B. M. A., Hendrikx, A. J. M., Scherpbier, A. J. J. M., Witjes, J.
A., & van Merriënboer, J. J. G. (2012). Designing simulator-based training: An ap-
proach integrating cognitive task analysis and four-component instructional design.
Medical Teacher, 34(10), e698–e707. https://doi.org/10.3109/0142159X.2012.
687480
Topping, K. (1998). Peer assessment between students in colleges and universi-
ties. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/
00346543068003249
Torre, D. M., Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2020). Theoretical
considerations on programmatic assessment. Medical Teacher, 42(2), 213–220.
https://doi.org/10.1080/0142159X.2019.1672863
Tracey, M. W., & Boling, E. (2013). Preparing instructional designers and educa-
tional technologists: Traditional and emerging perspectives. In M. Spector, D.
Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational com-
munications and technology (4th ed., pp. 653–660). Springer.
Tricot, A., & Sweller, J. (2014). Domain-specific knowledge and why teaching generic skills does not work. Educational Psychology Review, 26(2), 265–283. https://doi.org/10.1007/s10648-013-9243-1
Turns, J., Atman, C. J., & Adams, R. (2000). Concept maps for engineering education:
A cognitively motivated tool supporting varied assessment functions. IEEE Trans-
actions on Education, 43(2), 164–173. https://doi.org/10.1109/13.848069
Van Boxtel, C., van der Linden, J., & Kanselaar, G. (2000). Collaborative learning
tasks and the elaboration of conceptual knowledge. Learning and Instruction,
10(4), 311–330. https://doi.org/10.1016/S0959-4752(00)00002-5
Van Bussel, R., Lukosch, H., & Meijer, S. A. (2014). Effects of a game-facilitated curriculum on technical knowledge and skill development. In S. A. Meijer & R. Smeds (Eds.), Frontiers in gaming simulation (pp. 93–101). Springer.
Van den Boom, G., Paas, F., & van Merriënboer, J. J. G. (2007). Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction, 17(5), 532–548. https://doi.org/10.1016/j.learninstruc.2007.09.003
Van der Klink, M., Gielen, E., & Nauta, C. (2001). Supervisory support as a major
condition to enhance transfer. International Journal of Training and Develop-
ment, 5(1), 52–63. https://doi.org/10.1111/1468-2419.00121
Van der Meij, H. (2003). Minimalism revisited. Document Design, 4(3), 212–233.
https://doi.org/10.1075/dd.4.3.03mei
Van der Meij, H., & Lazonder, A. W. (1993). Assessment of the minimalist approach
to computer user documentation. Interacting with Computers, 5(4), 355–370.
https://doi.org/10.1016/0953-5438(93)90001-A
Van der Vleuten, C. P. M., Verwijnen, G. M., & Wijnen, W. H. F. W. (1996). Fifteen years
of experience with progress testing in a problem-based learning curriculum. Medi-
cal Teacher, 18(2), 103–109. https://doi.org/10.3109/01421599609034142
Van Geel, M., Keuning, T., Frèrejean, J., Dolmans, D., van Merriënboer, J., & Visscher, A. J. (2019). Capturing the complexity of differentiated instruction. School Effectiveness and School Improvement, 30(1), 51–67. https://doi.org/10.1080/09243453.2018.1539013
Van Gog, T., Ericsson, K. A., Rikers, R. M. J. P., & Paas, F. (2005). Instructional
design for advanced learners: Establishing connections between the theoreti-
cal frameworks of cognitive load and deliberate practice. Educational Technology
Research and Development, 53(3), 73–81. https://doi.org/10.1007/BF02504799
Van Gog, T., Jarodzka, H., Scheiter, K., Gerjets, P., & Paas, F. (2009). Attention
guidance during example study via the model’s eye movements. Computers in
Human Behavior, 25(3), 785–791. https://doi.org/10.1016/j.chb.2009.02.007
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2004). Process-oriented worked
examples: Improving transfer performance through enhanced understanding.
Instructional Science, 32(1/2), 83–98. https://doi.org/10.1023/B:TRUC.
0000021810.70784.b0
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2005). Uncovering expertise-related differences in troubleshooting performance: Combining eye movement and concurrent verbal protocol data. Applied Cognitive Psychology, 19(2), 205–221. https://doi.org/10.1002/acp.1112
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2006). Effects of process-oriented worked examples on troubleshooting transfer performance. Learning and Instruction, 16(2), 154–164. https://doi.org/10.1016/j.learninstruc.2006.02.003
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2008). Effects of studying sequences of process-oriented and product-oriented worked examples on troubleshooting transfer efficiency. Learning and Instruction, 18(3), 211–222. https://doi.org/10.1016/j.learninstruc.2007.03.003
Van Gog, T., Paas, F., van Merriënboer, J. J. G., & Witte, P. (2005). Uncovering the
problem-solving process: Cued retrospective reporting versus concurrent and ret-
rospective reporting. Journal of Experimental Psychology: Applied, 11(4), 237–244.
https://doi.org/10.1037/1076-898X.11.4.237
Van Loon, M. H., de Bruin, A. B. H., van Gog, T., & van Merriënboer, J. J. G. (2013). The effect of delayed-JOLs and sentence generation on children’s monitoring accuracy and regulation of idiom study. Metacognition and Learning, 8(2), 173–191. https://doi.org/10.1007/s11409-013-9100-0
Van Luijk, S. J., van der Vleuten, C. P. M., & Schelven, R. M. (1990). Observer and
student opinions about performance-based tests. In W. Bender, R. J. Hiemstra, A.
J. Scherpbier, & R. P. Zwierstra (Eds.), Teaching and assessing clinical competence
(pp. 199–203). Boekwerk Publications.
Van Meeuwen, L. W., Brand-Gruwel, S., Kirschner, P. A., de Bock, J., & van Mer-
riënboer, J. J. G. (2018). Fostering self-regulation in training complex cognitive
tasks. Educational Technology Research and Development, 66(1), 53–73. https://
doi.org/10.1007/s11423-017-9539-9
Van Merriënboer, J. J. G. (1990). Strategies for programming instruction in high
school: Program completion vs. program generation. Journal of Educational Com-
puting Research, 6(3), 265–285. https://doi.org/10.2190/4NK5-17L7-TWQV-
1EHL
Van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component
instructional design model for technical training. Educational Technology
Publications.
Van Merriënboer, J. J. G. (2000). The end of software training? Journal of
Computer Assisted Learning, 16(4), 366–375. https://doi.org/10.1046/
j.1365-2729.2000.00149.x
Van Merriënboer, J. J. G. (2007). Alternate models of instructional design: Holistic
design approaches and complex learning. In R. A. Reiser & J. V. Dempsey (Eds.),
Trends and issues in instructional design and technology (2nd ed., pp. 72–81). Pear-
son/Merrill Prentice Hall.
Van Merriënboer, J. J. G. (2013). Perspectives on problem solving and instruction.
Computers & Education, 64, 153–160. https://doi.org/10.1016/j.compedu.
2012.11.025
Van Merriënboer, J. J. G. (2016). How people learn. In N. Rushby & D. W. Surry
(Eds.), The Wiley handbook of learning technology (pp. 15–34). Wiley Blackwell.
Van Merriënboer, J. J. G. (2017). Instructional design. In J. A. Dent, R. M. Harden, &
D. Hunt (Eds.), A practical guide for medical teachers (5th ed., pp. 162–169).
Elsevier.
Van Merriënboer, J. J. G., & Boot, E. (2005). A holistic pedagogical view of learn-
ing objects: Future directions for reuse. In J. M. Spector, C. Ohrazda, A. van
Schaaik, & D. A. Wiley (Eds.), Innovations in instructional technology: Essays in
honor of M. David Merrill (pp. 43–64). Lawrence Erlbaum Associates.
Van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints
for complex learning: The 4C/ID-model. Educational Technology Research and
Development, 50(2), 39–61. https://doi.org/10.1007/BF02504993
Van Merriënboer, J. J. G., & de Bruin, A. B. H. (2014). Research paradigms and
perspectives on learning. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop
(Eds.), Handbook of research on educational communications and technology (4th
ed., pp. 21–30). Springer.
Van Merriënboer, J. J. G., & de Croock, M. B. M. (1992). Strategies for com-
puter-based programming instruction: Program completion vs. program genera-
tion. Journal of Educational Computing Research, 8(3), 365–394. https://doi.
org/10.2190/MJDX-9PP4-KFMT-09PM
Van Merriënboer, J. J. G., & de Croock, M. B. M. (2002). Performance-based ISD: 10 steps to complex learning. Performance Improvement, 41(7), 35–40. https://doi.org/10.1002/pfi.4140410708
Van Merriënboer, J. J. G., de Croock, M. B. M., & Jelsma, O. (1997). The transfer paradox: Effects of contextual interference on retention and transfer performance of a complex cognitive skill. Perceptual and Motor Skills, 84(3), 784–786. https://doi.org/10.2466/pms.1997.84.3.784
Van Merriënboer, J. J. G., & Dolmans, D. H. J. M. (2015). Research on instructional
design in the health sciences: From taxonomies of learning to whole-task models.
In J. Cleland & S. J. Durning (Eds.), Researching medical education (pp. 193–
206). Wiley Blackwell.
Van Merriënboer, J. J. G., Gros, B., & Niegemann, H. (2018). Instructional design
in Europe: Trends and issues. In R. A. Reiser & J. V. Dempsey (Eds.), Trends
and issues in instructional design and technology (4th ed., pp. 192–198). Pearson
Education.
Van Merriënboer, J. J. G., Jelsma, O., & Paas, F. G. W. C. (1992). Training for reflective expertise: A four-component instructional design model for complex cognitive skills. Educational Technology Research and Development, 40(2), 23–43. https://doi.org/10.1007/BF02297047
Van Merriënboer, J. J. G., & Kester, L. (2008). Whole-task models in education. In
J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.),
Handbook of research on educational communications and technology (3rd ed.,
pp. 441–456). Lawrence Erlbaum Associates/Routledge.
Van Merriënboer, J. J. G., & Kester, L. (2014). The four-component instructional
design model: Multimedia principles in environments for complex learning. In R.
E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd rev. ed.,
pp. 104–148). Cambridge University Press.
Van Merriënboer, J. J. G., Kester, L., & Paas, F. (2006). Teaching complex rather than
simple tasks: Balancing intrinsic and germane load to enhance transfer of learn-
ing. Applied Cognitive Psychology, 20(3), 343–352. https://doi.org/10.1002/
acp.1250
Vosniadou, S., & Brewer, W. F. (1992). Mental models of the earth: A study of con-
ceptual change in childhood. Cognitive Psychology, 24(4), 535–585. https://doi.
org/10.1016/0010-0285(92)90018-W
Vosniadou, S., & Ortony, A. (1989). Similarity and analogical reasoning. Cam-
bridge University Press.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological pro-
cesses. Harvard University Press.
Wade, C. H., Wilkens, C., Sonnert, G., & Sadler, P. (2023). Presenting a new model
to support the secondary-tertiary transition to college calculus: The secondary
precalculus and calculus four component instructional design (SPC 4C/ID)
model. Journal of Mathematics Education at Teachers College, 14(1), 1–9. https://
doi.org/10.52214/jmetc.v14i1.10483
Wasson, B., & Kirschner, P. A. (2020). Learning design: European approaches. Tech-
Trends, 64, 815–827. https://doi.org/10.1007/s11528-020-00498-0
Wedman, J., & Tessmer, M. (1991). Adapting instructional design to project
circumstance: The layers of necessity model. Educational Technology, 31(7), 48–52.
Westera, W., Sloep, P. B., & Gerrissen, J. F. (2000). The design of the virtual company:
Synergism of learning and working in a networked environment. Innovations in
Education and Training International, 37(1), 23–33. https://doi.org/10.1080/
135580000362052
Wetzels, S. A. J., Kester, L., & van Merriënboer, J. J. G. (2011). Adapting prior
knowledge activation: Mobilisation, perspective taking, and learners’ prior knowl-
edge. Computers in Human Behavior, 27(1), 16–21. https://doi.org/10.1016/
j.chb.2010.05.004
White, B. Y., & Frederiksen, J. R. (1990). Causal model progressions as a foundation for intelligent learning environments. Artificial Intelligence, 42(1), 99–157. https://doi.org/10.1016/0004-3702(90)90095-H
Wickens, C. D., Hutchins, S., Carolan, T., & Cumming, J. (2013). Effectiveness of part-task training and increasing-difficulty training strategies: A meta-analysis approach. Human Factors: The Journal of the Human Factors and Ergonomics Society, 55(2), 461–470. https://doi.org/10.1177/0018720812451994
Wightman, D. C., & Lintern, G. (1985). Part-task training for tracking and manual
control. Human Factors: The Journal of the Human Factors and Ergonomics Soci-
ety, 27(3), 267–283. https://doi.org/10.1177/001872088502700304
Wiley, D., Bliss, T. J., & McEwen, M. (2014). Open educational resources: A review
of the literature. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.),
Handbook of research on educational communications and technology (4th ed.,
pp. 781–790). Springer.
Wiley, D., Hilton, J. L., III, Ellington, S., & Hall, T. (2012). A preliminary examina-
tion of the cost savings and learning impacts of using open textbooks in middle
and high school science classes. The International Review of Research in Open and
Distributed Learning, 13(3), 262. https://doi.org/10.19173/irrodl.v13i3.1153
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evalua-
tion, 37(1), 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001
Wittrock, M. C. (1989). Generative processes of comprehension. Educational
Psychologist, 24(4), 345–376. https://doi.org/10.1207/s15326985ep2404_2
Wolterinck, C., Poortman, C., Schildkamp, K., & Visscher, A. (2022). Assessment
for learning: Developing the required teacher competencies. European Journal of
Teacher Education. https://doi.org/10.1080/02619768.2022.2124912
Brown, J. S. 2, 86
Bruner, J. S. 183
Brusaw, C. T. 226
Buckingham Shum, S. 51
Bullock, A. D. 148
Burles, M. 70
Bussel, R. 71
Butler, D. L. 176
Cakir, R. 340
Camp, G. 31
Caniels, M. C. J. 148
Carey, J. O. 256, 262
Carey, L. 256, 262
Carlson, R. A. 26, 293, 294
Carolan, T. 139
Carr, J. F. 322
Carrithers, C. 280
Carroll, J. M. 226, 236, 237, 280
Cassese, T. 80
Celani, G. 148
Chandler, P. 27, 91, 227
Charlier, N. 277
Charlin, B. 328
Chase, W. G. 209
Chi, M. T. H. 172
Chiu, J. L. 172
Choi, H.-H. 350
Christensen, C. R. 80, 340
Chu, Y. S. 289
Claramita, M. 277, 316
Clarebout, G. 344
Clark, R. E. 2, 22, 37, 42, 55, 56, 63, 75, 82, 202, 250
Clement, J. 216
Cocchiarella, M. J. 264
Collins, A. 2, 86, 177, 182
Condron, C. 65
Corbalan, G. 31, 33, 71, 152, 155
Costa, J. M. 2, 11
Costley, J. 175
Craig, C. 11
Crossman, E. R. F. W. 284, 285
Cullinan, D. 343
Cumming, J. 139
Custers, E. J. F. M. 165
Czabanowska, K. 41
Daniel, M. 80
Dankbaar, M. E. W. 70, 71
Davis, D. A. 148
Davis, J. 11
Davis, N. 89
De Almeida, J. 9
de Bock, J. 304
de Bruin, A. B. H. 75, 302, 304, 340
Deci, E. L. 348
de Croock, M. B. M. 8, 9, 22, 49, 75, 82, 93, 112, 202, 351
De Groot, A. D. 209
de Jong, N. 41
de Jong, T. 39, 143, 181
de Leng, B. A. 67
Depaepe, F. 11
de Ribaupierre, S. 128
De Smet, M. J. R. 199
Detweiler, M. 113
Dick, W. 256, 262
Dionne, G. 350
Dolmans, D. 11, 128, 343
Dolmans, D. H. J. M. 4, 6, 39, 66, 120, 154, 342
Donkers, J. 67
Dory, V. 328
Downie, W. W. 332
Dubrowski, A. 128
Dumer, S. 93
Dumitsch, A. 328
Dunlosky, J. 302
Dunning, D. 148, 302
Early, S. 55, 56, 250
Edwards, R. 347
Eika, B. 86
Elen, J. 11, 277, 343
Ellaway, R. 67
Ellington, S. 51
Elliott, R. G. 293, 294
Eppich, W. 65
Ericsson, K. A. 100, 191, 234, 291, 312, 314
Ertmer, P. A. 80
Espasa, A. 176
Eva, K. W. 148
Faber, T. J. E. 70, 71
Fassier, T. 56
Fastré, G. M. J. 114
Feinauer, S. 221
Feldon, D. F. 55, 56, 250
Fenwick, T. 347
Ferguson, W. 177, 182
Maran, N. J. 66
Marcellis, M. 306, 342
Marei, H. F. 67
Markham, W. A. 148
Martarelli, C. S. 180, 232, 352
Martens, R. 10, 351
Martens, R. L. 66, 79, 100, 149
Marx, R. W. 79, 340
Mavilidi, M. F. 30
Mayer, R. E. 66, 166, 167, 178
Mazmanian, P. E. 148
McDaniel, M. A. 172
McEwen, M. 51
McGaghie, W. C. 66
McGraw, R. 67
McIlwrick, J. 350
McKenney, S. 342, 343
McLaughlin, K. 350
Meguerdichian, M. J. 63
Meijer, S. 71
Meijer, S. A. 345
Melo, M. 2, 9, 10
Mercer, C. 67
Merrill, M. D. 2, 7, 46, 52, 53, 340
Merrill, P. 198
Merritt, C. 112, 294
Mettes, C. T. C. W. 194
Meutstege, K. 11
Miller, G. E. 322
Mills, C. 114
Mills, R. 213
Miranda, G. 9, 148
Miranda, G. L. 2, 10, 11
Mizoguchi, R. 35
Moerkerke, G. 148
Moulton, C. 112, 294
Moust, J. H. C. 340
Moxnes, E. 155
Muijtjens, A. 330
Muijtjens, A. M. M. 116
Mulder, M. 36, 348
Mulder, Y. G. 143
Mulders, M. 71
Musharyanti, L. 277
Myers, R. D. 340
Nadolski, R. J. 88, 90
Narens, L. 301
Nauta, C. 154
Naylor, J. C. 6
Nelson, T. O. 301
Nendaz, M. 56
Neuhaus, B. J. 238
Newbigging, J. 67
Newell, A. 76
Newman, S. E. 2, 86
Ng, G. 128
Niegemann, H. 354
Nisbett, R. E. 74
Nixon, E. K. 50
Nkambou, R. 35
Norman, G. R. 326, 340
Noroozi, O. 36, 348, 350
Nückles, M. 93
Nystrom, M. 86
O’Brien, B. C. 66
O’Donovan, R. 257
O’Flaherty, J. 41, 342
Olina, Z. 7
Oliu, W. E. 226
Ortony, A. 87
Ortwein, H. 328
Osborne, M. A. 3
O’Sullivan, P. S. 350
Ozcan, K. V. 340
Ozturkcu, O. S. K. 11
Paas, F. 20, 27, 30, 31, 39, 66, 76, 82, 86, 87, 91, 100, 100–101, 148, 152, 153, 154, 172, 182, 191, 306, 314, 335, 343, 347, 349, 350, 351
Paas, F. G. W. C. 9, 82
Paik, E. S. 302
Paivio, A. 178, 271
Palincsar, A. 79, 340
Palmeri, T. J. 285
Panadero, E. 148
Pardal, C. 9
Parker, J. 89
Peng, J. 176, 340
Perrier, L. 148
Petrusa, E. R. 66, 334
Petzoldt, T. 221
Phillips, C. 41, 342
Pilot, A. 194
Plake, B. S. 114
Pontes, T. 9, 148
Poortman, C. 343
Popova, A. 176
Postma, T. C. 9
Powell, D. M. 147
Wedman, J. 51
Weil, M. 131
Westera, W. 67
Wetzels, S. A. J. 158
White, B. Y. 214
White, J. G. 9
Whitehouse, A. B. 148
Whyatt, C. 11
Wickens, C. D. 139
Wightman, D. C. 283
Wijnen, W. H. F. W. 329
Wiley, D. 51
Wilhelm, O. 328
Wiliam, D. 237
Wilkens, C. 10–11
Wilson, G. M. 332
Winne, P. H. 176
Witjes, J. A. 250
Witte, P. 86, 100–101, 191
Wittrock, M. C. 166
Wolf, M. 80
Wolfhagen, I. H. A. P. 4, 342
Wolterinck, C. 343
Wood, D. 232
Wood, D. F. 306
Wood, H. 232
Woolley, N. N. 2
Wopereis, I. 175, 306, 308, 312, 336
Wopereis, I. G. J. H. 89, 175, 343
Woretshofer, J. 90
Wright, B. 350
Wright, N. 11
Wrigley, W. 330
Xiao, Y. 343
Xu, M. 49
Yan, H. 343
Yang, C. C. 289
Yang, H. C. 289
Yates, K. A. 55, 56, 250
Young, J. Q. 350
Yuan, B. 176
Yuksel, G. 340
Zacharia, Z. C. 39
Zander, T. O. 351
Zary, N. 67
Zemke, R. 353
Zhong, L. 30
Zhou, D. 11
Zwart, D. P. 350
Subject Index
Note: Page numbers in italics indicate a figure and page numbers in bold indicate a table on the corresponding page. Page numbers followed by “b” refer to boxes.
162b; integrated 4, 6, 19, 20; and nonrecurrent skills 176; same use the same 23; tacit 162b
knowledge progression methods 128, 133–134, 197–198
laws 166
layers of necessity 51
leading questions 172
learner control 34, 36–37
learning: cognitive apprenticeship 2; complex 3–4; integration of 6; and motivation 348–351; see also transfer of learning
learning aids 25, 40, 241
learning analytics 347
learning by doing 340
learning goals 97–99, 104–105
learning management systems 351
learning networks 349
learning objectives 7, 322
learning processes 38
learning task design: inductive learning 72b–74b; necessity of 59–60; primary medium for 63; problem-solving guidance in 82–87, 88, 89–91; real-life tasks 60–63, 66; simulated task environments 63–67, 70, 70; support and guidance in 75–76, 77–79, 80–81, 92; and teachers 342–343; variability of practice 71, 73b, 74b, 75; for whole-task practice 47; zigzagging in 10
learning tasks 9; adaptive learning 34–35; and cognitive load theory 28b; completion tasks 82; control of 34; conventional 77–79, 80; defining 9, 16; design of performance assessments 48, 53; imitation tasks 86; and inductive learning 19; media for 37, 38, 39; monitoring progress 116–117, 302; with a nonspecific goal 80; on-demand education 35–36; real-life 62–63, 350; representative 71; reverse tasks 80; scaffolding 22; sequencing of 48, 53, 125–126; and standards 116, 118, 117; structural features of 74; summative assessment of 325–326; support and guidance 22–23; and supportive information 168; surface features of 74; task classes 21; use of variability of practice 20, 23
librarians: cognitive strategies 23; constituent skills for 23–24; training programs for 19; variability of practice 20
lifelike learning 345
location-in-space relationships 208
location-in-time relationships 207, 217
low fidelity 65, 66, 71
malrules 239, 255–256, 258
mash-ups 51
mass customization 345–347
massed practice 291
matching 286
meaningful learning 161b
means-ends analysis 91
mechatronics 345, 346
media: blended 41; for blueprint components 37, 38, 39–40; computer based 39–40; help systems 240; hypermedia 179–180; multimedia 178–183; and physical fidelity 39; for procedural information 239; selection of 42; smartphones and tablets 40, 241
media nodes 179
medicine and medical education: and assessment 326; development of 4; diagnostic skills for 130, 177, 227; and emotions 350; history of 1–2; media for 39; and multidisciplinary tasks 61; and part-task practice 26–27, 36; and procedural information 234; and simulated task environments 64, 64, 65, 67, 70; and variability of practice 74b, 75
memorization 233–234
mental image 267
mental model progression 214, 215, 216
mental models: assessment of 329; and cognitive feedback 177
mental models analysis: and case studies 215–216; causal models 210–211; cause-effect relationships 217; and cognitive feedback 215;
self-directed learning (SDL): and autonomy 350; deliberate practice skills 304–305; information literacy skills 304–305; learning and teaching skills for 304; monitoring and control in 301; relevant skills for 305; in resource-based learning 306
self-directed learning skills: importance of 36; second-order scaffolding 36
self-explanation principle 162b
self-pacing principle 178
self-regulated learning (SRL): cues in 302; learning and teaching skills for 301–305; monitoring and control in 301–303, 303
self-study phase 306
semantic networks 207
sequencing: assessment formats 147–148; assessors 147–149; guidelines for 156; individualized learning trajectories 145–147, 146; necessity of 125–126; part-task 138–142, 283, 283; part-whole 142–143, 142, 144; protocol portfolio scoring 149–151, 150; second-order scaffolding 151–153; task-selection skills 151–153; whole-part 142–143, 142; whole-task 127–133, 138, 141, 145
serious games 70
shared control 152
signaling principle 240–241, 240
simplification 283
simplifying conditions method 128; available time 129; location 129; participants dynamics 129–130; patent examination 130; project goal 128–129; substantive examination 130; video length 128
simulated task environments: computer based 67, 69, 70, 70; defining 63; fidelity of 65–67, 70, 70, 71; reasons for using 63–64, 64, 65; serious games 70
simulated time compression 290–291
simulation-based games 70
simulation-based performance tests 147
simultaneous relationships 99, 193
situated practice 182
situational judgment tests 147, 328
skill clusters 139–140, 139, 143, 144
skill decomposition: and constituent skills 97–99; skill hierarchy in 96–102; validation cycles in 101
skill hierarchy: data gathering in 100–102; guidelines for 100–101; horizontal relationships 99–100; and learning goals 97; and performance assessment 107–109; relationships in 100; simultaneous relationships 99; temporal relationships 99; transposable relationships 99
smartphones and tablets 40, 240
snowballing 140, 139, 141–142
social media 182–183
social networks 349
solicited information presentation 35, 231–232, 234–236, 350
solution-process guidance 22
spaced practice 291
spatial split-attention principle 240, 241
split attention effect 226, 227, 230, 235, 241
standard-centered assessment 117, 149, 292
standards: attitudes 106; consistent 115; criteria 106; and performance assessment 114, 116–117, 118; values 106
standards-task matrix 117, 118
step-by-step instruction 221
strengthening: accumulating 283b; and learning processes 38; media for 37; and part-task practice 26, 29b–30b, 40, 283b–284b; and power law of practice 284b; and practice items 277; and procedural information 257–258; and rule formation 221b, 284b
structural features 75
structural models: and case studies 169–170; defining 165; examples of 210; identifying 208–210; and task domains 212
structural understanding 161b
subgoaling 286
subordinate concepts 205, 205
summative assessment: at the does level 325–326; of domain-general skills 334–338; of part-tasks 322,