
Ten Steps to Complex Learning

Ten Steps to Complex Learning presents a path from an educational problem to a solution
in a way that students, design practitioners, and researchers can understand and easily use.
Students in the fields of instructional design and the learning sciences can use this book
to broaden their knowledge of the design of training programs for complex learning.
Practitioners can use this book as a reference guide to support their design of courses,
curricula, or environments for complex learning.
Driven by the acclaimed Four-Component Instructional Design (4C/ID) model, this
fourth edition of Ten Steps to Complex Learning is fully revised with the latest research,
featuring over 50 new references. The entire book has been updated for clarity, incorporating
new, colorful graphics and diagrams, and the guiding example used throughout the book
is replaced with a training blueprint for the complex skill of ‘producing video content.’ The
closing chapter explores the future development of the Ten Steps, discussing changes in
teacher roles and the influence of artificial intelligence.

Jeroen J. G. van Merriënboer (1959–2023) was Emeritus Professor of Learning and Instruction at Maastricht University, The Netherlands. Before retiring, he was Research
Director of the Graduate School of Health Professions Education (SHE) at Maastricht
University. He also held honorary positions at the University of Bergen, Norway, and the
Open University of the Netherlands. He has published over 450 journal articles and book
chapters in the areas of learning and instruction and medical education.

Paul A. Kirschner (1951) is Emeritus Professor of Educational Psychology at the Open University of the Netherlands as well as Honorary Doctor (Doctor Honoris Causa) at the
University of Oulu, Finland, and Guest Professor at Thomas More University of Applied
Sciences, Belgium. He owns his own educational consultancy company, kirschner-ED,
and has published more than 400 journal articles, books, and book chapters in the areas
of educational psychology, learning, and instruction.

Jimmy Frèrejean (1986) is Assistant Professor at the Faculty of Health, Medicine and Life
Sciences at Maastricht University, The Netherlands. Alongside his teaching and research
on instructional design at the School of Health Professions Education, Jimmy actively
contributes to the Maastricht University Medical Center and Academy as an educational
consultant. He specializes in simulation-based education, coordinates a national lifelong
learning program for healthcare professionals, and is part of the TRISIM expert group on
training, research, and innovation in simulation-based education in healthcare.
Ten Steps to Complex Learning

A Systematic Approach to Four-Component Instructional Design
Fourth Edition

Jeroen J. G. van Merriënboer, Paul A. Kirschner, and Jimmy Frèrejean
Designed cover image: © Jeroen J. G. van Merriënboer, Paul A.
Kirschner, and Jimmy Frèrejean
Fourth edition published 2025
by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
Routledge is an imprint of the Taylor & Francis Group, an informa
business
© 2025 Taylor & Francis
The right of Jeroen J. G. van Merriënboer, Paul A. Kirschner,
and Jimmy Frèrejean to be identified as authors of this work
has been asserted in accordance with sections 77 and 78 of the
Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted
or reproduced or utilised in any form or by any electronic,
mechanical, or other means, now known or hereafter invented,
including photocopying and recording, or in any information
storage or retrieval system, without permission in writing from
the publishers.
Trademark notice: Product or corporate names may be
trademarks or registered trademarks and are used only for
identification and explanation without intent to infringe.
First edition published by Routledge 2007
Third edition published by Routledge 2017
ISBN: 978-1-032-34508-6 (hbk)
ISBN: 978-1-032-33311-3 (pbk)
ISBN: 978-1-003-32248-1 (ebk)
DOI: 10.4324/9781003322481
Typeset in Galliard
by Apex CoVantage, LLC
To Jeroen, our colleague, mentor, teacher, example, and above all, friend.

Your wisdom and guidance inspired all who knew you, leaving an indelible mark on them and the field.

We will sorely miss you!


Contents

About the Authors
Preface
Acknowledgments

1 A New Approach to Instruction
2 Four Blueprint Components
3 Ten Steps
4 Step 1: Design Learning Tasks
5 Step 2: Design Performance Assessments
6 Step 3: Sequence Learning Tasks
7 Step 4: Design Supportive Information
8 Step 5: Analyze Cognitive Strategies
9 Step 6: Analyze Mental Models
10 Step 7: Design Procedural Information
11 Step 8: Analyze Cognitive Rules
12 Step 9: Analyze Prerequisite Knowledge
13 Step 10: Design Part-Task Practice
14 Domain-General Skills
15 Programs of Assessment
16 Closing Remarks

Appendix 1
Appendix 2
Glossary
References
Author Index
Subject Index
About the Authors

Jeroen J. G. van Merriënboer (1959–2023) was Emeritus Professor of Learning and Instruction at Maastricht University, The Netherlands. Before retiring, he was Research Director of the Graduate School of Health Professions Education (SHE) at Maastricht University. He also held honorary positions at the University of Bergen, Norway, and the Open University of the Netherlands. He obtained a master's degree in experimental psychology from the Vrije Universiteit Amsterdam (1984) and a PhD degree in instructional technology from the University of Twente (1990). Van Merriënboer specialized in cognitive architecture and instruction, instructional design for complex learning, holistic approaches to instructional design, and the use of ICT in education. He published more than 450 journal articles and book chapters in the areas of learning and instruction and medical education. More than 50 PhD candidates completed their theses under his supervision. He served on the editorial boards of highly ranked scientific journals, such as Cognitive Processing; Computers in Human Behavior; Educational Research Review; Educational Technology Magazine; Educational Technology Research and Development; International Journal for Virtual and Personal Learning Environments; Journal of Computing in Higher Education; Learning and Instruction; and Technology, Instruction, Cognition and Learning. His prize-winning monograph, Training Complex Cognitive Skills (1997), describes his four-component instructional design model for complex skills training and offers a systematic, research-based approach to designing environments for complex learning. He received several awards and prizes for his scientific work; apart from prizes for publications and PhD supervision, he was declared World Leader in Educational Technology by Training Magazine and received the International Contributions Award from the Association for Educational Communications and Technology.
Paul A. Kirschner (1951) is Emeritus Professor of Educational Psychology at the Open University of the Netherlands as well as Honorary Doctor (Doctor Honoris Causa) at the University of Oulu, Finland, Guest Professor at Thomas More University of Applied Sciences in Belgium, and owner of his own educational consultancy company, kirschner-ED. Before retiring, he was University Distinguished Professor at the Open University of the Netherlands and Visiting Professor of Education with a special emphasis on Learning and Interaction in Teacher Education at the University of Oulu, Finland. He has published over 400 scientific articles in the areas of learning and instruction and, at the moment of this writing, has supervised 46 PhD candidates who have successfully completed their theses under his supervision. He is an internationally recognized expert in the fields of educational psychology and instructional design. He was Research Fellow of the Netherlands Institute for Advanced Study in the Humanities and Social Sciences. He was President of the International Society for the Learning Sciences (ISLS) in 2010–2011 and is Research Fellow of the American Educational Research Association (the first European to receive this honor) and Fellow of the International Society for the Learning Sciences. He was a member of the Scientific Technical Council of the Foundation for University Computing Facilities (SURF WTR) in the Netherlands and a member of the Dutch Educational Council and, as such, advisor to the Minister of Education (2000–2004). He is Chief Editor of the Journal of Computer Assisted Learning and Commissioning Editor of Computers in Human Behavior. In addition to this book, he is also (co)author of a number of very successful books, including How Learning Happens: Seminal Works in Educational Psychology and What They Mean in Practice; How Teaching Happens: Seminal Works in Teaching and Teacher Effectiveness and What They Mean in Practice; Evidence-Informed Learning Design; and two different volumes of Urban Legends about Learning and Education. He also co-edited two other books, Visualizing Argumentation and What We Know About CSCL. His areas of expertise include how we learn and what that means for instruction, instructional design, collaboration for learning (computer-supported collaborative learning), and the use of media in teaching and learning.
Jimmy Frèrejean (1986) is Assistant Professor at Maastricht University's Faculty of Health, Medicine and Life Sciences, specializing in instructional design within health professions education. He holds a master's degree in work and organizational psychology from Maastricht University and a PhD in instructional design from the Open University of the Netherlands. During his PhD, Jimmy first learned about 4C/ID and the Ten Steps and has continued studying and researching them ever since. He collaborated closely with Professor Jeroen J. G. van Merriënboer in teaching master's and post-master's courses, conducting research, publishing papers and book chapters, and supervising PhD students at the School of Health Professions Education. Alongside his teaching and research responsibilities, Jimmy actively contributes to the Maastricht University Medical Center and Academy as an educational consultant, helping improve training programs for healthcare professionals. He specializes in simulation-based education, coordinates a national lifelong learning program for healthcare professionals (SimNEXT), and is part of the TRISIM expert group on training, research, and innovation in simulation-based education in healthcare. Jimmy is committed to carrying forward the insights presented in this book, upholding the legacy that shaped the landscape of instructional design.
Preface

More than 30 years ago, the award-winning article Training for Reflective Expertise: A Four-Component Instructional Design Model for Complex Cognitive Skills (van Merriënboer, Jelsma, & Paas, 1992) described the first precursor of the Ten Steps to Complex Learning. Five years later, in 1997, Jeroen J. G. van Merriënboer published his award-winning book, Training Complex Cognitive Skills. That book presented a comprehensive description of a training design system for acquiring complex skills or professional competencies, based on research conducted on the learning and teaching of knowledge needed for jobs and tasks in a technologically advanced and quickly changing society. The basic claim was that educational programs for complex learning must consist of four basic interrelated components: learning tasks, supportive information, procedural information, and part-task practice. Each component was linked to a fundamental category of learning processes. The instructional methods prescribed for each component were derived from a broad body of empirical research.

Whereas Training Complex Cognitive Skills was very well received in the academic field of learning and instruction, practitioners in the field of instructional design frequently complained that they found it difficult to systematically design educational programs based on the four components. The article Blueprints for Complex Learning: The 4C/ID Model (van Merriënboer, Clark, & De Croock, 2002) was the first attempt to provide more guidance on this design process. The systematic approach described in this article was further developed in the first edition of the book Ten Steps to Complex Learning (2007), which can best be seen as a complement to the psychological foundation described in Training Complex Cognitive Skills. The Ten Steps describes a path from a training problem to a training solution in a way that students, practitioners (both instructional designers and teachers), and researchers can understand and use. This book was a great success and has been translated into Korean and Chinese and, in part, into Spanish and Persian. It also spawned a Dutch-language analog, of which a new edition appeared at the time of publication of this book.

Sadly, Jeroen J. G. van Merriënboer got very ill and passed away before completing this fourth edition. Despite this, Jeroen played a pivotal role in overseeing and contributing to all the modifications. We want to clarify that any changes made after his passing were discussed with him beforehand. In short, the updates to this book come in three categories. First, this fourth edition has been completely updated with the newest insights into teaching, training, and instructional design. More than 50 new references have been added, and, where relevant, the latest insights from the field have been included. This can be seen in all the chapters in the book. We have also updated the text about cognitive load theory, especially the concepts of intrinsic and extraneous cognitive load.

Second, we have significantly changed the example training blueprint that formed the book's backbone. In the first three editions, we used the moderately complex skill of 'searching for literature' by a librarian or documentalist to explain and illustrate most of the Ten Steps. This blueprint is now replaced with a more appealing one for a training program for the complex skill of 'producing video content' by a video content producer. The new blueprint is more extensive and detailed, giving a better example of the Ten Steps' design approach.

Finally, many smaller and larger changes and additions were made throughout the book to improve readability and comprehensibility. Also, where the third edition included some occurrences of (s)he, this is now replaced with they. Concrete examples and cases were added or updated where useful. In addition, several figures and tables have been added or revised, and all figures now appear in color. The final chapter consists of an updated perspective on the future of the Ten Steps, including the role of artificial intelligence in developing instruction and training.
The structure of this book is straightforward. Chapters 1, 2, and 3 concisely introduce the Ten Steps to Complex Learning. Chapter 1 presents a holistic approach to the design of instruction for achieving the complex learning required by modern society. Chapter 2 relates complex learning to the four blueprint components: learning tasks, supportive information, procedural information, and part-task practice. Chapter 3 describes the use of the Ten Steps for developing detailed training blueprints. Then, the Ten Steps are discussed in detail in Chapters 4 through 13. Chapter 14 discusses the teaching/training of domain-general skills in programs based on the Ten Steps, and Chapter 15 discusses the design of summative assessment programs that are fully aligned with the Ten Steps. Finally, Chapter 16 positions the Ten Steps in the field of the learning sciences and discusses future directions.
Acknowledgments

This is the fourth revised edition of this book. To begin, we would like to thank all the teachers, trainers, students, instructional designers, and educators who have bought and used this book to the extent that the publisher asked us to bring out a new, highly revised edition.

Our debt to colleagues and graduate students who in some way contributed to the development of the Ten Steps described in this book is enormous. Without them, this book would not exist. We thank John Sweller, Paul Ayres, Paul Chandler, and Slava Kalyuga for working together on the development of Cognitive Load Theory, which heavily affected the formulation of the Ten Steps. We thank Iwan Wopereis, who helps manage the www.4cid.org website and organize the recurring Dutch 4C/ID User's Day. We thank Ameike Janssen-Noordman and Bert Hoogveld for contributing to the Ten Steps when writing the Dutch books Innovatief Onderwijs Ontwerpen [Innovative Educational Design, 2002/2009/2017] and Innovatief Onderwijs Ontwerpen in de Praktijk [Innovative Educational Design in Practice, 2011]. We thank our research groups at the Open University of the Netherlands, the Thomas More University of Applied Sciences in Antwerp, Belgium, and Maastricht University. We especially thank our master's and PhD students for the ongoing academic debate and for giving us many inspiring ideas. We thank the many educational institutes and organizations that use the Ten Steps and provide us with useful feedback. Last but not least, we thank all the learners who, in some way, participated in the research and development projects that offered the basis for writing this book.

Finally, we thank Roel Willems, professional content creator and audiovisual teacher. Without his expert help and guidance, it would have been impossible to accurately describe an example 4C/ID blueprint for training a video content producer.
Jeroen J. G. van Merriënboer, Paul A. Kirschner, Jimmy Frèrejean
Maastricht/Hoensbroek, December 2023
Chapter 1

A New Approach to Instruction

When Rembrandt van Rijn painted The Anatomy Lesson of Dr. Nicolaes Tulp in 1632, our understanding of human anatomy, physiology, and morphology was extremely limited. The tools of the trade were rudimentary at best and barbaric at worst. Medicine was dominated by the teachings of the church, which regarded the human body as a creation of God, and the ancient Greek view of the four humors (i.e., blood, phlegm, black bile, and yellow bile) prevailed. Sickness was attributed to an imbalance in these humors, and treatments, such as bloodletting and inducing vomiting, aimed to restore this balance. Surgical instruments were primitive. A surgeon would perform operations with the most basic tools: a drill, a saw, forceps, and pliers for removing teeth. If a trained surgeon was not available, it was usually the local barber who performed operations and removed teeth. The trained surgeon was more an 'artist' than a 'scientist.' For example, because there were no anesthetics, surgeons took pride in the speed with which they operated, even amputating a leg in just a few minutes. Progress in anatomy, physiology, morphology, and medical techniques was virtually nonexistent or painfully slow. Although microscopes existed, they lacked the power to reveal bacteria, hindering our understanding of the causes of diseases. This meant that there was also little improvement in medical treatments.
Compare this to today’s situation, where hardly a day goes by without new
medical discoveries, diseases, drugs and treatments, and medical and surgical
techniques. Just a generation or two ago, medicine, medical knowledge and
skills, and even the attitudes of medical practitioners toward patients and the
way patients approach and think about their doctors were radically different
than they are today. It is no longer enough for surgeons to master the tools
of the trade during their studies and then apply and perfect them through-
out their careers. Competent surgeons today (and tomorrow) must master
complex skills and professional competencies—both technical and social—
during their studies and never stop learning throughout their careers. This
book is about how to design instruction for this complex learning.

1.1 Complex Learning


Complex learning involves integrating knowledge, skills, and attitudes, coordinating qualitatively different constituent skills, and often transferring what is learned in school or training settings to daily life and work. The current interest in complex learning is evident in popular educational approaches that call themselves inquiry, guided discovery, case method, project based, problem based, design based, team based, challenge based, and competency based, many of which have no solid basis in empirical research (Kirschner et al., 2006). Theoretical design models promoting complex learning are, for example, cognitive apprenticeship learning (Collins et al., 1989; Woolley & Jarvis, 2007), first principles of instruction (Merrill, 2020), constructivist learning environments (Jonassen, 1999), learning by doing (Schank, 2010), and the four-component instructional design model (4C/ID; Van Merriënboer, 1997). While these theoretical models differ in many ways, they all emphasize learning tasks based on real-life, authentic tasks as the driving force for teaching, training, and learning. The fundamental idea behind this focus is that such tasks assist learners in integrating knowledge, skills, and attitudes, stimulate the coordination of constituent skills, and facilitate the transfer of what is learned to new problem situations (Merrill, 2020; Van Merriënboer, 2007). To date, 4C/ID is the only whole-task instructional design model for which a meta-analysis is available (Costa et al., 2021), showing a large positive effect size (Cohen's d = .79) on student performance in educational programs developed with it.
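For readers unfamiliar with the metric: Cohen's d expresses the difference between two group means in units of their pooled standard deviation. The formula below is the standard textbook definition, added here only as background; it is not reproduced from the meta-analysis itself.

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\, s_1^2 + (n_2 - 1)\, s_2^2}{n_1 + n_2 - 2}}
```

By the conventional benchmarks (roughly .2 small, .5 medium, .8 large), d = .79 sits at the threshold of a large effect.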

The interest in complex learning has grown rapidly since the beginning of the 21st century. It is an inevitable reaction of education and teaching to societal and technological developments and students' and employers' uncompromising views about the value of education and training for updating old knowledge and skills to prevent obsolescence (Hennekam, 2015) and learning new ones. Machines have taken over routine tasks, and the complex cognitive tasks humans must perform are becoming increasingly complex and important (Frey & Osborne, 2017; Kester & Kirschner, 2012). Moreover, the nature of available jobs is changing, necessitating the acquisition and application of new and different skills and quickly making the information relevant to carrying out those jobs obsolete (Thijssen & Walter, 2006). This poses higher demands on the workforce, with employers stressing the importance of problem solving, reasoning, decision making, and creativity to ensure that employees can and will flexibly adjust to rapid changes in their environment.

Two examples might drive this home. Many aspects of the job of an air traffic controller have been technologically automated. But even though this is the case, the complexity of the controller's responsibilities has grown substantially due to the enormous increase in air traffic, the growing number of safety regulations, and the advances in the technical aids themselves (see Figure 1.1). The same is true for family doctors (General Practitioners/GPs), who need to address the physical, psychological, and social aspects of their patients but also encounter a much more diverse clientele with different cultural backgrounds, a flood of new medicines, tools, and treatments, and issues dealing with registration, liability, insurance, and more.

Figure 1.1 Air traffic control screen.
The field of education and training has increasingly recognized these new demands posed by society, business, and industry. In response to these demands, there has been a concurrent increase in attempts to better prepare graduates for the labor market and help them develop adaptive expertise (Carbonell et al., 2014). The educational approaches mentioned earlier, emphasizing complex learning and the development of professional competencies throughout the curriculum, strive to reach this goal. However, educational institutes lack proven design approaches. This often results in implementing innovations that undeniably aim at better preparation of students for the labor market but that do so with varying degrees of success (Dolmans et al., 2013).
An often-heard student complaint is that they experience their curriculum as a disconnected set of courses or modules, with only implicit relationships between the courses and an unclear relevance of what they should learn for their future professions and why. This is often complicated by 'flexible' curricula that offer a broad range of possibilities but leave it to the student to choose what and when they want to study without giving them any support or guidance. Often, as a compromise with those instructors who want to 'teach their subject areas,' curricula implement a separate stream in which problems, projects, cases, or other learning tasks are used for the development of complex skills or competencies, hopefully in a situation that is recognizable and relevant for the student. However, even in those curricula, students struggle to link what they are required to do to both the theoretical coursework, which is typically divided into traditional subjects, and what they perceive to be important for their future professions, which often lies at the basis of the learning tasks. Not surprisingly, students have difficulties combining everything they learn into an integrated knowledge base and employing it to perform real-life tasks and solve practical work-related problems once they have graduated. The whole is inevitably greater than the sum of its parts. In other words, they do not achieve the envisioned and required 'transfer of learning' or 'transfer of training' (Blume et al., 2010).
The fundamental problem facing the field of instructional design is the inability of education and training to achieve this necessary transfer. Design theory must support the development of training programs for learners who need to learn and transfer professional competencies or complex skills acquired in their study to an increasingly varied set of real-world contexts and settings. The Ten Steps to Complex Learning (from this point on referred to as the Ten Steps) claims that a holistic approach to instructional design is necessary to reach this goal (cf. Tracey & Boling, 2013). In the next section, we discuss this holistic design and why it is thought to help improve transfer of learning. Subsequently, we position the instructional design model discussed in this book within the field of learning and instruction, describing its four core components and the ten steps. Finally, we provide an overview of the book's structure and contents.

1.2 A Holistic Design Approach


A holistic design approach is the opposite of an atomistic one, which continually reduces complex contents and tasks to simpler or smaller elements, such as facts and simple skills. This reduction usually continues to a level where each element can be easily transmitted to the learners through presentation (of facts) and/or practice (of skills). Though such an approach may work well if there are not many interactions between the elements, it does not work well if the elements are closely interrelated. When this is the case, the whole is more than the sum of its parts because it contains the elements and the relationships between them. This is the basis of the holistic approach. Holistic design approaches attempt to deal with complexity without losing sight of the interrelationships between the elements taught (Van Merriënboer & Kester, 2008). Using a holistic design approach can offer a solution for three persistent problems in education: compartmentalization, fragmentation, and the transfer paradox.

Compartmentalization

Instructional design models usually focus on one particular learning domain, such as the cognitive, affective, or psychomotor domain. A further distinction, for example, in the cognitive domain, is the differentiation between models for declarative learning, emphasizing instructional methods that help learners construct conceptual knowledge, and models for procedural learning, emphasizing instructional methods that help learners acquire skills. Everything in a neat compartment. This compartmentalization, separating a whole into distinct parts or categories, has had disastrous effects on learning.
Suppose you have to undergo surgery. Would you prefer a surgeon with great technical skills but no knowledge of the human body? Or would you prefer a surgeon with great knowledge of the human body but with two left hands? Or would you want a surgeon with great technical skills but who has a horrible bedside manner and a hostile attitude toward patients? Or, finally, a surgeon who learned all of the knowledge, skills, and attitudes 35 years ago but has not kept them up-to-date? Of course, your answer would be “None of the above”. You want a surgeon with up-to-date knowledge and skills who knows how your body functions (i.e., its anatomy and physiology), is technically dexterous, and has a good bedside manner. This indicates that it makes little sense to distinguish learning domains for professional competencies. Many complex surgical skills simply cannot be performed without in-depth knowledge of the structure and workings of the human body because this allows for the necessary flexibility in behavior. Many skills cannot be performed acceptably if the performer does not exhibit particular attitudes. And so forth. Therefore, holistic design models for complex learning aim to integrate declarative learning, procedural learning (including perceptual and psychomotor skills), and affective learning (including the predisposition to keep all of these aspects up-to-date, including patient skills). In this way, they facilitate the development of an integrated knowledge base that increases the chance that transfer of learning occurs (Janssen-Noordman et al., 2006).

Fragmentation

Traditional instructional design models use fragmentation (the act or process of breaking something into small, incomplete, or isolated parts) as their basic technique (Frèrejean et al., 2022; Van Merriënboer & Dolmans, 2015). Typical of 20th-century instructional design models is that they first analyze a chosen learning domain, then divide it into distinct performance and/or learning objectives (e.g., remembering a fact, applying a procedure, understanding a concept, and so forth), after which different instructional methods are selected for reaching each of the separate objectives (e.g., rote learning, skills labs, and problem solving, respectively). In the training blueprint or lesson plan for that domain, the objectives are dealt with one at a time.

For complex skills, each objective corresponds with one subskill or constituent skill, and the sequencing of the objectives naturally results in a part-task sequence. Thus, the learner is taught only one or a very limited number of constituent skills at the same time. New constituent skills are gradually added, often each with a different instructional approach, which, in itself, can cause further fragmentation, and it is not until the end of the instruction, if at all, that the learner can practice the whole complex skill.
In the 1960s, Briggs and Naylor (1962; Naylor & Briggs, 1963) reported that this approach is only suitable if little coordination of constituent skills is required and each of the separate constituent skills is difficult for the learners to acquire. The problem with this fragmented approach is that most complex skills or professional competencies are characterized by numerous interactions between the different aspects of task performance, with very high demands on their coordination. In the past half-century, overwhelming evidence has been obtained showing that breaking a complex domain or task down into a set of distinct elements or objectives, then teaching or training each of those objectives without taking their interactions and required coordination into account, does not work because learners ultimately are not capable of integrating and coordinating the separate elements in transfer situations (e.g., Gagné & Merrill, 1990; Lim et al., 2009; Rosenberg-Kima et al., 2022; Spector & Anderson, 2000). To facilitate transfer of learning, holistic design models focus on reaching highly integrated sets of objectives and, especially, the coordinated attainment of those objectives in real-life task performance.

The Transfer Paradox

In addition to compartmentalization and fragmentation, using a nonintegrated list of specific learning objectives as the basis for instructional design has a third undesired effect. Logically, the designer will select instructional methods that minimize the number of practice items required to learn or master something, the time-on-task spent doing this, and the learners' investment of effort made to reach those objectives; that is, they strive for efficiency. Designing and producing practice items costs time and money, which are often scarce. In addition, the learner does not have unlimited time or motivation to study (like almost all of us, the learner is a homo economicus, one who strives to minimize costs and maximize profits). Take the situation that learners must learn to diagnose three different types of errors (e1, e2, e3) in a complex technical system, such as a chemical factory. If a minimum of three practice items is required to learn to diagnose each error, one may first ask the learners to diagnose error 1, then error 2, and finally, error 3. This leads to the following training blueprint:

e1, e1, e1, e2, e2, e2, e3, e3, e3

Although this 'blocked' practice schedule will be most efficient for reaching the three objectives, minimizing the required time-on-task and learners' investment of effort, it also yields low transfer of learning. This is because the chosen instructional method invites learners to construct highly specific knowledge for diagnosing each distinct error, allowing them to perform in the way specified in the objectives but not beyond the given objectives. If a designer aims at transfer and the objective is that learners can correctly diagnose as many errors as possible in a technical system, then it is far better to train them to diagnose the three errors in a random order. This leads, for example, to the following training blueprint:

e3, e2, e2, e1, e3, e3, e1, e2, e1

This 'random' practice schedule (also called 'interleaving'; Birnbaum et al., 2013) is less efficient than the former one for reaching the three isolated objectives because it may increase the necessary time-on-task or the investment of effort by the learners. It might even require four instead of three practice items to reach the same level of performance for each separate objective. But in the long run, it yields a much higher transfer of learning! The reason for this increase in transfer is that a random schedule invites learners to compare and contrast the different errors with each other and thus construct knowledge that is general and abstract rather than entirely bound to the three concrete, specific errors. Variability of practice/interleaving requires learners to see the conceptual similarities in problems or tasks that look very different at the surface level and see the conceptual differences between tasks and problems that look very similar at the surface level. This allows learners to better diagnose new, not previously encountered errors. This phenomenon, where the methods that work best for reaching isolated, specific objectives are often not the methods that work best for reaching integrated objectives and increasing transfer of learning, is known as the transfer paradox (Helsdingen et al., 2011a, 2011b; Van Merriënboer et al., 1997). A holistic design approach considers the transfer paradox and is always directed toward more general objectives beyond a limited list of highly specific objectives. The differentiation between different types of learning processes should ensure that learners who are confronted with new problems not only have specific knowledge to perform the familiar aspects of those problems but, above all, have the necessary general and abstract knowledge to deal with the unfamiliar aspects of those problems.
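As an aside for readers who think in code, the two practice schedules above are easy to generate programmatically. The following is a minimal sketch, our own illustration rather than anything prescribed by the Ten Steps; the function names and the choice of Python are assumptions made for the example.

```python
import random

def blocked_schedule(error_types, items_per_type):
    """Blocked practice: all practice items for one error type, then the next."""
    return [e for e in error_types for _ in range(items_per_type)]

def interleaved_schedule(error_types, items_per_type, seed=None):
    """Random (interleaved) practice: the same practice items in shuffled order."""
    schedule = blocked_schedule(error_types, items_per_type)
    random.Random(seed).shuffle(schedule)
    return schedule

errors = ["e1", "e2", "e3"]  # the three error types in the chemical-factory example
print(blocked_schedule(errors, 3))      # ['e1', 'e1', 'e1', 'e2', 'e2', 'e2', 'e3', 'e3', 'e3']
print(interleaved_schedule(errors, 3))  # e.g., ['e3', 'e2', 'e2', 'e1', 'e3', 'e3', 'e1', 'e2', 'e1']
```

Both schedules contain exactly the same nine practice items; only their order differs, and that ordering is what the transfer paradox turns on.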
To recapitulate, traditional design models usually follow an atomistic approach and, as a result, are not very successful in preventing compartmentalization and fragmentation or dealing with the transfer paradox. In contrast, a holistic approach where the tasks are interleaved offers alternative ways of dealing with complexity. Most holistic approaches introduce some notion of modeling to attack this problem. A powerful two-step approach to modeling first develops simple-to-complex models of reality or real-life tasks and then 'models these models' from a pedagogical perspective to ensure that they are presented so learners can learn from them (Achtenhagen, 2001). In this view, instruction should begin with a simplified but 'whole' model of reality. This is the essence of what Reigeluth (1992) calls an epitome in his elaboration theory: “a very simple kind of case . . . that is as representative as possible of the task as a whole”. This is then conveyed to the learners according to sound instructional principles. The Ten Steps offers a broad range of instructional methods to deal with complexity without losing sight of whole, real-life tasks.

1.3 Four Components and Ten Steps


The Ten Steps is a practical, modified, and (as strange as it may sound) simplified version of the 4C/ID model (four-component instructional design; Miranda et al., 2020; Van Merriënboer, 1997; Van Merriënboer et al., 1992, 2002). Previous descriptions of this model had an analytic-descriptive nature, emphasizing the cognitive-psychological basis of the model and the relationships between design components and learning processes. The Ten Steps, in contrast, is mainly prescriptive and aims to provide a version of the model that is practicable for teachers, domain experts involved in training design, and instructional designers. The focus of this book is on design rather than on learning processes. But for interested readers, some of the chapters include text boxes in which the psychological foundations for particular design principles are briefly explained.

The basic assumption that forms the basis of both 4C/ID and the Ten Steps is that blueprints of educational programs for complex learning can always be described by four basic components; namely, (a) learning tasks, (b) supportive information, (c) procedural information, and (d) part-task practice (see the left-hand column of Table 1.1).

Table 1.1 Four blueprint components of 4C/ID and the Ten Steps.

Blueprint components of 4C/ID    Ten Steps to Complex Learning

Learning Tasks                   1. Design Learning Tasks
                                 2. Design Performance Assessments
                                 3. Sequence Learning Tasks
Supportive Information           4. Design Supportive Information
                                 5. Analyze Cognitive Strategies
                                 6. Analyze Mental Models
Procedural Information           7. Design Procedural Information
                                 8. Analyze Cognitive Rules
                                 9. Analyze Prerequisite Knowledge
Part-task Practice               10. Design Part-task Practice

The term 'learning task' is used here in a very generic sense: A learning task may refer to a case study that the learners must study, a project that must be carried out, a problem that must be solved, a professional task that must be performed, and so forth. The supportive information helps learners perform nonroutine aspects of learning tasks that often involve problem solving, decision making, and reasoning (e.g., information about the teeth, mouth, cheeks, tongue, and jaw helps a student in dentistry with clinical reasoning; Postma & White, 2015, 2016). The procedural information enables learners to perform the routine aspects of learning tasks; that is, those aspects of the learning task that are always performed in the same way (e.g., how-to instructions for measuring blood pressure help a medical student with conducting physical examinations). Finally, part-task practice pertains to additional practice; that is, overlearning of routine aspects to help learners develop a high level of automaticity of these aspects and improve their whole-task performance (e.g., practicing cardiopulmonary resuscitation [CPR] allows a nurse or 'first responder' to be better prepared for emergencies).
As indicated in the right-hand column of Table 1.1, the four blueprint components directly correspond with four design steps: the design of learning tasks (Step 1), the design of supportive information (Step 4), the design of procedural information (Step 7), and the design of part-task practice (Step 10). The other six steps are auxiliary to these design steps and are only performed when necessary. In Step 2, performance assessments are designed based on objectives and standards for acceptable performance. Step 3 organizes learning tasks in simple-to-complex levels, ensuring that learners initially work on relatively simple tasks and then progress to tasks that smoothly increase in complexity. Steps 5 and 6 may be necessary for the in-depth analysis of the supportive information that helps learners carry out the nonroutine aspects of learning tasks. Steps 8 and 9 may be necessary for the in-depth analysis of the procedural information needed for performing routine aspects of learning tasks.
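For designers who want to keep this structure explicit, the correspondence between components and steps can be captured in a simple data structure. The sketch below is our own illustration (the dictionary layout and key names are assumptions); the step labels follow Table 1.1.

```python
# Mapping of the four blueprint components to the Ten Steps (after Table 1.1).
# Each component has one main design step plus auxiliary analysis/design steps
# that are only performed when necessary.
TEN_STEPS = {
    "learning tasks": {
        "design_step": "Step 1: Design Learning Tasks",
        "auxiliary": ["Step 2: Design Performance Assessments",
                      "Step 3: Sequence Learning Tasks"],
    },
    "supportive information": {
        "design_step": "Step 4: Design Supportive Information",
        "auxiliary": ["Step 5: Analyze Cognitive Strategies",
                      "Step 6: Analyze Mental Models"],
    },
    "procedural information": {
        "design_step": "Step 7: Design Procedural Information",
        "auxiliary": ["Step 8: Analyze Cognitive Rules",
                      "Step 9: Analyze Prerequisite Knowledge"],
    },
    "part-task practice": {
        "design_step": "Step 10: Design Part-task Practice",
        "auxiliary": [],  # part-task practice has no auxiliary steps
    },
}
```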
It should be noted that real-life design projects are never a straightforward progression from Step 1 to Step 10. New results and decisions will often require the designer to reconsider previous steps, causing iterations in the design process. One may design a few learning tasks, in a process of rapid prototyping, before designing the complete educational program. In addition, particular steps may be excessive for particular design projects. As a result, zigzagging (see Section 3.2) between the Ten Steps is common. The trick of the trade is then to keep a good overview of all (intermediate) design and analysis products and their relations to the ultimate training blueprint. Computer-based tools are beneficial for carrying out larger design projects, as they facilitate the systematic development of an educational blueprint and help designers keep the required overview of the whole project, even when they zigzag between different steps (Van Merriënboer & Martens, 2002).

1.4 Using the Ten Steps


The Ten Steps can be seen as a model of instructional design specifically directed toward programs of vocational and professional education (at both the secondary and higher education level), profession-oriented university programs (e.g., medicine, business administration, law), and competency-based training programs in business, industry, government, and military organizations. In higher education, it is used to design courses for research and other complex skills in various domains (Bastiaens et al., 2017; Frèrejean et al., 2019). Yet, applications of the model also appear in general secondary education and even primary education for teaching complex skills in traditional school subjects (e.g., Melo, 2018; Melo & Miranda, 2015; Wade et al., 2023) and specific skills (e.g., Costa & Miranda, 2019; Güney, 2019a, 2019b; Linden et al., 2013; Maddens et al., 2020; Zhou et al., 2022). Finally, it is used in continuing professional development; for example, in the fields of teacher training (Frèrejean et al., 2021; Kukharuk et al., 2023; Meutstege et al., 2023) and medical training (Kolcu et al., 2020). The model will typically be used to develop training programs of substantial duration, ranging from several weeks to several years. In terms of curriculum design, the model will typically be used to design a (substantial) part of a curriculum to develop one or more professional competencies or complex skills.

While the Ten Steps is strongly informed by decades of research on learning and cognitive psychology and offers many clear recommendations for design activities, it is essential to consider the unique needs of each situation when selecting an instructional design model. When deciding to use the Ten Steps, the following considerations may help:

• We recommend the Ten Steps when your instructional objective is to develop one or more complex skills requiring the integration of knowledge, skills, and attitudes and the coordination of constituent skills. Such skills involve substantial amounts of problem solving, reasoning, or decision making. If your instructional objective is to develop only knowledge, a single skill, or an attitude, the integration and coordination central to the Ten Steps are less relevant.
• We recommend the Ten Steps if your goal is to achieve transfer of learning and prepare learners for real-life situations or workplace activities, especially in fields characterized by rapid changes or frequent occurrences of unfamiliar tasks. In contrast, the Ten Steps approach can be excessive for tasks involving little problem solving, reasoning, and/or decision making; for example, for teaching procedural tasks such as stocking shelves or calibrating a machine. Alternatively, if the course does not aim for clear, real-world applications, the Ten Steps might not offer significant advantages over more traditional methods or may even be disadvantageous.
• We recommend the Ten Steps if your learning environment allows learners to work on whole tasks in your domain, such as group or project work, simulations, computer-based tasks, etc. Applying the Ten Steps becomes challenging if this is not possible; for example, if your building has large lecture halls but no small group rooms or adequate computer facilities for working on learning tasks.
• We recommend the Ten Steps if your program can have sufficient length to develop the intended complex skill or skills. A rule of thumb is that complex skill development is unlikely in programs shorter than 100 hours. While you could apply the Ten Steps to design a short program, learners are unlikely to achieve mastery and transfer of learning when given insufficient time to practice and reflect.
• We recommend the Ten Steps if you have sufficient resources for development. Designing the four components may require the involvement of educational specialists, subject-matter experts, practitioners, teachers, students, and multimedia developers (see Chapter 16). While rapid instructional design approaches exist and can be helpful, this book will make apparent that many design decisions require a robust analysis of the situation and an understanding of which methods are effective in that situation. Working fast or with limited information to inform your decisions brings the risk of sacrificing effectiveness.
• Finally, we recommend the Ten Steps if you have sufficient autonomy to make design changes in your current program or decisions about a new program. Without control over course length, assessment methods and moments, or structure and planning, your opportunities for adequately applying the design principles are constrained. Another challenge may occur in subject-based educational programs with a strong separation of teams or persons responsible for each subject, where it may require effort to integrate subject domains into whole tasks.

The Ten Steps is a versatile instructional design model applicable across various educational contexts. It is important to recognize that the Ten Steps is not a rigid recipe but a systematic approach that allows creativity and flexibility. No two applications of the Ten Steps are identical, and designers frequently make changes to accommodate the constraints of their situation. Such adaptations are not only common but also expected. The previous considerations serve as a guide, signaling situations that might require more extensive deviations.
The remainder of this book describes the Ten Steps in 16 chapters. Chapters 2 and 3 introduce the four blueprint components and ten steps. Chapters 4 to 13 make up the main part of this book. Each chapter contains a detailed explanation of one of the ten steps. Chapters 14 and 15 describe, in order, how domain-general skills can be taught in educational programs designed with the Ten Steps and how programmatic assessment can be applied in those programs. Chapter 16 discusses future directions in the field of complex learning.

Practitioners in the field of instructional design may use this book as a reference guide to support their design of courses, materials, and/or environments for complex learning. To make optimal use of the book, it may be helpful to consider the following points:

• It is probably best for all readers to study Chapters 1, 2, and 3 first, regardless of their reason for using this book. They introduce the four blueprint components and the Ten Steps.
• Chapters 4 through 13 describe the Ten Steps in detail. You should always start your design project with Step 1, but you only need to consult the other chapters if these steps are required for your project. Each chapter starts with general guidelines that may help you decide whether the step is relevant to your project.
• The next two chapters are relevant for readers with a specific interest in training domain-general skills (Chapter 14) or designing programs of assessment (Chapter 15). Minimum requirements for assessment (performance assessment of learning tasks; Step 2 in Chapter 5) and teaching domain-general skills (task selection by the learner in on-demand education; Step 3 in Chapter 6) are already an integral part of the Ten Steps.

If you are a student in the field of instructional design and want to broaden your knowledge of the design of training programs for complex learning, we advise you to study all chapters in the order in which they appear. For all readers, whether practitioner or student, we tried to make the book as useful as possible by including the following:

• Each chapter ends with a Summary of its main points and design guidelines.
• Key concepts are listed at the end of each chapter and included in a Glossary. This glossary contains pure definitions of terms that might not be familiar and, in certain cases (for seminal or foundational concepts, theories, or models), may be more extensive and contain background information. In this way, the glossary can help you organize the main ideas discussed in this book.
• In several chapters, you will find Boxes in which the psychological foundations for particular design decisions are briefly explained.
• Two Appendices with example materials are included at the end of this book.
Chapter 2

Four Blueprint Components

When an architect designs a house or an industrial designer designs a product, they make a blueprint for the final house or product after consulting with the client and determining the program of requirements. An example is Leonardo da Vinci's 'flying machine' blueprint. Such a blueprint is not just a schematic drawing of the final product but is, rather, a detailed plan of action, scheme, program, or method worked out beforehand to achieve an objective. This is also the case for the instructional designer.

Having globally discussed a holistic approach to design and the Ten Steps in Chapter 1, this chapter proceeds to describe the four main components of a training blueprint; namely, (a) learning tasks, (b) supportive information, (c) procedural information, and (d) part-task practice. After a brief description of a training blueprint built from the four components in Section 1, Sections 2 through 6 explain how well-designed blueprints deal with the three problems discussed in the previous chapter. Section 2 describes how the blueprint prevents compartmentalization by focusing on integrating skills, knowledge, and attitudes into one interconnected knowledge base. Section 3 describes how the blueprint avoids fragmentation by focusing on learning to coordinate constituent skills in real-life task performance. Section 4 describes how the blueprint deals with the transfer paradox by acknowledging that complex learning involves qualitatively different learning processes with different requirements for instructional methods. Section 5 explains how the dynamic selection of learning tasks by the teacher/system or by the 'self-directed' learner makes individualized instruction possible. Section 6 discusses the use of traditional and new media for each component. The chapter concludes with a summary.

It should be noted that most of what is discussed in this chapter is further elaborated on in Chapters 4–13; in particular, Chapter 4 deals with designing learning tasks, Chapter 7 with designing supportive information, Chapter 10 with designing procedural information, and Chapter 13 with designing part-task practice. The current chapter provides the basic knowledge about the four components and is an advance organizer to help better understand and integrate what follows.

2.1 Training Blueprints


The central message of this chapter and, indeed, of this whole book is that environments for complex learning can always be described in terms of four interrelated blueprint components (see Figure 2.1); namely:

1. Learning tasks: authentic whole-task experiences based on real-life tasks and situations integrating knowledge, skills, and attitudes. The whole set of learning tasks exhibits high variability, is organized in simple-to-complex task classes, and exhibits diminishing learner support and guidance within each task class (i.e., scaffolding).
2. Supportive information: information helpful for learning and performing the problem-solving, reasoning, and decision-making aspects of learning tasks, explaining how a domain is organized and how problems in that domain are (or should be) approached. Supportive information is specified per task class and is always available to learners. This information bridges what learners already know and what they need to know to carry out the learning tasks successfully.
3. Procedural information: information prerequisite for learning and performing routine aspects of learning tasks. Procedural information specifies exactly how to perform the routine aspects of the task (i.e., how-to instructions) and is best presented just in time, precisely when learners need it. It is quickly faded as learners gain more expertise.
4. Part-task practice: practice items provided to help learners reach a very high level of automaticity for selected routine aspects of a task. Part-task practice typically provides huge amounts of repetitive practice but only starts after the routine aspect has been introduced in the context of a whole, meaningful learning task.

Figure 2.1 A schematic training blueprint for complex learning and the four
components’ main features.
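For readers who implement blueprints in course databases or authoring tools, the four components can also be captured in a small data model. The following Python sketch is purely illustrative and is not part of the Ten Steps itself; all class and field names are our own assumptions:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LearningTask:
    description: str       # an authentic, whole-task experience
    support_level: float   # 1.0 = fully worked case study, 0.0 = conventional task
    variability_tags: List[str] = field(default_factory=list)  # real-world dimensions varied

@dataclass
class TaskClass:
    complexity: int        # position in the simple-to-complex sequence
    supportive_info: List[str] = field(default_factory=list)   # available throughout the task class
    tasks: List[LearningTask] = field(default_factory=list)    # support fades within the class

@dataclass
class TrainingBlueprint:
    task_classes: List[TaskClass] = field(default_factory=list)    # ordered simple to complex
    procedural_info: Dict[str, str] = field(default_factory=dict)  # how-to instructions per routine aspect
    part_task_practice: List[str] = field(default_factory=list)    # drill items for selected routine aspects

A concrete blueprint is then a list of task classes, each holding its supportive information and a sequence of learning tasks with diminishing support, with procedural information and part-task practice attached to the routine aspects they address.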

The next three Sections explain how these four components can help solve the problems of compartmentalizing knowledge, skills, and attitudes; the fragmentation of what is learned into small parts; and the transfer paradox.

2.2 Preventing Compartmentalization


Complex learning always involves a learner trying to reach integrated sets of learning goals. Its ultimate aim is to integrate knowledge, skills, and attitudes into one rich, interconnected knowledge base. If people encounter a new and, thus, unfamiliar situation, such an interconnected knowledge base allows them to activate many different kinds of knowledge that may help them to solve the problem or carry out the task. Figure 2.2 provides a schematic representation of the constituent skills and associated knowledge and attitudes that make up the complex skill of 'producing video content' as performed by a video content producer. These professionals work independently or in small teams, managing the artistic and technical aspects of the production process, from initial idea to final edit. They are responsible for planning and creating video content for clients. They handle scripting, camera operation, lighting, and video editing for corporate films, documentaries, training videos, and promotional content.

Figure 2.2 A hierarchy of constituent skills with an indication of associated knowledge and attitudes for the complex skill 'producing video content.'
A well-designed training program will not teach and train each of the required constituent skills separately but in an integrated fashion by having the learners start by making simple videos, followed by increasingly complex videos for their (fictitious) clients. As can be seen from Figure 2.2, a hierarchy of constituent skills is used as an organizing framework for the whole knowledge base. Knowledge and attitudes are fully integrated in this framework, subordinate to the constituent skills. Constituent skills that are horizontally adjacent to each other can be performed sequentially (e.g., first, you 'create the production plan' before you 'produce footage') or simultaneously (e.g., you simultaneously 'shoot video' and 'collaborate with the crew' during production). Constituent skills at a lower level on the vertical dimension enable the learning and performance of skills higher in the hierarchy (e.g., you must be able to 'create a composition' to be able to 'shoot video').

Furthermore, to perform many constituent skills, learners need the necessary knowledge about the task domain (e.g., you can only 'create the composition' if you have the necessary knowledge about how light influences video quality, the interplay between aperture, shutter speed, and ISO, camera and light placement, types of shots and movements, and how these contribute to the desired artistic outcome) and the right attitudes (for instance, 'interacting with people being filmed' requires patience and empathy to effectively coach individuals during the filming process). Chapter 5 discusses the construction of a skill hierarchy in more detail.

Learning Tasks

Learners work on tasks that help them develop an integrated knowledge base through inductive learning, in which they induce new knowledge from concrete experiences (see Box 4.1—Induction and Learning Tasks). Therefore, each learning task should offer whole-task practice that confronts the learner with a set of constituent skills allowing for real-life task performance, together with their associated knowledge and attitudes (Van Merriënboer & Kester, 2008). In the video-production example, the first learning task would ideally confront learners with the creation of a production plan (i.e., preproduction), production of footage (i.e., the production), and creation of the final product (i.e., postproduction). All learning tasks are meaningful, authentic, and representative of a professional's tasks in the real world. In this whole-task approach, learners develop a holistic vision of the task that is gradually embellished during the training. A sequence of learning tasks provides the backbone of a training program for complex learning. In a schematic blueprint, it simply looks like this:

Variability of Practice

The first requirement, thus, is that each learning task is a whole task to encourage the development of an integrated knowledge base. In addition to this, all learning tasks must differ from each other on all dimensions on which tasks also differ in the real world, such as the context or situation in which they are performed, how they are presented, the saliency of the defining characteristics, and so forth. Thus, the learning tasks in a training program must be representative of the breadth of the variety of tasks and situations in the real world. The variation allows learners to generalize and abstract away from the details of each single task. For example, learning tasks for the video-production example may differ on the type of video that must be produced (e.g., an ad, an informative clip, a documentary), locations where the video must be recorded (e.g., outdoor, indoor, well-lit, dark), tools required (e.g., different types of cameras and microphones), and type of client (e.g., informal, friendly, corporate, difficult). There is strong evidence that such variability of practice is of utmost importance for achieving transfer of learning—both for relatively simple tasks and highly complex real-life tasks (Paas & van Merriënboer, 1994; Van Merriënboer et al., 2006). In the sequence of learning tasks, variability of practice is indicated by little triangles placed at different positions in the learning tasks. Schematically, it looks like this:

2.3 Avoiding Fragmentation


Learning to carry out a complex skill is, to a large degree, learning to coordinate the—often many—constituent skills that make up real-life task performance. Note that the whole complex skill is more than the sum of its parts; playing a musical piece on the piano with two hands is more than playing it with the left hand and right hand separately, and, in our example, 'creating a composition' is meaningless if it is not coordinated with 'capturing audio.' Constituent skills, thus, often need to be controlled by higher-level strategies because they make little sense without considering their related constituent skills and associated knowledge and attitudes. For this reason, constituent skills are seen as aspects rather than parts of a complex skill, which is also why the term 'constituent skill' and not 'subskill' is used here. In a whole-task approach, learners are directly confronted with many different constituent skills from the start of the training, although they cannot be expected to coordinate all those aspects independently at that moment. Thus, it is necessary to simplify the tasks (i.e., make them less complex) and give learners sufficient support and guidance.

Task Classes

It is not possible to begin a training program with very complex learning tasks that place high demands on the coordination of many constituent skills. All tasks are made up of separate elements that can be more or less interrelated (i.e., interact with each other). The number of elements inherent to a task and the degree of interaction between those elements determine the task's complexity: the more elements a task entails, and the more interactions between them, the more complex the task. Please note, we are not talking about 'complex' as opposed to 'easy,' or 'simple' as opposed to 'difficult,' because easiness or difficulty is not determined by task complexity alone but also by the level of expertise or prior knowledge of the learner (they are subjective terms). In our definition, a learning task with a particular complexity can, thus, be easy for a learner with high prior knowledge but difficult for a learner with low prior knowledge.

Learners, thus, start working on relatively simple but whole learning tasks that appeal to only a part of all constituent skills (i.e., few elements) and progress toward more complex, whole tasks that appeal to more constituent skills and, thus, also usually require more coordination (i.e., more interaction between the elements). Categories of learning tasks, each representing a version of the task with a particular level of complexity, are called task classes. For example, the simplest task class in the video-production example contains learning tasks that confront learners with situations where they only have to produce a short recap video summarizing an indoor event with plenty of time to complete it. The most complex task class contains learning tasks that confront learners with situations where the desired videos are long, deal with difficult topics (e.g., documentaries), and potentially require outdoor recording in challenging weather conditions, with limited time available. Additional task classes of an intermediate complexity level can be added between these two extremes.
Learning tasks within a particular task class are always equivalent because the tasks can be performed based on the same body of knowledge and already acquired skills. However, a more complex task class requires more or embellished knowledge for effective performance than the preceding, simpler task classes. This is also one of the reasons we made a distinction between simple-complex and easy-difficult. Each new task class will contain more complex learning tasks than previous ones. However, because learners will have increasingly more prior knowledge when dealing with subsequent task classes, they will experience all learning tasks across task classes as more or less equally difficult. The blueprint organizes the learning tasks in an ordered sequence of task classes (i.e., the dotted boxes) representing simple-to-complex versions of the whole task. Schematically, it looks like this:

Support and Guidance

When learners begin to work on a new, more complex task class, it is essential that they also receive the support and guidance needed to coordinate the different aspects of their performance (Kirschner et al., 2006). Support—actually, task support—focuses on providing learners with assistance with the task elements involved in the training; namely, the steps in a solution that get them from the givens to the goals (i.e., it is product-oriented). Guidance—actually, solution-process guidance—focuses on assisting learners with the processes inherent to finding a solution (i.e., it is process-oriented). These two topics will be discussed in more depth in Chapter 4. Both the support and the guidance diminish in a process of scaffolding as learners acquire more expertise (Reiser, 2004). The continuum from learning tasks with high support to learning tasks without support is exemplified by the continuum of support techniques ranging from case studies to conventional tasks. The highest level of support in the video-production example could, for instance, be provided by a case study where learners receive an interesting documentary and are asked questions about the effectiveness of the approach taken (i.e., the given solution), possible alternative approaches, the quality of the final editing, the thought processes of the video producer, and so on. Intermediate support might take the form of an incomplete case study where the learners receive the client's assignment, a script, and a list of necessary materials for recording, and they produce and edit the final video (i.e., they have to complete a given, partial solution). Finally, no support is given by a conventional task, for which learners have to perform all actions by themselves. This type of scaffolding, known as the completion strategy (Van Merriënboer, 1990; Van Merriënboer & de Croock, 1992) or fading-guidance strategy (Renkl & Atkinson, 2003), is highly effective. In the schematic training blueprint, each task class starts with one or more learning tasks with a high level of support and guidance (indicated by the filling of the circles), continues with learning tasks with a lower level of support and guidance, and ends with conventional tasks without any support and guidance:
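This fading of support within a task class can also be made concrete in code. The sketch below is a minimal illustration under our own assumptions: the three task types come from the completion-strategy continuum just described, but the numeric support levels and the linear fading are arbitrary choices, not prescriptions of the Ten Steps:

from typing import List

# Completion strategy: support fades across the tasks of one task class
# (illustrative labels and values).
SUPPORT_CONTINUUM = [
    ('case study', 1.0),         # full solution given; learners evaluate it
    ('completion task', 0.5),    # partial solution given; learners complete it
    ('conventional task', 0.0),  # no support; learners perform all actions
]

def support_levels(n_tasks: int) -> List[float]:
    """Assign each of n_tasks in a task class a linearly fading support level."""
    if n_tasks == 1:
        return [1.0]
    return [1.0 - i / (n_tasks - 1) for i in range(n_tasks)]

# Example: a task class with four learning tasks
# support_levels(4) -> [1.0, 0.667, 0.333, 0.0]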

2.4 Dealing with the Transfer Paradox


Figure 2.2 illustrates another typical characteristic of complex learning outcomes: for expert task-performers, there are qualitative differences between the constituent skills involved (Kahneman, 2011; Van Merriënboer, 2013). Some constituent skills are controlled, schema-based processes performed variably from problem situation to problem situation. For example, 'developing the story' involves problem solving, reasoning, and decision making to cope with the specific requirements of each new project. Experienced video producers can carry out such skills effectively because they possess knowledge in the form of cognitive schemata or concrete memories that they can interpret to be able to reason about the task domain (i.e., in the form of mental models) and guide their actions in this domain (i.e., in the form of cognitive strategies). These constituent skills, thus, involve the different use of the same knowledge in a new task situation. Sometimes, the task performer interprets generalized cognitive schemata to generate new behavior; sometimes, concrete cases retrieved from memory serve as an analogy.

Other constituent skills, positioned lower in the skill hierarchy, may be rule-based processes performed in a highly consistent way from problem situation to problem situation. For example, 'operating camera and equipment' is a constituent skill that requires no problem solving, reasoning, or decision making from an experienced video producer. They 'just do it.' Experts can efficiently (i.e., with little cognitive effort) and effectively (i.e., very accurately) perform such constituent skills because they have formed cognitive and psychomotor rules (Anderson & Lebiere, 1998) that directly drive particular actions under particular circumstances, such as when the finger movements of a touch-typist are directly driven by reading a text or hearing someone speak. These constituent skills, thus, involve the same use of the same knowledge in a new problem situation (i.e., the touch-typist uses the same finger movements, regardless of whether the text is science or history). It might even be argued that these skills do not rely on 'knowledge' because this knowledge is fully embedded in rules. Indeed, the rules are often difficult for an expert to articulate and are not open to conscious inspection.

Experts may reach a level of performance where they operate the camera fully 'automatically,' without paying any attention to it. Conscious control is no longer required because the rules have become fully automated. In our example, the result is that trained experts can focus on other things while operating the camera. It is important to note that they are not 'multitasking' when fully automated skills are involved, as this process no longer needs conscious information processing or thinking (Kirschner & van Merriënboer, 2013).
Although they simultaneously occur in complex learning situations, schema-based and rule-based processes develop in fundamentally different ways (Van Merriënboer, 2013). As already indicated in Section 2.2, the key to developing schema-based processes is variability of practice. In a process of schema construction, learners construct general schemata that abstract information from the details and provide models and approaches that can be used in a wide variety of situations. In contrast, repetitive practice is the key to developing rule-based processes. In a process of rule or schema automation, learners develop highly specific cognitive and psychomotor rules that evoke particular—mental or physical—actions under particular conditions.

Constituent skills are classified as nonrecurrent skills if they need to be performed as schema-based processes after the training: These are the problem-solving, reasoning, and decision-making aspects of behavior that can, nonetheless, be quite efficient because of available mental models and cognitive strategies. Constituent skills are classified as recurrent skills if they will be performed as rule-based processes after the training: routine, and sometimes fully automatic, aspects of behavior. For instance, recurrent constituent skills in the video-production example are 'operating camera and equipment,' 'selecting camera lenses,' and 'adding effects, titles, and graphics' because the performance of these skills is highly consistent from problem situation to problem situation (note that, in Figure 2.2, these skills are indicated in italics and do not have any knowledge associated with them!). Classifying skills as nonrecurrent or recurrent is important in the Ten Steps because the instructional methods for effectively and efficiently acquiring them are very different.

Supportive Versus Procedural Information

Supportive information is important for those constituent skills classified as nonrecurrent. It explains to learners how a task domain is organized and how to approach problems in it. It is made available to learners to work on the problem-solving, reasoning, and decision-making aspects of learning tasks within the same task class (i.e., equivalent learning tasks that can be performed based on the same body of knowledge). In the video-production example, supportive information would explain the components of a camera, with the lens, sensor, and the different settings that affect the image (i.e., to help learners build a mental model of cameras) and could present heuristics for creating a composition (i.e., to help learners build a cognitive strategy). Instructional methods for presenting supportive information should facilitate schema construction such that learners are encouraged to process the new information deeply, particularly by connecting the new information to existing schemata in memory in a subprocess of schema construction called elaboration (see Box 7.1). Because supportive information is relevant to all learning tasks within the same task class, it may be presented before learners start to work on a new task class and/or made available to them during their work on the learning tasks in this task class. This is indicated by the blue L-shaped areas in the schematic training blueprint:

Procedural information is primarily important for those constituent skills classified as recurrent. It specifies how to carry out the routine aspects of the learning tasks (how-to instructions) and preferably takes the form of direct, step-by-step instruction. In the video-production example, we could present procedural information for adding text and graphics to a video in a quick reference guide or learning aid. In the case of an e-learning application, we could present it by using clickable hyperlinks to the information or by displaying windows that become visible when you move the cursor to a certain area of the screen. Instructional methods for the presentation of procedural information should facilitate schema automation. They should make the information available during task performance so that it can be easily embedded in cognitive rules in a subprocess of schema automation called rule formation (see Box 10.1). Because procedural information is relevant to routine aspects of learning tasks, it is best presented to learners exactly when they first need it to perform a task (i.e., just in time), after which it is faded for subsequent learning tasks as the learner masters it. In the schematic blueprint, the procedural information (yellow beam with upward-pointing arrows) is thus linked to the separate learning tasks:
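In an e-learning application, this just-in-time presentation and subsequent fading can be as simple as a conditional check against a per-learner mastery record. The following is a minimal sketch; the data structures and the function are hypothetical illustrations, not a mechanism prescribed by the Ten Steps:

from typing import Dict, Optional

def procedural_hint(mastered: Dict[str, bool], routine_aspect: str,
                    hints: Dict[str, str]) -> Optional[str]:
    """Return a how-to hint only while the routine aspect is not yet mastered.

    mastered maps routine aspects to True once the learner performs them
    reliably; hints maps routine aspects to their step-by-step instructions.
    """
    if mastered.get(routine_aspect, False):
        return None                   # faded: no longer shown to this learner
    return hints.get(routine_aspect)  # presented just in time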
Part-Task Practice

As described in the previous sections, learning tasks only provide whole-task practice. The shift from part-task to whole-task training, characteristic of the Ten Steps, prevents compartmentalization and fragmentation but may not always be sufficient. There are situations where it may be necessary to include additional part-task practice in a training program. This is usually the case when a very high level of automaticity is desired for particular recurrent aspects of a task. In such a case, the series of learning tasks may not provide enough repetition to reach that level. For those aspects classified as to-be-automated recurrent constituent skills, additional part-task practice may be provided—such as when children drill and practice multiplication tables, medical students drill and practice suturing wounds, or musicians drill and practice specific musical scales.

In the video-production example, part-task practice could be provided for learning to operate a camera or learning to quickly create a sketch for a storyboard (Carlson et al., 1989). The instructional methods used for part-task practice facilitate schema automation and a particular subprocess called strengthening, whereby cognitive rules accumulate strength each time they are successfully applied by the learner (see Box 13.1). Part-task practice for a particular recurrent aspect of a task should only begin after it has been introduced in a meaningful whole learning task. In this way, the learners start their practice in a fruitful cognitive context. For video production, learners would only start to practice operating the camera after observing how experts operate the camera. As for recurrent aspects of learning tasks, procedural information might also be relevant for part-task practice because this always concerns a recurrent constituent skill (note that, according to the Ten Steps, no part-task practice is provided for nonrecurrent constituent skills!). In the schematic training blueprint, part-task practice is indicated by series of small circles (i.e., practice items):
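The strengthening subprocess named above can be illustrated with a toy model. The update rule below is a common textbook form of incremental learning, not a formula from the Ten Steps, and the learning rate is an arbitrary assumption:

def strengthen(strength: float, success: bool, rate: float = 0.1) -> float:
    """Toy strengthening model: a cognitive rule gains strength on each
    successful application, asymptotically approaching automaticity (1.0)."""
    if success:
        return strength + rate * (1.0 - strength)
    return strength  # unsuccessful applications add no strength in this model

# Repeated successful practice drives strength toward 1.0:
s = 0.0
for _ in range(30):
    s = strengthen(s, success=True)
# s is now roughly 0.96, i.e., close to automatic performance

In this toy model, each repetition adds progressively less strength, which is one way to picture why very high automaticity requires the 'huge amounts of repetitive practice' mentioned earlier.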

This concludes the construction of the training blueprint and completes the schematic outline originally introduced in Figure 2.1. A well-designed training blueprint ensures that learners are not overwhelmed by the complexity of a task because tasks are ordered from simple to complex, support and guidance are given when needed, and different types of information and part-task practice are presented precisely at the right time (see Box 2.1, which relates the four components to Cognitive Load Theory; Kirschner et al., 2011; Paas et al., 2010; Sweller et al., 2019). The learner will experience all tasks throughout the training program as being more or less equally difficult because, as tasks become more complex, the learner will have mastered increasingly more knowledge and skills. As a result, learners do not need all their cognitive resources to perform the learning tasks and are also able to invest sufficient mental effort (Kirschner & Kirschner, 2012) in genuine learning; that is, schema construction and schema automation. Only then can transfer of learning to daily or professional life be expected.

Box 2.1 Cognitive Load Theory and the Four Components

Recent instructional theories discussed in this book stress using authentic, whole tasks to drive learning. A severe risk of the use of such tasks, however, is that learners have difficulty learning because they are overwhelmed by task complexity. John Sweller's Cognitive Load Theory offers guidelines to deal with the very limited processing capacity of the human mind.

Cognitive Load Theory (CLT)

Central to CLT is the notion that human cognitive architecture should be a major consideration when designing instruction. According to CLT, this cognitive architecture consists of a severely limited working memory, with partly independent processing units for visual/spatial and auditory/verbal information, which interacts with a comparatively unlimited long-term memory. The theory distinguishes between two types of cognitive load, dependent on the type of processing causing it; namely:

1. Intrinsic cognitive load, which is a direct function of performing and learning the task, particularly of the number of elements that must be simultaneously processed in working memory ('element interactivity'). It is caused by:
   a. Task processing. A task with many constituent skills that must be coordinated (e.g., writing grammatically correct sentences in a foreign language) yields a higher intrinsic load than a task with fewer constituent skills that need to be coordinated (e.g., translating single words to a foreign language). Here, expertise greatly affects intrinsic load because more expert learners combine multiple elements into one that they can process in working memory as a single element—a process known as chunking.
   b. Germane processing. These are processes that directly contribute to learning; in particular, to schema construction and schema automation. For instance, consciously connecting new information with what is already known (e.g., comparing and contrasting grammatical rules of a foreign language with similar rules from one's own language) is a process that yields additional intrinsic cognitive load.
2. Extraneous cognitive load, which is the extra load beyond the intrinsic cognitive load, mainly resulting from poorly designed instruction. For instance, if learners must search in their instructional materials for information needed to perform a learning task (e.g., searching for the translation of a word in a dictionary), this search process does not directly contribute to learning and thus causes extraneous cognitive load.

A basic assumption of CLT is that an instructional design that leaves working memory capacity unused, because appropriate instructional procedures keep extraneous cognitive load low, may be further improved by encouraging learners to engage in conscious cognitive processing directly relevant to learning. Intrinsic and extraneous cognitive load are additive in that, if learning is to occur, the total load cannot exceed the working memory resources available. Consequently, the greater the germane processing created by the instructional design and the lower the extraneous cognitive load, the greater the potential for learning.
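The additivity assumption can be stated compactly. The formalization below is our own restatement of the box's claim, not a formula from CLT's authors; C denotes the learner's available working-memory capacity:

\[
\underbrace{L_{\text{intrinsic}}}_{\text{task processing + germane processing}}
\;+\;
\underbrace{L_{\text{extraneous}}}_{\text{poorly designed instruction}}
\;\le\; C
\]

Instructional design then has two levers: lower the extraneous term through appropriate instructional procedures, and use the capacity thus freed for germane processing within the intrinsic term.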

Four Components and Cognitive Load

1. The cognitive load associated with performing learning tasks is controlled in two ways. First, intrinsic cognitive load is managed by organizing the learning tasks in simple-to-complex task classes. For learning tasks within a simpler task class, there is less element interactivity in that fewer elements and fewer interactions between the elements need to be processed simultaneously in working memory; as the task classes become more complex, the number of elements and interactions between the elements increases. Second, extraneous cognitive load is managed by providing a large amount of support and guidance for the first learning task(s) in a task class, thus preventing weak-method problem solving and its associated high extraneous load. This support and guidance decreases as learners gain more expertise ('scaffolding'). The combination of lowering intrinsic load through simple-to-complex task classes and lowering extraneous load by providing support and guidance offers good opportunities for germane processing aimed at schema construction and automation.
2. Because supportive information typically has high element interactivity, it is preferable not to present it to learners while they are working on the learning tasks. Simultaneously performing a task and studying the information would almost certainly cause cognitive overload. Instead, supportive information is best presented before learners start working on a learning task or, at least, apart from working on a learning task. In this way, a cognitive schema can be constructed in long-term memory that can subsequently be activated in working memory during task performance. Retrieving the already-constructed cognitive schema is expected to be less cognitively demanding than activating the externally presented complex information in working memory during task performance.
3. Procedural information consists of step-by-step instructions and corrective feedback and typically has much lower element interactivity than supportive information. Furthermore, the formation of cognitive rules requires that relevant information is active in working memory during task performance so that it can be embedded in those rules. Studying this information beforehand has no added value; therefore, procedural information is preferably presented precisely when learners need it; for example, when teachers give learners step-by-step instructions during practice, acting as an 'assistant looking over the learners' shoulders.'
4. Finally, part-task practice automates particular recurrent aspects of a complex skill. In general, an over-reliance on part-task practice is not helpful for complex learning. However, the automated recurrent constituent skills may decrease the cognitive load associated with performing the whole learning tasks, making performance of the whole skill more fluid and decreasing the chance of making errors due to cognitive overload.

Limitations of CLT

CLT is fully consistent with the four components, but this is not to say that CLT alone is sufficient to develop a useful instructional design model for complex learning at the level of whole educational programs. Applying CLT prevents cognitive overload and (equally important) frees up processing resources that can be devoted to germane processing; that is, learning. To ensure that the freed-up resources are actually devoted to learning, the Ten Steps relies on several specific learning theories to prescribe instructional methods for each of its four components: models of inductive learning for learning tasks (see Box 4.1); models of elaboration for supportive information (see Box 7.1); models of rule formation for procedural information (see Box 10.1); and models of strengthening for part-task practice (see Box 13.1).

Further Reading
Mavilidi, M. F., & Zhong, L. (2019). Exploring the development
and research focus of cognitive load theory, as described by its
founders: Interviewing John Sweller, Fred Paas, and Jeroen
van Merrienboer. Educational Psychology Review, 31, 499–508.
https://doi.org/10.1007/s10648-019-09463-7
Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory:
Methods to manage working memory load in the learning of com-
plex tasks. Current Directions in Psychological Science, 29, 394–398.
https://doi.org/10.1177/0963721420922183
Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory.
Springer. https://doi.org/10.1007/978-1-4419-8126-4
Van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load
theory and complex learning: Recent developments and future
directions. Educational Psychology Review, 17, 147–177.
https://doi.org/10.1007/s10648-005-3951-0

2.5 Individualized Instruction


The training blueprint just described might suggest that the same sequence of learning tasks needs to be presented to all learners. Indeed, this is often an acceptable option in training situations dealing with homogeneous groups of learners. However, this is not, and need not necessarily be, the case. The Ten Steps also allows for highly individualized and flexible types of learning by using the training blueprint as an organizing framework, which allows for the dynamic selection of learning tasks from a database in such a way that the learning needs of the individual learner are taken into account. Thus, rather than offering one-and-the-same educational program to all learners, a unique educational program can be offered, with each learner receiving a unique sequence of learning tasks adapted to their individual needs, progress, and preferences (Schellekens et al., 2010a, 2010b). This section first discusses dynamic task selection as an approach to individualized instruction. Then, a second question is addressed: Who should be responsible for selecting learning tasks and other blueprint components, an external, intelligent agent or the self-directed learner?

Dynamic Task Selection

Dynamic task selection makes it possible to offer individual learners a sequence of learning tasks optimally adjusted to their individual and specific learning needs. Such individualized programs typically yield higher learning outcomes and better transfer performance than their one-size-fits-all counterparts (Corbalan et al., 2008, 2009a; Salden et al., 2006a, 2006b). Furthermore, in an individualized program, high-ability learners may quickly proceed from simple learning tasks to complex ones and work mainly on tasks with little support. In contrast, lower-ability learners can use many learning tasks, progress more slowly from simple tasks to complex ones, and work more on tasks with a great deal of support before the support is faded. Therefore, high-ability learners will be more challenged and low-ability learners will be less frustrated, making the training program more enjoyable and efficient (Camp et al., 2001; Salden et al., 2006c).

The Ten Steps provides a good starting point for the design of individualized educational programs. For each learner, it is possible, at any given time, to select the best task class to work on (i.e., tasks with optimal complexity) and to select a learning task from within this task class with an optimal level of support and guidance. Three rules-of-thumb that correspond with the principles of task classes, support and guidance, and variability of practice can be applied here (a code sketch illustrating them follows the list):

1. Task classes
   • If performance on unsupported learning tasks meets all standards for acceptable performance (e.g., criteria related to accuracy and speed, attitudes, values), then the learner proceeds to the next task class and works on a more complex learning task with a high level of support and/or guidance.
   • If performance on unsupported learning tasks does not yet meet all standards for acceptable performance, then the learner proceeds—at the current complexity level—to either another unsupported learning task or a learning task with specific support and/or guidance.
2. Support and guidance
   • If performance on supported learning tasks meets all standards for acceptable performance, then the learner proceeds to a next learning task with less support and/or guidance.
   • If performance on supported learning tasks does not yet meet all standards for acceptable performance, then the learner proceeds to either a learning task with the same level of support and/or guidance or a learning task with a higher level of specific support and/or guidance.
3. Variability
   • New learning tasks are always selected so that the whole set of learning tasks eventually varies on all dimensions that also vary in the real world.
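These rules-of-thumb translate almost directly into selection logic for an adaptive system. The sketch below is a minimal illustration under our own assumptions (a numeric support level per task and a meets_all_standards flag produced by performance assessment); it is not an algorithm prescribed by the Ten Steps:

import random

def select_next_task(tasks, current_class, last_task, meets_all_standards, seen_dims):
    """Pick a next learning task following the three rules-of-thumb.

    tasks: list of dicts with keys 'task_class' (int), 'support' (0.0-1.0),
    and 'dimensions' (a set of real-world variability dimensions).
    """
    if meets_all_standards and last_task['support'] == 0.0:
        # Rule 1: standards met on an unsupported task -> next task class,
        # starting again with a high level of support and/or guidance.
        pool = [t for t in tasks
                if t['task_class'] == current_class + 1 and t['support'] >= 0.5]
    elif meets_all_standards:
        # Rule 2: standards met on a supported task -> less support/guidance.
        pool = [t for t in tasks
                if t['task_class'] == current_class
                and t['support'] < last_task['support']]
    else:
        # Rules 1 and 2: standards not met -> stay at the current complexity
        # level, with the same or more (specific) support and/or guidance.
        pool = [t for t in tasks
                if t['task_class'] == current_class
                and t['support'] >= last_task['support']]
    # Rule 3: prefer tasks that add not-yet-seen variability dimensions.
    novel = [t for t in pool if t['dimensions'] - seen_dims]
    return random.choice(novel or pool) if pool else None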

Dynamic selection of learning tasks requires continuous assessment of the performance of individual learners (see Figure 2.3). Such performance assessments occur based on standards for all constituent skills relevant to the learning tasks. In general, scoring rubrics will be used to assess the different aspects of learners' performance against all relevant standards (see Step 2, Design performance assessments, in Chapter 5).

Figure 2.3 The cycle for dynamic task selection based on continuous assessment
of individual performance on learning tasks.

As indicated earlier, learning tasks without support (i.e., what could be considered 'normal' tasks in most instructional settings) will typically be used to make decisions on progressing to more complex tasks (i.e., a next task class). Learners may proceed to the next task class if they meet the standards for all constituent skills involved. If desired, performance assessments of unsupported learning tasks may not only be used to make progress decisions (i.e., used in a formative way) but can also be used as a form of summative assessment. In such a case, the tasks can better be seen as test tasks, providing a basis for grading, pass/fail decisions, and certification (this will be further explained in Chapter 15). If the learner has not yet reached the standards for all constituent skills involved, additional learning tasks at the same level of complexity can be provided. If only additional practice is necessary, these will again be learning tasks without support. If learners have difficulties with particular aspects of performance, these will be learning tasks with specific additional support and/or guidance to help them improve their performance on precisely those aspects they are having trouble with.

Assessment of performance on learning tasks with support and/or guidance will typically be used to make decisions about adjusting the level of support and/or guidance for subsequent tasks. Performance assessment of supported learning tasks is only used for formative assessment, meaning that its sole goal is to improve the quality of the learning process. If a learner meets the standards for all constituent skills, they will receive a subsequent learning task with less support and/or less guidance and, eventually, a learning task without support. If the learner has not yet reached the standards for all constituent skills involved, additional learning tasks with support will be provided. If only additional practice is necessary, these will again be learning tasks with roughly the same level of support. If learners have difficulties with particular aspects of performance, these will be learning tasks with specific additional support and/or guidance directed at improving performance on precisely those aspects.

Who Is in Control?

Dynamic task selection is a cyclical process that enables the creation of individualized learning trajectories. But who exactly is responsible for selecting the proper learning tasks? Simply put, this can be the responsibility of an intelligent agent, such as a teacher or e-learning application, or of the learner, who then acts in a self-directed fashion. With system control, the teacher or the e-learning application assesses whether the standards for acceptable performance have been met and, based upon this appraisal, selects the next learning task or task class for a learner. With learner control, the self-directed learner assesses whether the standards have been met and selects the next learning task or task class from all available tasks (Corbalan et al., 2011). As shown in Table 2.1, the distinction between system and learner control is relevant to all four components of the Ten Steps. In addition, system control and learner control can be combined in a system of 'shared control,' as will be further discussed later, in the section 'Second-Order Scaffolding for Self-Directed Learning' and, in more detail, in Section 6.3 of the chapter on sequencing learning tasks.
Table 2.1 Examples of system control and learner control for each of the four blueprint components.

1. Learning tasks
   Teacher or system (system control): Adaptive learning. The teacher/system selects and presents suitable learning tasks for each individual learner.
   Self-directed learner (learner control): On-demand education. The self-directed learner searches and selects his or her own learning tasks.

2. Supportive information
   Teacher or system (system control): Planned information provision. The teacher/system presents relevant supportive information before learners start working on a new task class (i.e., more complex tasks) and ensures this information remains available.
   Self-directed learner (learner control): Resource-based learning. The self-directed learner searches and studies useful, supportive information from all available resources (e.g., the Internet, library, 'study landscape').

3. Procedural information
   Teacher or system (system control): Unsolicited information presentation. The teacher/system acts as an assistant-looking-over-the-learner's-shoulder and presents procedural information precisely when a learner needs it.
   Self-directed learner (learner control): Solicited information presentation. The self-directed learner searches and consults manuals, quick reference guides, or mobile technologies when needed during task performance.

4. Part-task practice
   Teacher or system (system control): Dependent part-task practice. The teacher/system provides part-task practice for a selected to-be-automated recurrent aspect after this aspect has been introduced in a whole learning task.
   Self-directed learner (learner control): Independent part-task practice. The self-directed learner searches and practices part-tasks in order to improve his or her whole-task performance.
For learning tasks, adaptive learning can be contrasted with on-demand education. In adaptive learning, the teacher or some intelligent agent selects and presents suitable learning tasks for each learner (intelligent tutoring systems often contain an agent for selecting learning tasks; they will not be further discussed in this book; the interested reader should see, for example, Long et al., 2015; Nkambou et al., 2010). In on-demand education, the learner searches and selects suitable tasks from all available tasks. As explained previously, learners should preferably select learning tasks so that the tasks are at an appropriate level of complexity, provide an optimal level of support and/or guidance, and exhibit sufficient variability (cf. Figure 2.3).

For supportive information, planned information provision can be contrasted with resource-based learning. In planned information provision, an intelligent agent explicitly presents relevant supportive information before learners start to work on tasks at a higher level of complexity. In resource-based learning, the learner searches and consults resources (i.e., books, films, videos, internet sources, software programs, experts, etc.) that may help improve performance on the problem-solving, reasoning, and decision-making aspects of the learning tasks. For example, architecture students working on a task that requires them to design an office building may need to interview experts (i.e., working architects) on their preferred approaches and rules-of-thumb. Or student teachers trying to improve their motivational skills may watch different children's television shows to study the techniques the shows' producers used. Learners should preferably select learning resources that contain neither too much nor too little information and that are accurate and reliable.
For procedural information, unsolicited information presentation can be contrasted with solicited information presentation. Unsolicited information presentation is when an intelligent agent acts as an 'assistant looking over your shoulder' (the teacher's name is ALOYS), explicitly presenting procedural information on performing routine aspects of the task precisely when needed. With solicited information presentation, the learner searches and consults procedural information in, for example, manuals, quick reference guides, or a mobile phone to find out how particular routine aspects of a task need to be performed. For example, aircraft maintenance students troubleshooting an electrical system might decide to consult the technical manual for a specific type of aircraft. Or secretarial training students writing business letters may consult a computer's online help function to find out how particular word-processing functions work. Learners should, thus, be able to locate necessary and accurate information and divide their attention between consulting the procedural information and working on the learning task.
Finally, dependent part-task practice can be contrasted with independent part-task practice. Dependent part-task practice is when an intelligent agent explicitly provides part-task practice for a selected to-be-automated recurrent aspect after this aspect has been introduced in the context of whole, meaningful learning tasks. For independent part-task practice, the learner decides which routine aspects of the learning tasks will receive additional practice and when they will be practiced. A learner's initiative to carry out additional part-task practice is generally triggered by a desire to improve performance on whole learning tasks. Economics students, for example, working on learning tasks dealing with financial analysis may feel the need to become more proficient in working with spreadsheet programs and, thus, decide to take an online do-it-yourself course on using a specific spreadsheet application. Medical students working in a hospital may need to improve their lifesaving skills (e.g., cardiopulmonary resuscitation, intubation, external cardiac massage) and, therefore, enroll in a nonrequired workshop. Learners should, thus, be able to identify routines that may help improve their whole-task performance and must also be able to find opportunities for part-task practice.

Second-Order Scaffolding for Self-Directed Learning

Nowadays, the importance of teaching self-directed learning skills is stressed in many educational sectors, which makes it tempting to use on-demand education, resource-based learning, solicited information presentation, and independent part-task practice in an educational program (right column in Table 2.1). The argument for doing this is then that the learners 'must be self-directed.' However, giving full control to learners will only be effective if they already have well-developed self-directed learning skills, meaning that they can plan their task execution, monitor or assess their performance, control or regulate their learning, and orient themselves to learning opportunities that best help them improve their performance (Van Merriënboer, 2016). Yet, as Kirschner and van Merriënboer (2013) indicated, learners often do not possess these self-directed (i.e., metacognitive) learning skills! Giving them full control over their learning will then have disastrous effects.

When learners lack these skills, one might decide not only to teach the complex cognitive skills or professional competencies that the training program is aiming at but also to help the learners acquire the self-directed learning skills that will help them become competent professionals who can continue learning in their future professions (i.e., lifelong learning). This requires the design of an educational program in which the teaching of (first-order) domain-specific complex skills and (second-order) self-directed learning skills is intertwined (Noroozi et al., 2017). In such a program, exactly the same instructional principles apply to both first-order and second-order skills; namely, variability of practice, increasing complexity of tasks, and, above all, decreasing support and guidance in a process of scaffolding (Van Merriënboer & Sluijsmans, 2009).
We speak about second-order scaffolding for teaching self-directed learning skills because it pertains not to the complex cognitive skill being taught but to the intertwined self-directed learning skills. Second-order scaffolding involves a gradual transition from teacher/system control to learner control; thus, from adaptive learning to on-demand education, from planned information presentation to resource-based learning, from unsolicited to solicited information presentation, and from dependent to independent part-task practice. How we can use second-order scaffolding in a system of 'shared control' to teach learners to select their learning tasks is discussed in Step 3, Sequencing learning tasks (Section 6.4). How we can use second-order scaffolding to teach learners information literacy and deliberate practice skills is discussed in Chapter 14, which deals with domain-general skills.

2.6 Media for the Four Components


Some media are better for supporting, enabling, or sustaining particular learning processes than others, though no medium is in itself better than any other (Clark, 1983, 2001). In 1983, Clark wrote that "media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition" (p. 445). Different media may be better suited to support different components because each of the four blueprint components aims at a different learning process (i.e., inductive learning, elaboration, rule formation, and strengthening). Table 2.2 indicates the relationships between learning processes, the four blueprint components, and suitable media.
Learning tasks help learners construct cognitive schemata through inductive learning from concrete experiences. Suitable media must, thus, allow learners to work on those learning tasks and will usually take the form of a real or simulated task environment, including the tools and objects necessary for carrying out the task. In some cases, the real task environment, such as the future workplace, is suitable for learners to perform their learning tasks (i.e., an internship). There may, however, be good reasons not to choose this but, rather, to practice carrying out the learning tasks in a simulated environment. Particularly in the earlier phases of the learning process (i.e., task classes at the beginning of the educational program), simulated task environments may offer better opportunities for learning than real task environments in that they provide a safe environment where learners can make errors, an environment free of extraneous stimuli that may hinder learning, and one where tasks can be provided at an optimal level of complexity and with an optimal level of support and/or guidance.
Table 2.2 Relationships among basic learning processes, blueprint components, and suitable media.

1. Learning tasks (schema construction through inductive learning; Box 4.1)
   Traditional media: real task environment, role play, project groups, problem-based learning (PBL) groups.
   New media: computer-simulated task environments, computerized serious games, computerized high-fidelity simulators (e.g., mannequins), virtual reality.

2. Supportive information (schema construction through elaboration; Box 7.1)
   Traditional media: textbooks, teachers giving lectures, physical models, realia (e.g., a skeleton).
   New media: hypermedia (e.g., Internet), multimedia systems (video, animation), social media, computer-supported collaborative learning, microworlds, AI, Large Language Models.

3. Procedural information (schema automation through rule formation; Box 10.1)
   Traditional media: teacher acting as 'assistant looking over your shoulder,' job and learning aids, quick reference guides, manuals.
   New media: online job aids and help systems, mobile technologies (smartphones, tablets), wizards, pedagogical agents, augmented reality.

4. Part-task practice (schema automation through strengthening; Box 13.1)
   Traditional media: paper and pencil, skills laboratory, practicals, real task environment.
   New media: drill-and-practice computer-based training (CBT), part-task trainers, games for basic skills training.
An effective approach might work from low physical fidelity, which is the similarity between the simulated task environment and the real task environment, to high physical fidelity, either in each separate task class or across task classes. For example, physical fidelity might first be low, as with problem-based learning (PBL; Hung et al., 2019; Loyens et al., 2011), where groups of students work on paper-based authentic cases; then be intermediate, as for a project group working on an authentic assignment brought in by a real company; and finally, be very high, as for medical students who role-play with simulated patients played by highly trained actors. The same continuum from low to high physical fidelity can be observed for new media, which may be computer-simulated or virtual-reality task environments. A low-fidelity simulation may take the form of textual case descriptions of patients presented in a web-based course; a moderate-fidelity simulation may take the form of lifelike simulated characters (i.e., avatars or, in a medical context, virtual patients) that can be interviewed in a virtual reality environment; and a high-fidelity simulator may take the form of a full-fledged operating room, where medical residents treat a computerized mannequin that reacts just like a real patient.
Supportive information helps learners construct cognitive schemata through elaboration; they must actively integrate new information with prior knowledge already available in long-term memory. Traditional media for teaching supportive information are textbooks, teachers giving lectures, and 'realia' (i.e., real things). They describe models of a domain and how to approach tasks in that domain systematically. In addition, they illustrate the 'theory' with case studies and examples of how experts solve problems in the domain. Computer-based hypermedia and multimedia systems may take over these functions (Gerjets & Kirschner, 2009). Such systems present theoretical models and concrete cases that illustrate the models in a highly interactive way, explain problem-solving approaches, and illustrate these approaches by showing, for example, expert models on video or via animated, lifelike avatars. Computer-based simulations of conceptual domains are a special category of multimedia in that they offer a highly interactive approach to presenting cases where learners can change the settings of particular variables and study the effects of those changes on other variables (De Jong et al., 2013). The main goal of such microworlds is not to help learners practice a complex skill (as is the case in computer-simulated task environments) but to help them construct, through active exploration and experimentation, mental models of how the world is organized and cognitive strategies for how to systematically explore this world.
Procedural information helps learners automate their cognitive schemata via rule formation. It is preferably presented precisely when and where the learners need it for working on the learning tasks. The traditional media for presenting procedural information are the teacher and all kinds of job aids and learning aids. The teacher's role is to walk through the classroom, laboratory, or workplace, peer over the learner's shoulder, and give directions for performing the routine aspects of learning tasks (e.g., "No—you should hold that instrument like this," "Watch, you should now select this option"). Job aids may be posters with frequently used software commands hung on the walls of computer classes, quick reference guides adjacent to a piece of machinery, or booklets with instructions on the house style for interns at a company. In computer-based environments, procedural information is often presented by online job aids and help systems, wizards, and (intelligent) pedagogical agents. Smartphones and tablets are quickly becoming important tools for presenting procedural information. Such devices are particularly useful for presenting small displays of information that tell learners how to correctly perform the routine aspects of the task at hand during task performance. Nowadays, augmented reality (Limbu et al., 2019) also makes it possible to present procedural information just in time, triggered by the learner who is looking at a particular object and then receives instruction on how to manipulate that object, or who is looking at a tool and then receives instruction on how to use that tool (Figure 2.4).

Figure 2.4 Presenting procedural information just in time with augmented reality.

Part-task practice helps learners, through overlearning, automate the cognitive schemata that drive routine aspects of behavior through strengthening. Traditional media include paper-and-pencil for doing small exercises
(e.g., simple addition, verb conjugation), skills labs and part-task trainers for
practicing perceptual-motor skills (e.g., operating machinery, giving intra-
venous injections), and the real task environment (e.g., marching on the
street, taking penalty kicks on the soccer field). The computer has proved its
worth in the last decades for part-task practice. Drill and practice computer-
based training (CBT) is a very successful type of educational software. The
computer is sometimes criticized for its use of drill-and-practice, but most
criticism misses the point. Critics contrast drill-and-practice CBT with edu-
cational software focusing on rich, authentic learning tasks. According to the
Ten Steps, however, part-task practice never replaces meaningful whole-task
practice. It merely complements work on rich learning tasks. It is applied only
when the learning tasks themselves cannot provide enough practice to bring the to-be-automated recurrent aspects to the desired level of automaticity. If such part-task practice is necessary, the computer is a highly suitable medium because it can make drill-and-practice effective and appealing by presenting procedural support, by compressing time so that more exercises can be completed than in real time, by giving knowledge of results and immediate feedback on errors, and by using multiple representations, gaming elements, sound effects, and so forth.
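
To make this concrete, here is a minimal, illustrative sketch in Python (ours, not from the book) of a drill-and-practice exercise that gives knowledge of results and immediate feedback on errors; the item type and feedback wording are invented for illustration.

    import random

    # Minimal drill-and-practice sketch: repeated small exercises with
    # knowledge of results and immediate feedback on errors.
    def addition_drill(n_items: int = 5) -> None:
        correct = 0
        for _ in range(n_items):
            a, b = random.randint(1, 9), random.randint(1, 9)
            answer = input(f"{a} + {b} = ")  # present one small exercise
            if answer.strip() == str(a + b):
                correct += 1
                print("Correct!")  # knowledge of results
            else:
                # immediate feedback on the error
                print(f"Not quite: {a} + {b} = {a + b}.")
        print(f"Score: {correct}/{n_items}")

    if __name__ == "__main__":
        addition_drill()

A real part-task trainer would add the compressed timing, multiple representations, and gaming elements mentioned above, but this feedback loop is the core of the approach.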

Flipped Classroom and Double-Blended Learning

Blended learning is typically defined as a combination of face-to-face and online learning. However, any combination of different types of learning (e.g., a teacher with a textbook) can be considered blended learning. A meta-analysis on blended learning (Spanjers et al., 2015) showed that, on average, blended learning is somewhat more effective than traditional face-to-face learning, and learners evaluate it as equally attractive but perceive it as more cognitively demanding. However, the effects on effectiveness, attractiveness, and perceived demands greatly differ between studies. Moderator analyses show that frequent quizzes positively affect the effectiveness and attractiveness of blended learning, probably because they help learners self-direct their learning.
As illustrated in Table 2.2, educational programs based on the Ten Steps will typically use a broad variety of both traditional media and newer online media. The distinction between face-to-face and online learning can be made for each component; because each of the four components can be delivered in either mode, this leads to at least 2⁴ = 16 different types of 'blends.' This large number of different blends is not very useful from a design point of view. Instead, the Ten Steps describes two types of blends as particularly relevant. The first is the 'flipped classroom' (O'Flaherty & Phillips, 2015). Strictly speaking, a flipped classroom is a blended approach to instruction whereby instructional content delivery that normally occurs in the classroom takes place outside of the classroom (often online, but not necessarily), and the activities where the content is used in application situations (traditionally via homework) happen in the classroom. In terms of the four components, it means that the supportive information is presented online but that the work on the learning tasks is conducted in a face-to-face setting. Thus, in general education, it ensures that students use scheduled contact hours to work on learning tasks under the guidance of a teacher while they can prepare for these tasks in an online setting.
The second blend relates to the combination of working on learning tasks in both a simulated setting and a real task environment. For example,
in each task class or on each level of complexity, students can first work on
tasks with sizable support and guidance in a simulated setting and then work
on other tasks with less support and guidance in the real task environment,
which may be a work setting or daily life. Many training programs designed
with the Ten Steps adhere to this model of double-blended learning (see
Figure 2.5; for an example, see Vandewaetere et al., 2015).

Figure 2.5 Double-blended learning in the schematic training blueprint.

To conclude this chapter, it must be stressed that the Ten Steps does not
provide guidelines for the final selection and production of media. Media
selection is a gradual process in which media choices are narrowed down as
the design process continues (Clark, 2001). The final selection is influenced
not only by the learning processes involved but also by factors such as con-
straints (e.g., available personnel, equipment, time, money), task require-
ments (e.g., media attributes necessary for carrying out learning tasks and
the required response options for learners), and target group characteristics
(group size, computer literacy, handicaps). The reader should consult spe-
cialized models for the final selection of media and their production.

2.7 Summary
• A high-variability sequence of whole, authentic learning tasks provides
the backbone of a training program for complex learning.
• Simple-to-complex sequencing of learning tasks in task classes and sup-
porting learners’ performance through scaffolding are necessary to help
them learn to coordinate all aspects of real-life task performance.
• To facilitate the construction of cognitive schemata, supportive information explains how a domain is organized and how to approach problems in this domain so that learners can fruitfully work on the nonrecurrent aspects of learning tasks within the same task class.
• To facilitate schema automation, (a) procedural information specifies exactly how to perform the recurrent aspects of learning tasks and (b) part-task practice provides additional repetition for to-be-automated recurrent aspects that need to be developed to a very high level of automaticity.
• Training blueprints built from the four components are fully consistent
with cognitive load theory because they reduce cognitive load through
simple-to-complex sequencing and providing support and guidance,
freeing up cognitive resources that can be devoted to schema construc-
tion and schema automation.
• Individualized learning trajectories can be realized in a dynamic task-
selection process by an external intelligent agent such as a teacher or
e-learning application (system control) or the self-directed learner
(learner control).
• Learner control can only be effective if the learners already possess well-developed self-directed learning skills. Second-order scaffolding, which gradually replaces system control with learner control, is necessary for teaching self-directed learning skills.
• The four components can be supported by different media: computer-simulated task environments for learning tasks; hyper- and multimedia for supportive information; online help systems and mobile technologies for procedural information; and drill-and-practice computer-based training for part-task practice.
• The physical fidelity of the task environment in which learners work on learning tasks may gradually shift from low fidelity (e.g., paper-based cases, authentic problems presented in a web-based course) to high fidelity (e.g., real task environment, full-fledged simulators).
• In double-blended training programs, there is a blend of face-to-face
learning (learning tasks) and online learning (supportive information),
as well as a blend of learning tasks performed in a simulated setting and
learning tasks performed in the real work setting or daily life.

Glossary Terms

4C/ID model; Adaptive learning; Atomistic design; Blended learning;
Cognitive load theory; Compartmentalization; Complexity; Dependent
part-task practice; Double-blended learning; Elaboration; Extraneous
cognitive load; Flipped classroom; Fragmentation; Germane cognitive
load; Holistic design; Independent part-task practice; Inductive learn-
ing; Intrinsic cognitive load; Learning task; On-demand learning; Part-
task practice; Planned information provision; Procedural information;
Resource-based learning; Rule formation; Schema automation; Schema
construction; Solicited information presentation; Strengthening; Sup-
portive information; Task class; Transfer of learning; Transfer paradox;
Unsolicited information presentation
Chapter 3

Ten Steps

Painting a room, which, for most of us, is a fairly simple task, involves a
straightforward procedure. First, you empty the room and remove all panel
work, wall sockets and fixtures, floor coverings, hanging lamps, and so on, storing them away. Next, you strip old wallpaper and/or flaking paint from the walls and repair the walls and ceilings (e.g., plaster, sand, spackle). Then, you paint the room, window, and/or door frames (often with different paints and paint colors), and return the panel work, wall sockets, and fixtures to their places. Finally, you rehang lamps, carpet the floors, and return
the furniture to the room. When painting a whole house—a much more
complex task—you could follow these same steps in linear order. However,
in practice, a different approach is typically taken. This involves avoiding the impracticality of first removing and storing all the furniture in the whole house, removing all panel work, fixtures, and wall hangings, covering all of the floors, steaming off all of the wallpaper, and so forth, until, in the reverse order, all of the furniture can be moved back into the repainted house. Unfortunately, this is not only unfeasible, but the results will also often not be very satisfying. It is impracticable because house occupants would have nowhere to eat, sleep, and live during the entire repainting. It would also probably not lead to the most satisfying results because those involved could not profit from lessons learned and new ideas generated along the way. Instead of following the fixed procedure, a more pragmatic zigzag strategy through the house is typically employed, involving doing certain parts of different rooms in a specific order until the whole house is completed.
This third and final introductory chapter describes the instructional design process in ten steps; thus, the Ten Steps to Complex Learning. But to do this, Section 1 begins by describing ten design activities rather than steps. The reason is simple. Though there are theoretically ten steps that could be followed in a specific order, in real-life instructional design projects, it is common to switch between activities, resulting in zigzag design behaviors, as in the house-painting example. Section 2 describes these system dynamics. Nevertheless, a linear description of the activities is needed to present a workable and understandable model for a systematic approach to the design process. To this end, we use David Merrill's (2020) pebble-in-the-pond approach to order the activities described in Section 3. This approach takes a practical and content-oriented view of design, starting with the key activity at the heart of our model; namely, the design of whole tasks. The learning task is the pebble cast in the pond and the first of the Ten Steps. This one pebble starts all the other activities. Section 3 then briefly discusses the nine remaining steps in the order in which they are needed after designing learning tasks. Section 4 positions the Ten Steps within the Instructional Systems Design (ISD) process and the ADDIE model (i.e., Analysis, Design, Development, Implementation, and Evaluation). The chapter concludes with a summary. The pace of this chapter should not daunt the reader; it merely provides an overview. Each of the Ten Steps is elaborated in Chapters 4–13.

3.1 Ten Design Activities


Figure 3.1 presents, at a glance, the whole process of designing instruction for complex learning. The numbered boxes in the figure show the ten activities carried out when designing training blueprints. An instructional designer typically employs these activities to produce effective, efficient, and enjoyable educational or training programs. This section explains the different elements in the figure.

Figure 3.1 A schematic overview of the ten activities in the design process for
complex learning.

The lower part of the figure is identical to the schematic training blue-
print presented in Chapter 2 (Figure 2.1) and contains the four activities
that correspond with the four blueprint components. The design of learning
tasks is the heart of this training blueprint. For each task class, designers cre-
ate learning tasks that provide learners with variable whole-task practice at a
particular complexity level—a task class—until they can independently carry
out these tasks up to prespecified standards, after which they continue to the
next, more complex task class. The design of supportive information pertains
to all information that may help learners carry out the problem-solving,
reasoning, and decision-making (i.e., nonrecurrent) aspects of the learning
tasks within a particular task class. The design of procedural information per-
tains to all information that exactly specifies how to carry out the routine
(i.e., recurrent) aspects of the learning tasks. Finally, the design of part-task
practice may be necessary for developing selected to-be-automated recur-
rent aspects to a very high level of automaticity.
The two activities on the central axis of the figure sustain the design of learning tasks. At the top, the design of performance assessments makes it possible to determine to what degree learners have reached prespecified standards for acceptable performance. It is up to the designer or design team to determine what is acceptable. For some tasks, it can be that something minimally works; for others, mastery; and for still others, perfection. Because complex learning deals with highly integrated sets of learning objectives, the focus is on decomposing a complex skill into a hierarchy describing all aspects or constituent skills relevant to performing real-life tasks. Performance assessments should make it possible to measure performance on these constituent skills and monitor learners' progress over learning tasks; that is, over time. At the center of the figure, the sequencing of learning tasks describes a simple-to-complex progression of categories of tasks that learners may work on. It organizes the tasks in such a way that learning is optimized. The simplest tasks connect to the entry level of the learners (i.e., what they can already do when they enter the training program), and the final, most complex tasks connect to the final attainment level of the whole training program. In individualized learning trajectories based on frequent performance assessments, each learner receives a unique sequence of learning tasks adapted to their individual learning needs. In on-demand education, learners can select their learning tasks but often receive support and guidance for doing this (i.e., second-order scaffolding).
The two activities on the left side of the figure, analyzing cognitive strategies and mental models, sustain the design of supportive information. They are drawn next to each other because they have a bidirectional relationship: One is not conditional on the other. The analysis of cognitive strategies answers the question: How do proficient task performers systematically approach problems in the task domain? The analysis of mental models answers the question: How do proficient task performers organize the task domain? The resulting systematic approaches to problem solving (SAPs) and domain models are used to design supportive information for a particular task class (see Section 7.2). There is a clear reciprocity between the sequencing of learning tasks in simple-to-complex task classes and the analysis of nonrecurrent task aspects: More complex task classes require more detailed and/or more embellished cognitive strategies and mental models than simpler task classes.
The two activities on the right side of the figure, analyzing cognitive rules and analyzing prerequisite knowledge, sustain the design of procedural information and part-task practice. They are drawn on top of each other because they have a conditional relationship: Cognitive rules require the availability of prerequisite information. The analysis of cognitive rules identifies the condition-action pairs that enable experts to perform routine aspects of tasks without conscious effort (IF condition THEN action). The analysis of prerequisite knowledge identifies what experts need to know to apply those condition-action pairs correctly. Together, the results of these analyses provide the basis for the design of procedural information (see Section 10.2).
In addition, identified cognitive rules are precisely those rules that require automation through part-task practice.
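
To make the notion of condition-action pairs concrete, the following minimal Python sketch (ours, not from the book) represents two invented IF-THEN rules as data and fires the ones whose conditions match a learner's current state.

    # Condition-action (IF-THEN) pairs represented as data; each rule fires
    # its action when its condition holds. Both example rules are invented.
    rules = [
        {
            "if": lambda s: s.get("instrument_grip") == "incorrect",
            "then": "Correct the grip on the instrument.",
        },
        {
            "if": lambda s: s.get("dialog_open") and not s.get("option_selected"),
            "then": "Select the required option in the dialog.",
        },
    ]

    def fire_rules(state: dict) -> list:
        """Return the actions of all rules whose conditions match the state."""
        return [rule["then"] for rule in rules if rule["if"](state)]

    # Example: a state in which only the first rule's condition holds.
    print(fire_rules({"instrument_grip": "incorrect", "dialog_open": False}))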
As indicated by the arrows in Figure 3.1, some activities provide pre-
liminary input for other activities. This suggests that the best order for
performing the activities would, for example, be to start by designing
the necessary performance assessments, then continue by analyzing the
nonrecurrent and recurrent aspects and sequencing the learning tasks,
and end with designing the four blueprint components. Indeed, the ten
activities have previously been described in this analytical order (e.g.,
Van Merriënboer & de Croock, 2002), but in real-life design projects,
each activity affects and is affected by all other activities. This makes it an open question as to which order for carrying out the ten activities is most fruitful.

3.2 System Dynamics


The Four-Component Instructional Design model takes a system dynamics
view of instruction, which it shares with many other instructional design
models. This view emphasizes the interdependence of the elements constitut-
ing an instructional system and recognizes the dynamic nature of this inter-
dependence that makes the system an irreducible whole. Such an approach
is both systematic and systemic (Stefaniak & Xu, 2020). It is systematic
because the input-process-output paradigm is inherent to it. According to
this paradigm, the outputs of particular system elements serve as inputs to
other elements, and the outputs of particular design activities serve as inputs
for other activities. For example, the output of an analysis of nonrecurrent
task aspects is the input for the design of supportive information in the
blueprint. At the same time, it is also systemic because one accepts that the
performance or function of each element directly or indirectly impacts—or
is impacted by—one or more of the other elements, thereby making the
design process highly dynamic and nonlinear. For example, this same analysis of nonrecurrent aspects of a skill can also affect the sequencing of learning tasks. We will explain this further in the following sections.

Iteration

The preliminary input-process-output relations depicted in Figure 3.1 indicate the systematic nature of the Ten Steps. Performance assessments yield
input for sequencing learning tasks because they make it possible to iden-
tify individual learning needs and, thus, to provide a unique sequence of
tasks to each learner. They also provide input for analyzing nonrecurrent
and recurrent aspects of the complex skill because instruments for measur-
ing performance clarify what learners should know and be able to do to
carry out learning tasks up to the standards. Analyzing nonrecurrent aspects
yields further input for designing supportive information because analyzed cognitive strategies and mental models are part of this information. Analyzing recurrent aspects yields input for designing procedural information
because analyzed cognitive rules and prerequisite knowledge are part of this
information. Finally, analyzing cognitive rules also informs the design of
part-task practice by highlighting those rules that should be strengthened
through repetitive practice. This whole process, however, is not a singular
one, as it occurs in iterations where the results of activities located in lower
parts of Figure 3.1 provide input for activities in higher parts of this figure,
requiring ‘redoing’ something that has already been ‘done.’ In other words,
iteration involves repeating a cycle of operations to approximate a desired
or required result. These iterations are very important and are a sign of the
systemic nature of the process. The analyses of nonrecurrent and recurrent
aspects of a skill, for example, often provide new insights into the structure
of a complex skill, yielding input for the revision of performance assess-
ments. The design of the four components often reveals weaknesses in the
analysis results, providing input for more detailed or alternative analyses and
design decisions. Even media choices, such as whether to use e-learning for
implementing parts of a training program, may call for additional task and
knowledge analyses to reach the required level of specificity and detail.
Iteration will always occur in real-life design processes, so it may be
worthwhile to plan the major iterations in a form of rapid prototyping
(Nixon & Lee, 2001). In rapid prototyping, the designer develops one or
more learning tasks (prototypes; cf. Figure 3.2) that fit one particular task class and then tests them with real users. The results of these user tests refine
the prototype and impact the whole design process, including the design of
performance assessments, sequencing of learning tasks, and analysis of dif-
ferent aspects of the complex skill.

Figure 3.2 Breadboard with a prototype of the Tube Screamer and the final
product.

Layers of Necessity
In real-life design projects, designers often will not perform all design activities,
or at least, not all at the same level of detail. Based upon allotted development
time and available resources, they choose what activities they incorporate into
the project and what level of detail is necessary for those activities. In other
words, they flexibly adapt their professional knowledge. Wedman and Tessmer (1991) describe this process in terms of layers of necessity. The instructional design model is then described as a nested structure of submodels, ranging from a minimalist model for situations with severe time and resource limitations to a highly sophisticated model for ideal situations with generous time and resources. A minimalist version of the Ten Steps (i.e., the first layer
of necessity) might, for example, only contain developing a set of learning
tasks, as this is at the heart of the training blueprint. The most sophisticated
version might contain the development of a highly detailed training blueprint
where the descriptions of supportive information, procedural information,
and part-task practice are based on comprehensive task- and content analy-
ses. The key to using layers of necessity is a realistic appraisal of the time and
resource constraints associated with the goals of a particular design project.
A related issue is reusing instructional materials (Van Merriënboer & Boot,
2005; Wiley et al., 2012). Many instructional design projects do not design
training programs from scratch but redesign existing training programs. This
reduces the need to carry out certain analysis and design activities and almost
certainly reduces the need to do this at a very high level of detail. The redesign
of an existing training program according to the Ten Steps, for example, will
always start with specifying a series of learning tasks, which are then organized
into task classes. Concerning designing the information that learners will need
for working productively on those tasks, it might be sufficient to reorganize
already-available instructional materials to better connect them to the relevant
task classes and learning tasks. There is no need for an in-depth task and
content analysis. Furthermore, reuse of materials is also increasingly popular
for the design of new courses. Mash-ups, for example, recombine and modify
existing digital media files into new instructional materials. This approach is
becoming increasingly popular and often employs what is known as Open
Educational Resources (OERs): freely accessible, openly licensed documents
and media for teaching, learning, and assessing (Littlejohn & Buckingham
Shum, 2003; Wiley et al., 2014). Again, the analysis necessary to determine
which existing media files are needed to create a mash-up is probably much less
detailed than the analysis necessary to develop those materials from scratch.

Zigzag Design

As indicated in the previous sections, designers iterate activities and often do not carry out some of the activities, or at least, choose not to perform all
activities at the same high level of detail. In addition, there is no preferred
order for some activities. For example, in Figure 3.1, there is no preferred
order for analyzing the nonrecurrent and the recurrent aspects of the skill,
no preferred order for the design of supportive information, procedural
information, and part-task practice, and, finally, no necessity of completing
one step before beginning on another. Iterations, layers of necessity, and
switches between independent activities result in highly dynamic, nonlinear
forms of zigzag design. Nevertheless, it is important to prescribe the basic
execution of the ten activities in an order that gives optimal guidance to
designers. We will discuss this in the next section.

3.3 The Pebble-in-the-Pond: From Activities to Steps


Merrill's (2020) pebble-in-the-pond approach for instructional design is fully consistent with the Ten Steps (see Figure 3.3). It is a content-centered modification of traditional instructional design in which a designer first specifies the contents to be learned and not the abstract learning objectives. As described by Merrill (2002, p. 42), "the Pebble-in-the-Pond design model consists of a series of expanding activities initiated by first casting in a pebble, that is, a whole task or problem of the type that learners will be taught to accomplish by the instruction". This simple little pebble initiates further ripples in the design pond. Thus, whereas the Ten Steps acknowledges that designers do not design linearly and allows for zigzag design behaviors, the steps are ordered according to the pebble-in-the-pond approach. This way of presenting the steps is considered workable and useful for teachers and other practitioners in the field of instructional design.

Figure 3.3 David Merrill’s pebble-in-the-pond.



A Backbone of Learning Tasks: Steps 1, 2, and 3

The first three steps aim at the development of a series of whole tasks that
serve as the backbone for the educational blueprint:

• Step 1: Design Learning Tasks.
• Step 2: Design Performance Assessments.
• Step 3: Sequence Learning Tasks.

The first step, throwing the pebble in the pond, specifies a set of typical learning tasks representing the whole complex skill the learner should be able to perform after following the instruction. In this way, it becomes clear from the beginning, and at a very concrete level, what the training program must achieve. The first ripple in the design pond, Step 2, pertains to the articulation of standards that learners must reach if we are to conclude that they can carry out the tasks in an acceptable fashion. The design of performance assessments for the whole task makes it possible to (a) determine whether learners met the standards and (b) give learners the necessary feedback on the quality of their performance. By first specifying the learning tasks and, only after having done this, the standards and performance assessments, the pebble-in-the-pond approach avoids the common design problem of abandoning or revising learning objectives that were specified early in the process so that they correspond more closely to the content that has finally been developed (Merrill, 2020). The next ripple in the design pond, Step 3, pertains to the sequencing (i.e., the progression; cf. Figure 3.3) of learning tasks. When there is a set of learning tasks and an instrument to assess performance, it is important to order the tasks to optimize the learning process. One approach to achieve this is by defining a sequence of tasks that gradually increases in complexity. It is important to note here that making the task more complex is not the same thing as making the task more difficult. Complexity is a feature of the task, while difficulty is also determined by the knowledge or skill level of the learner. A task with a given level of complexity can, thus, be easy for an advanced learner but difficult for a novice learner. When we increase complexity over a sequence of learning tasks, the difficulty of these tasks should remain more or less the same for the individual learner! At each level of complexity, the level of support and guidance decreases. In this way, if learners can successfully carry out all tasks, they are considered to have mastered the prespecified knowledge, skills, and attitudes. Alternatively, this can be accomplished by using performance assessments to dynamically choose learning tasks tailored to the learning needs of individual learners.
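
As an illustration of this last point, the following minimal Python sketch (ours, not from the book) shows one way such dynamic task selection could work: progress to a more complex task class once the standard is met without support, and otherwise reduce support within the current class. The score threshold and the three support levels are invented assumptions.

    SUPPORT_LEVELS = ["high", "medium", "none"]

    def select_next_task(task_class: int, support: str, score: float,
                         standard: float = 0.8) -> tuple:
        """Return (task class, support level) for the next learning task."""
        if score >= standard:
            if support == "none":
                # Standard met without support: move to the next, more complex class.
                return task_class + 1, "high"
            # Standard met: reduce built-in support within the same task class.
            return task_class, SUPPORT_LEVELS[SUPPORT_LEVELS.index(support) + 1]
        # Standard not met: stay at the same class and support level.
        return task_class, support

    # Example: a learner scoring 0.9 on an unsupported task in task class 2.
    print(select_next_task(2, "none", 0.9))  # -> (3, 'high')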
Component Knowledge, Skills, and Attitudes: Steps 4 to 10

Further ripples in the design pond identify the knowledge, skills, and atti-
tudes necessary to perform each learning task in the progression of tasks.
This results in the remaining blueprint components (cf. Figure 3.3), which
are subsequently connected to the backbone of learning tasks. We distinguish
supportive information, procedural information, and part-task practice. The
steps followed for designing and developing supportive information are:

• Step 4: Design Supportive Information.
• Step 5: Analyze Cognitive Strategies.
• Step 6: Analyze Mental Models.

Supportive information helps learners carry out the nonrecurrent aspects of the learning tasks related to problem solving, reasoning, and decision making. While each of these aspects may be needed to carry out 'all' tasks within the task class, their implementation varies from task to task. This makes them nonrecurrent. Units of supportive information connect to task classes, and more complex task classes typically require more detailed or more embellished supportive information than simpler task classes. If useful instructional materials are already available, Step 4 may be limited to reorganizing existing instructional materials and assigning them to task classes. Steps 5 and 6 can then be skipped. But if instructional materials need to be designed and developed from scratch, it may be helpful to carry out Step 5, which involves analyzing the cognitive strategies that proficient task performers use to solve problems in the domain and describing SAPs, and/or Step 6, which involves analyzing the mental models that describe the organization of the domain and describing domain models. The results of the analyses in Steps 5 and 6 provide the basis for designing supportive information. Analogous to the design and development of supportive information,
the steps for designing and developing procedural information are:

• Step 7: Design Procedural Information.
• Step 8: Analyze Cognitive Rules.
• Step 9: Analyze Prerequisite Knowledge.

Procedural information is necessary to carry out the learning tasks' recurrent aspects. It specifies exactly how to carry out these aspects (in the terminology of the Ten Steps, these aspects are procedural) and is preferably presented just in time, precisely when learners need it while working on the learning tasks. This procedural information is appropriately faded for subsequent learning tasks, often replaced by new specific information for new recurrent task aspects. If useful instructional materials such as job aids or quick reference guides are available (nowadays, often in the form of an app for a mobile device), Step 7 may be limited to updating those materials and linking them to the appropriate learning tasks. Steps 8 and 9 can then be skipped. But if the procedural information needs to be designed from scratch, it may be helpful to carry out Step 8, which involves analyzing the cognitive rules that drive routine behaviors and describing IF-THEN rules or procedures, and Step 9, which involves analyzing the knowledge prerequisite to correct use of these IF-THEN rules. The results of the analyses in Steps 8 and 9 then provide the basis for the design of procedural information. Finally, depending on the nature of the task and the knowledge and skills needed to carry it out, it may be necessary to carry out the tenth and final step:

• Step 10: Design Part-task Practice.

Under certain circumstances, additional practice may be necessary to develop a very high level of automaticity for selected to-be-automated recurrent aspects of a complex skill. This, for example, may be the case for recurrent constituent skills that are critical because their incorrect performance could cause danger to life and limb, loss of expensive or hard-to-replace materials, or damage to or loss of equipment. It might also be important for those recurrent constituent skills that require split-second or fluent performance.
If part-task practice needs to be designed, the analysis results of Step 8 (i.e.,
the IF-THEN rules or procedures) provide useful input. The remaining
chapters of this book discuss each of the Ten Steps in detail (see Appendix
1 for a brief description).

3.4 Ten Steps within an ISD Context


Designers will often apply the Ten Steps in the context of instructional sys-
tems design (ISD). ISD models have a broad scope and typically divide
the instructional design process into five phases: (a) analysis, (b) design,
(c) development, (d) implementation, and (e) summative evaluation (Van
Merriënboer, 2017). In this so-called ADDIE model, designers conduct
formative evaluations during all phases. The Ten Steps is narrower in scope
and focuses on the first two phases of the instructional design process: task
and content analysis and design. In particular, the Ten Steps concentrates
on the analysis of a to-be-trained, complex skill or professional competency
in an integrated process of task and content analysis (also called cognitive
task analysis; Clark et al., 2008) and the conversion of the results of this
analysis into a training blueprint or ‘strategy’ (cf. Figure 3.3) that is ready
for development and implementation. It is best to apply the Ten Steps in
combination with an ISD model to support activities not treated in the
Ten Steps, such as needs assessment and analysis, development of interfaces and production of instructional materials (cf. Figure 3.3), implementation
and delivery of materials, and summative evaluation of the implemented
program.
At the front end, the Ten Steps assumes that there is a performance
problem that can be solved through training and that there is an overall
instructional goal; namely, to teach the complex skill or professional compe-
tency. If this assumption cannot be fully justified, conducting a needs assessment before the Ten Steps is desirable to find answers to questions such as: What
should the learners be able to do after the instruction? Can they already do
this, or is there a performance problem or gap? What might be the possible
causes for a signaled performance problem? Can the performance problem
be solved with training? (Barbazette, 2006; Fassier et al., 2021). A detailed
task analysis should be conducted after such a needs assessment. The analysis
techniques discussed in this book (in Chapters 5–6, 8–9, and 11–12) are
fully integrated in the Ten Steps but share many features with other com-
prehensive task-analytical models such as integrated task analysis (Ryder &
Redding, 1993) and concepts, processes, and principles (Clark et al., 2008;
Yates & Feldon, 2011).
At the back end, the Ten Steps results in a highly detailed training blue-
print that forms the basis for developing a learning environment and pro-
ducing instructional materials (Husnin, 2017). In the terminology of ISD,
it marks the transition from the design phase to the development or produc-
tion phase.

3.5 Summary
• The instructional design process consists of ten activities: the design of
learning tasks, the design of supportive information, the design of pro-
cedural information, the design of part-task practice, the design of per-
formance assessments, the sequencing of learning tasks, the analysis of
cognitive strategies, the analysis of mental models, the analysis of cogni-
tive rules, and the analysis of prerequisite knowledge.
• System dynamics indicate that the output of each activity affects all other
activities. In real-life design projects, iterations, skipping activities (layers
of necessity), and switching between activities are common, resulting in
zigzag design behaviors.
• The pebble-in-the-pond approach is a content-centered modification of traditional instructional design in which one or more learning tasks are first specified rather than abstract learning objectives. Thus, the process is
initiated by casting a pebble—one or more whole learning tasks—in the
instructional design pond. The process unrolls as a series of expanding
activities or ripples initiated by this pebble.
• The Ten Steps, ordered according to the pebble-in-the-pond approach, includes the following activities: (1) design learning tasks, (2) design performance assessments, (3) sequence learning tasks, (4) design supportive information, (5) analyze cognitive strategies, (6) analyze mental models, (7) design procedural information, (8) analyze cognitive rules, (9) analyze prerequisite knowledge, and (10) design part-task practice.
• The Ten Steps is best used in combination with a broader ISD model that
provides guidelines for needs assessment, production, implementation,
and summative evaluation.

Glossary Terms

ADDIE model; Instructional systems design (ISD); Iteration; Layers of necessity; Mash-up; Open educational resources (OERs); Pebble-in-the-pond approach; Rapid prototyping; System dynamics; Zigzag design
Chapter 4

Step 1
Design Learning Tasks

4.1 Necessity
Learning tasks are the first design component and provide the backbone for
your training blueprint. You should always perform this step.

Traditional school tasks are highly constructed, well defined, short, oriented toward the individual, and designed to best fit the content (i.e., the curriculum) instead of reality. An archetypical problem of this type is: “Two trains traveling toward each other at a speed of . . . leave their stations at . . . o’clock. How long will it take?” Such tasks, though often seen as highly
suitable for acquiring simple skills, are neither representative of the type
of problems that students perceive as relevant (unless you are a rail traffic controller whose job it is to determine when trains will crash), nor have they proven to be especially effective for acquiring complex skills and competencies or for achieving transfer of learning.
This chapter presents guidelines for designing learning tasks, the first and
most critical design component of the Ten Steps. It is the pebble that cre-
ates the ripples in the pond. These tasks immediately clarify what learners
must do during and after training. Traditional instructional design models
typically use the presentation of subject matter as the skeleton of the train-
ing and then add learning tasks, often called practice items, as meat on these
bones. In contrast, the Ten Steps starts by designing meaningful whole tasks
and uses those as the backbone for connecting the other design compo-
nents, including the subject matter.
The structure of this chapter is as follows. Section 2 describes how real-life
tasks are the basis for designing learning tasks. Section 3 discusses how learn-
ers carry out these tasks in real and/or simulated task environments. Such
environments might range from low to high fidelity but should always allow learners to work on the learning tasks. Section 4 emphasizes the importance of using a varied set of learning tasks. This variability of practice is probably the most powerful instructional method for achieving transfer of learning. Section 5 explains the concepts of built-in task support and problem-solving guidance. Sections 6 and 7 discuss, in order, learning tasks with different levels of built-in support, including conventional tasks, completion tasks, and cases, among others, and different approaches to give problem-solving guidance to learners working on the tasks. Section 8 explains the principle that both support and guidance should be diminished in a process of scaffolding as learners acquire more expertise. The chapter concludes with a summary.

4.2 Real-Life Tasks


Real-life tasks in this book differ from typical school tasks on at least three dimensions. First, they are almost always ill-structured rather than well-structured. Second, they are often multidisciplinary rather than fitting into a single discipline. Finally, they may be team tasks rather than individual ones.

Ill-Structured Problems

School tasks are usually well-structured: They present all problem elements
to the learner, require applying a limited number of rules and/or proce-
dures, and have knowable, understandable solutions. They are also often
convergent, meaning one 'correct' answer exists. Real-life tasks, in contrast, are usually ill-structured problems that confront the task performer with unknown elements, have multiple acceptable solutions (or even no solution at all!), possess multiple criteria for evaluating solutions, and often require learners to make judgments (Jonassen, 1997). Rittel and Webber (1973) took this a step further by discussing some real-life problems that are 'wicked.' Such problems are difficult and sometimes impossible to solve because of incomplete, contradictory, and/or changing requirements.

Thus, tackling ill-structured (and wicked) problems is not a matter of simply applying rules or procedures but requires knowledge of the domain (i.e., having the necessary mental models) and knowledge about how to approach problems in the domain (i.e., possessing relevant cognitive strategies) to find an acceptable solution. This is also why, by default, the Ten Steps treats complex skills and their constituent skills as nonrecurrent. These skills are knowledge-based and may allow one to solve problems in a particular domain of learning but do not guarantee reaching a solution: They are heuristic (i.e., they employ a general method not guaranteed to be optimal or perfect but sufficient for their solution—an approach) rather than algorithmic (i.e., they follow a fixed set of rules and procedures for their solution—a recipe).

Whereas the distinction between ill-structured and well-structured problem solving is valid from a theoretical point of view, real-life tasks will almost always require a mix of solving ill-structured and well-structured problems (Van Merriënboer, 2013). In addition, the task performer must coordinate the cognitive processes for ill-structured problem solving (i.e., nonrecurrent skills) and well-structured problem solving (i.e., recurrent skills). Proficient task performers will typically need their mental models and cognitive strategies to complete a real-life task, but they can do so because they can apply rules or automated schemata for the familiar, routine aspects of carrying out the task, “making available controlled-processing resources for the novel aspects of problem solving” (Frederiksen, 1984, p. 365).

Multidisciplinary Tasks

Whereas well-structured problems are often limited to one subject matter domain or discipline, real-life tasks almost always require knowledge from different domains or disciplines. A medical doctor diagnosing a patient needs knowledge of anatomy, physiology, pathology, pharmacology, and other medicine-related disciplines as well as of ethics, law, and social psychology to relate to the patient. A researcher setting up an experiment needs knowledge of the theoretical field the study contributes to, the field's methodology, statistics, research ethics, and other disciplines; plus, they must be able to communicate the findings and thus use writing skills, logic, rhetoric, and so forth. Finally, a hairdresser cutting a client's hair needs knowledge of modern and traditional hairstyles, current societal trends, chemistry (for safely dealing with hair colorings and styling), hairstyling products, payment methods, and other disciplines. Thus, when using real-life tasks as a basis for the design of learning tasks, it is inevitable that the supportive information for those tasks has a multidisciplinary nature.

Team Tasks and Interprofessional Education

Real-life tasks are often performed by a team rather than by an individual. This is obvious for professional tasks performed by the police, firefighters, and the military. But this is also often the case in many other areas (a researcher rarely works alone). Team tasks often also require the collaboration of professionals from different fields. Emergency teams in medicine include doctors, nurses, and paramedics; development teams in Web design include creative designers, ICT specialists, and programmers; design teams for a new product include customer researchers, materials experts, engineers, marketing people, and jurists; and government teams preparing environmental policies include ecologists, health scientists, jurists, and others. If real-life tasks are team tasks, according to the Ten Steps, the learning tasks based on those real-life tasks will also be team tasks. Consequently, educational programs developed with the Ten Steps may include interprofessional education (Hammick et al., 2007), where learners from different disciplines work together on learning tasks.

From Real-Life Tasks to Learning Tasks

The most effective approach for identifying real-life tasks for designing learning tasks involves interviewing professionals working in the field along with trainers with experience teaching in that domain. To prepare for this process, it is essential to study documentation materials such as technical handbooks, on-the-job documentation, and function descriptions—as well as existing educational programs and OERs—to avoid duplicate work. The document study should provide an instructional designer with enough background information to interview professionals effectively and efficiently. In later phases of the design process, it will be helpful to include subject-matter experts from different disciplines to do justice to the multidisciplinary nature of the tasks.
Using real-life tasks as the basis for learning tasks ensures that they engage learners in activities that directly involve them with the constituent skills—as opposed to activities in which they merely study general information about or related to those skills. Such a careful design should also guarantee that the tasks put learning before performance. In other words, the tasks should stimulate learners to focus on the cognitive processes for learning rather than solely on the outcomes of executing the tasks. This can
be achieved by changing the real-life or simulated environment in which the
tasks are performed, ensuring variability of practice, and providing proper
support and guidance to the learners carrying out the tasks (Kirschner et al.,
2006).

4.3 Real and Simulated Task Environments


The primary goal of the learning tasks is to help learners inductively construct
cognitive schemata from their concrete experiences, inferring general laws
from particular instances. Thus, the task environment should allow learners
to work on learning tasks that offer concrete experiences (Meguerdichian
et al., 2021). For this reason, the medium that allows learners to work on
those learning tasks is called the primary medium. It can be a real task envi-
ronment with regular tools and objects or a simulation. Sometimes, the real
task environment is suitable for learners to carry out their learning tasks.
Computer programming, for example, can be taught in a regular program-
ming environment; repairing cars, in an actual garage; and troubleshoot-
ing electronic circuits, by having learners diagnose and repair actual faulty
electronic circuits in the workplace. There may, however, be good reasons
to practice the learning tasks in a simulated task environment. These rea-
sons can be educational (e.g., real environments rarely show the breadth of
problems that a learner can and will come across), practical (e.g., it would be
impossible to fnd enough training positions for all vocational education stu-
dents to carry out certain tasks), or instrumental (e.g., performing the task
in a real environment could be dangerous or could cause damage). If this
is the case, a simulation might be a better option, and an important design
decision then concerns the fdelity of the simulation, defned as the degree of
similarity between the simulated and the real task environment.

Simulated Task Environments

Especially in the earlier phases of the learning process (i.e., learning tasks at the beginning of a task class or task classes at the beginning of the educational program), simulated task environments may offer more favorable opportunities for learning than real task environments. Table 4.1 lists the major reasons for using a simulation. As can be seen, not only can simulated task environments improve learning, but real task environments may sometimes even hamper learning. They may make it difficult or even impossible to control the (sequence of) learning tasks, with the risk that learners must practice with tasks that are either much too difficult or much too easy for them (e.g., training air traffic controllers in a hectic airport) or that show
Table 4.1 Reasons to offer learning tasks in a simulated rather than real task environment.

Reason: Controlling the sequence of tasks offered to learners.
Example: Learners deal with increasingly demanding customers (i.e., tasks) in a simulated store rather than depending on arbitrary clients walking into a real store.

Reason: Better opportunity to add support and guidance to tasks (i.e., change their format).
Example: Learners make strategy decisions in a management game with the opportunity to consult experts and peers, rather than making them in the boardroom of a real company.

Reason: Prevent unsafe and dangerous situations while performing the tasks.
Example: Medical students perform surgical operations on corpses rather than real patients.

Reason: Speed up or slow down the process of performing the tasks.
Example: Learners steer a large ship in a time-compressed simulator rather than one on open seas.

Reason: Reduce costs of performing the tasks.
Example: Learners shut down a simulated nuclear power plant rather than letting them shut down a real one.

Reason: Create tasks that rarely occur in the real world.
Example: Flight trainees deal with emergencies in an aircraft simulator rather than waiting for these situations to happen in a real aircraft.

Reason: Create tasks that would otherwise be impossible due to limited materials or resources.
Example: Student dentists fill cavities in porcelain molars rather than in real patients' teeth.

insufficient variability (e.g., training teachers with only one group of students). They may also make it difficult to provide the necessary support or guidance to learners (e.g., training fighter pilots in a single-person aircraft if no qualified people are available at the time and/or place needed to provide support and guidance). They may lead to dangerous, life-threatening situations or loss of materials (e.g., if novice medical students were to practice surgery on real patients). They may lead to inefficient training situations that take much longer than necessary (e.g., a chemical titration where the chemical reaction is slow). They may make the educational program extremely expensive (e.g., training firefighters to extinguish burning aircraft). They may make it impossible to present the needed tasks (e.g., training how to deal with situations such as calamities that rarely occur or technical problems
that occur either sporadically or intermittently and thus cannot be predicted or assured to happen). Finally, the materials necessary for learning may not always be sufficiently available in the real task environment (e.g., training how to cut and polish diamonds). Thus, using simulated task environments that offer a safe and controlled environment where learners may develop and improve skills through well-designed practice is often worthwhile.

Fidelity of Task Environments

Simulated task environments differ in fidelity, defined as the degree of correspondence of a given quality of the simulated environment with that of the real world (Frèrejean et al., 2023). A common distinction made is between psychological fidelity, functional fidelity, and physical fidelity (Hays & Singer, 1989). Psychological fidelity pertains to the degree to which a simulated-task environment replicates the psychological factors experienced in the real-task environment. This includes replicating the required skills as well as factors such as stress, fear, boredom, and more. Functional fidelity pertains to the degree to which a simulated-task environment mimics the real-task environment in response to the learner's actions; for example, when a chemical process simulation produces the same results as it would in a real laboratory. Physical fidelity pertains to the degree to which a simulated-task environment looks, sounds, feels, or even smells like the real-task environment (see Figure 4.1).

Figure 4.1 Virtual reality (VR) parachute trainer with high physical fidelity.

According to the Ten Steps, simulated-task environments must allow the learner to carry out authentic learning or training tasks based on real-life tasks right from the beginning of the learning experience. Because of this, many aspects of psychological fidelity are always high because carrying out the learning tasks is more or less similar to carrying out real-life tasks; there is a clear correspondence between the cognitive processes involved in carrying out the learning task in the simulated environment and the cognitive processes involved in carrying out a real-life task in the real environment. This also implies that well-designed learning tasks may, under particular circumstances, have low functional and physical fidelity. Though medical students, for example, need to learn to diagnose and treat diseases right from the start, it is not advisable or even possible to have them immediately begin practicing with real patients in a hospital. They may, for example, start with paper-based textual problems or case descriptions of prospective patients for which they must reach a diagnosis and make a treatment plan. This is common practice in problem-based medical curricula (Hung et al., 2019; Loyens et al., 2011). Although the paper-based case descriptions have very low functional and physical fidelity, they are based on real-life tasks and thus have acceptably high psychological fidelity.
A general finding is that, for effective learning of complex cognitive skills, the psychological and, to a lesser degree, functional fidelity of a simulated task environment is initially more important than its physical fidelity (McGaghie et al., 2010). Moreover, high-fidelity task environments may even be detrimental for novice learners because they provide too many 'seductive details' that distract the learner and confront them with extra unnecessary information and work stress that interferes with learning (e.g., compare an Airbus 380 cockpit simulator with a low-resolution flight simulator on a desktop computer). For novice learners, excluding overabundant, irrelevant, and/or seductive details often positively affects learning outcomes and transfer (Mayer et al., 2001). Yet higher-fidelity task environments become increasingly important for more experienced learners because they eventually need to practice in environments that also physically resemble the real task environment (Gulikers et al., 2005). For example, if the real-life environment contains many irrelevant and/or seductive details and causes stress, the high-fidelity simulation should also contain these details and induce stress. This approach allows experienced learners to learn to ignore the details and deal with the stress. Thus, according to the Ten Steps, the psychological fidelity must always be high, but it may sometimes be desirable to start the training or a task class in an environment with relatively low functional and physical fidelity and then gradually increase the fidelity as learner expertise increases (cf. Maggio et al., 2015; Maran & Glavin, 2003).
Initially, learners begin to learn and practice in an environment with relatively low functional and physical fidelity. Such an environment only represents those aspects of the real environment that are strictly necessary to carry out the learning tasks. It does not contain those details or features that are irrelevant in the current stage of the learning process but may nevertheless attract learners' attention and disrupt their learning. As explained, one might, for example, present business students with a paper-based case study of a company with financial problems with the assignment to develop a business strategy to increase profit or present paper-based descriptions of patients to students in medicine with the assignment to reach a diagnosis and develop a treatment plan. Figure 4.2 provides an example of a medical case generated by ChatGPT. Artificial intelligence in large language models such as ChatGPT can help develop case materials, although a final check by an expert is still crucial before using them in teaching materials. The pulmonologist who checked the case in Figure 4.2 did not observe errors but had valuable suggestions for improving it.

Figure 4.2 ChatGPT 3.5 generating a case study of a medical problem.
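To illustrate this workflow, the sketch below shows how a designer might draft a low-fidelity, paper-based case with a large language model while keeping an expert in the loop. It is a minimal sketch, not a procedure from this book: the prompt wording, the model name, and the draft_medical_case helper are hypothetical, and only the standard chat-completion call of the openai Python package is assumed.

```python
# Minimal sketch of drafting a textual patient case with a large language
# model. Prompt, model name, and helper are hypothetical illustrations;
# a final check by a domain expert remains crucial before classroom use.
from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

def draft_medical_case(topic: str, learner_level: str) -> str:
    """Request a paper-based case description (low functional/physical fidelity)."""
    prompt = (
        f"Write a one-page patient case on {topic} for {learner_level} "
        "medical students. Include history, symptoms, and laboratory values, "
        "but do not reveal the diagnosis or the treatment plan."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = draft_medical_case("community-acquired pneumonia", "second-year")
print(draft)  # to be reviewed by an expert (e.g., a pulmonologist) before use
```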
In a second stage, learners continue practicing in a task environment with higher functional fidelity; that is, an interactive environment that reacts in response to the actions executed by the learner. For example, medical students may be exposed to so-called 'virtual patients,' computer-based patient simulations that enable them to interrogate the patient, request laboratory tests, and carry out other related actions (Huwendiek et al., 2009; Janesarvatan & van Rosmalen, 2023; Marei et al., 2017). Alternatively, role-playing can be used, where peer students take on the roles of simulated patients. For business students, management games allow them not only to develop but also to test business strategies, while so-called 'virtual companies' (Westera et al., 2000) make it possible to work on real projects in a Web-based environment that more or less resembles reality. Finally, for the acquisition of presentation skills, there are environments such as The Presentation Trainer (Schneider et al., 2016), an augmented reality toolkit for learning and practicing nonverbal public speaking skills (see Figure 4.3). The program tracks and analyzes the user's body posture, body movements, speaking cadence, and voice volume to give instructional feedback on nonverbal communication skills (sensor-based learning) on screen both during and after practice.

Figure 4.3 Augmented reality (AR) presentation trainer.
In a third stage and with increasingly advanced learners, more details of the real task environment become relevant. This, in turn, may make it necessary to carry out the learning tasks in a high-fidelity simulated-task environment. For example, medical students may engage with professional role-playing actors who simulate real accident victims or patients. Or patients may take the form of computer-controlled mannequins that react as real patients to practice resuscitation skills in emergency teams (see Figure 4.4; McGraw et al., 2023). For business students, high-fidelity simulation may occur in a simulated office space where project teams work on a real task brought in by a commercial client. These kinds of simulations smoothly flow into real-task environments, where medical students work with real patients in the hospital and business students work with real clients in companies. There are even some situations where the high-fidelity, simulated, and real-task environments are indistinguishable. This, for example, is the case for satellite radar data imaging, where the only difference between the two might be that the simulated-task environment uses a database of stored satellite data and images, while the real-task environment uses real-time satellite data and images.

Figure 4.4 Emergency team practicing resuscitation skills on a computer-controlled mannequin.

Computer-Based Simulations and Serious Games

The principles just outlined also apply to computer-based simulated task environments, including serious games: simulation-based games designed not primarily for entertainment but, rather, to learn or change complex skills in fields such as science and engineering, environmental policy, health care, emergency management, and so forth (e.g., Akkaya & Akpinar, 2022; Faber et al., 2021; Hummel et al., 2021). Table 4.2 provides examples of computer-based simulated-task environments ordered from low to high—functional and physical—fidelity. Low-fidelity environments are often Web-based and present realistic learning tasks and problems to learners but offer either no or very limited interactivity (e.g., Holtslander et al., 2012).

Table 4.2 Examples of computer-simulated task environments (online learning environments) ordered from low to high functional and physical fidelity.

Making diagnoses in psychotherapy
• Low fidelity: A textual case study that describes a client's characteristics and their psychological complaints.
• Medium fidelity: Textual and video cases with clients in different therapeutic sessions.
• High fidelity: Clients are presented in a virtual reality environment with lifelike, simulated avatars that the learners may interview.

Repairing complex technical systems
• Low fidelity: A complex system is represented by a non-interactive figure on the screen, combined with a list of its major malfunctions.
• Medium fidelity: A complex system is represented by an interactive figure, allowing learners to test and make changes by clicking the mouse at appropriate places.
• High fidelity: A complex system is represented by 3D Virtual Reality, allowing the learner to test and make changes to objects with regular tools.

Designing instruction
• Low fidelity: A textual case study that illustrates a performance problem that needs to be solved.
• Medium fidelity: In a multimedia environment, students interview stakeholders and consult resources to analyze a performance problem.
• High fidelity: In a virtual company, students participate in a (distributed) project team working on a performance problem brought in by a real client.

Medium-fidelity environments typically react to the learners' actions (i.e., high functional fidelity) and, for team tasks, allow learners to interact with each other. Many serious games are good examples of computer-based simulated-task environments with high functional but low physical fidelity. They are attractive because they are less expensive to design and use than high-fidelity simulators. Furthermore, they include gaming elements that may enhance learners' motivation and can be used across a range of users in a variety of locations (Faber et al., 2021; Lukosch et al., 2013). High physical fidelity simulation, finally, is typically used only in situations where the 'see, feel, hear, smell and taste' of the task environment is relatively easy to implement or where practicing in the real environment is out of the question.
In general, although learning tasks may not be performed in the real-task environment from the start of the training program, they are eventually performed in the real environment at the end of the program or the end of each task class. Thus, medical students will treat real patients in a hospital, learners in a flight training program will fly real aircraft, and trainees in accountancy will deal with real clients and conduct real financial audits, all under supervision. The simple reason is that even high-fidelity simulation usually cannot compete with the real world. There can be exceptions for learning tasks that rarely occur in the real world (e.g., disaster management such as an earthquake or flood, dealing with failures in complex technical systems such as a nuclear meltdown, conducting rare surgical operations), tasks linked to high expenses (e.g., launching a missile, shutting down a large plant), and tasks where the simulated environment is virtually identical to the real environment (e.g., satellite image processing, robotic surgery). For such tasks, high-fidelity simulation using virtual reality with advanced input-output facilities (e.g., VR helmets, data gloves) and complex software models running in the background may help limit the gap between the real world and its simulations (Mulders, 2022).

4.4 Variability of Practice


As stated, learning tasks are taken from real-life tasks and can be carried out in simulated or real-task environments. Regardless of the context, these tasks should always facilitate a process known as inductive learning. This process involves learners constructing general cognitive schemata of how to approach problems in the domain and how the domain is organized, based on the concrete experiences offered by the tasks (see Box 4.1). Incorporating a varied set of learning tasks further stimulates this process. The learning tasks should differ from each other on all dimensions on which tasks in the real world also differ. In other words, the learning tasks in an educational program must be representative of all possible real-life tasks a learner may encounter in the real world after completing the program. Using a varied set of tasks is probably the most recommended method for enhancing transfer of learning (Corbalan et al., 2011).

Box 4.1 Induction and Learning Tasks

Well-designed learning tasks offer learners concrete experiences for constructing new cognitive schemata and modifying existing ones in memory. Inductive learning or 'learning by doing' is at the heart of complex learning and refers both to generalization and discrimination:

Generalization
When learners generalize or abstract away from concrete experiences, they construct schemata that leave out the details so that they apply to a wider range of less tangible events. Learners may construct a more general or abstract schema if they create successful solutions for a class of related learning tasks or problems. Then, the schema describes the common features of successful solutions. For instance, a child may discover that 2 + 3 and 3 + 2 both add up to 5. It may induce the simple schema or principle 'if you add two digits, the sequence in which you add them is of no consequence for the outcome'—the law of commutativity. It may also induce another, even more general schema: 'if you add a list of digits, the sequence in which you add the digits is of no consequence for the outcome.'

Discrimination
In a sense, discrimination is the opposite of generalization. Suppose the child makes the overgeneralization 'if you perform a computational operation on two digits, the performance sequence is of no consequence for the outcome.' In this case, discrimination is necessary to arrive at a more effective schema. The child may construct such a more effective schema if failed solutions are created for a class of related problems. Then, particular conditions may be added to the schema to restrict its range of use. For instance, if the child finds out that 9 − 4 = 5 but that 4 − 9 = −5 (minus 5), the more specific schema or principle induced is 'if you perform a computational operation on two digits, and this operation is not subtraction (added condition), the sequence in which you perform it is of no consequence for the outcome.' While this schema is still overgeneralized (i.e., it is true for multiplication but not for division), discrimination has made it more effective than the original schema.
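Written compactly (our notation, added here only to summarize the two arithmetic examples above), generalization and discrimination look as follows:

```latex
% Generalization: order is irrelevant for addition (commutativity)
a + b = b + a \qquad \text{e.g., } 2 + 3 = 3 + 2 = 5
% Discrimination: an added condition excludes subtraction
a - b \neq b - a \quad \text{whenever } a \neq b \qquad
\text{e.g., } 9 - 4 = 5 \text{ but } 4 - 9 = -5
```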

Mindful Abstraction
Inductive learning is typically a strategic and controlled cognitive process requiring conscious processing from the learner to generate plausible alternative conceptualizations and/or solution paths when faced with novel or unfamiliar tasks or task situations. Mindful abstraction can originate from one single learning task but is greatly facilitated by incorporating multiple learning tasks that vary across the same dimensions as real-world tasks. Such variability of practice should encourage inductive processing because it increases the chances that learners can identify similar features and distinguish relevant ones from irrelevant ones. This variability of practice comes in two 'flavors':

• Interleaving: Robert Bjork (1994) describes interleaving as 'varying the conditions of practice' and explains it as variation and unpredictability in the learning environment. It is about varying various parameters of the to-be-learned task during practice (Hall & Magill, 1995, refer to this as 'schema enhancement').
• Contextual interference: The contextual interference effect (doing the same thing often but in different situations or contexts) was first demonstrated by Battig (1966). It is very similar to interleaving, but here, you make the task environment—not the task itself—more variable or unpredictable in a way that creates a temporary interference for the learner (Kirschner et al., 2022).

Mindful abstraction includes processes such as comparing and contrasting information, searching for analogical knowledge, analyzing new information into its parts or kinds, and so on. These skills can be learned. Such domain-general skills are the key to effective schema construction, and for some target groups, explicitly teaching self-study skills and learning strategies may be necessary.

Implicit Learning
Some tasks lacking clear decision algorithms involve integrating large amounts of information. In this case, implicit learning is sometimes more effective than mindful abstraction to induce the construction of cognitive schemata. Implicit learning is more or less unconscious and occurs when learners work on learning tasks that confront them with a wide range of positive and negative examples. For example, if air traffic controllers must learn to recognize dangerous air traffic situations on a radar screen, one may confront them with thousands of examples of dangerous and safe situations and ask them to categorize those situations as 'dangerous' or 'safe' as quickly as possible. In this way, they learn to distinguish between these situations in a split second without the need to articulate the schema that allows them to make the distinction.

Further Reading
Battig, W. F. (1966). Facilitation and interference. In E. A. Bilodeau
(Ed.), Acquisition of skill (pp. 215–244). Academic Press.
Bjork, R. A. (1994). Memory and metamemory considerations in the
training of human beings. In J. Metcalfe & A. Shimamura (Eds.),
Metacognition: Knowing about knowing (pp. 185–205). MIT Press.
Hall, K. G., & Magill, R. A. (1995). Variability of practice and contextual interference in motor skill learning. Journal of Motor Behavior, 27(4), 299–309. https://doi.org/10.1080/00222895.1995.9941719
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (Eds.). (1989). Induction: Processes of inference, learning, and discovery. MIT Press.
Kirschner, P. A., Hendrick, C., & Heal, J. (2022). How teaching happens: Seminal works in teaching and teacher effectiveness and what they mean in practice. Routledge.
Reber, A. S. (1996). Implicit learning and tacit knowledge: An essay on the cognitive unconscious. Oxford University Press.
Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science, 10(2), 176–199. https://doi.org/10.1177/1745691615569000

One may ensure that learning tasks differ from each other on dimensions such as the conditions under which they are performed (e.g., a time-constrained video-production task, as is the case for commissioned work, versus unconstrained creative work, which is not), the way of presenting them (e.g., a video-production task with detailed criteria for the final product or an open-ended task leaving room for interpretation and creativity), the saliency of their defining characteristics (e.g., a video-production task to stimulate product sales or one to influence audience attitudes), and their familiarity (e.g., a video-production task in a familiar domain or one in a domain foreign to the learner). Furthermore, one must ensure that learning tasks differ from each other on both surface features and structural features:

• Tasks that differ from each other on surface features may look different from each other but can nevertheless be carried out in the same way. For example, a medical student who must learn to diagnose disease X will best practice diagnosing this particular disease with different patients—for example, patients with different socio-economic and cultural backgrounds, male and female patients, and so forth.
• Tasks that differ from each other on structural features may look similar to each other but must nevertheless be carried out in different ways. For example, a medical student who must learn to distinguish disease X from disease Y would best practice diagnosing these diseases by comparing and contrasting different patients who have either disease X or disease Y (Kok et al., 2013, 2015).

A special kind of variability deals with the issue of how to order learning tasks at the same level of complexity (i.e., in the same task class). If adjacent learning tasks cause learners to practice the same constituent skills, then so-called contextual interference (Bjork, 1994) is low. Conversely, when adjacent learning tasks require learners to practice different versions of constituent skills, contextual interference is high (also called 'interleaving'; Birnbaum et al., 2013). This, in turn, helps learners develop a more integrated knowledge base. For example, if a medical student learns to diagnose the three diseases—d1, d2, and d3—with 12 different patients, it is better to use a random practice schedule (d3-d3-d2-d1, d1-d3-d2-d1, d2-d2-d1-d3) than a blocked one (d1-d1-d1-d1, d2-d2-d2-d2, d3-d3-d3-d3). Studies on contextual interference and interleaving thus show that the variability and its structure across learning tasks determine the extent of learning transfer. Practice under high contextual interference stimulates learners to compare and contrast adjacent tasks and mindfully abstract away from them, resulting in higher transfer of learning than practice under low contextual interference (for examples, see De Croock & van Merriënboer, 2007; Helsdingen et al., 2011a, 2011b).
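To make the two schedules concrete, the short sketch below generates a blocked and a randomly interleaved practice schedule for the three diseases. It is an illustrative sketch of the scheduling idea only; the function names are ours, not from the Ten Steps.

```python
# Illustrative sketch: blocked vs. randomly interleaved practice schedules
# for diagnosing three diseases (d1, d2, d3) across 12 patients.
import random

def blocked_schedule(items: list[str], reps: int) -> list[str]:
    """Low contextual interference: all repetitions of one item, then the next."""
    return [item for item in items for _ in range(reps)]

def interleaved_schedule(items: list[str], reps: int) -> list[str]:
    """High contextual interference: the same repetitions, shuffled across items."""
    schedule = blocked_schedule(items, reps)
    random.shuffle(schedule)
    return schedule

diseases = ["d1", "d2", "d3"]
print(blocked_schedule(diseases, 4))      # d1 d1 d1 d1 d2 d2 d2 d2 d3 d3 d3 d3
print(interleaved_schedule(diseases, 4))  # e.g., d3 d3 d2 d1 d1 d3 d2 d1 d2 d2 d1 d3
```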
In summary, it is wise to use a varied set of learning tasks in which tasks differ on both surface and structural features and to sequence them randomly. Although variability and random sequencing may lead to a longer training time, a higher number of learning tasks needed to reach a prespecified performance level, and/or more mistakes during the learning process, every bit of it pays itself back in higher transfer of learning. This is an example of what was called the transfer paradox in Chapter 1. If you want learners to reach transfer of learning, they need to work a bit harder and longer. As the saying goes: "No pain, no gain." Bjork (1994) calls this a desirable difficulty: It makes learning harder, but in a good way. Therefore, it may be helpful to tell learners that variability is being applied and to explain in some detail why, in order to increase their awareness and willingness to invest effort in mindful abstraction.

4.5 Learner Support and Guidance


Whether performed in real or simulated task environments, learning tasks should provide support and guidance to learners (Kirschner et al., 2006). To design support and guidance, it is of the utmost importance not to describe the real-life tasks on which the learning tasks are based merely in terms of a given situation professionals might encounter in their domain, but as a fully worked-out example, including acceptable solutions for the problem and, if possible, the problem-solving processes used for generating the solution(s). In other words, two key components are required to design appropriate learning tasks and their corresponding support and guidance: an acceptable solution, which informs the design of built-in task support, and the process-related information used to reach this solution, which informs the design of problem-solving guidance.
A general human problem-solving framework (Newell & Simon, 1972)
helps distinguish between task support and guidance. According to this
framework, fully describing the learner’s work on a learning task or prob-
lem requires four elements: (a) the given state a learner is confronted with;
(b) the criteria for an acceptable goal state; (c) a solution—that is, a sequence
of operators that enables the transformation from the given state to the goal
state; and (d) a problem-solving process, which may be seen as the tentative
application of mental operations or the learner’s attempts to reach a solution
(see Figure 4.5).

Figure 4.5 A model to distinguish task support provided by different types of learning tasks and guidance provided by measures that guide learners through the problem-solving process.

For complex learning, the nonrecurrent aspects of learning tasks are ill-structured; there is no one optimal solution but many acceptable solutions. Also, there may be several intermediate solutions besides the final solution (e.g., for our video-production example, production plans, scripts, storyboards, and produced footage may be seen as intermediate solutions). There may also be several acceptable solution paths that arrive at similar or different solutions. Finally, the goal states and given states may also be ill-defined, requiring problem analysis on the part of the learner (which is then one of the constituent skills involved). These characteristics sometimes make it difficult to analyze a real-life task in terms of its given state, goal state, acceptable solution, and the problem-solving process for generating a solution. Table 4.3 provides some examples.
The distinction between a solution and the problem-solving process for
finding an acceptable solution is characteristic of complex learning. This type
of learning primarily deals with nonrecurrent skills; both the whole skill and a
number of its constituent skills are nonrecurrent. To perform such skills, learn-
ers tentatively apply operators to problem states, searching for a sequence that
transforms the initial state into a new state, meeting the criteria for an accept-
able goal state. It is like playing chess or checkers, where the executed moves
represent the solution, while the player’s mental simulations of potential moves
represent the problem-solving process. In the worst case, the player randomly
selects operators or employs trial-and-error strategies, making behavior look
aimless to an outside observer. But normally, players do not use trial-and-error
because cognitive schemata and/or instructional measures guide their search
process. This can manifest as cognitive strategies in the learner’s head or SAPs
in the instructional materials that allow the learner to approach the problem
systematically and as mental models in the learner’s head or domain models in
the instructional materials to reason about the domain.
In the next sections, we use the framework to distinguish between built-
in task support and guidance. Task support does not pay attention to the
problem-solving process itself but only involves given states, goal states, and
solutions (e.g., if you are going to go on a vacation trip, this would be where
the trip begins, where the trip ends, and the given routes). Guidance, on the
other hand, also takes the problem-solving process itself into account and
typically provides useful approaches and heuristics to guide the problem-
solving process and help find a solution (for that same vacation, this would be a set of guidelines for how to find interesting routes yourself; for example, with historical or picturesque things to see along the way).

4.6 Built-in Task Support


Different types of learning tasks provide different amounts of support by providing different amounts of information on the given state, the goal state, and/or the solution.
Table 4.3 Short description of given(s), goal(s), acceptable solution, and problem-solving process for five real-life tasks.

Producing video content
• Given(s): Client's briefing containing requirements for the video with available resources (e.g., time, budget) and equipment.
• Goal(s): Finished video meeting the client's requirements.
• Acceptable solution: Actions (e.g., scripting, shooting video, editing) needed to capture video and audio and turn footage into the final video.
• Problem-solving process: Systematically making decisions about planning (i.e., preproduction), obtaining footage (i.e., production), and processing footage (i.e., postproduction) to create a video meeting the client's requirements.

Carrying out literature searches
• Given(s): Client's research question.
• Goal(s): List with relevant articles.
• Acceptable solution: One or more search queries that can be run on selected database(s) to produce a list of relevant articles.
• Problem-solving process: Applying rules-of-thumb to interview clients and to construct search queries.

Troubleshooting electrical circuits
• Given(s): Electrical circuit with unknown status.
• Goal(s): Tested electrical circuit with known status.
• Acceptable solution: Actions (e.g., traversing a fault tree, diagnostic testing) necessary to reach the goal state.
• Problem-solving process: Reasoning about the (mal)functioning of the system to come up with a solution.

Controlling air traffic
• Given(s): Radar and voice information reflecting a potentially dangerous situation.
• Goal(s): Radar and voice information reflecting a safe situation.
• Acceptable solution: Actions (e.g., ascertaining aircraft speeds and directions, giving directions to pilots) to maintain or reach a safe situation.
• Problem-solving process: Continuously devising strategies that may help maintain or reach a safe situation.

Designing buildings
• Given(s): List of requirements, building location.
• Goal(s): Blueprint and detailed building plan.
• Acceptable solution: Actions (e.g., requirement specification, drafting blueprint) to design a building plan.
• Problem-solving process: Creating alternative solutions (functional, aesthetic, artistic, etc.) within the constraints of given requirements.
At one end of the continuum are conventional tasks that confront the learner with only a given state and a set of criteria for an acceptable goal state (e.g., starting with a certain mixture, distill alcohol with a purity of 98%). These conventional tasks provide the learner with no support (i.e., no part of the solution is given), and the learner must generate a proper solution themselves. As shown at the top of Figure 4.6, conventional tasks come in different forms, depending, on the one hand, on their structure and the equivocality of their solution and, on the other, on the solver. A conventional task may be well structured and have only one acceptable solution (e.g., compute the sum of 15 + 34 in base 10). But, as explained, the Ten Steps will typically use learning tasks based on real-life tasks, yielding conventional tasks that take the form of ill-structured problems.

Figure 4.6 Different types of conventional tasks and worked-out examples.

Concerning the solver, a conventional task can be carried out by an individual learner or a group. We often see a group of learners carrying out the task—though this need not strictly be necessary—in problem-based learning, where the solution often takes the form of an explanation for a particular phenomenon (Loyens et al., 2011); in project-based learning, where the solution takes the form of advice or the design or production of a product that answers a—research or practical—question (Blumenfeld et al., 1991); or in interprofessional learning, where learners from different professional fields carry out a team-based professional task (Hammick et al., 2007). These types of group learning can be either collaborative (i.e., where there is an interdependence of tasks such that no one person in the team can work fully independently of the others) or cooperative (i.e., where each team member has a specific task to carry out independently of the other team members; Kirschner et al., 2004).
At the other end of the continuum, a worked-out example provides the highest level of support. This task type confronts the learner with a given state, a goal state, and a full solution to be studied or evaluated. Like conventional tasks, worked-out examples come in different forms, depending on their structure and/or equivocality and the solver. Worked-out examples based on real-life tasks often take the form of case studies or cases (see bottom of Figure 4.6). This is typically called the case method when a group of learners—though this need not be strictly necessary—study the cases (Barnes et al., 1994; for an example based on 4C/ID, see Daniel et al., 2018). A well-designed case study presents learners with descriptions of actual or hypothetical problem situations situated in the real world and requires them to actively engage with the given solution; for example, by asking them to criticize the given solution or to generate an alternative solution (Ertmer & Russell, 1995).
For the video-production example presented in Chapter 2, a case study might confront learners with a client briefing as well as information on the availability of resources and equipment (i.e., the 'given state'), the desired goal for the final video with its length and production quality (i.e., criteria for the 'goal state'), worked-out examples of intermediate solutions (e.g., production plans, scripts, storyboards, footage), and the final solution with the completed and edited video (i.e., the 'solution'; see the top row of Table 4.4). To arouse interest, it may be desirable to use a case study that describes a spectacular success story or failure. For example, the case study may present a commercial that completely failed because the target audience misunderstood it. A well-designed case study would require learners to answer questions that provoke deep processing of the problem state and the solution and to compare that case with other cases to induce generalized solutions. By studying the (intermediate) solutions, learners can get a clear idea of the organization of a particular domain. In our example, they get a good idea of the structure of storyboards and storylines.
The Ten Steps distinguishes several other types of learning tasks that
are—in terms of built-in task support—situated between conventional tasks
(ill-structured problems) and worked-out examples (case studies). Con-
structing such tasks involves manipulating the information given, the goal
state, and/or the solution, as indicated in Table 4.4 by the columns labeled
‘Given,’ ‘Goal,’ and ‘Solution.’
A reverse task, for example, presents both a goal state and an acceptable solution (indicated by a plus sign), but the learners have to trace the implications for different situations. In other words, they have to predict the given. In the context of troubleshooting, Halff (1993) described reverse-troubleshooting tasks as tasks that confront learners with a particular faulted or failed component. They are then required to predict the system's behavior based on this information (i.e., what they should have observed to reach a correct diagnosis themselves; usually the 'given' in a traditional troubleshooting task). Like case studies, reverse tasks focus learners' attention on useful solutions and require them to relate solution steps to given situations.
Table 4.4 Examples of different types of learning tasks for the complex skill 'producing video content,' ordered from high task support (i.e., worked-out example) to no task support (i.e., conventional task). A plus sign (+) under 'Given,' 'Goal,' or 'Solution' means that this aspect is presented to the learner; a designation (e.g., predict) denotes a required action from the learner.

Worked-out example (Given: +, Goal: +, Solution: +)
Learners receive a client briefing, the available resources, script, footage, and finished video. They must evaluate its quality.

Reverse task (Given: predict, Goal: +, Solution: +)
Learners receive a finished video. They must predict possible client briefings for which this video is relevant and which resources were available.

Nonspecific goal task (Given: +, Goal: define, Solution: find)
Learners receive a client briefing, the available resources, and the highly nonspecific goal to come up with as many different videos as possible that meet the client's requirements. They must script those videos.

Completion task (Given: +, Goal: +, Solution: complete)
Learners receive a client briefing, available resources, and the goal of producing a finished video that meets the client's requirements. They receive an unfinished video and must complete it.

Conventional task (Given: +, Goal: +, Solution: find)
Learners receive a client briefing and available resources. They carry out preproduction, production, and postproduction to create the final video that meets the client's requirements.

Tasks with a nonspecific goal (also called goal-free problems; Ayres, 1993) stimulate learners to explore relationships between solutions and the goals that those solutions can reach. Usually, learners receive goal-specific problems, such as "A car with a mass of 950 kg accelerating in a straight line from rest for 10 seconds travels 100 meters. What is the final velocity of the car?" This problem could easily be made goal-nonspecific by replacing the last line with "Calculate the value of as many of the variables involved here as you can." Here, the learner would calculate not only the final velocity but also the acceleration and the force exerted by the car at top acceleration. And if the word 'calculate' were replaced by 'represent,' the learner could also include graphs and the like. Nonspecific goal problems invite learners to move forward from the givens and to explore the problem space, which may help them construct cognitive schemata, in contrast to conventional goal-specific problems that force learners to work backward from the goal. Working backward is a cumbersome process for novice learners that may hinder schema construction (Sweller et al., 1998, 2019).
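For completeness, here is the worked solution a learner could produce for the goal-free version, assuming constant acceleration from rest (standard kinematics; the arithmetic is ours, not quoted from the book):

```latex
s = \tfrac{1}{2}at^{2} \;\Rightarrow\; a = \frac{2s}{t^{2}}
  = \frac{2 \times 100\,\mathrm{m}}{(10\,\mathrm{s})^{2}} = 2\,\mathrm{m/s^{2}}
v = at = 2\,\mathrm{m/s^{2}} \times 10\,\mathrm{s} = 20\,\mathrm{m/s}
F = ma = 950\,\mathrm{kg} \times 2\,\mathrm{m/s^{2}} = 1900\,\mathrm{N}
```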
Completion tasks give learners a given state, criteria for an acceptable goal state, and a partial solution. The learners must then complete the partial solution by determining the missing steps and adding them, either at the end or at one or more places in the middle of the solution. A particularly strong aspect of completion tasks is that learners must carefully study the partial solution provided; they cannot develop a complete solution if they do not do this. Completion tasks seem to be especially useful in design-oriented task domains and were originally developed in the domain of software engineering (Van Merriënboer, 1990; Van Merriënboer & de Croock, 1992), where learners had to fill in missing command lines in computer programs. Well-designed completion tasks ensure learners understand the partial solution and still have to perform a nontrivial completion.
The common element of these learning tasks is that they direct the learners' attention to problem states, acceptable solutions, and useful solution steps. This helps them mindfully abstract information from good solutions or use inductive processes to construct cognitive schemata that reflect generalized solutions for particular types of tasks. Research on learning tasks other than unguided conventional tasks has provided strong evidence that they facilitate schema construction and transfer of learning for novice learners (for an overview, see Van Merriënboer & Sweller, 2005, 2010). The bottom line is that having learners solve many problems independently is often not the best way to teach them problem solving (Kirschner et al., 2006; Sweller et al., 2007)! For novice learners, studying useful solutions and the relationships between the characteristics of a given situation and the solution steps applied is much more important for developing problem-solving, reasoning, and decision-making skills than solving equivalent problems. Only more experienced learners, who have already developed most of the cognitive schemata necessary to guide their problem solving, should work on conventional tasks.

4.7 Problem-Solving Guidance


Though different types of learning tasks provide different amounts of support, depending on the information presented in the givens, goals, and/or solutions, they do not deal with the problem-solving process needed to generate acceptable solutions (see the bottom square in Figure 4.5). Another way to provide support is to guide learners through the problem-solving process. To provide such guidance, one could specify the phases an expert typically goes through when performing the task or solving the problem (think of the five phases of ADDIE discussed in Chapter 3) as well as the rules-of-thumb that may be helpful to complete each of the phases. Such a Systematic Approach to Problem Solving (SAP) may result from an analysis of cognitive strategies (see Step 5 in Chapter 8).
Problem-solving guidance can take the form of modeling, process worksheets, performance constraints, and/or tutor guidance. As shown in Table 4.5, designers can combine different types of guidance with different types of built-in support. For our video-production example, Table 4.5 provides examples of combinations of built-in support and guidance for a learning task with the following properties:

• The client’s briefng presents the given situation: Create a 3-minute pro-
motional video for a local bakery called ‘Breaducation’ specializing in
pastries and desserts. The promotional video will showcase its products
and create a connection with the local community.
• The goal situation consists of a promotional video highlighting the bak-
ery’s products and character that can be shared on the website and social
media channels, attracting new customers and generating buzz for the
business.
• The problem-solving process leading to the solution consists of preproduc-
tion (e.g., creating a production plan including a storyboard outlining the
video’s shots and sequences), production (e.g., recording footage of the
bakers creating pastries and desserts, the variety of available wares, cus-
tomers enjoying their treats, the bakery’s interior, as well as conducting
brief interviews with the owner and satisfed customers to add a personal
touch), and postproduction (e.g., editing the video to create a seamless
and engaging narrative including the interviews and footage of pastries
and desserts, text overlays or graphics highlighting key information such
as the bakery’s location, contact details, and opening hours, accompanied
by cheerful, uplifting background music).

In the upper left cell of Table 4.5 are learning tasks that provide maximum support and guidance. Along with worked-out examples describing the solution for a given state and goal state, they make the expert's problem-solving process visible (i.e., modeling examples). In the bottom right cell of Table 4.5 are learning tasks that provide minimal support and guidance: conventional, unsupported tasks without any guidance. The next sections describe the different types of guidance and provide further examples of how to combine guidance with different types of built-in support.
Table 4.5 Combinations of built-in task support and guidance for the learning task "Create a promotional video (i.e., promo) for a local bakery."

The columns describe five levels of guidance:
• Modeling: shows and explains the problem-solving process that may help reach a solution.
• Process worksheet: provides critical phases and helpful rules-of-thumb to reach a solution.
• Performance constraints: make undesired actions or behaviors unavailable.
• Tutoring: provides different types of guidance as needed.
• None: no guidance.

Worked-out example (describes the solution for a given state and goal state)
• Modeling (modeling example): Learners observe an expert thinking aloud during the preproduction, production, and postproduction of the promo that serves as a worked-out example.
• Process worksheet: Learners receive the bakery's briefing, the script, footage, and the finished promo. They must evaluate their quality following a process worksheet identifying the critical phases and decisions in the task.
• Performance constraints: Learners receive the bakery's briefing, the script, footage, and the finished promo and must evaluate their quality, but each evaluation must be approved before proceeding to the next.
• Tutoring: Learners receive the bakery's briefing, the script, footage, and the finished promo. They must evaluate their quality under the guidance of a tutor.
• None: Learners receive the bakery's briefing, the script, footage, and the finished promo. They must evaluate their quality.

Reverse task (asks for possible given states for a goal state and solution)
• Modeling: Learners observe an expert who receives the final promo and thinks aloud while predicting possible briefings for which this promo is relevant.
• Process worksheet: Learners receive a finished promo. They must predict possible briefings for which this promo is relevant by following a process worksheet.
• Performance constraints: Learners receive a finished promo. They predict possible scripts, which require approval before predicting possible briefings for which this promo is relevant.
• Tutoring: Learners receive a finished promo. They must predict possible briefings for which this promo is relevant under the guidance of a tutor.
• None: Learners receive a finished promo. They must predict possible briefings for which this promo is relevant.

Nonspecific goal task (asks for possible solutions for different goals for a given state)
• Modeling: Learners observe an expert who is given the bakery's briefing and thinks aloud while brainstorming and scripting as many different promos as possible that meet the requirements.
• Process worksheet: Learners receive the bakery's briefing and must brainstorm and script as many different promos as possible that meet the requirements, following a process worksheet.
• Performance constraints: Learners receive the bakery's briefing. They brainstorm as many ideas for scripts as possible, requiring approval before writing the scripts.
• Tutoring: Learners receive the bakery's briefing and must brainstorm and script as many different promos as possible that meet the requirements under the guidance of a tutor.
• None: Learners receive the bakery's briefing and must brainstorm and script as many different promos as possible that meet the requirements.

Completion task (gives a partial solution to be completed for a given state and goal state)
• Modeling: Learners observe an expert who receives the bakery's briefing, a partial script, and unfinished footage and thinks aloud while completing the promo to meet the requirements.
• Process worksheet: Learners receive the bakery's briefing, a partial script, and unfinished footage with the goal of completing the promo to meet the requirements by following a process worksheet.
• Performance constraints: Learners receive the bakery's briefing, a partial script, and incomplete footage. They complete preproduction, production, and postproduction, but each phase must be approved before proceeding to the next.
• Tutoring: Learners receive the bakery's briefing, a partial script, and unfinished footage with the goal of completing the promo to meet the requirements under the guidance of a tutor.
• None: Learners receive the bakery's briefing, a partial script, and unfinished footage with the goal of completing the promo to meet the requirements.

Conventional task (asks for a solution for a given state and goal state)
• Modeling (imitation task): Learners observe an expert thinking aloud during preproduction, production, and postproduction. They imitate the expert while making a promo for an ice cream shop.
• Process worksheet: Learners receive the bakery's briefing. They carry out preproduction, production, and postproduction to create the final promo that meets the requirements by following a process worksheet.
• Performance constraints: Learners receive the bakery's briefing. They carry out preproduction, production, and postproduction, but each phase must be approved before proceeding to the next.
• Tutoring: Learners receive the bakery's briefing. They carry out preproduction, production, and postproduction to create the final promo that meets the requirements under the guidance of a tutor.
• None (unguided conventional task): Learners receive the bakery's briefing. They carry out preproduction, production, and postproduction to create the final promo that meets the requirements.

Modeling

Maximum guidance is provided by modeling, where learners observe professionals carrying out the complex task while simultaneously explaining why they do the things they do. Modeling, thus, not only shows a worked-out example but also pays explicit attention to the problem-solving processes used to develop this example; that is, to reach an acceptable solution (Van Gog et al., 2006, 2008). So-called modeling examples or process-oriented examples combine modeling with worked-out examples (upper left cell in Table 4.5). According to cognitive apprenticeship (Collins et al., 1989), where modeling is one of the main features, it is essential to present a credible, appropriate role model.
Thinking aloud during the solution process may be a very helpful technique for bringing the hidden mental problem-solving processes of the professional into the open (Van Gog et al., 2005). Thinking-aloud protocols yield the information necessary to specify the process information in a modeling example, or the transcript may be directly presented to the learners. A related approach in perceptual domains (e.g., medical diagnosis, air traffic control) uses eye-movement modeling examples (Van Gog et al., 2009). Suppose that students in medicine must learn to diagnose infant patients displaying behavioral patterns that may signal epileptic seizures. Learners can study videos of infant patients suffering from these epileptic seizures. Eye-movement modeling examples would augment such videos with both a verbal explanation by an expert of what they are looking at/for (and why) and the gaze focus of this expert superimposed onto the video; thus, for each moment in time, the learner can see what the expert is looking at and in what sequence (Kok & Jarodzka, 2017). Learners learn more from studying eye-movement modeling examples than from traditional videos (Jarodzka et al., 2012). As with worked-out examples, learners who study modeling examples can either be confronted with the 'thinking' or the 'looking' involved in the case's narrative or be asked to answer questions that provoke deep processing and abstraction of cognitive strategies supplemental to the narrative. By studying the modeling example, learners can understand the problem-solving phases professionals go through and the rules-of-thumb they use to overcome impasses and successfully complete each phase. Of course, such examples can also be 'canned,' using video or multimedia materials in which the model solves a problem while simultaneously explaining what they are doing (Hoogerheide et al., 2016).
Modeling combined with a conventional task is also called an imitation task (see the bottom left cell in Table 4.5). It presents a conventional task in combination with a modeling example of an analogous task. The problem-solving process presented in the modeling example provides a blueprint for approaching the new task, focusing on possibly useful solution steps. The required imitation is a sophisticated cognitive process where learners must identify the analogy between the modeling example and the given task and use the example to map a new solution (Vosniadou & Ortony, 1989). Imitation tasks are quite authentic because experts often rely on their knowledge of specific cases to guide their problem-solving behavior on new problems—a process known in cognitive science as case-based or analogical reasoning. For the video-production example, an imitation task might first take the form of a 1- or 2-day internship, allowing the learner to shadow and observe an experienced professional video content producer. These observations would give the learner a complete picture of the whole task and include conversations with the professional about decisions made during that video's preproduction, production, and postproduction. The professional should explain what they are doing and why they do it in a particular way. Subsequently, the learner must develop a similar (but not identical) video by going through the same phases and applying the same rules-of-thumb as the professional.

Process Worksheets

A process worksheet (Van Merriënboer, 1997; Van Gog et al., 2004) provides learners with the phases they need to go through to solve a problem and guides them through the problem-solving process. In other words, it provides them with a SAP, including rules-of-thumb for carrying out the learning task.
A process worksheet may be as simple as a sheet of paper indicating the problem-solving phases (and, if applicable, subphases) that might help carry out the learning task. The learner uses it as a guide for solving the problem. The worksheet provides rules-of-thumb that may help complete each phase. These rules-of-thumb may take the form of statements (e.g., when preparing a presentation, consider the audience's prior knowledge and take it into account) or guiding questions (e.g., what aspect(s) of your audience should you take into account when preparing a presentation and why?). An advantage of using the interrogatory form (i.e., epistemic questions) is that learners are provoked to think about the rules-of-thumb. Furthermore, if they write down their answers to these questions on the process worksheet, a teacher can observe their work and provide feedback on the applied problem-solving strategy. It should be clear that both the phases and the rules-of-thumb are heuristic: They may help the learner to solve the problem, but they do not necessarily do so. This distinguishes them from algorithmic rules or procedures. Table 4.6 provides an example of phases in problem solving and rules-of-thumb used in a course for law students being trained to plead a case in court.
Table 4.6 Phases and rules-of-thumb for the complex skill 'preparing a plea.'

1. Order the documents in the file: Try to order the documents chronologically, categorically (e.g., legal documents, letters, notes), or by relevance.
2. Get acquainted with the file: Answer questions such as "Which subdomain of law is relevant here?" or "How do I estimate my client's chances?"
3. Study the file thoroughly: Answer questions such as "What is the specific legal question here?", "What sections of the law are relevant in this case?", or "What legal consequence is most convenient for my client?"
4. Analyze the situation for preparing and conducting the plea: Answer questions such as "Which judge will try the case?", "Where will the trial take place?", or "At what time of day?"
5. Determine a useful strategy for preparing and conducting the plea: Weigh the importance of the results of phases 3 and 4 and consider your own capabilities (e.g., your plea style) when deciding what to include in your plea.
6. Determine the way to proceed from strategy to plea: Write a draft plea note in spoken language using the results of phases 3 and 5. Always remember your goal and use a well-argued style to express yourself.
7. Determine the way to proceed from the plea note to conducting the plea: Transform the plea note into index cards containing the basic outline of your plea and practice the plea with the index cards, paying attention to verbal and nonverbal aspects of behavior.
8. Make the plea and practice it: Ask friends to give you feedback on your plea and record your practice pleas on videotape for self-evaluation.
9. Plead in court: Pay attention to the reactions of the various listeners and adapt your style to them.

Source: Adapted from Nadolski et al., 2001.

For the video-production example, a process worksheet for a conventional task (bottom cell in the column 'process worksheet' in Table 4.5) would specify the main phases (i.e., preproduction, production, postproduction) and subphases necessary to make a video as well as rules-of-thumb that may help to complete each phase and subphase. It will systematically guide the learners through the problem-solving process. Process worksheets that guide the cognitive process will take a slightly different form for learning tasks with built-in support than for conventional tasks. For a reverse task, learners must figure out for which given situations a particular video offers a good solution (see the cell 'process worksheet/reverse task' in Table 4.5). The process worksheet might then prescribe phases to analyze the criteria for an acceptable goal state, analyze the given solution (i.e., the finished video and possibly intermediate solutions such as unedited footage), and predict the given state(s) for which the video offers a good solution. Rules-of-thumb may help learners complete each phase.
Process worksheets can be as simple as a piece of paper specifying phases, subphases, and rules-of-thumb, but they can also be complex and highly sophisticated. Computer-supported applications can add new functionalities to traditional process worksheets. Some SAPs, for example, might be branched such that certain phases differ for different problems. With the aid of a computer, it is possible to change or adapt an electronic process worksheet according to the learners' decisions, their progress through the problem solution, and the outcomes of the completed phases. In addition to providing a process worksheet to learners, one may also offer cognitive tools (Herrington & Parker, 2013; Jonassen, 2000) that help them perform the problem-solving activities for a particular phase. Cognitive tools are not pieces of specialized software that teach a subject (i.e., learning from the tool) but computer programs and applications that facilitate meaningful professional thinking and working (i.e., learning with the tool; Kirschner & Davis, 2003; Kirschner & Wopereis, 2003). Such tools invite and help learners approach the problem as an expert would. For example, a cognitive tool helping learners to prepare a plea could offer an electronic form for performing a situational analysis (Phase 4 in Table 4.6), or another, more advanced tool could offer facilities for evaluating video recordings of pleas on their strong and weak points (Phase 8).
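As a rough illustration of such an adaptive, electronic process worksheet, the sketch below encodes phases with their rules-of-thumb and lets the outcome of one phase select the next (a branched SAP). The data structure and names are our own invention for illustration, not an existing tool.

```python
# Rough sketch of a branched electronic process worksheet (SAP): each phase
# carries rules-of-thumb, and the outcome of a phase selects the next phase.
# Structure and names are illustrative inventions, not an existing tool.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    rules_of_thumb: list[str]
    next_phase: dict[str, str] = field(default_factory=dict)  # outcome -> phase key

worksheet = {
    "order_documents": Phase(
        "Order the documents in the file",
        ["Order chronologically, categorically, or by relevance."],
        {"done": "get_acquainted"},
    ),
    "get_acquainted": Phase(
        "Get acquainted with the file",
        ["Which subdomain of law is relevant here?"],
        {"clear case": "study_file", "unclear case": "order_documents"},
    ),
    "study_file": Phase(
        "Study the file thoroughly",
        ["What is the specific legal question here?"],
    ),
}

# Walk the worksheet with canned outcomes standing in for learner decisions.
outcomes = {"order_documents": "done", "get_acquainted": "clear case"}
key = "order_documents"
while key in worksheet:
    phase = worksheet[key]
    print(phase.name, "-", "; ".join(phase.rules_of_thumb))
    key = phase.next_phase.get(outcomes.get(key, ""), "")
```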

Performance Constraints

Process worksheets are designed to guide learners through the problem-solving process, but learners are free to use them and, if they do, to skip phases, ignore rules-of-thumb, and so forth. A more directive approach to giving guidance uses performance constraints. The basic idea is to make irrelevant actions in a particular phase of the problem-solving process unavailable to the learners. Learners can only perform those actions after completing the previous phase or phases and when they start working on the new phase for which the actions are relevant. For instance, law students learning to prepare a court plea would not be allowed to start reading documents (Phase 2) before they have acceptably ordered all documents (Phase 1), or they would not be allowed to use the electronic form for performing a situational analysis (a cognitive tool for Phase 4) before thoroughly studying the whole file (Phase 3; Nadolski et al., 2006).
For the video-production example, performance constraints for a conventional task (bottom cell in the column ‘performance constraints’ in Table 4.5) may, for instance, require approval of the preproduction phase by a teacher or supervisor before learners start the production phase and require approval of the production phase before they start the postproduction phase. Performance constraints that guide the cognitive process will be different for learning tasks with built-in support than for conventional tasks. For a nonspecific goal task, learners must generate as many—intermediate—solutions as possible (see the cell ‘performance constraints/nonspecific goal task’ in Table 4.5). Performance constraints may then require the learners to give an accurate summary of the given state and define the goal state before they can start their brainstorming. Well-designed performance constraints might also reduce the number of phases or decrease the specificity of each phase if learners acquire more expertise (Nadolski et al., 2005). Because performance constraints are more directive than process worksheets, they may be particularly useful for early phases in the learning process.
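A minimal sketch of how an electronic task environment might enforce such constraints is given below, assuming a simple linear sequence of phases; the class and phase names are hypothetical, and real systems would track approvals in richer ways.

```python
# Sketch of performance constraints: actions tied to later phases stay
# unavailable until all earlier phases have been completed and approved.
PHASES = ["preproduction", "production", "postproduction"]

class ConstrainedTask:
    def __init__(self):
        self.completed = set()  # phases approved so far

    def can_start(self, phase: str) -> bool:
        """A phase is available only if all preceding phases are completed."""
        index = PHASES.index(phase)
        return all(p in self.completed for p in PHASES[:index])

    def approve(self, phase: str):
        """E.g., a teacher or supervisor signs off on the phase."""
        if not self.can_start(phase):
            raise PermissionError(f"Finish earlier phases before '{phase}'.")
        self.completed.add(phase)

task = ConstrainedTask()
print(task.can_start("production"))   # False: preproduction not yet approved
task.approve("preproduction")
print(task.can_start("production"))   # True: the next phase is now available
```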

Tutor Guidance

The different types of problem-solving guidance discussed can also be given by an expert or teacher. When they fulfill this role, they are typically called a ‘tutor’ in the context of problem-based learning, a ‘coach’ in the context of project-based learning, or a ‘workplace supervisor’ in the context of apprenticeship learning. A tutor can closely monitor the learner’s problem-solving process and take appropriate actions when deemed necessary. For example, if there is an impasse in the problem-solving process, the tutor may demonstrate effective problem-solving strategies or, in perceptual domains, point out where the learners should look (cf. modeling examples), provide useful rules-of-thumb, lead learners into the next problem-solving phase (cf. process worksheets), or stop them to ensure that they complete one phase before starting on the next (cf. performance constraints).
For the video-production example, tutoring for a conventional task (bottom cell in the column ‘tutoring’ in Table 4.5) might take the form of a supervisor who monitors and guides the learner during a video’s preproduction, production, and postproduction phases. As explained earlier, tutoring in learning tasks with built-in support might differ slightly from tutoring in conventional tasks. For a completion task, learners must finish incomplete videos or intermediate solutions such as scripts and storyboards (see the cell ‘tutoring/completion task’ in Table 4.5). The tutor will then not only help learners with the preproduction, production, and postproduction of the missing parts but will also help them study and interpret the available partial solutions. A great advantage of one-on-one tutor guidance is that real-time monitoring of the problem-solving process makes it possible to notice and follow specific difficulties that learners encounter during problem solving. A disadvantage, however, is the amount of staffing involved. Due to its high complexity, tutor guidance for whole-task performance is usually given by humans, but this might change over time with the upsurge of artificial intelligence (see the section on future developments in Chapter 16).

4.8 Scaffolding Support and Guidance


Scaffolding is typically seen as providing an optimal level of support and guidance and fading that support and guidance when appropriate (Reiser, 2004), as in a scaffold that supports the construction of a new building and that is slowly taken away as the building nears completion. Initially, support and guidance enable learners to achieve goals or carry out actions not achievable without that support and guidance. When the learner can achieve the desired goal or carry out the required action, the support and guidance are gradually diminished or removed until no support and guidance are needed. Because irrelevant, ineffective, excessive, or insufficient support and guidance can hamper the learning process (by adding extraneous cognitive load to the learner), it is critical to determine the right type and amount of learner support and guidance needed and to fade it at the appropriate time and rate. This is similar to Vygotsky’s (1978) ‘zone of proximal development’: To optimize learning, learning tasks must be challenging and a little beyond the reach of the learner, but thanks to available guidance and support, the learner can complete the tasks (see also Step 3 on sequencing tasks).
Scaffolding complex performance does not ‘direct’ learners, as is the case when teaching an algorithm, but rather guides them during their work on rich learning tasks. Modeling the use of cognitive strategies by thinking aloud or eye-movement modeling examples; providing process worksheets, guiding questions, and checklists; applying performance constraints; and giving parts of the solution as is done in several types of learning tasks are all examples of such problem-solving support and guidance (see Table 4.7).
There is a strict necessity for scaffolding due to the expertise reversal effect (for examples of this effect, see Kalyuga et al., 2003, 2012). Research on expertise reversal indicates that highly effective instructional methods for novices can lose effectiveness and even have negative effects when used with more knowledgeable learners (i.e., experts) and vice versa. There is overwhelming evidence that unguided conventional tasks force novice learners to use weak problem-solving methods such as means-ends analysis, where they recursively search for means that reduce the difference between the current state and a goal state. This weak-methods approach yields a very high cognitive load and bears little relation to schema-construction processes concerned with learning to recognize problem states and their associated solution steps. Thus, for novices, learning to carry out unguided conventional tasks is different from and incompatible with how they are ‘supposed to be’ carried out; that is, how experts carry them out. Giving novices proper support and guidance is necessary for learning (van Merriënboer et al., 2003).

Table 4.7 Scaffolding techniques and type of fading.

• Modeling cognitive strategies by thinking aloud: Begin by clarifying all decision-making, problem-solving, and reasoning processes in detail but reduce the level of detail as learners acquire more expertise.
• Modeling cognitive strategies by eye-movement modeling examples: Begin by giving video examples with dynamic information showing the eye-movement patterns of experts but remove the eye-tracking information in a later stage.
• Providing process worksheets, guiding questions, or checklists: Begin by presenting the whole process and then slowly reduce the number of (sub)phases, questions, and rules-of-thumb given to the learner.
• Applying performance constraints: Begin by blocking all learner actions not necessary to reach a solution and continuously make more and more actions available to the learner.
• Examples or parts of the solution: Work from case studies or fully worked examples via completion assignments towards conventional tasks. This fading guidance is also known as the ‘completion strategy.’
For learners with more expertise, support and guidance may not be necessary or may even be detrimental to learning because they have already acquired the cognitive schemata that guide their problem solving, reasoning, and decision-making processes. They have their own proven personal and/or idiosyncratic ways of working. These cognitive schemata may interfere with the examples, process worksheets, or other means of support and guidance provided. Rather than risking conflict between the experts’ available cognitive schemata and the support and guidance provided by the instruction, it is preferable to greatly reduce or even eliminate the support and guidance. This means providing large amounts of support and guidance for learning tasks early in a task class (when the learners are novices for the tasks at a particular level of complexity). In contrast, no support and guidance should be given for the final learning tasks in this task class (when these same learners have gained the necessary expertise at this level of complexity).

One especially powerful approach to scaffolding is known as the completion strategy (Van Merriënboer, 1990; Van Merriënboer & de Croock, 1992), where learners first study cases, then work on completion tasks, and finally carry out conventional tasks (Appendix 2 presents an example of this strategy). Completion tasks offer a bridge between case studies and conventional tasks, with case studies essentially serving as completion tasks that provide a complete solution and conventional tasks representing completion tasks with no provided solution. This strategy was implemented in an e-learning program for introductory computer programming (Van Merriënboer & Luursema, 1996). Learners studied, evaluated, and tested existing computer programs at the beginning of their training to develop cognitive schemata of the templates (i.e., stereotyped patterns of code) used in computer programs. During the training, learners had to complete larger and larger parts of given computer programs. The completion tasks were (dynamically) constructed such that learners received a partial program consisting of templates for which they had not yet constructed cognitive schemata and had to complete this partial program with templates for which they already had useful schemata in memory. Finally, they had to design and write full computer programs from scratch independently. Experimental studies have consistently shown positive effects on learning and transfer in several other domains (Nückles et al., 2010; Renkl & Atkinson, 2003).
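To give a feel for what such a task can look like, here is a hypothetical completion task in the style of the strategy just described (it is not taken from the original program): the given part contains a template that is new to the learner and can be studied, while the part to be completed requires a template the learner already knows.

```python
# Hypothetical completion task for an introductory programming course.
# The GIVEN part shows a template that is new to the learner and can be
# studied; the learner completes the program using a familiar template.

def read_scores():
    """GIVEN: an input-validation template for the learner to study."""
    scores = []
    while True:
        entry = input("Score (or 'stop'): ")
        if entry == "stop":
            return scores
        if entry.isdigit() and 0 <= int(entry) <= 100:
            scores.append(int(entry))
        else:
            print("Please enter a whole number between 0 and 100.")

def average(scores):
    """TO COMPLETE: use the familiar 'accumulate and divide' template,
    guarding against an empty list."""
    raise NotImplementedError  # the learner replaces this line

if __name__ == "__main__":
    print("Average:", average(read_scores()))
```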

4.9 Summary of Guidelines


• If you design learning tasks, you need to take real-life tasks as a starting point for design.
• If you design task environments, you need to consider starting with safe and simulated task environments and work via increasingly higher-fidelity task environments toward the real task environment.
• If you design learning tasks, their psychological fidelity should always be high, but the functional and physical fidelity of the environment may gradually change from low to high to limit the amount of seductive and irrelevant details for novice learners.
• If you design a sequence of equally complex learning tasks, you need to ensure that they vary on the dimensions that also vary in the real world and sequence them in a randomized order.
• If you design learner support for learning tasks, you must distinguish between built-in task support and problem-solving guidance.
• If you design built-in task support, you need to consider using case studies, reverse tasks, tasks with nonspecific goals, and completion tasks.
• If you design problem-solving guidance, you need to consider the use of modeling examples, process worksheets, performance constraints, and tutor guidance.
• If you design a sequence of learning tasks, you must ensure that learners start with learning tasks with a high level of support and guidance but end with tasks without support and guidance (this is called ‘scaffolding’).

Glossary Terms

Authentic task; Case method; Case study; Cognitive tool; Completion strategy; Completion task; Contextual interference; Conventional task; Desirable difficulty; Expertise reversal effect; Fidelity; Functional fidelity; Guidance; Imitation task; Interprofessional learning; Means-ends analysis; Modeling example; Nonspecific goal task; Performance constraint; Physical fidelity; Primary medium; Problem-based learning; Process worksheet; Project-based learning; Psychological fidelity; Reverse task; Scaffolding; Support; Variability of practice; Worked-out example
Chapter 5

Step 2
Design Performance Assessments

5.1 Necessity
An integrated set of performance objectives provides standards for accept-
able performance. Assessment instruments use these standards for perfor-
mance assessment. We strongly recommend carrying out this step.

In the Netherlands, children aiming to get their first swimming diploma must complete a range of specific requirements set by the Royal Dutch Swimming Federation. These performance objectives include, but are not limited to, having to, fully clothed, tread water and then swim 12.5 meters, swimming 4 × 25 meters with two different strokes (alternating between breaststroke and backstroke), and swimming underwater for 3 meters. Each of these ‘performances’ is further defined, for example, concerning what constitutes ‘fully clothed’ (i.e., pants, shirt, and shoes), that they have to swim underwater through a ring, how deep the ring is placed underwater (3 meters), that they have to turn along the axis of their body between the different strokes, that there is a specific minimum time for treading water (15 seconds) and a maximum time for completing the two strokes, et cetera. Children get this diploma only when they have met all these standards for acceptable performance.
This chapter discusses identifying, formulating, and classifying performance objectives and their use for developing performance assessments. As specified in the previous chapter, learning tasks already give a good impression of what learners will do during and after the training. But performance objectives give more detailed descriptions of the desired ‘exit behaviors,’ including the conditions under which the complex skill needs to be performed (e.g., fully clothed), the tools and objects that can or should be used during performance (e.g., through a hoop), and, last but not least, the standards for acceptable performance (e.g., 25 meters, turning along their body axis). According to the Ten Steps, learners learn from and practice almost exclusively on whole tasks to help them reach an integrated set of performance objectives representing many aspects of the complex skill. Performance objectives help designers differentiate the many aspects of whole-task performance and connect the front end of training design (i.e., what do learners need to learn?) to its back end (i.e., did they learn what they were supposed to?). Performance assessments make it possible to determine whether standards have been met and to provide informative feedback to learners.
The structure of this chapter is as follows. Section 2 describes skill decomposition as identifying relevant constituent skills and their interrelationships. The result of this decomposition is a skill hierarchy. Section 3 discusses formulating a performance objective for each constituent skill in this hierarchy. The whole set of objectives provides a concise description of the contents of the training program and sets the standards for acceptable performance. Section 4 delves into categorizing performance objectives related to specific constituent skills, classifying them as nonrecurrent or recurrent. Nonrecurrent constituent skills always involve problem solving, reasoning, or decision making and require presenting supportive information. In contrast, recurrent constituent skills involve applying rules or procedures and require presenting procedural information. The classification further encompasses to-be-automated recurrent constituent skills requiring part-task practice and double-classified constituent skills. Moreover, certain constituent skills will not be taught because learners have already mastered them. The performance objectives for acquiring skills serve multiple purposes. They form the basis for discussing the training program’s content with different stakeholders, give input to further analysis and design activities, and, importantly, provide the standards for performance assessments. Section 5 discusses the design of these performance assessments. This includes the specification of scoring rubrics for assessing learners’ performance and their progress when implemented in development portfolios. The chapter concludes with a summary.

5.2 Skill Decomposition


Skill decomposition—splitting a skill into all its components or basic elements—leads to a description of a complex skill’s constituent skills and the interrelationships between them. The result of this decomposition process is a skill hierarchy. As discussed in Section 3.4, the Ten Steps assumes designers have conducted a needs assessment. This has led to the conclusion that there is a performance problem to solve with training and a preliminary overall learning goal for teaching the complex skill involved. This overall learning goal is a statement of what learners will be able to do after they have completed the training program. In an iterative design process, the overall goal helps decompose the complex skill, while the decomposition helps specify the overall goal. The real-life tasks and learning tasks identified in Step 1 will be particularly helpful in facilitating brainstorming about a good decomposition of the complex skill into its constituent skills and further specification of the overall learning goal. For instance, the preliminary learning goal for a training program on producing video content might be:

After the training program, participants are able to independently plan, produce, and edit high-quality video content for a variety of purposes, effectively interact with people being filmed, and handle all aspects of video production, including scripting, storyboarding, camera operation, lighting, and video and sound editing, using appropriate tools and equipment, to meet the functional, creative, and technical needs of client projects.

Skill Hierarchy

Developing a skill hierarchy starts with the overall learning goal (i.e., the top-level skill), which provides the basis for identifying the more specific constituent skills that enable the performance of the whole skill. The idea behind this is that constituent skills lower in the hierarchy enable the learning and performance of skills higher in the hierarchy. This vertical, enabling relationship is also called a prerequisite relationship (Gagné, 1968). Figure 2.2 in Chapter 2 presented a skill hierarchy for producing video content. It is repeated here in Figure 5.1 with a classification of constituent skills.
Figure 5.1 A hierarchy of constituent skills for the complex skill ‘producing video content,’ with a classification of its
constituent skills.

This hierarchy indicates that, to be able to produce video content, the learner must be able to create a production plan; that, to create a production plan, the learner must be able to develop a story for the video; that, to develop a story, the learner must be able to write a script; and so forth. Thus, the basic question to reach the next, lower level in a skill hierarchy is: Which more specific skills are necessary to perform the more general skill under consideration? Levels may be added to the hierarchy until ‘simple’ skills are identified—skills that can be further analyzed with regular task-analytical techniques (see Chapters 8–9 and 11–12).
When expanding one particular level of a skill hierarchy, the basic question is: Are there any other skills necessary to carry out the skill under consideration? This horizontal relationship is indicated from left to right. Figure 5.1 indicates that, to be able to produce footage, one must be able to interact with people being filmed, collaborate with crew members, and shoot video. This horizontal relationship can be a:

• Temporal relationship. This—default—relationship indicates that the skill on the left-hand side is carried out before the skill on the right-hand side. For instance, when you drive a car, you start the engine before you drive away (see Figure 5.2). In Figure 5.1, ‘creating a production plan’ is done before ‘producing footage.’
• Simultaneous relationship. This relationship indicates that the skills may be carried out at the same time. For example, you will usually use the gas pedal and the steering wheel simultaneously when you drive a car. In Figure 5.1, ‘collaborating with the crew’ may be done simultaneously with ‘shooting video.’ A double-headed arrow between the skills indicates a simultaneous relationship.
• Transposable relationship. This relationship indicates that the skills can be carried out in any desired order. For instance, when you drive a car, you may switch off the engine and then set the hand brake, or you could do this the other way around. In Figure 5.1, ‘color correcting,’ ‘adding sound and music,’ and ‘adding effects, titles, and graphics’ can be done in any order (or even simultaneously). A double-headed dotted arrow between the skills indicates the transposable relationship.

Figure 5.2 Common relationships in a skill hierarchy.

Figure 5.2 summarizes the main relationships distinguished in a skill hierarchy. In principle, such a hierarchy can also contain relationships other than horizontal and vertical relations. In a heterarchical organization, the other relationships are limited to the same horizontal level in the hierarchy; thus, a network of constituent skills characterizes each level. A retiary organization (i.e., resembling or forming a net or web) can contain a complex mapping of relationships between any two elements; thus, the hierarchy becomes a complex network in which constituent skills may have nonarbitrary relationships with any other constituent skills (this is also called a ‘competence map’; Stoof et al., 2006, 2007). For instance, you may specify a similarity relationship between two constituent skills, indicating that they can be easily mixed up, or you may specify an input-output relationship, indicating that the performance of one skill provides input for performing another. The identification of such relationships may be helpful for further training design.
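Designers who keep skill hierarchies in software can encode these relationships directly. The sketch below is one possible encoding, with hypothetical names: vertical edges capture the prerequisite relationship, and pairs of sibling skills default to a temporal relationship unless labeled otherwise.

```python
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List

@dataclass
class Skill:
    """A node in the skill hierarchy. Children are the more specific
    constituent skills that enable this skill (vertical relationship)."""
    name: str
    children: List["Skill"] = field(default_factory=list)
    # Horizontal relationships between named pairs of children; unlabeled
    # pairs default to the temporal, left-to-right relationship.
    relations: Dict[FrozenSet[str], str] = field(default_factory=dict)

    def relation(self, a: str, b: str) -> str:
        return self.relations.get(frozenset({a, b}), "temporal")

produce_footage = Skill(
    "produce footage",
    children=[
        Skill("interact with people being filmed"),
        Skill("collaborate with crew members"),
        Skill("shoot video"),
    ],
    relations={
        frozenset({"collaborate with crew members", "shoot video"}):
            "simultaneous",
    },
)

print(produce_footage.relation("collaborate with crew members", "shoot video"))
# -> simultaneous
print(produce_footage.relation("interact with people being filmed",
                               "shoot video"))
# -> temporal (the default)
```

A retiary organization or competence map could be handled by the same dictionary of labeled pairs (e.g., ‘similarity’ or ‘input-output’) without restricting pairs to siblings.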

Data Gathering

Building a skill hierarchy is typically done by professionals or subject-matter experts in the task domain or by an instructional designer in close cooperation with professionals or subject-matter experts. To this end, domain experts are best confronted with real-life tasks or related learning tasks (from Step 1) so that they can explain how to perform those tasks. Alternatively, they can carry out the tasks or parts of them while thinking aloud, allowing close observation, or they can be video recorded while performing the tasks and then questioned afterward about what they did and why they did it while looking at the recording (i.e., cued retrospective reporting; Van Gog et al., 2005). Professionals are necessary to identify the constituent skills and verify the skill hierarchy in several validation cycles. As a nonexpert in the domain being taught but with expertise in instruction and its design, the designer has a key role. The designer tries to help the expert overcome what is known as the curse of knowledge/expertise. The curse of knowledge/expertise is a cognitive bias that occurs when someone communicating with others—often, the expert—assumes that the others have the knowledge that they need to understand what is being communicated. In other words, they assume they all share a background and understanding. A second role is to help decompose what the expert says that they do. Many skills that the expert carries out are so automated that the expert is no longer aware of the underlying constituent skills. In these validation cycles, the designer, thus, checks whether the hierarchy contains all the constituent skills necessary to learn and perform the complex cognitive skill and whether lower-level skills facilitate the learning and performance of those higher in the hierarchy. If this is not the case, the hierarchy needs to be refined or reorganized. Skill decomposition is a difficult and time-consuming process that typically requires several validation cycles and is frequently updated after working on other steps.
Three guidelines that help build a skill hierarchy are to focus on (a) simple versions of the task before analyzing more complex versions of it, (b) objects and tools used by the task performer, in addition to looking at overall performance, and (c) deficiencies in novice performance, and thus, do not only focus on desired task performance. When it comes to task complexity, which, as stated earlier, is a function of the number of elements in a task and the interactivity between those elements, it is best to first confront analysts with relatively simple real-life tasks. Only introduce more complex tasks after all constituent skills for the simpler tasks have been identified. Step 3 (described in the next chapter) provides approaches for distinguishing between simple and complex versions of the whole task.
Objects refer to those things that are changed or attended to while successfully carrying out a task. As a task performer switches their attention from one object to another, this often indicates that different constituent skills are involved. If, for example, a video producer switches their attention from the camera lenses to a light meter, this may indicate that at least two constituent skills are involved; namely, ‘selecting camera lenses’ and ‘lighting the scene.’ If a surgeon switches attention from the patient to a donor organ waiting to be transplanted, it may indicate the constituent skills ‘preparing the patient’ and ‘getting the donor organ ready for the transplant.’

Tools refer to things used to change objects. As was the case for objects, tool switches often indicate that different constituent skills are involved (see also Figure 5.3). For example, if a video-content producer switches between a camera lens, a light reflector, and a microphone, this may indicate that there are three constituent skills involved; namely, ‘selecting lenses’ (where the lens is used), ‘lighting the scene’ (where the light reflector is used), and ‘capturing audio’ (where the microphone is used). If a surgeon switches from using a scalpel to using a forceps, it may indicate the constituent skills ‘making an incision’ and ‘removing an organ.’

Figure 5.3 Observing the use of objects and tools by proficient task performers may help identify relevant constituent skills.
Finally, it is often worthwhile not only to focus on gathering data on the desired performance but also to look at the performance deficiencies of the target group. This change of focus is particularly useful if the target group consists of persons already involved in carrying out the task or parts of it (e.g., employees who will be trained). This is usually the case when performance problems on the work floor lead to the development of a training program. Performance deficiencies indicate discrepancies between the expected or desired performance and the actual task performance. The most common method used to assess such deficiencies is interviewing the trainers (i.e., asking them what the typical problems are that their learners encounter), the target learners (i.e., asking employees what problems they encounter), and their managers or supervisors (i.e., asking the target learners’ superiors which undesired effects of the observed problems are most important for them or are most harmful to the organization). The to-be-developed training program will focus on constituent skills in which learners exhibit performance deficiencies.

5.3 Formulating Performance Objectives


Many instructional design models use performance objectives—the desired results of learning experiences—as the main input for design decisions. Such models recommend selecting instructional methods for each performance objective and creating one or more corresponding test items for each objective. This is absolutely not the case for the Ten Steps! The Ten Steps emphasize that, in complex learning, the training program must actively support integrating and coordinating the constituent skills outlined in the performance objectives (i.e., integrative goals; Gagné & Merrill, 1990). Thus, instructional methods cannot be linked to one specific objective but must always link to interrelated sets of objectives that can be hierarchical, heterarchical, or retiary and have a temporal, simultaneous, or transposable relationship. This means that design decisions in the Ten Steps are based directly on the characteristics of learning tasks (Step 1) and task analysis results (Steps 5–6 and 8–9), not on the separate objectives. Nevertheless, a performance objective is specified for each constituent skill identified in the skill hierarchy because, in the performance of the whole skill, these aspects will also become visible—often in the form of points of improvement—and will give the designer or trainer a foothold for solving the learning problems, filling the deficiencies, or designing task support and feedback. The integrated set of objectives describes the different aspects of effective whole-task performance. Well-formulated performance objectives (Mager, 1997) contain an action verb that clearly reflects the desired performance after the training, the conditions under which the skill is carried out, the tools and objects required (most discussions of performance objectives do not include this aspect), and—last but not least—the standards for acceptable performance, including criteria, values, and attitudes (see Figure 5.4).

Figure 5.4 Four main elements of a performance objective.
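The four elements can be captured in a simple template. The sketch below merely illustrates the structure of Figure 5.4; the field names and the swimming example content are hypothetical simplifications.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PerformanceObjective:
    """The four main elements of a performance objective (Figure 5.4)."""
    action_verb: str          # observable, measurable desired performance
    conditions: List[str]     # circumstances under which it is performed
    tools_and_objects: List[str]
    standards: List[str]      # criteria, values, and attitudes

    def render(self) -> str:
        return (f"Learners are able to {self.action_verb}, "
                f"{'; '.join(self.conditions)}, "
                f"using {', '.join(self.tools_and_objects)}, "
                f"meeting the standards: {'; '.join(self.standards)}.")

# Hypothetical objective based on the swimming-diploma example.
tread_water = PerformanceObjective(
    action_verb="tread water and then swim 12.5 meters",
    conditions=["fully clothed (pants, shirt, and shoes)"],
    tools_and_objects=["swimming pool"],
    standards=["treads water for at least 15 seconds"],
)
print(tread_water.render())
```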



Action Verbs

An action verb clearly states what learners can do after completing the training or learning experience. It should indicate observable, attainable, and measurable behaviors. The most common mistake is to use verbs like ‘comprehend,’ ‘understand,’ ‘be aware of,’ ‘be familiar with,’ or ‘know.’ These verbs should be avoided in performance objectives because they do not describe what learners can do after the training but what they need to know to do this. See Table 5.1 for an example of the types of action verbs you can use for the two highest levels (create and evaluate) of Bloom’s Revised Taxonomy (Anderson & Krathwohl, 2001). Although this taxonomy talks about learning goals, we prefer the term ‘performance objectives’ for the highest levels because they are not about the things that must be learned but about the things the learner must be able to do after the educational program. However, the term ‘learning goals’ is appropriate for lower levels in Bloom’s taxonomy. Learning goals for supportive information can be described on the levels of analyzing and understanding; learning goals for procedural information can be described on the level of remembering, and learning goals for part-task practice can be described on the level of applying. It is important to note that these analyses are specific to Steps 5–6 and 8–9.

Table 5.1 Action verbs for the two highest levels in Bloom’s revised taxonomy in the cognitive domain.

Creating—Assembling parts into a whole. Combines elements to form a new entity from the original one; the creative process. Requires analysis in order to synthesize.

Adapt, Build, Change, Choose, Combine, Compile, Compose, Construct, Create, Delete, Design, Develop, Discuss, Elaborate, Estimate, Formulate, Happen, Imagine, Improve, Invent, Make up, Maximize, Minimize, Modify, Original, Originate, Plan, Predict, Propose, Solution, Solve, Suppose, Test theory

Evaluating—Assessing the value of ideas and things. Involves acts of decision making, judging, or selecting based on criteria and rationale. Requires synthesis in order to evaluate.

Agree, Appraise, Assess, Award, Choose, Compare, Conclude, Criteria, Criticize, Decide, Deduct, Defend, Determine, Disprove, Estimate, Evaluate, Explain, Importance, Influence, Interpret, Judge, Justify, Mark, Measure, Opinion, Perceive, Prioritize, Prove, Rate, Recommend, Rule on, Select, Support, Value

Source: Anderson & Krathwohl, 2001.

Performance Conditions

The performance conditions specify the circumstances under which the constituent skill must be carried out. For instance, in the context of the swimming diploma, one condition was ‘fully clothed’ for the skill of ‘treading water,’ as that is often the case in real life where a person—in the Netherlands—might fall into a canal or other waterway. They may include safety risks (e.g., if the skill is needed in a critical working system), time stress (e.g., where delays could cause major problems), workload (e.g., in addition to tasks that cannot be delegated to others), environmental factors (e.g., amount of noise, light, or the weather conditions), time-sharing requirements (e.g., when the skill has to be performed alongside other skills), social factors (e.g., in a hostile or friendly group/environment), and more. It is crucial to define these conditions in a way that minimizes the transfer-of-training problem.

Consider surgeons trained under optimal conditions in a sterile operating room with good lighting, modern tools, and a full staff of well-trained colleagues. If they are military surgeons also sent to combat zones, they will have to perform that same surgery under less-than-optimal conditions, in a battlefield hospital with poor lighting, limited tools, and minimal and possibly poorly trained local staff. Often, relevant conditions already appear during the design of learning tasks when the designer determines dimensions on which real-world tasks differ (see Section 4.4 on variability of practice). One dimension relates to the conditions under which to perform the task, which are also relevant when formulating performance objectives.

Tools and Objects

The performance objective for a specific constituent skill outlines the tools and objects necessary for its performance. This documentation is crucial for creating a suitable learning environment for practicing and learning to carry out the tasks. All objects and tools, or (low- or high-fidelity) simulations or imitations of them, must be available in this task environment. On the other hand, because some tools and objects may quickly change from year to year (such as computer hardware, input devices, software programs, medical diagnostic and treatment equipment, tax laws, codes, and regulations, etc.), it is equally important to document which performance objectives and related constituent skills are affected by the introduction of new objects and tools. This will greatly simplify updating existing training programs and designing programs for retraining.

Standards: Criteria, Values, and Attitudes

Performance objectives should contain standards for acceptable performance, including the relevant criteria, values, and attitudes. Criteria refer to minimum requirements for accuracy, speed, productivity, percentage of errors, tolerances and wastes, time requirements, and so forth (in the swimming example: how many seconds the child has to tread water). This will answer any question such as: How many? How fast? How well? Examples of criteria are: ‘at least five will be produced,’ ‘within 10 minutes,’ or ‘without error.’
Values typically do not specify a (quantifiable) minimum requirement but indicate that the constituent skill should be performed according to appropriate rules, regulations, or conventions (in the swimming example, there are conventions of how to turn along one’s body axis when going from the breaststroke to the backstroke). Examples of such values are: ‘taking the International Civil Aviation Organization (ICAO) safety regulations into account,’ ‘without violating the traffic rules,’ or ‘in accordance with European laws.’
Attitudes are also treated as standards and, like knowledge structures, are fully integrated with constituent skills. For example, you neither specify that a video-content producer must have a ‘client-centered attitude’ nor that the complex skill of ‘producing video content’ requires such an attitude. Video content producers do not need to be client-centered outside working hours or when performing skills that do not involve clients. However, for assignable constituent skills such as ‘coaching people being filmed,’ a client-centered attitude may be necessary for acceptable performance. It is only required to specify the attitude in the performance objective for these relevant constituent skills. If possible, all observable behaviors that indicate or demonstrate the attitude should be formulated or specified in a way that they are observable! The standard ‘with a smile on your face’ is more concrete and observable than ‘friendly,’ ‘performing hourly checks’ is more concrete and observable than ‘punctual,’ and ‘frequently giving relevant arguments on the topic’ is more concrete and observable than ‘being persuasive.’
After specifying the relevant actions, conditions, tools/objects, and standards for a constituent skill, designers can finally formulate the performance objective for this skill. The performance objective at the top of the skill hierarchy specifies the overall learning goal. It is also called the terminal objective. Lower-level objectives formulate the desired exit behavior in more and more detailed terms. An example of a terminal objective for ‘producing video content’ (Figure 5.1) is:

After the training program, learners are able to plan, produce, and edit high-quality video content for a variety of purposes, including promotional, informative, and documentary content [conditions], and handle all aspects of the video production process, including creating scripts and storyboards [object], operating the camera [object], lighting [object], coaching people being filmed [object] and editing the video [object], using relevant equipment such as lenses, light reflectors, microphone mounts and video editing software [tools], to meet the creative and technical needs [value] of clients.

Another example of a lower-level performance objective is related to the constituent skill ‘creating a composition’:

After the training program, learners are able to create compositions by operating and placing cameras [object] with appropriate lenses [object] and correctly arranged lighting [object] so that conventional framing conventions (e.g., rule of thirds) are followed [value], rough shapes, colors, and brightness of primary objects complement each other [value], visual elements feel balanced [value], and foreground is clearly distinguished from the background [criterion] to achieve the desired artistic outcome [value].

5.4 Classifying Performance Objectives


Classifying constituent skills and their related performance objectives is most important for designing the training blueprint. This classification takes place along three dimensions related to whether a constituent skill:

1. Will or will not be taught. By default, classify constituent skills as skills that
will be taught.
2. Is treated as nonrecurrent, recurrent, or both. By default, classify con-
stituent skills as nonrecurrent, involving schema-based problem solving,
reasoning, and decision making after the training and requiring the avail-
ability of supportive information during the training.
3. Needs to be automated or not. By default, classify recurrent constituent
skills as skills that do not need full automation. Nonautomated recurrent
skills involve applying rules after the training and require presenting pro-
cedural information during the training. If you classify recurrent constit-
uent skills as skills that need to be fully automated, they may also require
additional part-task practice during the training program (see Step 10 in
Chapter 13).
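Read as a reading aid rather than a formal algorithm from the book, these three default-driven decisions can be expressed as a small decision function (all names are hypothetical):

```python
def classify(taught=True, recurrent=False, to_be_automated=False,
             double_classified=False):
    """Map the three classification dimensions onto the five classes
    of constituent skills summarized in Table 5.2 (sketch only)."""
    if not taught:
        return "not to be taught"
    if double_classified:
        return "nonrecurrent and to-be-automated recurrent"
    if not recurrent:
        return "nonrecurrent"   # default: requires supportive information
    if to_be_automated:
        return "to-be-automated recurrent"  # also requires part-task practice
    return "recurrent"          # requires procedural information

print(classify())                # -> nonrecurrent (the default classification)
print(classify(recurrent=True))  # -> recurrent
print(classify(recurrent=True, to_be_automated=True))
```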

Classification along these three dimensions eventually results in five classes of constituent skills with related performance objectives (see Table 5.2). The following sections discuss skills and objectives in an order that gives priority to the default classifications; namely, (a) nonrecurrent constituent skills, (b) recurrent constituent skills that do not need to be automated, (c) recurrent constituent skills that need to be automated, (d) constituent skills classified as both nonrecurrent and recurrent, and (e) constituent skills that will not be taught. Figure 5.1 shows constituent skills from the first three categories for the video-production example.

Table 5.2 Classification of constituent skills and the main characteristics of related performance objectives.

Constituent skills to be taught:
• Nonrecurrent constituent skills: The performance objective relates to schema-based problem solving and reasoning. Learning tasks require supportive information.
• Recurrent constituent skills, not to be automated: The performance objective relates to the application of rules or the use of a procedure. Learning tasks require procedural information.
• Recurrent constituent skills, to be automated: The performance objective relates to the availability of a fully automated routine. Additional part-task practice is necessary.
• Double-classified constituent skills: The performance objective relates to the ability to recognize when a routine does not work and to switch to a problem-solving and reasoning mode. Additional part-task practice is necessary in combination with learning tasks in which routines sometimes do not work.

Constituent skills not to be taught:
• Performance objectives and learning tasks are not applicable.

Nonrecurrent Constituent Skills


The terminal objective at the top of the skill hierarchy is always classified as nonrecurrent because a complex cognitive skill, by definition, involves schema-based problem solving, reasoning, and decision making. Thus, a top-level skill can never be recurrent when using the Ten Steps. By default, its constituent skills are also considered nonrecurrent. A constituent skill is classified as recurrent only if it will be performed based on specific cognitive rules after training. Performance objectives for nonrecurrent constituent skills describe exit behaviors that vary from one problem situation to another. However, this behavior remains effective because it is guided by cognitive schemata that steer the problem-solving behavior (using cognitive strategies) and allow for reasoning about the domain (using mental models). After training, learners should possess the necessary schemata to find a solution whereby their behavior is effective, efficient, and flexibly adaptable to new and often unfamiliar situations.
For example, in the video-production example, the constituent skill ‘writing a script or synopsis’ (see Figure 5.1) is classified as nonrecurrent because this process differs for each new project (i.e., each new project requires a new script or synopsis). Skills enabled by this particular nonrecurrent constituent skill, like ‘developing the story’ and ‘creating the production plan,’ must also be nonrecurrent since nonrecurrent skills can never enable recurrent ones. In the training blueprint, nonrecurrent constituent skills require the availability of supportive information for their development (Step 4).

Recurrent Constituent Skills

Particular aspects of nonrecurrent constituent skills can be carried out using specific cognitive rules. These aspects typically appear lower in the skill hierarchy and are classified as recurrent constituent skills. Performance objectives for recurrent constituent skills describe exit behavior that is highly similar across different problem situations. This consistency stems from applying domain-specific rules or step-by-step procedures that link particular characteristics of the problem situation to particular actions that must be taken. In video production, the constituent skill ‘color correcting’ (see Figure 5.1) is classified as recurrent because this is typically done in the same way (e.g., using video editing software to adjust white balance, exposure, saturation, and contrast of each clip to achieve visual consistency throughout the whole video), irrespective of the video project. After the training, learners should possess the cognitive rules that allow them to come to a solution relatively quickly with no errors. Consistent skills that result in the same response to a specific situation each time it occurs (as opposed to variable skills; Fisk & Gallini, 1989) are typically classified as recurrent. The prerequisite or enabling skills for recurrent skills (i.e., those lower in the hierarchy) must also be recurrent: A recurrent constituent skill can never have nonrecurrent aspects! In the training blueprint, recurrent constituent skills require the availability of procedural information for their development (Step 7).

Please note: Classifying a constituent skill as nonrecurrent or recurrent requires careful analysis of the desired exit behavior, including the conditions and standards of performance. The same constituent skill can be classified as nonrecurrent in one training program and recurrent in another! A vivid example is military aircraft maintenance training. In peacetime, a standard might involve ‘reaching a highly specific diagnosis so that repairs can occur most economically.’ Time is not critical; thorough testing procedures can correct possible diagnosis errors. Thus, maintaining aircraft in peacetime will probably be classified as a nonrecurrent skill. In wartime, the standard shifts to ‘diagnose the component that is not functioning as quickly as possible so that the whole component can be replaced.’ Speed becomes paramount, and economic considerations are much less important because the fighter jet needs to get airborne as quickly as possible. Consequently, maintaining aircraft in wartime will probably be classified as a recurrent skill. As a result, a training program for aircraft maintenance in peacetime will be different from a training program in wartime.

To-Be-Automated Recurrent Constituent Skills

By default, learners will only practice recurrent constituent skills in learning tasks because they do not need to fully automate them. There may be, however, a specific subset of recurrent constituent skills that require a very high level of automaticity after training. These skills may receive additional training through part-task practice (Step 10), ensuring they can be performed quickly and effortlessly as routines post-training. In many training and learning programs, particularly in academic fields, there are no or very few constituent skills requiring automation. In certain other training programs, a very high level of automaticity may be desired for constituent skills that:

• Enable the performance of many other constituent skills higher in the hierarchy. For example, musicians continually practice musical scales to automate fundamental skills essential for performance. Similarly, children practice multiplication tables to automate basic arithmetic skills that underpin various math tasks.
• Have to be performed simultaneously with many other constituent skills. For instance, process operators in the chemical industry may automate the reading of display panels as they diagnose the situation. Management assistants may automate typing skills to take minutes of a meeting because this is performed simultaneously with active listening and summarizing participants’ contributions.
• Are critical in terms of loss of capital, danger to life, or equipment damage. For instance, air traffic controllers automate the detection of dangerous in-flight situations from a radar screen, and naval navigation officers automate the performance of standard maneuvering procedures on large vessels.

If a recurrent constituent skill needs to be automated because it meets one or more of the requirements just described, the designer must address the question: Is it practically feasible to automate this skill? Theoretically, learners can automate all recurrent skills because they are consistent: A given situation consistently triggers the same response. But, in instructional design, the focus is often not on the specific mappings between situations and responses but on higher-level or global consistencies that link vast or even infinite numbers of situations to particular responses (Fisk & Gallini, 1989; Van Merriënboer, 2013). Practically, reaching full automaticity is, therefore, highly unlikely because it could take a lifetime to achieve. Suppose a learner practices adding number pairs less than 1,000. Addition involves a simple procedure taught in primary school. However, when it comes to automation, it is not the difficulty of the procedure that matters but the number of situation-response mappings. The possible situations refer to all number pairs from 0 + 0 to 999 + 999, or one million different situations (1,000 × 1,000). If 100 practice items of 5 seconds are needed to reach full automaticity for each situation-response pair, then 100,000,000 items, and thus, 500,000,000 seconds are needed for a training program. To reach full automaticity, this is approximately 139,000 hours, 17,360 eight-hour working days, or more than 70 years (excluding vacations). This explains why most people compute the sum of 178 + 539 instead of automatically responding with the answer 717. It also explains why primary schools often only teach the addition tables of 1–10, not those of 1–100.
At the other extreme are simple motor skills that can be automated very fast because there is a specific mapping between situations and responses. Steering a car is an example of a recurrent skill that is relatively easy to automate. Suppose that keeping the car on track on a pretty straight highway is practiced. The situation that the learner is confronted with can be seen as the angle that the front of the car makes with the lines on the highway. With an accuracy of 1˚, and assuming that the maximum correction is 45˚ to the left or right (sharper curves are highly uncommon on a highway), there are 90 different situations. If 100 practice items of 1 second each are necessary to automate each situation-response pair, we need 9,000 items or 9,000 seconds. This is only 2½ hours and explains why most people have little trouble keeping a car going in a straight line on the highway—even during a conversation with passengers.
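Both estimates follow from the same multiplication of situations, practice items, and seconds per item. The helper below reproduces them; the conversions to working days and years are assumptions that mirror the text (eight-hour days and roughly 240 working days per year).

```python
def automation_time(situations, items_per_situation=100, seconds_per_item=5.0):
    """Rough training time to fully automate every situation-response pair."""
    total_seconds = situations * items_per_situation * seconds_per_item
    hours = total_seconds / 3600
    days = hours / 8      # eight-hour working days (assumption from the text)
    years = days / 240    # approximate working days per year (assumption)
    return hours, days, years

# Adding number pairs below 1,000: 1,000 x 1,000 = one million situations.
print(automation_time(1_000_000))  # ~138,889 hours, ~17,361 days, ~72 years
# Keeping a car on track: 90 situations, 1-second practice items.
print(automation_time(90, seconds_per_item=1.0))  # ~2.5 hours
```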

Double-Classified Constituent Skills

The decision on whether to automate a recurrent skill involves a delicate balance between the desirability of automation and the feasibility of automation. However, in exceptional situations, the potential consequences of not automating a skill, such as significant delay, danger, or damage, outweigh its usual classification as not-to-be-automated recurrent or nonrecurrent. In such cases, these skills are also double-classified as to-be-automated recurrent skills. ‘Safely shutting down a nuclear power plant in case of calamities,’ for example, will be performed in various ways, depending on the specific circumstances, and would involve strategic problem solving, reasoning about the system’s working, and decision making based on incomplete information. Nevertheless, it may be classified as a to-be-automated recurrent skill due to the catastrophic consequences of slowness or errors; that is, shut it down quickly because it is better to be safe than sorry. This classification comes at a high cost for the instructional designer and trainees! Performing an algorithmic analysis of the skill involves a costly and labor-intensive process, often requiring a team of technical specialists to work for months or even years. Training time will also increase exponentially, sometimes demanding thousands of hours of training, often in a high-fidelity simulation environment.
A common problem with this special category of to-be-automated constituent skills is that there may always be situations where a developed routine does not work. Despite substantial analyses and extensive training, task performers may find themselves in unforeseen situations no one has or could have anticipated. These unforeseen situations or faults—or combinations of them—can lead to disasters, such as the tsunami that hit Fukushima, Japan, in 2011, causing a massive failure in a nuclear reactor, or near-disasters. Consequently, this special skill class is classified as both ‘nonrecurrent’ and ‘to-be-automated recurrent.’ In general, learners should be explicitly taught that this is the case and be trained to switch from an automated mode to a problem solving, reasoning, and decision-making mode if they encounter an impasse while carrying out a learned routine. This ability has been described as ‘reflective expertise’ (Van Merriënboer et al., 1992), ‘adaptive expertise’ (Bohle Carbonell et al., 2014), ‘switching cognitive gears’ (Louis & Sutton, 1991), and ‘cognitively slowing down’ (Moulton et al., 2010). The double classification and its resulting training design, intermixed training (see Section 13.7), maximize the likelihood of effectively handling familiar and unfamiliar problem situations.

Constituent Skills Not to Be Taught

Finally, there may also be reasons not to teach specific constituent skills. An instructional designer may focus on performance objectives for which learners have shown performance deficiencies, thereby excluding all other objectives from the training program. Alternatively, due to time constraints, the designer may include only those objectives that are particularly important, difficult, or unfamiliar to the target learners. However, the designer should be very careful when excluding particular objectives from the training program because, explicitly taught or not, these objectives are always part of a highly interrelated set of constituent skills. If learners have mastered a particular constituent skill in isolation, this does not guarantee they can perform it in the context of a whole task. Performing a particular constituent skill in isolation differs from doing it in the context of a whole task, and the automaticity of a constituent skill developed through extensive part-task practice is often not retained in whole-task performance (Schneider & Detweiler, 1988). For example, someone taking driving lessons who has mastered using the clutch and shifting on a straight and empty road may not be able to do the same when driving through an unfamiliar city in heavy traffic. Additionally, a particular skill might not be very important on its own (e.g., adjusting the rearview mirror) but might enable the performance of other important constituent skills (e.g., monitoring the vehicles behind to safely change lanes). Therefore, the instructional designer should be aware that neglecting one or more constituent skills might have the same effect as removing one or more building blocks from a wobbly tower.

Using Performance Objectives

In many instructional design models, training design is based on performance objectives. As stated, this is not the case for the Ten Steps, where real-life tasks are the starting point for good design. Nevertheless, the complete and interrelated set of constituent skills and their associated performance objectives serve several important functions in the design process. A complete and interrelated set of performance objectives for the ‘skills to be taught’ provides a clear overview of the training program’s contents. This serves as a solid foundation for discussions with all relevant stakeholders. A skill hierarchy or competence map illustrates how different aspects of complex task performance are interrelated. Well-defined performance objectives specify what learners can do after completing the training program, outlining their exit behavior. In addition, classifying objectives as recurrent and nonrecurrent further specifies the exit behavior. This classification significantly impacts the further design process, as the nonrecurrent aspects require designing supportive information, recurrent aspects require procedural information, and recurrent-to-be-automated aspects require part-task practice.

Performance objectives also provide valuable input for further analysis and design activities. In-depth task and knowledge analyses may be necessary if instructional materials for supportive and/or procedural information must be developed from scratch. In this situation, performance objectives for constituent skills classified as nonrecurrent provide important input for analyzing cognitive strategies (Step 5; Chapter 8) and mental models (Step 6; Chapter 9), and performance objectives classified as recurrent provide similar input for analyzing cognitive rules (Step 8; Chapter 11). These analysis activities provide specific descriptions of supportive and procedural information that help develop a highly detailed training blueprint. Lastly, the standards that are part of the performance objectives provide a basis for developing performance assessments, as described in the next section.

5.5 Performance Assessments


Performance objectives specify, among other things, the standards for acceptable performance, which consist of relevant criteria, values, and attitudes. These standards are necessary because they give learners information about the desired exit performance and, when compared with actual performance, the realized quality of their performance on the learning tasks (Fastré et al., 2010; Hambleton et al., 2000). The Ten Steps is primarily concerned with assessing the performance of whole tasks rather than acquired knowledge and/or part-tasks, because learners primarily learn to perform these whole tasks (i.e., the first component of 4C/ID) and only acquire knowledge and carry out part-task practice because it helps them improve their whole-task performance. Moreover, the focus of the Ten Steps is on formative assessment, with improving learning as its main goal, because learning is considered more important than performance. Thus, in this section, constituent skills or aspects of performance that do not yet satisfy the standards are not seen as shortcomings but rather as points of improvement for the learner. Chapter 15 will briefly describe programmatic assessment in educational programs based on the Ten Steps, including summative assessment of knowledge and part-tasks.

Scoring Rubrics

For performance assessment, standards are typically included in assessment forms or scoring rubrics that contain an indication of each aspect or constituent skill assessed, the standards for acceptable performance of the skill, and a scale of values for rating each standard (Fastré et al., 2013, 2014). In addition, a scoring rubric may also include definitions and examples to clarify the meaning of specific constituent skills and standards. Table 5.3 provides an example of a (small part of a) scoring rubric and contains aspects of performance classified as nonrecurrent, recurrent, and recurrent to-be-automated. All are constituent skills of the complex skill of 'producing video content' (Figure 5.1).
Table 5.3 Example of a partial scoring rubric for the complex skill 'producing video content.'

Performance aspect/constituent skill: Coaching people being filmed (nonrecurrent)

Attitude: Coaching is done with an empathetic attitude, actively listening to participants' concerns and adjusting coaching strategies to alleviate discomfort.
Scale of values: Exemplary empathetic coaching • Shows active listening and attempts to address concerns • Attentive but with room for improvement • Failure to actively listen and address concerns. Please explain your answer.

Value: The coaching adheres to ethical and cultural sensitivity guidelines by respecting personal boundaries and (cultural) sensitivities and avoiding language or gestures that may be offensive or inappropriate.
Scale of values: Exceptional adherence • Strong adherence • Basic adherence • Limited adherence. Please explain your answer.

Value: Communication is clear and effective, without jargon or technical terms that may be confusing.
Scale of values: Effective and engaging communication • Consistently effective communication • Mostly effective communication • Ineffective communication. Please explain your answer.

Criterion: The participant briefing is completed at least 15 minutes before the scheduled start time of the filming session.
Scale of values: Yes/No

Performance aspect/constituent skill: Capturing audio (recurrent, not-to-be-automated)

Criterion: Audio levels are set correctly, preventing clipping or distortion.
Scale of values: Yes/No

Value: Back-up and safeguarding protocols are followed to prevent data loss.
Scale of values: Sufficient • Almost sufficient • Insufficient

Performance aspect/constituent skill: Operating camera and equipment (recurrent, to-be-automated)

Criterion: Switching between camera settings is faultless and very fast.
Scale of values: Yes/No

To be continued for all other relevant aspects of performance . . .

For nonrecurrent aspects of performance, such as 'coaching people being filmed,' standards will take the form of criteria, values, and attitudes. While criteria relate to minimum requirements that are either met or not (e.g., yes/no), values and attitudes typically require scale values with qualitative labels (e.g., insufficient, almost sufficient, just sufficient, amply sufficient, excellent). Each point on the scale should be clearly labeled and defined. There are two rules of thumb here. First, avoid scales with more than six points, as such scales give a false sense of exactness, and it is often hard for assessors to make such subtle assessment decisions. Second, use only as many points as necessary to adequately cover the range that needs to be determined. Depending on the criterion, this might range from 'very poor' to 'excellent' performance but could be simply 'can' or 'cannot' perform the activity. Standards are relatively firm for recurrent aspects of performance. In contrast to the quality of nonrecurrent aspects, assessors can judge the accuracy of recurrent aspects more often with a simple 'correct' or 'incorrect' (i.e., according to specified rules or not). A well-designed scoring rubric pays attention to all aspects of performance and typically contains more than one standard for each aspect or constituent skill (Baartman et al., 2006).
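Because a rubric couples each constituent skill to its standards, and each standard to a rating scale, its structure is straightforward to capture in code. The following minimal Python sketch is purely illustrative: the class and field names are our own and not part of the Ten Steps or of any particular assessment system. It shows how yes/no criteria and qualitatively scaled values can live in a single data model:

    # Minimal sketch of a scoring-rubric data model. All names and
    # example data are illustrative; the Ten Steps prescribes no format.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Standard:
        kind: str                     # 'criterion', 'value', or 'attitude'
        description: str
        scale: List[str]              # e.g., ['yes', 'no'] for a criterion
        rating: Optional[str] = None  # filled in by the assessor
        narrative: str = ""           # 'Please explain your answer.'

    @dataclass
    class RubricEntry:
        constituent_skill: str
        classification: str           # 'nonrecurrent', 'recurrent', ...
        standards: List[Standard] = field(default_factory=list)

    audio = RubricEntry(
        constituent_skill="Capturing audio",
        classification="recurrent, not-to-be-automated",
        standards=[
            # A criterion is either met or not ...
            Standard("criterion",
                     "Audio levels are set correctly, preventing "
                     "clipping or distortion.", ["yes", "no"]),
            # ... while a value uses a short, labeled qualitative scale.
            Standard("value",
                     "Back-up and safeguarding protocols are followed "
                     "to prevent data loss.",
                     ["sufficient", "almost sufficient", "insufficient"]),
        ],
    )
    print(audio.standards[0].scale)   # ['yes', 'no']

Keeping the scale explicit for each standard also makes the two rules of thumb above easy to enforce, for example, by rejecting any scale with more than six points.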
Narrative reports should normally complement the rated scoring rubrics (in Table 5.3, indicated by 'Please explain your answer'). Qualitative, narrative information carries much weight; it is often much richer and more appreciated by learners than simple numerical ratings (Govaerts et al., 2005). For instance, if a learner receives a rating of 2 out of 5 for counseling skills in a patient encounter, it is evident that there is cause for concern. However, this score alone does not tell us what the learner did or what the points of improvement for future performance are. Therefore, effective formative assessment often necessitates narrative information to complement quantitative scores.

Monitoring Progress

Scoring rubrics serve as a means to assess performance on individual learning tasks, but, more importantly, they can also be used to monitor progress made across various learning tasks. It is important to use one constant set of standards to assess all learning tasks throughout the whole educational program. At the start of the first task class, learners try to reach the relevant standards for the simplest versions of the task, with substantial support and guidance. At the end of that task class, the learners should be able to carry out the simplest versions of the task without support and guidance while adhering to the relevant standards. This pattern continues until the beginning of the final task class, where learners try to reach the same standards for the most complex versions of the task, first with ample support and guidance. At the end of the final task class, they carry out the most complex versions of the task without support and guidance, still at those standards.

Thus, it is not the level of the standards that changes throughout the educational program but, in contrast, the complexity of the learning tasks and the support and guidance provided for carrying out those tasks. Learners must first reach particular standards for a simple, completely guided and supported task and later for increasingly more complex, less guided and supported tasks. The standards remain the same. This has two important advantages. First, learners can be introduced to all standards from the beginning of the program, providing a good impression of the final attainment level they must reach at the end of the program. In this way, learners know what to aim for. Second, it ensures that assessments are based on a broad and varied sample of learning tasks, which is important for reaching reliable assessments. 'One measure of performance is no measure of performance,' and most studies conclude that reliable performance assessments require at least 8–10 tasks (Kogan et al., 2009).
Figure 5.5 provides the standards-tasks matrix as a schematic representation of the assessments gathered. In the columns are the learning tasks, and an X indicates on which standards each task is assessed. Many of the tasks will contain support and guidance, but at the end of each task class or level of complexity, there will also be unsupported tasks (in Figure 5.5, T3, T7, and T10). In the rows are the standards for performance. The hierarchical ordering of the standards follows the skill hierarchy (in Figure 5.5, there are four hierarchical levels, but there can be more). The standards-tasks matrix illustrates three principles: (a) the distinction between standard-centered and task-centered assessment, (b) the increasing number of relevant standards when tasks become more complex, and (c) the opportunity to counterbalance the increasing number of relevant standards later in the training program by assessing at a more global level.

Figure 5.5 Standards-tasks matrix for monitoring learner progress.
Standard-centered assessment focuses on one particular standard: It gives information about how a learner is doing and developing over tasks on one particular performance aspect. It reflects the learner's mastery of distinct aspects of performance and yields particularly important information for identifying points for improvement. In contrast, task-centered assessment considers all standards: It gives information about how a learner is doing overall and how overall performance is developing over tasks (Sluijsmans et al., 2008). It thus reflects the learner's mastery of the whole complex skill and is more appropriate for making progress decisions, such as whether a learner is ready to continue to the next, more complex task class. The use of task-centered and standard-centered assessments for selecting new learning tasks is part of Step 3 of the Ten Steps and will be further discussed in the next chapter (Section 6.5, Individualized Learning Trajectories).
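To make this distinction concrete, the short Python sketch below represents a much-reduced standards-tasks matrix and derives both views from it. The standards, tasks, and ratings are invented for illustration only:

    # Sketch of a standards-tasks matrix. A missing entry means the
    # standard was not yet relevant for that task. All data are invented.
    ratings = {
        ("creating a friendly atmosphere", "T1"): 2,
        ("creating a friendly atmosphere", "T2"): 3,
        ("creating a friendly atmosphere", "T3"): 4,
        ("fitting composition", "T2"): 2,
        ("fitting composition", "T3"): 4,
    }

    def standard_centered(standard):
        """How one performance aspect develops across tasks (one row)."""
        return {t: r for (s, t), r in ratings.items() if s == standard}

    def task_centered(task):
        """How the learner did on one task, across all standards (one column)."""
        return {s: r for (s, t), r in ratings.items() if t == task}

    print(standard_centered("fitting composition"))  # {'T2': 2, 'T3': 4}
    print(task_centered("T2"))                       # snapshot of task T2

In matrix terms, a standard-centered view reads one row, while a task-centered view reads one column.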
Second, Figure 5.5 shows that the number of relevant standards increases when tasks become more complex: There are more Xs in the later columns than in the early columns. Although there is a constant set of standards, this does not imply that each separate learning task can be assessed based on the same set of standards. Not all standards are relevant for all tasks; more standards will often become relevant for more complex tasks. For example, constituent skills of the complex skill 'creating the composition' include 'selecting lenses,' 'lighting the scene,' and 'operating camera and equipment' (cf. Figure 5.1). Each constituent skill has its own performance objectives and standards. However, suppose learners only produce videos with smartphones or camcorders in the first task class. In that case, the standards for 'selecting lenses' are irrelevant because these cameras have no interchangeable lenses. More and more standards will become relevant as learners progress and tasks become more complex.
Finally, performance objectives and related standards are hierarchically ordered according to the skill hierarchy, making it possible to vary the level of detail of the assessments. Figure 5.6, for example, depicts a part of the standards for 'producing video content' ordered according to its corresponding skill hierarchy. Assessors use standards associated with constituent skills high in the hierarchy to assess performance at a global level, standards lower in the hierarchy to assess performance at an intermediate level, and standards at the lowest level of the hierarchy to assess performance at a highly detailed level (cf. the hierarchical levels of standards in the left side of Figure 5.5). For example, only the standards at the level of the constituent skills 'creating the production plan,' 'producing footage,' and 'creating the final product' are considered for global assessment. Medium-level assessment takes into account standards that are part of the performance objectives for constituent skills lower in the hierarchy: In addition to standards for 'producing footage,' for example, standards for 'interacting with people being filmed,' 'collaborating with crew,' and 'shooting video' are also taken into account (e.g., creating a friendly atmosphere [value], clear communication with crew [criterion], fitting composition [value], etc.). For a highly detailed assessment, standards that are part of the performance objectives at the lowest levels of the hierarchy are also considered: In addition to standards for 'shooting video,' for example, standards for 'creating the composition' and 'selecting lenses' are taken into account (e.g., following framing conventions such as the rule of thirds [value], and the properties of the selected lens, such as aperture or shutter speed, align with the artistic requirements [criterion]). In general, highly detailed assessments will be limited to those constituent skills that the learners have not yet mastered or for which learners still have to improve their performance. This makes it possible to counterbalance the fact that more and more standards become relevant as the training progresses: Standards learners have already reached are then only assessed globally.
Figure 5.6 Level of detail of assessments related to the hierarchy of constituent skills, with their corresponding performance objectives and standards.
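One way to operationalize this counterbalancing is a simple walk over the skill hierarchy that descends only into branches the learner has not yet mastered. The sketch below is hypothetical: the hierarchy fragment follows Figure 5.6, but the 'mastered' set and the function name are our own illustration:

    # Sketch: choosing the level of assessment detail from the skill
    # hierarchy. Mastered branches are assessed globally; unmastered
    # branches are unfolded. The 'mastered' set is an invented example.
    hierarchy = {
        "producing video content": ["creating the production plan",
                                    "producing footage",
                                    "creating the final product"],
        "producing footage": ["interacting with people being filmed",
                              "collaborating with crew",
                              "shooting video"],
        "shooting video": ["creating the composition", "selecting lenses"],
    }
    mastered = {"creating the production plan", "collaborating with crew"}

    def standards_to_assess(skill):
        children = hierarchy.get(skill, [])
        if skill in mastered or not children:
            return [skill]        # assess globally (or at leaf level)
        selected = []
        for child in children:    # unfold: look at each child in detail
            selected.extend(standards_to_assess(child))
        return selected

    print(standards_to_assess("producing video content"))

Already-mastered skills, such as 'creating the production plan,' are then assessed only at the global level, while unmastered branches are unfolded down to their detailed standards.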

Development Portfolios

Development portfolios provide excellent opportunities to implement the principles just outlined (Kicken et al., 2009a, 2009b; Van Merriënboer & van der Vleuten, 2012). In addition, electronic development portfolios (Beckers et al., 2016, 2019) may take over administrative duties and computational tasks to provide overviews and summaries, give standard-centered and task-centered assessments, switch to more global assessments when the learner has reached specific standards, and so forth. A development portfolio includes scoring rubrics for assessing the learner's performance on (both supported and unsupported) learning tasks. The assessor can select the relevant aspects of performance for each task, for example, from a hierarchical list with global objectives at the top and more detailed objectives lower in the hierarchy. In the portfolio, each objective includes the standards (i.e., criteria, values, attitudes) for acceptable performance and the associated scoring rubrics, allowing the assessor to rate the performance aspect under consideration. This process is repeated for all relevant aspects of the assessed learning task and, if multiple tasks are assessed, for all tasks. To improve the informative value of the portfolio, scoring rubrics should include both quantitative ratings of specific performance aspects and narrative reports. The assessor can provide these reports in a separate text box and may include multimedia information such as spoken messages, photographs, and video fragments uploaded into the portfolio. Here, too, the same development portfolio with the same scoring rubrics and the same standards should be used throughout so that learners are exposed to all relevant standards from the start. Although learners will not be assessed on all standards immediately, the final attainment levels of the program are communicated right from the beginning, helping learners work towards them.
For example, Figure 5.7 presents a screenshot of an electronic development portfolio used for student hairstylists (STEPP—Structured Task Evaluation and Planning Portfolio; Kicken et al., 2009a, 2009b). After each learning task—typically, styling a client's or a lifelike model's hair—the assessor updates the development portfolio using an electronic device. If preferred, they update the portfolio after several tasks. The portfolio describes each completed task in terms of its complexity, determined by the task class it belongs to, and its level of support and guidance. This may range from independently performed tasks, to tasks performed partly by the employer or trainer and partly by the learner, to tasks performed by a more experienced peer student or trainer and only observed by the learner (i.e., a modeling example). The assessor can also upload digital photographs of the client's hairstyle before and after the treatment. To assess performance on a completed task, the assessor selects the constituent skills relevant to the performed task from the left side of the screen (see left part of Figure 5.7).

Figure 5.7 Screenshot of an electronic development portfolio for student hairstylists.
The constituent skills of the complex skill 'styling hair' are hierarchically ordered and include, for example, washing and shampooing, haircutting, permanent waving, and coloring, but also communicating with clients, selling hair care products, operating the cash register, making appointments, and so forth. Clicking the right mouse button on a selected constituent skill shows the performance objective for this skill, including standards for acceptable performance (i.e., criteria, values, attitudes). The assessor selects a relevant aspect at the desired level of detail by clicking the left mouse button, revealing a scoring rubric for this aspect on the right side of the screen. This also makes it possible to judge the global aspects first and, only if desired, unfold them to judge the more specific aspects until the highest level of detail (i.e., the lowest level of the hierarchy) is reached. This offers the opportunity to use a high level of detail for judging new or problematic aspects of the completed task and, simultaneously, a more global level for judging the already-mastered aspects of the task. At the chosen level of detail, the assessor fills out the scoring rubric, notes the time spent on this aspect of performance, and, in a separate text box, lists points for improvement. This process repeats for all aspects relevant to the learning task being assessed and, if more than one task is assessed, for all tasks. The next chapter will discuss using a development portfolio for individualizing the sequence of learning tasks. In that case, several assessors may update the portfolio: the teacher, the learner themselves (self-assessments), peers (peer assessments), but also the employer, clients, and other stakeholders.

5.6 Summary of Guidelines


• If you set performance objectives, then you need to start from an overall learning goal and first describe all relevant constituent skills and their relationships in a skill hierarchy.
• If you construct a skill hierarchy, then you need to distinguish between vertical relationships, where skills lower in the hierarchy enable or are prerequisite for skills higher in the hierarchy, and horizontal relationships, where skills adjacent to each other are temporally ordered.
• If you specify a performance objective for a constituent skill, then you need to clearly state what the learners can do after the training (action verb), under which conditions they can do it, which tools and objects they must use, and which standards apply for acceptable performance.
• If you specify the standards for acceptable performance, then you need to make a distinction between criteria that must be met (e.g., minimum requirements for speed and accuracy), values that must be taken into account (e.g., satisfying particular rules, conventions, and regulations), and attitudes that the task performer must exhibit (e.g., friendliness, willingness to help).
• If you classify performance objectives, then you need to make a distinction between nonrecurrent constituent skills (involving problem solving, reasoning, and decision making), recurrent constituent skills (involving application of rules or use of procedures), and skills that are both nonrecurrent and recurrent (recognizing when rules/procedures do not work, in order to switch to a problem-solving, reasoning, and decision-making mode).
• If you classify performance objectives as recurrent, then you must further distinguish between not-to-be-automated constituent skills (requiring presentation of procedural information) and to-be-automated constituent skills (also requiring part-task practice). Feasibility is an important consideration because reaching full automaticity is often an extremely time-consuming process.
• If you develop performance assessments, then you need to construct scoring rubrics for all aspects of performance, including descriptions of the standards and the scales that enable the assessor to judge them. Adding narrative information to the ratings will be very helpful for the learners.
• If you use performance assessments to monitor learner progress, then you should apply a constant set of standards, distinguish between standard-centered and task-centered assessments, and distinguish levels of assessment from highly specific to global. Electronic development portfolios are useful tools for this.

Glossary Terms

Competence map; Constituent skill; Development portfolio; Double-classified constituent skill; Formative assessment; Nonrecurrent constituent skill; Performance assessment; Performance objective; Recurrent constituent skill; Scoring rubric; Skill decomposition; Skill hierarchy; Standard-centered assessment; Standards; Task-centered assessment; Terminal objective; To-be-automated recurrent constituent skills
Chapter 6

Step 3
Sequence Learning Tasks

6.1 Necessity
The Ten Steps approach organizes learning tasks in simple-to-complex cat-
egories or task classes. In exceptional cases, such as when designing short,
rigid training programs, you might skip this step and treat all learning tasks
as belonging to a single task class.


Although the Ten Steps uses authentic whole tasks, the learner is not simply 'thrown into the deep end of the swimming pool to either sink or swim.' When learning to swim—in the Netherlands, in any event—children begin in very shallow water with simple techniques and different flotation devices. They progress through a series of eight stages until they are capable of doing all of the things necessary for getting their first swimming diploma (see Chapter 5), such as treading water for 15 seconds fully clothed, alternately swimming two different strokes (breaststroke and backstroke) twice for 25 meters each, and swimming underwater through a ring for 3 meters. After getting this diploma, they can go on to their next diplomas in the same way. The techniques they must learn and the steps they must carry out are mostly organized from simple to complex through the stages.
This chapter discusses methods for sequencing task classes that form simple-to-complex groups of learning tasks. For each task class, the designer needs to specify the characteristics or features of its learning tasks. This allows them to correctly categorize already developed learning tasks and to select and develop additional tasks. The progression of task classes—filled with learning tasks—provides a global outline of the training program.
The structure of this chapter is as follows. Section 2 discusses whole-task sequencing for defining task classes, including simplifying conditions, emphasis manipulation, and knowledge progression methods. All these methods start the training program with a task class containing whole tasks representative of the simplest (that is to say, least complex) real-world tasks. Section 3 discusses the issue of learner support for tasks within a task class. Whereas learning tasks within the same task class are equally complex, a high level of support is typically available for the first learning task or tasks. This support gradually and systematically decreases over the following tasks until no support is available for the final learning task or tasks within the task class. Section 4 discusses part-task sequencing. In rare cases, finding a task class simple enough to start the training may be impossible. Then, it may be necessary to break the complex skill into meaningfully interrelated clusters of constituent skills (i.e., the 'parts') that can be addressed in the training program. Section 5 discusses the realization of individualized learning trajectories. A structure of task classes, each containing learning tasks with different levels of support and guidance, serves as a task database. Based on performance assessments, tasks that best fit the learning needs of individual learners are then selected from this database. It is also possible, and may be desirable, to teach learners the self-directed learning skills necessary for selecting suitable learning tasks (through what is known as 'second-order scaffolding'). The chapter concludes with a summary.

6.2 Whole-Task Sequencing of Learning Tasks


Whole-task sequencing ensures learners work on whole tasks in each task class: Task classes only differ in complexity. This is distinct from part-task sequencing, where learners sometimes work on parts of the whole task, and task classes may differ in the parts practiced (see Section 6.4). The Ten Steps strongly advocates a whole-task approach to sequencing. This approach is based on the premise that learners should quickly acquire a complete view of the whole skill, which is progressively enhanced during training. Ideally, even the first task classes consist of examples of the least complex versions of whole real-world tasks that experts encounter. Each new task class contains learning tasks in the learner's zone of proximal development (Vygotsky, 1978): "the distance between the actual development level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers" (p. 86). This provides learners the best opportunities to pay attention to integrating all skills, knowledge, and attitudes and the necessary coordination of constituent skills. It is similar to the global-before-local skills principle in Cognitive Apprenticeship Learning (Gessler, 2009) and the zoom lens metaphor in Reigeluth's Elaboration Theory (2007; see Figure 6.1). If you study a picture through a camera's zoom lens, you will usually start with a wide-angle view, which provides you with the whole picture, its main parts, and the relationships between them, but without any detail. Zooming in on different parts of the picture allows you to see more detail of the subparts and the relationships between those subparts. In the Ten Steps, a continuing process of zooming in and out of the whole picture allows the learner to gradually progress to the desired levels of detail and breadth. For example, for diagnosing a car vibration problem, an initial step involves determining whether the vibration can be felt or heard when stationary, moving, or both. If it is while moving, the next step is determining whether a specific speed or condition initiates the vibration or causes it to stop. Then, you continually zoom in and out to look at each specific part, such as the frame, the power train, the exhaust system, and so forth, until the problem has been located. After that, the problem needs to be solved.

Figure 6.1 The zoom lens metaphor in the author’s garden.



The next subsections discuss three whole-task methods for sequencing simple-to-complex task classes. The simplifying conditions method identifies conditions that simplify task performance and describes each task class in terms of whether those conditions are present or not. The emphasis manipulation method identifies sets of constituent skills that may be emphasized or de-emphasized during training and includes an increasing number of sets of emphasized skills in later task classes. Finally, knowledge progression methods base a sequence of task classes on the results of in-depth task and knowledge analyses.

Simplifying Conditions

In the simplifying conditions approach, the learner practices the execution of all constituent skills of a task at the same time, but the conditions under which those skills are trained change and gradually become more complex during the training (also called 'simplifying assumptions' or 'complexity factors'; for examples, see Haji et al., 2015; Van Geel et al., 2019). The first task class is representative of the simplest/least complex version of whole tasks that professionals might encounter in the real world (i.e., an epitome or "overview containing the simplest and most fundamental ideas," Reigeluth, 1987, p. 248); the final task class represents all real-life tasks that the learners must be able to perform after the training. Chapter 2 illustrated this approach for 'producing video content.' We can identify the following simplifying conditions for this skill:

Length of the desired video

Shorter videos are generally, but not always, easier to produce, requiring
less content and editing. Longer videos require more planning and content
creation. They may involve more detailed scripting and editing. Producing
longer videos, such as documentaries, is complex due to extensive planning,
production, and postproduction work.

Project goal

The overarching goal of the project significantly influences the complexity of video production. Creating aftermovies or event recaps is generally considered the easiest, as they involve assembling footage already captured during an event without needing intricate planning. Promotional content, such as advertisements, introduces more complexity, as it aims to showcase a product or service to engage the audience and encourage them to purchase it. Informative videos require thorough research and clear communication of often intricate concepts. Collaboration with subject-matter experts may be necessary, and these videos may incorporate animations or data visualizations to enhance understanding. Finally, documentaries demand extensive research and intricate storytelling and often delve into sensitive or complex topics. Their production requires meticulous planning and execution to achieve a compelling narrative.

Location

Controlled, indoor settings are generally easier to work with, as they usually offer more predictable conditions. Outdoor shoots can be challenging due to weather, lighting, and environmental variables that are harder to control.

Available time

When there is plenty of time, there is room for multiple takes, adjustments, and successive 'tweaking,' resulting in a smoother and less stressful production process. A tight schedule limits the number of takes and adjustments, potentially affecting the final quality of the production.

Participant dynamics

When participants have prior on-camera experience, collaboration tends to be smoother, facilitating an easier production process. If participants lack on-camera experience but are receptive to coaching, the complexity is moderate, as support and coaching can be provided. However, dealing with young children, animals, non-native speakers, individuals with special needs, or inexperienced and nervous participants adds additional complexity.

These simplifying conditions indicate the lower and upper limits of the progression of simple-to-complex task classes. The simplest task class would contain learning tasks requiring learners to produce a short recap or aftermovie summarizing the atmosphere at an event, shot in indoor locations with plenty of time and without the need for interviews or interaction with participants. The most complex task class includes learning tasks requiring learners to produce long videos, such as documentaries, in unpredictable locations and circumstances, in limited time, and with challenging participant dynamics (e.g., inexperienced individuals, high-profile figures, animals). We can then define a limited number of task classes within these limits by varying one or more simplifying conditions. For an example of task classes, see the preliminary training blueprint in Table 6.2 later in this chapter.
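Seen this way, a task class is little more than one particular setting of the simplifying conditions. The sketch below makes this explicit for the video example; the attribute names and value labels are our own illustrative paraphrases of the conditions above, not an official encoding:

    # Sketch: a task class as one setting of the simplifying conditions
    # for 'producing video content'. All value labels are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TaskClass:
        video_length: str    # '1-3 min' ... '>10 min'
        project_goal: str    # 'recap', 'promotional', 'informative', ...
        location: str        # 'indoor', 'indoor/outdoor', 'outdoor'
        available_time: str  # 'ample' or 'limited'
        participants: str    # 'none', 'favorable', 'challenging'

    simplest = TaskClass("1-3 min", "recap", "indoor", "ample", "none")
    most_complex = TaskClass(">10 min", "documentary", "outdoor",
                             "limited", "challenging")
    # Intermediate task classes are defined by varying one or more
    # conditions between these two extremes (cf. Table 6.2).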
Consider the highly complex skill of patent examination as another example. Patent examiners follow a two-year training program before they can deal with all common types of patent applications. For new patent applications, they must first prepare a 'search report.' They carefully analyze the application, search vast databases for already granted related patents, and enter the results of this examination in a search report. If the report reveals the presence of similar patents, the examiner advises the applicant to end the application procedure. Otherwise, they conduct a 'substantive examination' and discuss necessary changes to the patent application with the applicant. This process eventually leads to either granting or rejecting the application. Several factors, including the following simplifying conditions, influence the complexity of patent examination:

• Clarity of the patent application (clear or unclear).
• Complexity of the claims made in the application (one single, independent claim or several related and interdependent claims).
• Need to analyze replies from the applicant (absent or present).
• Necessity for intermediate revision during the examination process (absent or present).

Given these simplifying conditions, the first task class might consist of learning tasks that require learners to handle a clear application involving a single, independent claim with one clear and complete reply from the applicant and no need for intermediate revision during the examination process. The final task class would consist of learning tasks that require learners to handle an unclear application involving several claims with multiple interdependencies and many unclear and incomplete replies from the applicant, with a need for intermediate revisions during the examination process. As shown in Table 6.1, additional task classes with intermediate complexity levels may be added between these two extremes by varying one or more of the simplifying conditions (for the constituent skills involved, see also Figure 6.4 later in this chapter).

Emphasis Manipulation

In emphasis manipulation, learners perform the whole task from the beginning to the end, but different task classes emphasize different sets of constituent skills. This is also called 'changing priorities'; for an example, see Frèrejean et al. (2016). This approach allows learners to focus on the emphasized aspects of the task without losing sight of the whole task. Learners also experience and/or learn the costs of carrying out the de-emphasized aspects of the task. For example, when teaching medical students to examine a patient, particular phases of the program may emphasize specific diagnostic skills. This not only helps them further develop their diagnostic skills but also allows them to experience the costs of developing other skills, because their interpersonal and communication skills may suffer from the emphasis on diagnostic skills. The emphasis manipulation approach involves emphasizing and de-emphasizing different (sets of) constituent skills during training, requiring learners to manage their priorities and change the focus of their attention accordingly. This approach is expected to facilitate the development of cognitive schemata, enabling learners to better coordinate the constituent skills involved. In contrast to the simplifying conditions approach, which reduces complexity by simplifying tasks, the emphasis manipulation approach exposes learners to the full complexity of the whole task throughout the training, with attention directed to a subset of skills. This often makes emphasis manipulation less useful for defining early task classes for highly complex tasks.

Table 6.1 Example of task classes for training the highly complex skill of patent examination.

Task Class 1
Learning tasks that require learners to handle a clear application involving a single independent claim with one clear and complete reply from the applicant and no need for intermediate revision during the examination process. Learners must prepare the search report and carry out the substantive examination.

Task Class 2
Learning tasks that require learners to handle a clear application involving a single, independent claim with many unclear and incomplete replies from the applicant and a need for intermediate revisions during the examination process. Learners must prepare the search report and carry out the substantive examination.

Task Class 3
Learning tasks that require learners to handle an unclear application involving several claims with multiple dependencies and many unclear and incomplete replies from the applicant but no need for intermediate revisions during the examination process. Learners must prepare the search report and carry out the substantive examination.

More task classes may be inserted, as necessary.

Task Class n
Learning tasks that require learners to handle an unclear application involving several claims with multiple dependencies and many unclear and incomplete replies from the applicant and with a need for intermediate revisions during the examination process. Learners must prepare the search report and carry out the substantive examination.
The success of the emphasis manipulation approach relies heavily on a well-chosen sequence of task classes in which different sets of constituent skills are emphasized and de-emphasized. Gopher et al. (1989) propose emphasizing (sets of) constituent skills that meet specific criteria: (a) they are inherently complex and demanding for learners; (b) their application results in marked changes in the style of performance for the whole task; and (c) they are diverse enough to cover all of the different aspects that need emphasis. For example, in teacher training programs where student-teachers learn to prepare and give lessons (i.e., the whole task), relevant constituent skills are, for example, presenting subject matter, questioning learners, leading group discussions, and so forth. Four possible task classes based upon the emphasis manipulation approach would be to:

1. Prepare and then present lessons, focusing on the presentation of the subject matter.
2. Prepare and then present lessons, focusing on questioning students.
3. Prepare and then present lessons, focusing on initiating, maintaining, and leading group discussions.
4. Prepare and present lessons, focusing on all of the above.

Note that the emphasis manipulation approach typically assumes that each subsequent task class is inclusive, meaning there is a logical order or progression of tasks. In other words, once learners have learned to prepare and present a lesson with a subject-matter focus, they will continue to do this in the next task class, where the focus is on questioning (i.e., it is impossible to prepare and present a lesson without content!).

Another example, but this time without such a logical order, is the replacement of jet engines (see Figure 6.2). This is a complex but straightforward process with few simplifying conditions. One can hardly ask a maintenance person to replace a jet engine but leave out some of its parts! In this case, emphasis manipulation may be a good alternative. The main phases in the process are removing the old engines; testing electrical, fuel, and mechanical connections to ensure that they can properly supply the new engines; installing the new engines; and running a test program.

Figure 6.2 Replacing jet engines.

A training program for aircraft maintenance could use the following task classes:

1. Replace engines, focusing on safety issues.
2. Replace engines, focusing on efficient tool use.
3. Replace engines, focusing on accuracy and speed.
4. Replace engines, focusing on costs.
5. Replace engines, focusing on all the above.

Knowledge Progression

Learning tasks within one task class are always equivalent in that they all draw upon the same body of knowledge for carrying them out. More complex task classes require more detailed or enriched knowledge than simpler ones. Consequently, each task class can be characterized by its underlying body of knowledge, or, conversely, a progression of 'bodies of knowledge' can be used to define or refine task classes. However, this requires an in-depth task and knowledge analysis. One approach involves analyzing a progression of cognitive strategies. These strategies specify how to effectively approach tasks in the domain. The progression starts with simple, straightforward strategies and ends with complex, more elaborate ones. Each cognitive strategy in the progression then defines a task class containing learning tasks that can be carried out by applying that cognitive strategy at the given level of specification. See Chapter 8 for further discussion and examples. Another approach entails analyzing a progression of mental models that specify the organization of the domain. Again, the progression starts with rudimentary, basic models and proceeds toward highly detailed ones. Each mental model is used to define a task class containing learning tasks that can be solved by reasoning based on the associated mental model. More details and examples are provided in Chapter 9.

In conclusion, it is important to highlight that different methods for whole-task sequencing can easily be combined. Usually, simplifying conditions is tried out first. If it is hard to find sufficient simplifying conditions, either emphasis manipulation or a combination of simplifying conditions with emphasis manipulation may be applicable. For instance, the first task class for training patent examination (see the top row in Table 6.1) may be further divided into simpler task classes by emphasizing the analysis of applications, carrying out searches, recording pre-examination results, performance of substantive examinations, and, finally, all those aspects at the same time (and in that order). Knowledge progression is suitable if in-depth task and knowledge analyses have been conducted in Steps 5 and 6. If these analysis results are available, they are particularly useful for refining an already existing global description of task classes.

6.3 Task Classes and Learner Support


Built-in task support and problem-solving guidance should decrease as
learners acquire more expertise (see Step 1 in Chapter 4). This pattern of
diminishing support and guidance (i.e., scafolding) repeats in each task
class. Once the task classes are defned, the designer can classify any exist-
ing learning tasks accordingly and/or develop additional learning tasks.
Such a clear specifcation of task classes is very helpful for fnding real-life
tasks serving as the basis for learning tasks. It is essential to have a sufcient
number of diferent learning tasks for each task class to ensure that learners
can practice the tasks until mastery (i.e., until they meet the standards) and
are confronted with a set of varied tasks (i.e., variability of practice) before
they continue to a subsequent task class with more complex learning tasks.
The descriptions of the task classes may guide the process of fnding useful
real-life tasks and developing (additional) learning tasks. One could, for
example, specifcally ask an experienced video-content producer to come
up with concrete examples of successful 3- to 5-minute promotional vid-
eos recorded in indoor locations with plenty of time and favorable partici-
pant dynamics (i.e., tasks that ft the second task class in Table 6.2, noted
later). In general, a clear specifcation of task classes is very helpful for
fnding appropriate real-life tasks that serve as the basis for learning tasks.
Repeating the diminishing support and guidance pattern in each task
class yields a saw-tooth pattern of support and guidance throughout the
training program (see Figure 6.3).

Figure 6.3 Training program with a typical saw-tooth pattern of support.

Combined with the 'variability of practice' principle (see Chapter 4), this results in a training blueprint consisting of (a) simple-to-complex task classes, (b) a varied set of learning tasks within each task class, and (c) learning tasks with high support and guidance at the beginning of each task class and without support and guidance at the end. Table 6.2 illustrates this basic structure for the video content production example.
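The saw-tooth in Figure 6.3 can be described with a single line of arithmetic: support restarts at its maximum with every new task class and fades to zero within the class. A hypothetical Python sketch, with task counts loosely following Table 6.2:

    # Sketch of the saw-tooth pattern: within each task class, support
    # fades linearly from full (1.0) to none (0.0). Counts are invented.
    tasks_per_class = [4, 4, 5, 4]          # cf. Table 6.2

    for tc, n in enumerate(tasks_per_class, start=1):
        for t in range(n):
            support = 1.0 - t / (n - 1)     # linear fade within the class
            print(f"Task {tc}.{t + 1}: support = {support:.2f}")

In practice, of course, the fade is realized by the type of task (worked-out example, completion task, conventional task) and guidance (modeling, tutoring, process worksheets, none) rather than by a number.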

Table 6.2 A preliminary training blueprint: Task classes and learning tasks with
decreasing support in each task class for the complex skill ‘producing
video content’.
Task Class 1: Learners produce videos for fictional clients under the following conditions:
• The video length is 1–3 minutes
• The clients desire aftermovies or event recaps, summarizing the
atmosphere at an event
• Locations are indoors
• There is plenty of time for the recording
• No interaction with other on-camera participants
Learning Task 1.1
Support: Worked-out example
Guidance: Performance constraints
Learners receive a production plan, intermediate footage, and the final video
of an existing aftermovie. They evaluate the quality of each aspect, but
their evaluations must be approved before they can continue with the next
aspect.
Learning Task 1.2
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and intermediate footage. They must
select the footage and edit the video into the final product. A tutor guides
learners in studying the given materials and using the postproduction
software.
Learning Task 1.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert created a recap
video for an (indoor) automotive show. In groups, students imitate this but
for a local exposition.
Learning Task 1.4
Support: Conventional task
Guidance: None
Learners create an individual recap video for an indoor event of their
choosing.

Task Class 2: Learners produce videos for fictional clients under the
following conditions:
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to
work with)
Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video
promoting a new coffee machine. They follow a process worksheet to
record footage and create the final video.
Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of
artificial intelligence. A tutor helps them work backward to explain
critical decisions in the production phase and develop a storyboard that
fits the video and meets the client’s requirements.
Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a
short social media advertisement video for a small online clothing store.
Learners remake the ad for a small online art store.
Learning Task 2.4
Support: Conventional task
Guidance: Tutoring
Under guidance from a tutor, learners create a promotional video
highlighting the products or services of a local store.

Task Class 3: Learners produce videos for fictional clients under the following conditions:
• The video length is increased to 5–10 minutes
• The clients desire informational or educational videos
• Locations are indoor or outdoor
• There is plenty of time for the recording
• Participant dynamics are more challenging (e.g., inexperienced/nervous
participants)
This task class employs the completion strategy.
Learning Task 3.1: Modeling example
Support: Worked-out example
Guidance: Modeling
Learners observe an expert thinking aloud while working outdoors with
experienced and inexperienced cyclists to create an informational video
about safe cycling.
Learning Task 3.2
Support: Completion task
Guidance: Process worksheet
Learners receive a production plan and footage with bad takes and good
takes. They must select good takes and edit them into a video informing
patients about a medical product. A process worksheet provides guidance.
Learning Task 3.3
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and footage of an expert (i.e., actor)
explaining content but showing nervousness and making mistakes. Learners
must reshoot the footage with the actor, coaching them to arrive at the
desired result and finish the final video.
Learning Task 3.4
Support: Completion task
Guidance: Performance constraints
Learners receive a synopsis for a training video on a construction site and
must write the script, record the footage, and create the final video. Each
step requires approval from a teacher before they can continue.
Learning Task 3.5
Support: Conventional task
Guidance: None
An expert in home organization and decluttering with no on-camera
experience wants an explainer video. Learners carry out all phases to
create the final video for the client.

Task Class 4: Learners produce videos for fictional clients under the following conditions:
• The video length is longer than 10 minutes
• The clients desire documentaries or interview videos
• Locations can be outdoors in bad weather
• There is limited time for the recording
• Participant dynamics are challenging (e.g., interviewing people, working
with animals)
Learning Task 4.1
Support: Worked-out example
Guidance: Tutoring
Learners study the production plans and completed videos of three
documentaries. A tutor facilitates a group discussion about recording
outdoors, storytelling, interviewing techniques, etc.
Learning Task 4.2
Support: Nonspecific goal task
Guidance: Tutoring
Learners receive a script for a documentary about a historic outdoor
location. They visit the site and simulate various challenging situations,
such as unexpected weather conditions, breaking equipment, etc. Learners
must develop approaches for dealing with such challenges.
Learning Task 4.3
Support: Conventional task
Guidance: Process worksheet
Learners create a 15-minute documentary about a farmer, requiring them to
interview, work with animals, and record outdoors. They receive a process
worksheet for guidance.
Learning Task 4.4
Support: Conventional task
Guidance: None
Learners create a 30-minute documentary on a topic of their choosing.

6.4 Part-Task Sequencing of Learning Tasks


For most training programs, whole-task sequencing works well and results
in a preliminary training blueprint such as those presented in Tables 6.1 and
6.2. However, in exceptional cases, it might not be possible to fnd a task
class simple enough to start the training, and it would take quite a bit of
preparation before learners can even start to work on the frst learning task.
Examples of such training situations are complete educational programs
(i.e., curricula) for doctors, pilots, or lawyers. Then and only then is part-
task sequencing of learning tasks used to supplement whole-task sequenc-
ing. You may skip this section if you are not dealing with such an exceptional
case. Part-task sequencing involves deciding the order in which (clusters of)
constituent skills (i.e., parts) will be addressed in the instruction. Part-task
sequencing is very efective for reducing task complexity but may hinder the
Step 3: Sequence Learning Tasks 139

integration of knowledge, skills, and attitudes and limits the opportunities


to learn to coordinate the constituent skills (Wickens et al., 2013). There-
fore, it should be used sparingly and with great care.

Skill Clusters as Parts

If subject-matter experts in patent examination indicate that it will take many weeks to prepare new trainees for even the first task class indicated in Table 6.1 (i.e., clear applications with a single claim, a clear and complete reply of the applicant, and no need for intermediate revision), part-task sequencing may be necessary. In this scenario, the instructional designer first identifies a small number of skill clusters, usually between two and five. These skill clusters are sets of meaningfully interrelated constituent skills. A small number of skill clusters or 'parts' provides better opportunities for integrating and coordinating knowledge, skills, and attitudes. The chosen clusters must allow learners to start practicing within a reasonable timeframe (e.g., in the order of hours or days), and each cluster must reflect an authentic, real-life task. For example, three meaningfully interrelated parts of 'examining patent applications' (see Figure 6.4) are the branches 'preparing search reports' (called, for generalization, part A instead of branch A), 'issuing communications or votes' (part B), and 're-examining applications' (part C).

Figure 6.4 Three skill clusters for the highly complex skill 'examining patent applications.' Part A consists of 'preparing search reports' and lower-level skills; part B consists of 'issuing communications or votes'; and part C consists of 're-examining applications' and lower-level skills.

Forward and Backward Chaining with and without Snowballing

There are essentially two basic approaches to part-task sequencing: forward


chaining and backward chaining. Forward chaining deals with parts in the same
natural-process order in which they occur during normal task performance. It
may take two forms; namely, with and without snowballing (building upon the
previous and, thus, growing larger as with a snowball that rolls down a snow-
covered hill and gathers more and more snow as it rolls). Simple forward chain-
ing (A-B-C) deals with the parts one by one. In our example, one would start the
teaching with ‘preparing search reports’ (part A), then continue the task with
‘issuing communications or votes’ (part B), and end with ‘re-examining appli-
cations’ (part C). Note that learners never practice the whole task. When they
practice ‘issuing communications or votes,’ they will typically do so based on the
reports they prepared in the previous phase. When they practice ‘re-examining
applications,’ they will typically do so based on the reports and communica-
tions or votes they prepared in the previous two phases. Forward chaining with
snowballing (A-AB-ABC) includes the previous part(s) in each new part. In our
example, one would start with teaching ‘preparing search reports’ (part A), then
continue with ‘preparing search reports’ plus ‘issuing communications or votes’
(part AB), and would end with ‘preparing search reports’ plus ‘issuing commu-
nications or votes’ plus ‘re-examining applications’ (ABC, which is the whole
task). Snowballing generally increases the time necessary, but this investment is
usually more than worthwhile because it provides better opportunities for inte-
gration and learning to coordinate the different skill clusters that serve as parts.
In contrast, backward chaining deals with the parts of a whole task in the
reverse order, counter to the order of normal task performance. Like forward
chaining, it may take two different forms. Simple backward chaining
(C_AB-B_A-A) deals with the parts one by one. In our example, one would
start with teaching 're-examining applications' (part C_AB), then continue
with 'issuing communications or votes' (part B_A), and end with 'preparing
search reports' (part A). As in simple forward chaining, learners never practice
the whole task, but they can only start with part C if they receive ready-made
search reports and communications/votes because they have not yet prepared
them—that is to say, carried out those parts—themselves. The subscripts
(written here after an underscore) indicate this: C_AB indicates that the
learners 're-examine applications' based on search reports plus communications/votes
given by the instructor or included in the training program. B_A
indicates that they 'issue communications/votes' based upon search reports
given by the instructor or included in the training program. Backward
chaining with snowballing (C_AB-BC_A-ABC) includes the previous part(s)
in each new part. In our example, the training would start with 're-examining
applications' based on ready-made search reports plus communications/votes
(part C_AB), continue with 'issuing communications/votes' plus 're-examining
applications' based on ready-made search reports (part BC_A), and end
with 'preparing search reports' plus 'issuing communications/votes' plus
're-examining applications' (ABC, which is the whole task). Table 6.3 summarizes
the four part-task sequencing techniques and presents guidelines for their use.
The guidelines in Table 6.3 are based on two principles. First, sequencing
with snowballing is more effective than sequencing without snowballing
because it allows learners to practice the whole task (i.e., ABC) and thus
helps them integrate and learn to coordinate the different task-parts.
Consequently, snowballing is preferred if available instructional time allows.
Second, backward chaining is more effective than forward chaining because
it confronts learners with useful examples and models of the whole task
right from the start of the program. For instance, if learners first practice
the re-examination of patent applications based on given search reports and
communications/votes, they will have seen and studied many useful examples
of search reports and communications/votes by the time they start to practice
preparing those documents themselves. These principles do not apply to
sequencing practice items in part-task practice (see Step 10 in Chapter 13).
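
Because the four chaining orderings are purely mechanical, they are easy to
express in code. The following sketch is our illustration (in Python); the
function names and the encoding of each step as a tuple of (practiced parts,
given parts) are assumptions, not part of the Ten Steps:

def simple_forward(parts):
    # A-B-C: each part on its own, in natural-process order; nothing is given.
    return [([p], []) for p in parts]

def forward_snowballing(parts):
    # A-AB-ABC: each new step repeats all previous parts and adds the next one.
    return [(parts[:i + 1], []) for i in range(len(parts))]

def simple_backward(parts):
    # C_AB-B_A-A: reverse order; the outputs of all earlier parts are given.
    return [([parts[i]], parts[:i]) for i in reversed(range(len(parts)))]

def backward_snowballing(parts):
    # C_AB-BC_A-ABC: reverse order with snowballing; the not-yet-practiced
    # earlier parts are given, until the last step covers the whole task.
    return [(parts[i:], parts[:i]) for i in reversed(range(len(parts)))]

print(backward_snowballing(['A', 'B', 'C']))
# [(['C'], ['A', 'B']), (['B', 'C'], ['A']), (['A', 'B', 'C'], [])]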

Table 6.3 Part-task sequencing techniques with guidelines for usage.

Technique                            Sequence        Guidelines for use
Simple forward chaining              A-B-C           Do not use this for sequencing learning
                                                     tasks. Use it only for sequencing practice
                                                     items in part-task practice and if
                                                     instructional time is severely limited
                                                     (see Chapter 13).
Forward chaining with snowballing    A-AB-ABC        Do not use this for sequencing learning
                                                     tasks. Use it only for sequencing practice
                                                     items in part-task practice (see Chapter 13).
Simple backward chaining             C_AB-B_A-A      Use this only for sequencing learning tasks
                                                     if instructional time is limited and if it is
                                                     impossible to find whole tasks that are
                                                     simple enough to start the training with.
Backward chaining with snowballing   C_AB-BC_A-ABC   The default strategy for sequencing learning
                                                     tasks if it is impossible to find whole tasks
                                                     that are simple enough to start the
                                                     training with.

Note: Use these techniques only when it is exceptionally difficult to find simple whole
tasks to begin with.

Several studies have shown that backward chaining with snowballing can
be a very effective sequencing strategy. In an early study, Gropper (1973)
used it to teach instructional systems design. Initially, learners learn to try
out and revise instructional materials (i.e., traditionally, the last part of the
instructional design process). They are given model outputs for design
tasks ranging from task descriptions to materials development. In subsequent
stages, students learn to design and develop instructional materials.
The strategy proved very effective because learners had the opportunity to
inspect several model products before being required to perform tasks such
as strategy formulation, sequencing, or task analysis themselves. Along the
same lines, Van Merriënboer and Krammer (1987) described a backward-chaining
approach with snowballing for teaching computer programming.
Initially, learners evaluated existing software designs and computer programs
through testing, reading, and hand tracing (i.e., traditionally, the last part of
the development process). In the second phase, they modified, completed,
and scaled up existing software designs and computer programs. Only in the
third phase did they design and develop new software and computer programs
from scratch. The strategy was much more effective than traditional
forward chaining, probably because learners could better base their performance
on the many models and example programs they had encountered in
the earlier phases.

Whole-Part Versus Part-Whole Sequencing

Instructional designers can combine whole- and part-task sequencing in two
ways: whole-part and part-whole sequencing (see Figure 6.5).

Figure 6.5 A schematic representation of a regular whole-task sequence, a
whole-part sequence, and a part-whole sequence. Smaller circles
indicate the 'parts' of the whole task.

A whole-part sequencing approach involves developing a sequence of
simple-to-complex task classes with whole tasks, using simplifying conditions,
emphasis manipulation, and/or knowledge progression (for examples, see
Mulder et al., 2011; Si & Kim, 2011). If the first task class is still too complex
to start the training, designers can use part-task sequencing techniques
to divide this and other task classes into skill clusters or parts ranging from
simple to complex. The basic idea is to start with a simple-to-complex
sequence of whole tasks before dividing them into parts. In contrast, a
part-whole sequencing approach develops a sequence of simple-to-complex
parts or skill clusters first. If the first part or skill cluster is too complex to
begin training, designers can use whole-task sequencing techniques to sequence
the parts further into simple-to-complex task classes. Please note that the term
whole-task sequencing is somewhat confusing here because it pertains to
one part or skill cluster being treated as a whole task.
Table 6.4 compares whole-part task sequencing with part-whole sequencing.
There are three simple-to-complex task classes (ABC: wholes), three
simple-to-complex skill clusters based on backward chaining with snowballing
(C_AB, BC_A, ABC: parts), and three learning tasks with high, low, or no
support for each task class–skill cluster combination. A clear advantage of
whole-part sequencing over part-whole sequencing is that learners in a
whole-part sequence get the opportunity to practice the whole task (the cells
marked 'whole task' in Table 6.4) relatively quickly. This should facilitate
integration and coordination. Also, a whole-part sequence makes it possible
to easily switch from a whole-part approach to a genuine whole-task approach
later in the training program. For example, designers might use a whole-part
approach in the first task class but then switch to a whole-task approach by
deleting the cells C_AB^ms and BC_A^mm from the second task class and
C_AB^cs and BC_A^cm from the third task class. Such a switch is not possible
in a part-whole sequence.
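
The cell structure of Table 6.4 is equally mechanical. As a further illustration
(reusing backward_snowballing from the earlier sketch; the encoding is again
our assumption), the following generates the whole-part cells and shows the
switch to a whole-task approach by deleting the part cells from the later task
classes:

def whole_part_cells(parts, codes=('s', 'm', 'c'),
                     supports=('high', 'low', 'no')):
    # Cross task-class complexity with the snowballed skill clusters and the
    # three levels of support; the codes mimic the superscripts of Table 6.4.
    clusters = backward_snowballing(parts)
    cells = []
    for tc in codes:                               # task classes ^s, ^m, ^c
        for cc, (practiced, given) in zip(codes, clusters):
            for support in supports:
                cells.append((tc + cc, practiced, given, support))
    return cells

# Whole-task approach after the first task class: keep only cells in which no
# parts are given, i.e., cells in which the whole task ABC is practiced.
cells = [c for c in whole_part_cells(['A', 'B', 'C'])
         if c[0][0] == 's' or not c[2]]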
To recapitulate, whole-part task sequencing techniques provide better
opportunities to teach coordination and reach integration than part-whole
task sequencing techniques and are the preferred approach to sequencing
complex learning tasks. Only use part-whole task sequencing techniques for
skills that are difficult to learn but require little coordination of constituent
skills, such as highly complicated recurrent constituent skills (see Step 10 in
Chapter 13 for examples).

Table 6.4 A comparison of whole-part sequencing, from task classes to skill
clusters, and part-whole sequencing, from skill clusters to task classes. Skill
clusters are based on backward chaining with snowballing.

Whole-part sequencing (from task classes to skill clusters):
  Task class ABC^s
    Skill cluster C_AB^ss: learning tasks with high, low, and no support
    Skill cluster BC_A^sm: learning tasks with high, low, and no support
    Skill cluster ABC^sc (whole task): learning tasks with high, low, and no support
  Task class ABC^m
    Skill cluster C_AB^ms: learning tasks with high, low, and no support
    Skill cluster BC_A^mm: learning tasks with high, low, and no support
    Skill cluster ABC^mc (whole task): learning tasks with high, low, and no support
  Task class ABC^c
    Skill cluster C_AB^cs: learning tasks with high, low, and no support
    Skill cluster BC_A^cm: learning tasks with high, low, and no support
    Skill cluster ABC^cc (whole task): learning tasks with high, low, and no support

Part-whole sequencing (from skill clusters to task classes):
  Skill cluster C_AB^s
    Task class C_AB^ss: learning tasks with high, low, and no support
    Task class C_AB^sm: learning tasks with high, low, and no support
    Task class C_AB^sc: learning tasks with high, low, and no support
  Skill cluster BC_A^m
    Task class BC_A^ms: learning tasks with high, low, and no support
    Task class BC_A^mm: learning tasks with high, low, and no support
    Task class BC_A^mc: learning tasks with high, low, and no support
  Skill cluster ABC^c
    Task class ABC^cs (whole task): learning tasks with high, low, and no support
    Task class ABC^cm (whole task): learning tasks with high, low, and no support
    Task class ABC^cc (whole task): learning tasks with high, low, and no support

Notes:
Superscripts (^) refer to the complexity of task classes or learning tasks: ss = simple
task class and simple skill cluster; sm = simple task class and medium skill cluster,
or vice versa; mc = medium task class and complex skill cluster, or vice versa; and
so forth.
Subscripts (_) refer to the output of previous skills that is given to the learner:
C_AB = perform C based on given output from A and B, and BC_A = perform B and C
based on given output from A.
Cells marked 'whole task' are those in which learners practice the whole task ABC.

Table 6.5 provides an example of a whole-part task sequencing approach
for teaching patent examination. It starts with two task classes identical to
those presented in Table 6.1. In the first task class, learners deal with clear
applications, single independent claims, clear and complete replies, and no
need for intermediate revisions. In the second task class, they still deal with
clear applications and single independent claims, but now, there are unclear
and incomplete replies from the applicant and a need
for intermediate revisions during the examination process. Each task class
is now further divided into three subclasses representing parts of the task
that are carried out in the following order: Learners (a) re-examine applications
based on given search reports and communications/votes; (b) issue
communications/votes and re-examine applications based on given search
reports; and (c) prepare search reports, issue communications/votes, and
re-examine applications. In this approach, each task class ends with the
whole task at an increasingly higher level of complexity.

Table 6.5 Two task classes (wholes) with three skill clusters (parts) each for
the examination of patents, based on a whole-part sequence and
backward chaining with snowballing for the parts.

Task Class 1: Learning tasks that require learners to handle a clear
application involving a single independent claim with one clear and complete
reply from the applicant and no need for intermediate revision during the
examination process.
  Skill Cluster C_AB—Task Class 1.1: Learners have to re-examine applications
  based on given search reports and communications or votes.
  Skill Cluster BC_A—Task Class 1.2: Learners have to issue communications or
  votes and re-examine applications based on given search reports.
  Skill Cluster ABC—Task Class 1.3: Learners have to prepare search reports,
  issue communications or votes, and re-examine applications.

Task Class 2: Learning tasks that require learners to handle a clear
application involving a single independent claim with many unclear and
incomplete replies from the applicant and a need for intermediate revisions
during the examination process.
  Skill Cluster C_AB—Task Class 2.1: Learners have to re-examine applications
  based on given search reports and communications or votes.
  Skill Cluster BC_A—Task Class 2.2: Learners have to issue communications or
  votes and re-examine applications based on given search reports.
  Skill Cluster ABC—Task Class 2.3: Learners have to prepare search reports,
  issue communications or votes, and re-examine applications.

Add additional task classes/skill clusters as needed.

6.5 Individualized Learning Trajectories


The previous sections described several techniques for sequencing learning
tasks, from simple to complex task classes, with support and guidance in each
task class ranging from high to low. The resulting sequence can serve as a uni-
form training blueprint, where all learners receive the same sequence of learn-
ing tasks (i.e., one-size-fits-all). This approach is practical and cost-effective
for teaching and/or training programs with homogeneous groups of learners.
This, however, is not always the case. An alternative strategy involves devel-
oping a task database containing a wide array of learning tasks with different
levels of support and guidance. This database allows a dynamic selection of
tasks that best fit the learning needs of individual learners (please refer back
to Figure 2.3) and the optimal amount and type of support and guidance.
The effect of this approach is that there is not one educational program for
all learners but that each learner has an individualized learning trajectory that
provides them with the best opportunities for performance improvement.
Figure 6.6 depicts the cyclical process necessary for individual learning
trajectories. The process starts with a learner receiving a learning task specifi-
cally chosen for them from the task database; that is, there is some form of
formative assessment before beginning. To enable this selection, each task
in the task database is equipped with metadata encompassing (a) context
features that enable variability of practice, such as descriptions of dimensions
on which tasks differ from each other in the real world; (b) task complex-
ity, describing the task class to which it belongs; and (c) level and type of
support/guidance such as task format (e.g., worked-out example, comple-
tion task, conventional task, etc.) and/or available guidance (e.g., available
process worksheets, adopted performance constraints). Furthermore, the
metadata includes the relevant performance standards for each task. After
completing a learning task, performance is assessed based on the relevant
standards. Assessment results are added to the development portfolio, which
contains an overview of all previously performed learning tasks and their

assessment results (see Step 2 in the previous chapter). Based on the information
in the development portfolio, a new task is selected from the task
database that best fulfills the learner's learning needs; that is, a task that
offers the best opportunity to work on identified points of improvement
and/or that offers further practice at an optimal level of complexity, support,
and guidance. After receiving the learning task, the learner works on
this new task, and the cycle begins again.

Figure 6.6 A cyclical model for the individualization of learning trajectories.
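
To make the metadata and the selection step concrete, the sketch below shows
one possible encoding (our illustration; the field names and the selection
heuristic are assumptions, not a prescribed format):

from dataclasses import dataclass

@dataclass
class LearningTask:
    task_class: int     # (b) complexity: the task class the task belongs to
    support: str        # (c) format: 'worked-out example', 'completion task',
                        #     or 'conventional task'
    guidance: list      # (c) e.g., ['process worksheet']
    context: dict       # (a) context features enabling variability of practice
    standards: list     # performance standards relevant to this task

def select_next(database, points_of_improvement, level, support):
    # 'points_of_improvement' is a set of standards not yet met. Pick a task
    # at the current level of complexity and support that lets the learner
    # work on as many identified points of improvement as possible.
    candidates = [t for t in database
                  if t.task_class == level and t.support == support]
    return max(candidates,
               key=lambda t: len(points_of_improvement & set(t.standards)))

A real system would additionally enforce variability of practice over the
context features and would derive the level and support from the assessment
results discussed below.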
The following subsections elaborate on three main elements of Figure 6.6:
(1) performance assessment, (2) the development portfolio, and (3) task
selection. First, for assessing performance on relevant standards, we will
argue that using different assessment formats and assessors benefits the
cyclical process. Second, for the development portfolio, protocol portfolio
scoring is discussed as a systematic approach to gathering assessment
results, storing them, and using them for task selection. Third, second-order
scaffolding is discussed as a systematic way to help learners develop the
self-directed learning skills necessary for fruitful task selection.

Assessment Formats and Assessors

Step 2, described in the previous chapter, involved developing scoring
rubrics for performance assessment. Yet, when using performance assessments
for selecting learning tasks, two further questions need to be answered:
(1) Which assessment methods should be used? and (2) Who will take care of
the assessments? Concerning the first question, there are several methods for
assessing task performance. Situational judgment tests (Lievens & De Soete,
2015), an approach often used in personnel selection, describe work-related
situations or case studies and require learners to choose a course of action
by responding to questions (e.g., What would you do first? What is the most
important action to take?). Simulation-based performance tests (Reigeluth
et al., 2012) require learners to perform the learning tasks in a simulation-based
setting. Work sample tests (Thornton & Kedharnath, 2013) require
learners to perform tasks equivalent to those performed on the job but not
in the job setting. And on-the-job performance assessments (Jelley et al., 2012)
observe learners' task performance under normal working conditions. There
are many other performance assessment methods, and a full discussion of
them falls beyond the scope of this book, but the point is that they all have
advantages and disadvantages (Baartman et al., 2006). Therefore, the Ten
Steps recommends using a mix of performance assessment methods, a natural
consequence of using different learning tasks with varying degrees of
support and guidance. This way, the disadvantages of particular assessment
methods are counterbalanced by the strengths of other methods.
Concerning the second question, important assessors include teachers,
instructors, and other experts in the task domain; clients, customers,
and other people served by the learners; and employers and responsible
managers. The whole group of assessors provides assessments (i.e., assessments
from all views = 360-degree assessment), taking different perspectives
on a learner's performance (Sluijsmans & Moerkerke, 1999). Mixing
assessors helps get a complete picture of the learner's level of performance.
It creates a strong basis for decision making, answering questions such as:
What are important points of improvement for this learner? Which learning
tasks should this learner do next? The selection of assessors may introduce
biases because they might be inclined, for example, to use only the positive
part of the scale so as not to compromise their relationship with the learner
or to avoid the extra work that is often contingent on giving more negative
evaluations (Bullock et al., 2009). To reduce biases, assessors should be
well trained in using the assessment instruments, assessment inconsistencies
should be discussed between assessors, and formative roles (e.g., coach)
should be separated from summative roles (e.g., examiner; see Chapter 15).
Another important assessor is the learner themself. The literature, however,
is clear about the quality of self-assessment: Learners are poor self-assessors
and often overconfident (Davis et al., 2006; Dunning et al., 2004; Eva &
Regehr, 2007; Pontes et al., 2018). A recent meta-study (León et al., 2023)
found that students significantly overestimate their learning (known as
Judgement of Learning, or JoL) but that this overestimation diminishes
when they receive feedback and possess greater self-assessment experience
and content knowledge, exactly what 4C/ID and the Ten Steps promote.
Further, those performing least well, as determined by external assessment
procedures, have also been found to self-assess less well. Nevertheless, at
least two strong arguments exist for including self-assessments. First, incorporating
learners in the assessment process helps them feel responsible for
their learning process and has been shown to positively affect enthusiasm
and motivation (Boud, 1995). Second, developing self-assessment skills is
important because they are critical to self-directed learning skills and lifelong
learning (Van Merriënboer et al., 2009). Whereas self-assessment can
never stand on its own and should always be triangulated with other assessment
information, relating self-assessments to information from others is
an effort that pays off in the long run because it stimulates the development
of self-directed learning skills.
A final group of possible assessors is formed by peer learners (Prins et al.,
2005; Sluijsmans et al., 2002a; Topping, 1998). Peer assessment can be seen
as a stepping stone from being assessed by others to assessing oneself
(self-assessment), as assessing others helps develop skills that are also
valuable for self-assessment (Van Zundert et al., 2010). Another, more
pragmatic reason for using peer assessment concerns the efficiency of the
assessment process. Frequent assessment of learning tasks is an extremely
labor-intensive job for a teacher or instructor and is impossible if there are
many students. Therefore, the Ten Steps promotes using self-assessments and
peer assessments to replace some of the assessments made by teachers and
others, provided that the balance does not tip in the direction of self- and
peer assessment (Könings et al., 2019; Sluijsmans et al., 2002b, 2004).

Protocol Portfolio Scoring

Protocol portfolio scoring (Sluijsmans et al., 2008) builds on the organization
of a development portfolio as a 'standards-tasks matrix' (see Figure 5.4
in the previous chapter). It uses a constant set of standards for assessing
performance, stores assessment results from various methods and assessors,
and allows for standard-centered and task-centered assessments. Table 6.6
provides an example of protocol portfolio scoring. The rows correspond to the
learning tasks.
The first four columns present the task class number, task number, task
assessment format (i.e., the kind of task and how it is assessed), and assessor.
The next eight columns relate to eight standards on which performance is
assessed. This involves standard-centered assessment, in which we read
the table vertically. The constant set of standards may refer to criteria, values,
and/or attitudes for both routine and problem-solving aspects of behavior.
In the example, standard-centered minimum scores are set to various levels,
such as 4.7 for the first aspect, 3.5 for the second, and 2.5 for the third,
using a 6-point scale (the specific content of these standards is irrelevant
here). Performance on each learning task is assessed on all aspects relevant
to the particular task. For example, in Table 6.6, assessor HK (A_HK) judges
the first standard on the first learning task and assigns a score of 3, which
falls below the minimum of 4.7 for that standard. This results in a negative
decision, indicating that this aspect requires improvement in subsequent
tasks. The learner self-assesses the first standard for the second learning
task and assigns a score of 5, yielding an average score of 4.0 over the
first two learning tasks. This moving average is still below the minimum
score of 4.7 for that standard, indicating the first aspect remains a point
for improvement. Assessor GS (A_GS) judges the first standard for
performance on the third learning task with a score of 6, yielding an average
score of 4.7 over the first three learning tasks. This moving average is now
equal to the minimum score, signifying there is no longer a need to consider
this aspect a point for improvement. The same assessment process applies to
the other seven standards of performance (i.e., the other seven columns in
Table 6.6). Standard-centered assessment results mainly guide the emphasis
or de-emphasis of specific performance aspects in selecting upcoming learning
tasks. They indicate which learning task or tasks would be most suitable
from the task database because they (a) require the application of standards
that have not yet been met and (b) do not require the application of standards
that have already been met.
Table 6.6 Example of a fictitious protocol portfolio scoring.

                                   Standard-centered standards (for 8 aspects)   Task-centered standards
                                   S1   S2   S3   S4   S5   S6   S7   S8         (over scored aspects)
Class Task Format   Assessor       4.7  3.5  2.5  3.5  3.5  3.5  4.0  4.5        Horizontal  Mean   Decision
                                   Score per aspect (maximum score = 6)          standard    score

1   1.1  WOE-MCT  A_HK  score      3    .    3    .    4    2    .    3
                        average    3.0  .    3.0  .    4.0  2.0  .    3.0        3.74        3.0    −
                        decision   −    .    +    .    +    −    .    −
1   1.2  COM-SJT  SA    score      5    2    1    3    3    4    2    .
                        average    4.0  2.0  2.0  3.0  3.5  3.0  2.0  3.0        3.71        2.8    −
                        decision   −    −    −    −    +    −    −    −
1   1.3  WOE-WST  A_GS  score      6    5    5    .    5    6    6    5
                        average    4.7  3.5  3.0  3.0  4.0  4.0  4.0  4.0        3.71        3.8    +
                        decision   +    +    +    −    +    +    +    −
1   1.4  CON-POJ  PA    score      3    4    .    2    4    .    3    4
                        average    4.3  3.7  3.0  2.5  4.0  4.0  3.7  4.0        3.71        3.7    −
                        decision   −    +    +    −    +    +    −    −
1   1.5  COM-WST  A_HK  score      .    5    4    5    6    5    6    5
                        average    4.3  4.0  3.3  3.3  4.4  4.3  4.3  4.3        3.71        4.0    +
                        decision   −    +    +    −    +    +    +    −
1   1.6  CON-POJ  A_AH  score      5    .    6    6    5    .    6    6
                        average    4.4  4.0  3.8  4.0  4.5  4.3  4.6  4.6        3.71        4.3    +
                        decision   −    +    +    +    +    +    +    +
2   2.1  WOP-MCT  PA    score      2    5    .    3    3    5    .    4
                        average    2.0  5.0  .    3.0  3.0  5.0  .    4.0        3.9         3.7    −
                        decision   −    +    .    −    −    +    .    −
Etcetera

Notes:
A dot (.) indicates that the aspect was not assessed on that task; its average then
carries over from previous tasks. Moving averages restart with each new task class.
Format: WOE-MCT = worked-out example with multiple-choice test; COM-SJT = completion
assignment with situational judgment test; WOE-WST = worked-out example with work-sample
test; CON-POJ = conventional task with performance on-the-job assessment; COM-WST =
completion assignment with work-sample test; WOP-MCT = worked-out example plus process
support with multiple-choice test.
Assessor: SA = self-assessment; PA = peer assessment; A = assessment by others (the
subscript, written here after an underscore, gives the initials of the other assessor).

The final three columns in Table 6.6 concern the task-centered assessments.
Here, we read the table horizontally. The task-centered minimum score is the
average of the individual minimum scores for all standards relevant to that
learning task. The mean score is the average of all measured assessment scores
on that particular learning task. Continuing with the previous example, the
mean assessment score of assessor HK for the first learning task is 3.0,
calculated by summing up the individual scores (15.0) and dividing by the
number of scores (5.0). This score falls below the minimum score of
3.74 for that learning task (also calculated over the five measured aspects).
This results in a negative decision, meaning the next task should again include
learner support. The learner self-assesses the second learning task with an
average score of 2.8, which remains below the task-centered minimum score
of 3.71 (the average of the minimum scores for all eight standards). This
suggests a decline in performance, and as a result, the next task will provide
additional support: It will be a worked-out example instead of a completion
assignment. Assessor GS gives an average assessment score of 3.8 for the
third learning task, above the task-centered minimum score. Therefore, the
next learning task will be a conventional task without support or guidance.
A peer gives an average assessment score of 3.7 for the fourth, unsupported
task, which is still a little below the task-centered minimum score. Therefore,
the next task will again provide support and guidance. Assessor HK gives an
average assessment score of 4.0 for the fifth task, above the task-centered
minimum score. Consequently, the next task is a conventional task without
support. Assessor AH gives an average assessment of 4.3 for the sixth, unsupported
task, well above the task-centered minimum score. Consequently, the
next learning task will be more complex and part of a second task class.
This example shows that task-centered assessments are critical for selecting
learning tasks with the right amount of support/guidance and at the
right level of complexity. Support decreases when task-centered assessment
results improve, support increases when task-centered assessment results
decline, and support is stopped when task-centered assessment results
exceed the standard (cf. Figure 2.3 in Chapter 2). Task-centered assessment
of unsupported tasks is critical for determining the desired level of
complexity. The learner progresses to the next task class or complexity level
only when assessment results for unsupported tasks are above the task-centered
minimum score. This process repeats itself until the learner successfully
performs the conventional, unsupported tasks in the most complex
task class. Thus, it yields a unique learning trajectory optimized for an individual
learner.
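
The decision rules just described reduce to a few lines of arithmetic. The
sketch below is our illustration of both decision types (the data layout is an
assumption):

def standard_decisions(history, minimums):
    # Standard-centered: compare the moving average per standard with the
    # standard-centered minimum score; False marks a point of improvement.
    return {s: sum(scores) / len(scores) >= minimums[s]
            for s, scores in history.items() if scores}

def task_decision(task_scores, minimums):
    # Task-centered: compare the mean score on this task with the average of
    # the minimum scores of the standards assessed on this task.
    assessed = list(task_scores)
    mean_score = sum(task_scores.values()) / len(assessed)
    task_minimum = sum(minimums[s] for s in assessed) / len(assessed)
    return mean_score >= task_minimum

minimums = {1: 4.7, 2: 3.5, 3: 2.5, 4: 3.5, 5: 3.5, 6: 3.5, 7: 4.0, 8: 4.5}
task_1_1 = {1: 3, 3: 3, 5: 4, 6: 2, 8: 3}    # assessor HK's scores on task 1.1
print(task_decision(task_1_1, minimums))      # False: mean 3.0 < minimum 3.74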

Second-Order Scaffolding for Teaching Task-Selection Skills

An intelligent agent can select learning tasks in an adaptive learning system,
for example, by using protocol portfolio scoring, or the self-directed
learner can themself select learning tasks in a system of on-demand education.
Yet, on-demand education will only be effective if learners already
possess the self-directed learning skills to select appropriate learning tasks.
The problem here is that this will often not be the case. Then, task-selection
skills can be taught in a process of second-order scaffolding (for an example,
see Kostons et al., 2012). This requires shared control over task selection,
where the learner and teacher/system work together to plan an optimal
individualized learning trajectory (Corbalan et al., 2009b). The responsibility
for selecting learning tasks gradually shifts from the teacher/system to
the learner (Kicken et al., 2008).
For first-order scaffolding, we distinguished built-in task support and
problem-solving guidance (see Step 1 in Chapter 4). The same distinction
exists for second-order scaffolding. For example, built-in task support can
involve limiting the number of learning tasks the learner can choose from.
The idea is that the number of tasks the learner may choose from should
be neither too low nor too high. Furthermore, this optimal range should
evolve as the learner's self-directed learning skills develop. In other words,
the number of tasks to choose from and decisions to make should always
be in the learner's zone of proximal development. This scenario can benefit

Table 6.7 Second-order scaffolding of built-in task support—gradually increasing
the number of learning tasks that learners may choose from to
facilitate their development of task-selection skills.

Phase 1
Pre-selection of tasks by teacher or system (system control): Pre-select tasks
with a suitable level of complexity, a suitable level of support/guidance, and
the associated standards that make it possible to work on identified points of
improvement.
Final selection of tasks by learner (learner control): The learner bases the
final selection on context features of the tasks and should ensure variability
over learning tasks (e.g., select based on cover stories that vary on dimensions
that also vary in the real world).

Phase 2
Pre-selection (system control): Pre-select tasks with a suitable level of
complexity and a suitable level of support/guidance.
Final selection (learner control): The learner bases the final selection on
context features and the presence of standards that make it possible to work
on identified points of improvement (indicated by standard-centered
assessments).

Phase 3
Pre-selection (system control): Pre-select tasks of a suitable level of
complexity.
Final selection (learner control): The learner bases the final selection on
context features, the presence of standards that need improvement, and the
offered support and guidance (e.g., study a worked-out example, finish a
completion assignment, work on a conventional problem, request tutor
presence, use a process worksheet, etc.).

Phase 4
Pre-selection (system control): No pre-selection by teacher or system.
Final selection (learner control): The self-directed learner now bases task
selection on context features, the presence of standards that need improvement
(indicated by standard-centered assessments), the available support/guidance
(indicated by task-centered assessments of supported tasks), and the level of
complexity (indicated by task-centered assessments of unsupported tasks).

from a shared-control model, where an intelligent agent preselects suitable
learning tasks from the pool of available tasks (system control), and the
learner then makes a final selection from this preselected subset (learner
control; cf. Table 2.1). This allows for a gradual transfer of responsibility for
task selection from the system to the learner. It helps the learner develop the
self-directed learning skills necessary for task selection. Table 6.7 describes
four phases in an educational program that offer learners increasing control
over task selection, analogous to the scaffolding of built-in task support.
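
In code, shared control can be as simple as a pre-selection step followed by
a learner choice, with the candidate set widening phase by phase (an assumed
interface, reusing the LearningTask sketch shown earlier):

def preselect(database, level, support, n_choices):
    # System control: narrow the database to at most n_choices suitable tasks;
    # n_choices grows as the learner's task-selection skills develop.
    suitable = [t for t in database
                if t.task_class == level and t.support == support]
    return suitable[:n_choices]

def shared_selection(database, choose, level, support, n_choices):
    candidates = preselect(database, level, support, n_choices)
    return choose(candidates)   # learner control: the final pick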
In typical scenarios, built-in task support should be complemented by
providing problem-solving guidance for task selection. A teacher or system
then prompts the learner to reflect on their choices and outcomes, particularly
concerning various factors, such as context features, standards that
need improvement, the amount of support/guidance given, and the level
of complexity of the tasks (Raaijmakers et al., 2017). Offering this guidance
requires knowledge of a learner's performance, progress, and points of
improvement. This is precisely the information that is available in a development
portfolio. To provide this guidance, it is helpful to organize regular
coaching meetings in which the development portfolio can be discussed
with the learner. These coaching meetings serve a dual purpose. First, they
serve as a retrospective examination of previous learning tasks, identifying
specific points for improvement and assessing the overall level of the learner's
performance (Van den Boom et al., 2007; Van der Klink et al., 2001).
Equally important, the second function is to look ahead and strategically
select future tasks that most effectively address these points of improvement
and raise overall performance.
Electronic development portfolios, as described in Step 2 in the previous
chapter, may free both the coach and the learner from many administrative
and arithmetic duties. They streamline the retrospective evaluation
by providing a systematic overview of performed tasks and their standard-centered
and task-centered assessment results, often using protocol
portfolio scoring. Moreover, the portfolio can automatically detect possible
conflicts between assessments made by different assessment methods or
different assessors (including self- and peer assessments) and be used as input
for discussion in the coaching sessions. For example, confronting learners
with conflicts between self-assessments and assessments by others helps
them reflect on and further develop their self-assessment skills if there is
enough feedback of the proper type (see Chapter 10). Consequently, electronic
development portfolios may support a process of reflection (Beckers
et al., 2016). They also streamline the prospective aspect because they
provide a systematic overview of the characteristics of potential future learning
tasks, including information related to relevant standards, context features,
support/guidance, and complexity. Thus, they may also support a process
of planning or 'preflection' (Lehmann et al., 2014). As a result, coaches
and learners can use the electronic development portfolio to select future
learning tasks that are most beneficial for improving specific aspects of the
learner's performance and, ultimately, overall performance.
In the coaching meetings, it is crucial to emphasize how to systematically
approach the task-selection process. The model presented in Figure 2.3
or the protocol portfolio scoring can then serve as a systematic
approach to problem solving (SAP) upon which a 'second-order' process
worksheet can be built. This worksheet, or the coach's explanation of it,
guides the learners in identifying the conditions that should help them
select simpler, equally complex, or more complex tasks; select tasks with
less, equal, or more support; and select tasks enabling them to focus on
specific points of improvement. In principle, the guidance could take the
form of procedural advice involving the application of the same algorithmic
rules applied to system control over task selection (note that 'task selection'
is then actually classified as a recurrent skill). However, to enable learners
to use their acquired task-selection skills flexibly, it is better to provide
strategic advice by giving them a SAP and general rules-of-thumb for task
selection (Taminiau et al., 2013, 2015). Finally, it is important to decrease
the guidance as learners' self-directed learning skills develop; thus, the frequency
of coaching meetings and the level of detail of the given advice should
gradually diminish.

6.6 Summary of Guidelines


• If you sequence classes of learning tasks, then you need to start with
'whole' tasks that represent the simplest tasks a professional might
encounter in the real world.
• If you sequence task classes for tasks that occur in simple and more
complex versions, then you need to identify all conditions that simplify
task performance and specify the task classes using those simplifying
conditions.
• If you sequence task classes for tasks that do not occur in simple and
more complex versions, then you need to consider the use of emphasis
manipulation, where the first task class emphasizes relatively simple
aspects and later task classes emphasize increasingly more complex
aspects of the task.
• If you want to refine an existing sequence of task classes, then you need
to try to identify a progression of cognitive strategies (Step 5, Chapter 8)
or a progression of mental models (Step 6, Chapter 9), enabling
the learner to perform the tasks within increasingly more complex task
classes.
• If you fill a task class with learning tasks, then you need to apply variability
of practice and scaffolding, which results in a saw-tooth pattern of
support/guidance throughout the training program.
• If you are sequencing task classes and it proves to be impossible to find a
class of whole tasks that is simple enough to start the training with, then
you need to identify a small number of skill clusters or meaningfully
interrelated sets of constituent skills (i.e., parts) and sequence these
clusters using the backward-chaining-with-snowballing approach.
• If you combine whole-task sequencing with part-task sequencing for
learning tasks requiring much coordination, then you must apply whole-part
sequencing rather than part-whole sequencing.
• If you want to design individualized learning trajectories, then you need
to apply a cyclical process in which a learner performs a task, performance
is assessed, and the assessment is stored in a development portfolio that
guides the selection of the next task.
• If you want to assess learner performance to update a development portfolio
and select new tasks, then apply multiple assessment instruments and
include multiple assessors (e.g., teachers, clients, peers, self-assessments).
• If you want to gather and store standard-centered and task-centered
assessment results as a basis for task selection, then protocol portfolio
scoring can be used.
• If you want the learner to develop self-directed learning skills, then the
responsibility for task selection must be gradually transferred from the
teacher or another intelligent agent to the learner (i.e., second-order
scaffolding, which requires shared control between the system and learner).
• If you apply built-in task support for second-order scaffolding, then you
can limit the number of learning tasks from which the learner may choose
and gradually increase this number as the learner's self-directed learning
skills develop.
• If you apply problem-solving guidance for second-order scaffolding, then
you can use portfolio-based coaching sessions to give learners advice on
how to assess their performance and how to plan opportunities for future
learning, and gradually decrease the frequency and level of detail of the
advice as the learner's self-directed learning skills develop.

Glossary Terms

Backward chaining; Emphasis manipulation; Forward chaining; Judgement of
Learning; Knowledge progression; Learner control; Metadata; Part-task
sequencing; Part-whole sequencing; Peer assessment; Protocol portfolio
scoring; Second-order scaffolding; Self-assessment; Self-directed learning;
Sequencing; Shared control; Simplifying conditions; Skill cluster;
Snowballing; System control; Task selection; Whole-part sequencing;
Whole-task sequencing
Chapter 7

Step 4
Design Supportive Information

7.1 Necessity
Supportive information is one of the four principal design components,
helping learners perform the nonrecurrent aspects of the learning tasks. We
strongly recommend performing this step.

After designing learning tasks, the next step is designing supportive infor-
mation for carrying out those tasks. This chapter presents guidelines for
this. It concerns the second design component and bridges the gap between


what learners already know and what they should know to fruitfully work
on the nonrecurrent aspects of learning tasks within a particular task class.
Supportive information refers to (a) information on the organization of the
task domain and how to solve problems within that domain, (b) examples
illustrating this domain-specifc information, and (c) cognitive feedback on
the quality of the task performance. All instructional methods for present-
ing supportive information promote schema construction through elabora-
tion. In other words, they help learners establish meaningful relationships
between newly presented information elements and connect this to what
they already know (i.e., their prior knowledge; Wetzels et al., 2011). This
elaboration process yields rich cognitive schemata that relate many elements
to many others. Such schemata allow for deep understanding and increase
the availability and accessibility of task-related knowledge in long-term
memory.
The structure of this chapter is as follows. Section 2 discusses the nature
of general information about how to solve problems in the task domain
(i.e., Systematic Approaches to Problem solving, or SAPs) and the organization
of this domain (i.e., domain models). Section 3 describes how to illustrate
SAPs with modeling examples and illustrate domain models with case stud-
ies. Section 4 discusses deductive and inductive presentation strategies for
combining general SAPs and domain models with specifc modeling exam-
ples and case studies. It also describes inquisitory methods such as guided
discovery learning and resource-based learning. Section 5 presents guide-
lines for providing cognitive feedback on the quality of nonrecurrent aspects
of task performance. Section 6 discusses suitable media for presenting sup-
portive information, including multimedia, hypermedia, microworlds,
epistemic games, and social media. Section 7 explains how to position sup-
portive information in the training blueprint. The chapter concludes with a
summary.

7.2 Providing SAPs and Domain Models


Learners need information to successfully work on (nonrecurrent aspects
of) learning tasks and learn from those tasks. This supportive information
bridges what learners already know and what they need to know to work
on the learning tasks. Teachers typically call this information ‘the theory,’
and it is often presented in study books and lectures. Because the same
body of knowledge underlies the ability to perform all learning tasks in the
same task class, supportive information is not coupled to individual learn-
ing tasks but to whole task classes. The supportive information for each
subsequent task class is an addition to or an embellishment of the previous
supportive information, allowing learners to do new things they could not
do before.

Supportive information encompasses two types of knowledge. First, it
includes the cognitive strategies that allow one to carry out tasks and
systematically solve problems. A cognitive strategy can be described as a
Systematic Approach to Problem solving (abbreviated as SAP; see Step 5 in
the next chapter), which outlines the phases that an expert typically goes
through while carrying out the task, along with the rules-of-thumb that may
be helpful to complete each phase successfully. This specification of a SAP
can be presented in two ways. It can be directly presented as supportive
information, allowing learners to study it as helpful information for
performing the learning tasks within one particular task class. Alternatively,
it can be transformed into a process worksheet that 'guides' the learner
through performing a particular task (for an example of this kind of
problem-solving support, refer back to Section 4.7).
In addition, supportive information also includes the mental models that
allow reasoning within the task domain. Mental models may be described
as three different kinds of domain models (see Step 6 in Chapter 9):
conceptual models specifying what the different things in a domain are,
structural models describing how they are organized, and causal models
describing how they work. Mental models of the organization of a task
domain are only helpful for solving problems if learners also apply useful
cognitive strategies. Likewise, cognitive strategies are only helpful if learners
possess good mental models of the domain. There is, thus, a reciprocal
relationship between cognitive strategies and mental models: One is of little
use without the other. The next subsection discusses the presentation of SAPs
and domain models.

Presenting Systematic Approaches to Problem Solving (SAPs)

SAPs tell learners how to best solve problems in a particular task domain. To
this end, they provide an overview of the phases and, where necessary, the
subphases needed to reach particular goals and subgoals. They depict the
temporal order of these phases and subphases and indicate how particular
phases and subphases may depend on the outcomes of prior phases. In addi-
tion, provided rules-of-thumb or heuristics help the learner reach the goals
for each phase or subphase.
Figure 7.1 gives an example of a SAP for a video content producer. This
SAP for ‘developing a story’ comes as a fowchart (also called a SAP chart)
in which particular phases depend on the success or failure of one or more
preceding phases. According to this SAP, ‘creating a storyboard’ happens
only if needed (e.g., for communicating complex narratives with many visual
elements and transitions to crew or clients); otherwise, the video-content
producer only ‘refnes the script.’ The SAP on the right side of Figure 7.1
is a further specifcation of the frst phase of the SAP on the left (‘defne the
160 Step 4: Design Supportive Information

purpose of the video’) and depicts two subphases. Usually, you will present
global SAPs for early task classes and increasingly more detailed SAPs, with
more specific subphases, for later task classes.

Figure 7.1 SAP for video content production focusing on 'developing a story.'
It describes phases in problem solving (see left side) as well as subphases
and rules-of-thumb that may help to complete each phase or subphase (see
right side).

The SAP on the right side of Figure 7.1 also provides some helpful
rules-of-thumb for identifying the client’s objectives (i.e., goal of Phase 1)
and the key message (i.e., goal of Phase 2). It is best to provide a prescrip-
tive formulation of the rules-of-thumb such as ‘to reach . . . , you should
try to do . . .’—and to discuss why to use these rules, when to use them,
and how to use them. Furthermore, it may be helpful to give SAPs a name
(Systematic Search, Split Half Method, Methodical Medical Acting, etc.)
to easily refer to them in later instructional materials and discussions with
learners.
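
Although a SAP chart is drawn as a flowchart, its content is structured data:
named phases with goals, optional subphases, and rules-of-thumb. The sketch
below shows how the chart of Figure 7.1 might be captured in Python; the
concrete wording of the goals and rules is abridged and partly assumed:

sap_develop_story = {
    "name": "Developing a Story",
    "phases": [
        {"goal": "define the purpose of the video",
         "subphases": [
             {"goal": "identify the client's objectives",
              "rules_of_thumb": ["to surface the objectives, you should ask "
                                 "the client what the video must achieve"]},
             {"goal": "identify the key message",
              "rules_of_thumb": ["to find the key message, you should "
                                 "condense the objectives into one sentence"]}]},
        {"goal": "refine the script"},
        {"goal": "create a storyboard",
         "condition": "only if needed, e.g., for complex narratives"},
    ],
}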
Instructional methods for presenting SAPs must help learners establish,
in a process of elaboration (see Box 7.1), meaningful relationships between
newly presented information elements (e.g., phases, goals, rules) and mean-
ingful relationships between those new information elements and already
available prior knowledge. For the phases in a SAP, the chosen method or

Box 7.1 Elaboration and Supportive Information

Well-designed supportive information provides a bridge between what
learners already know and what might be helpful for them to know to
carry out, and learn to carry out, the learning tasks. Its presentation
should provoke elaboration of new information; that is, those cognitive
activities that integrate new information with cognitive schemata
already available in memory. Together with induction (see Box 4.1
in Chapter 4), elaboration is a major learning process responsible for
constructing cognitive schemata.

Meaningful Learning
The best way to increase learners' memory for new information is
to have them elaborate on the instructional material. This involves
having them enrich or embellish the new information with what
they already know. When learners elaborate, they first search their
memory for general cognitive schemata that may provide a cognitive
structure for understanding the information in general terms
and for concrete memories that may provide a useful analogy (“Oh,
I came across something like this before”). These schemata connect
to the new information. Elements from the retrieved schemata
that are not part of the new information are now linked to it. It is a
form of meaningful learning because the learners consciously establish
connections between the new material and one or more existing
schemata in their memory. Thus, learners use what they already
know about a topic to help them structure and understand the new
information.

Structural Understanding
The main result of elaboration is a cognitive schema that enriches the
new information, with many interconnections within that schema and
extending from it to other schemata. This network of connections will
facilitate retrieving and using the schema because it offers multiple
retrieval routes to access specific information. In short, elaboration
results in a rich knowledge base that provides a structural understanding
of the subject matter. The knowledge base is well suited for manipulation
by controlled processes. In other words, learners may employ it to guide
problem-solving behavior, reason about the domain, and make decisions.

Elaboration Strategies
Like induction, elaboration is a strategic and controlled cognitive
process requiring conscious processing from the learners. It can be
learned and includes subprocesses such as exploring how new information
relates to things learned in other contexts, explaining how
new information fits in with things learned before ('self-explanation'),
or asking how to apply the information in other contexts. Collaboration
between learners and group discussion might stimulate
elaboration. In a collaborative setting, learners often must articulate
or clarify their ideas to the other group member(s), helping them
deepen their understanding of the domain. Group discussions may
also benefit the activation of relevant prior knowledge and so facilitate
elaboration.

Tacit Knowledge
Cognitive schemata resulting from elaboration (or induction) can
guide problem-solving behavior, making informed decisions, and reasoning
about a domain. However, with consistent and repeated practice,
cognitive rules may develop. These rules eventually produce the
effect of using the cognitive schema directly, without referring to this
schema anymore (the formation of such cognitive rules is discussed in
Box 10.1). For many schemata, this transition to cognitive rules will
never occur. For instance, a troubleshooter might construct a schema
for dealing with a system malfunction based on one or two experiences.
When faced with a new problem situation, they might use this
(concrete) schema to reason about the system (“Oh yes, I encountered
something like this seven or eight years ago”). This schema will never
become automated because it is not used frequently enough. However,
people may also construct schemata that are repeatedly applied
afterward. This can lead to tacit knowledge (literally: silent or not
spoken; also called implicit knowledge, 'tricks-of-the-trade,' or
'Fingerspitzengefühl'), which is characterized by its difficulty to
articulate and its heuristic nature; you 'feel' that something is the
way it is. The explanation for this phenomenon is that people initially
construct advanced schemata through elaboration or induction and
subsequently form cognitive rules based on direct experience. Afterwards,
the schemata are no longer used as such and quickly become difficult to
articulate. The cognitive rules directly drive performance but are not
open to conscious inspection.

Further Reading
Kalyuga, S. (2009). Knowledge elaboration: A cognitive load perspec-
tive. Learning and Instruction, 19, 402–410.
https://doi.org/10.1016/j.learninstruc.2009.02.003
Van Boxtel, C., van der Linden, J., & Kanselaar, G. (2000). Collabo-
rative learning tasks and the elaboration of conceptual knowledge.
Learning and Instruction, 10, 311–330.
https://doi.org/10.1016/S0959-4752(00)00002-5

methods should stress the temporal organization of the goals and subgoals
the task performer must reach. For instance, when learners study a SAP,
the instruction should explain why particular phases need to be performed
before other phases (e.g., “an aqueous solution must be heated to a certain
temperature before the reagent can be added because the chemical reaction
that you want to occur is temperature dependent”) or indicate the effects
and problems of rearranging phases (e.g., “if you first add the reagent to
the aqueous solution and then heat it, it will cause the reagent to bind to
a specific molecule and lose its function”). When presenting rules-of-thumb,
instructional methods should stress the change relationship between an
'effect'—the goal that must be reached—and its 'cause'—what must be done
to reach this goal. For instance, the instruction may explain how particular
rules-of-thumb bring about particular desired states of affairs (e.g., “if you
add a certain reagent to an aqueous solution, then the calcium will
precipitate out of the solution”) or predict the effects of the use—or lack
of use—of particular rules-of-thumb (e.g., “if you don't heat the solution,
then the reagent will not function, because the working of the reagent is
temperature dependent”).

Presenting Domain Models

Domain models indicate how things are organized in a specifc world (i.e.,
the relevant domain), specifying the relevant elements in a domain as well as
the relationships between those elements. We can distinguish three types of
models: conceptual, structural, and causal.
Conceptual models are the most common type of model encountered.
They have concepts as their elements, allowing for classifying and describing
objects, events, and activities. Conceptual models help learners answer the
question: What is this? For instance, knowledge about several types of medi-
cines, treatments, side efects, and contraindications, along with how these
difer from each other, helps a doctor determine the possibilities and risks asso-
ciated with diferent courses of action. Knowledge of diferent types of story
arcs helps a video-content producer organize content and engage the viewer
when writing a script. Analogous to instructional methods for presenting
SAPs, methods for presenting domain models should help learners establish
meaningful relationships between newly presented elements. Table 7.1 sum-
marizes some popular methods for establishing such relationships.

Table 7.1 Eight popular instructional methods stressing meaningful relationships in presenting supportive information.

Conceptual models consist of concepts and their relationships; they help the learner answer the question “What is this?”
1. Analyze a particular idea into smaller ideas (highlights a subordinate kind-of or part-of relation)
2. Describe a particular idea in terms of its main features or characteristics (highlights a subordinate kind-of or part-of relation)
3. Present a more general idea or organizing framework for a set of similar ideas (highlights a superordinate kind-of or part-of relation)
4. Compare and contrast a set of similar ideas (highlights a coordinate kind-of or part-of relation)

Structural models consist of plans, scripts, and templates; they help the learner answer the question “How is this organized?”
5. Explain the relative location of elements in time or space (highlights a location relation)
6. Re-arrange elements and predict effects (highlights a location relation)

Causal models consist of principles and theories; they help the learner answer the question “How does this work?”
7. Predict future states (highlights a cause-effect or natural process relation)
8. Explain a particular state of affairs (highlights a cause-effect or natural process relation)

Note: The different types of relationships are further explained in Chapters 8 and 9.

Important methods for presenting conceptual models include (see Methods 1–4 in Table 7.1):

• Analyze a particular idea into smaller ideas. When discussing a conceptual model of electric circuits, you can distinguish typical kinds of circuits such as parallel or series (kind-of relation because parallel and series are kinds of circuits) and/or state typical components of an electric circuit such as a switch, resistor, or energy source (part-of relation).
• Describe a particular idea in its main features or characteristics. When presenting a conceptual model of a human-computer interface, you can give a definition (i.e., a list of features) of virtual reality helmets and data gloves (using kind-of relations because helmets and gloves are particular kinds of interfaces) and/or give a definition of the concepts dialog box and selection menu (using part-of relations because dialog boxes and selection menus are often part of an interface).
• Present a more general idea or organizing framework for a set of similar ideas. If a conceptual model of process control, for example, is to be presented, you can state what all controllers (e.g., temperature controllers, flow controllers, level controllers) have in common. This is often more inclusive, more general, and more abstract than each of the specific elements. If such an organizing framework is presented beforehand, it is called an advance organizer (Ausubel, 1960).
• Compare and contrast a set of similar ideas. When discussing a conceptual model of iterative computer code, you can compare and contrast the working of different kinds of looping constructs (e.g., WHILE-loops, REPEAT-UNTIL-loops, FOR-loops, etc.), as the sketch below illustrates.
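To make this compare-and-contrast method concrete, here is a minimal sketch of the kind of side-by-side comparison such a lesson might present (our own illustration, not taken from the book; Python has no built-in REPEAT-UNTIL, so a while-True loop with a break stands in for it):

```python
# Three looping constructs computing the same sum; contrasting them
# highlights their coordinate relations: where the condition is tested
# and whether the number of passes is known in advance.

total_for = 0
for i in range(1, 6):        # FOR: count known in advance
    total_for += i

total_while, i = 0, 1
while i <= 5:                # WHILE: condition tested before each pass
    total_while += i
    i += 1

total_repeat, i = 0, 1
while True:                  # REPEAT-UNTIL (emulated): condition tested
    total_repeat += i        # after the body, so it runs at least once
    i += 1
    if i > 5:
        break

assert total_for == total_while == total_repeat == 15
```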

Structural models describe how objects, events, or activities for reaching particular goals or effects are interconnected in time or space. Structural models are composed of plans as their fundamental elements and help learners answer the question: How is this organized? Plans that indicate how activities or events are linked in time are also called scripts (Custers, 2015). Such structural models help learners understand and predict events (What happens when?). For example, in medicine, illness scripts allow doctors to recognize symptoms as belonging to a particular disease, predict how the disease will develop, and identify a treatment plan that fits the diagnosis.
Plans that indicate how objects are connected in terms of their spatial relationships are also called templates and help the learner to understand and design artifacts (How is this built?). For example, in software engineering, knowledge about stereotyped patterns of programming code or programming templates and how these patterns fit together helps computer programmers understand and write the code for computer programs. We can use the same methods used to stress relationships in conceptual models for structural models. Additional methods for presenting relationships in structural models include (see Methods 5 and 6 in Table 7.1):

• Explain the relative location of elements in time or space. When presenting a structural model of scientific articles, you would want to explain how the main parts of an article (i.e., title, abstract, introduction, method, results, conclusions, and discussion) and subparts (i.e., as part of the method section: participants, materials, procedure, etc.) are related to each other so that the article will reach its main goals (i.e., visibility, comprehensibility, replicability).
• Rearrange elements and predict effects. When presenting a structural model of computer programs, you could indicate the effects of rearranging particular pieces of code on the behavior and the output of programs, as in the sketch below.

Causal models focus on how objects, events, or activities affect each other and help learners interpret processes, give explanations, and make predictions. Such models help learners answer the question: How does this work? A principle is the simplest causal model that relates an action or event to an effect. A principle allows learners to draw implications by predicting a certain phenomenon that is the effect of a particular change (e.g., if A, then B) or to make inferences by explaining a phenomenon as the effect of a particular change (e.g., B has not happened, so A was probably not the case). Principles may refer to very general change relationships, in which case they often take the form of laws (e.g., the law of supply and demand, law of conservation of energy) or to highly specific relationships in one particular technical system (e.g., opening valve C leads to an increase of steam supply to component X).
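These two uses of a principle can be made concrete in a few lines of code. The sketch below is our own minimal illustration of the valve example (the function names and return values are hypothetical): the first function uses the principle forward to predict the effect; the second reasons backward from the absence of the effect to the probable absence of the cause.

```python
# Principle: "opening valve C (A) increases steam supply to component X (B)".

def predict_effect(valve_c_opened: bool) -> bool:
    """Forward use: given A, predict B (if A, then B)."""
    return valve_c_opened

def infer_cause(steam_supply_increased: bool) -> str:
    """Backward use: reason from B back to A."""
    if not steam_supply_increased:
        # not B, so probably not A
        return "valve C was probably not opened"
    return "valve C may have been opened (B alone is not conclusive)"
```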
Causal models that explain natural phenomena through an interrelated set of principles are called theories; causal models that explain the working of engineered systems are called functional models. For instance, knowledge about how components of a chemical plant function and how each component affects all other components helps process operators with their troubleshooting tasks. For the presentation of causal models, additional methods to stress relationships are (see Methods 7 and 8 in Table 7.1):

• Predict future states. When discussing a meteorological model, you may give weather forecasts in different situations.
• Explain a particular state of affairs. When discussing a theory of why particular objects corrode, you could state the factors causing corrosion for one metal object, such as iron, and not causing corrosion for another metal object, such as stainless steel.

Note that the expository methods in Table 7.1 do not provide any ‘practice’
for the learners because they do not explicitly stimulate learners to process
the new information actively. The methods are typically used in expository
texts and traditional one-way lectures. An enormous amount of educational
literature on writing instructional texts and preparing instructional presen-
tations discusses many more explanatory methods than the ones presented
in Table 7.1 (e.g., Hartley, 1994). We want to emphasize that, while the
Ten Steps recognizes the importance of knowledge acquisition strategies
such as desirable difficulties (Bjork, 1994) and generative learning strategies
(Fiorella & Mayer, 2016; Wittrock, 1989), it does not prioritize them over
whole-task practice and inductive learning. The Ten Steps is written from
the perspective that education traditionally focused too much on knowledge
acquisition instead of developing complex skills to carry out tasks. While
books are filled with methods for stimulating memorization and comprehension, the Ten Steps does not consider these methods sufficient for
developing complex skills. Rather than starting the design by describing the
supportive information and selecting the instructional methods to acquire
the necessary knowledge, the Ten Steps considers these methods supportive
to developing complex skills, considering them only after designing learning
tasks. Therefore, we do not extensively discuss methods such as retrieval or
spaced practice here. However, to accommodate interested readers, we pre-
sent Table 7.2, which describes eight generative learning activities presented
by Logan Fiorella and Richard Mayer (2016) that may foster elaboration by
requiring learners to actively transform—cognitively and also, sometimes,
physically—new information into something else; for example, to express
the meaning of the newly presented information ‘in their own words’ or
transform verbal information (e.g., a text or a lecture) into a visual represen-
tation (e.g., a concept map or a picture).

Table 7.2 Eight generative learning strategies by Fiorella and Mayer (2016).

Learning by . . . (generative activity): You . . . (action)
• summarizing: make a written or oral summary of the content in your own words
• mapping: make a spatial representation with core concepts (mind map/concept map)
• drawing: make a drawing of the core concepts/ideas
• imagining: create a mental image of the core concepts
• self-testing: make a written or oral quiz
• self-explaining: explain—either written or oral—the content to yourself
• teaching: teach someone else about the topic
• enacting: make task-relevant movements (e.g., gesticulating/manipulating objects)

7.3 Illustrating SAPs and Domain Models


An especially important relationship is the experiential relationship, which
relates the general, abstract information discussed in the previous section
(i.e., specifications of SAPs and domain models) to concrete, familiar examples that illustrate this information. According to the Ten Steps, you should never present SAPs or domain models without illustrating them with relevant examples. We recommend this approach because cognitive strategies and mental models may contain both general, abstract knowledge, represented by SAPs and domain models, and memories of concrete cases exemplifying this knowledge.
knowledge. In real-life tasks, people draw upon their general knowledge of
how to approach problems in that task domain, how the domain is organ-
ized, and their specific memories of concrete cases related to similar tasks
they previously carried out. In case-based reasoning, the memories serve
as an analogy for solving the problem. In inductive learning (refer back to
Box 4.1 in Chapter 4 for a short description of this basic learning process),
they may serve to refine the general knowledge.
In instructional materials, modeling examples and case studies (i.e.,
worked-out examples with equivocal solutions) are the external counter-
parts of internal memories, providing a bridge between the supportive
information and the learning tasks. When providing supportive informa-
tion, modeling examples illustrate SAPs, and case studies illustrate domain
models. At the same time, these same two approaches may be seen as learn-
ing tasks with maximum task support (see Figure 7.2). They are important
for learners at all levels of expertise, ranging from beginners to true experts.
For instance, it is known that Tiger Woods, when he was the best golfer in
the world, still extensively studied videotapes of himself and his opponents
(even though he had probably played against them many times) to refine cognitive strategies on how to approach problems and situations during the match and meticulously studied the layout of golf courses around the world to refine his mental models, allowing him to figure out how he could best play them (even though he had probably played these courses numerous times). In other words, even extremely expert task-performers fine-tune their cognitive strategies and mental models by extensively studying concrete examples.

Figure 7.2 Modeling examples and case studies as a bridge between learning
tasks and supportive information.

Modeling Examples

Chapter 4 discussed modeling examples and case studies as part of designing learning tasks with maximum task support. Modeling examples that illustrate SAPs may show a professional performing a nontrivial task and simultaneously explaining why they make particular decisions and take particular
actions (e.g., by thinking aloud and/or showing the expert’s eye move-
ments). Modeling examples bring to light the hidden mental processes that
a professional employs to solve a problem. These examples make explicit
how thinking processes are consciously controlled to attain meaningful
goals. Learners can see how professionals reason through difficult, problematic situations to overcome impasses, rather than simply observing a smooth
progression toward a correct solution, which is hardly ever the case. For
example, in a training program aimed at ‘producing video content,’ learners
may study how an experienced video content producer deals with inter-
viewees—including situations where things go wrong—making visible and
explicit what the producer was thinking and why they made certain choices
or changes during the interview so that they can develop more effective
interview strategies. Similarly, clinical psychology students might observe
videotaped therapeutic sessions between therapists and their clients to study
and learn how experienced therapists use SAPs to guide their conversations
with clients. This helps learners, for example, distinguish the phases in a bad
news conversation (i.e., tell the client the bad news, deal with the client’s
emotions, search for solutions) and use a wide range of rules-of-thumb to
cope with their clients’ emotional states (Holsbrink-Engels, 1997).

Case Studies
Different kinds of case studies can be used, depending on the domain model they exemplify. Case studies that illustrate conceptual models will typically describe a concrete object, event, or activity exemplifying the model. Students learning to produce video content may study a variety of example videos to develop a sense of composition, different shot transitions, types
of background music, etc. Architecture students may study successful (or
particularly unsuccessful) building designs to develop mental models of con-
cepts such as sight lines, ventilation obstacles, environmental friendliness, etc.
Case studies that illustrate structural models may be artifacts or descrip-
tions of those artifacts designed to reach particular goals. Students learning
to produce video content may study a deconstructed camera to determine
how different parts, such as lenses and sensors, are organized and related.
A more elaborated model of a camera’s internal organization may help them
record better quality footage. Architecture students may visit office buildings to study how particular goals have or have not been met using certain—often, prefabricated—templates or elements in particular ways. An improved model of possible design and construction techniques may help them design
better buildings.
Case studies that illustrate causal models may be real-life processes or technical systems. Students learning to produce video content may study example videos with different exposures, shutter speeds, and apertures and how these settings affect the footage. A more detailed mental model of how these camera settings affect the recording may help them improve their content. Architecture students may study a detailed description of the events that led to a disaster or near disaster in an office building. A better mental model of possible fault trees (see Section 9.2) may help them identify weaknesses in building processes or even design safer buildings.

7.4 Strategies for Presenting Supportive Information


At this point, an important question concerns how specific modeling examples and case studies are best combined with general SAPs and domain models to achieve effective and efficient instruction. Two categories of presentation strategies can be distinguished; namely, deductive presentation strategies and inductive presentation strategies. We will first describe both types of strategies, followed by a discussion of how to select an optimal strategy. Finally, we discuss resource-based learning, where the designer does not plan the presentation of supportive information beforehand but leaves it up to the learners to search and find appropriate learning resources containing the necessary supportive information.

Deductive Presentation Strategies

Deductive reasoning is reasoning that moves from theory to observations or findings. A deductive presentation strategy, thus, works from the general, abstract information presented in SAPs and domain models toward concrete illustrations of this information. Typically, the first learning tasks are modeling examples or case studies to illustrate the earlier presented SAPs and domain models (see left upper box in Figure 7.3 later in this section). For instance, in a biology program, one may teach students to categorize animals as birds, reptiles, fish, amphibians, or mammals. The general information might then include a conceptual model of mammals, indicating that they have seven vertebrae in the neck, are warm-blooded, have body hair, and give birth to living young nourished by milk secreted by mammary glands. The first learning task may then be a case study illustrating this model and, for instance, asking the learners to study interesting mammals like giraffes, whales, tree shrews, and human beings.
In this strategy, the general information and the illustrations are directly presented by a teacher or in the instructional materials. This makes it a deductive-expository strategy because all information is ‘exposed’ to the learners. An alternative approach first presents the general information but then asks the learners to come up with examples illustrating this information. In this approach, the learners would receive the same conceptual model of mammals, but rather than being exposed to a case study illustrating this model, they must come up with examples of mammals themselves. They must also explain why these examples fit the presented conceptual model (see the right upper box in Figure 7.3). This deductive-inquisitory strategy helps learners activate their relevant prior knowledge and can promote the elaboration of new information and deeper processing (see also Table 7.2, self-explaining). However, it will also be more time consuming than a deductive-expository strategy because learners need time to think of and present their examples, which may involve making mistakes and corrections and discussing them with peers and their teacher.

Inductive Presentation Strategies

Inductive reasoning constructs generalizations based on several individual instances. An inductive presentation strategy takes a path from concrete illustrations or examples to general, abstract information (SAPs and domain models). Consequently, modeling examples or case studies are stepping stones for presenting the general information. In this strategy, case studies of mammals (giraffes, whales, tree shrews, human beings) will be given by the teacher or in the instructional materials before presenting the conceptual model (see left lower box of Figure 7.3). Compared to a deductive strategy, an inductive strategy with early use of concrete examples may work well for learners with little prior knowledge. For these learners, relating concrete examples to what they already know will typically be easier than connecting the conceptual model to what they already know, which will often be too abstract and unfamiliar. However, an inductive strategy is more time consuming than a deductive strategy. When you first present the general information, as in a deductive strategy, it can often be adequately illustrated with just one or two examples. In contrast, if you begin with examples, as in an inductive strategy, you typically need to provide more examples to highlight the commonalities central to the general information.
When the teacher or instructional materials directly present the exam-
ples and the general information in that order, this is called an inductive-
expository strategy because all information is exposed to the learners. The
alternative is an inductive-inquisitory strategy, where the examples are
presented to the learners, but they are then tasked with generating the
general information (e.g., conceptual models, structural models, causal
models) illustrated by these examples. In the situation sketched earlier,
the learners receive examples of mammals (giraffes, whales, tree shrews, human beings) and are then asked to come up with a definition and conceptual model of what mammals are (see right lower box in Figure 7.3).
This is a very difficult and time-consuming process for learners because
they do not know beforehand what they must discover. Novice learners
also often lack the knowledge needed to explore the deep-level features
of the examples and concentrate on surface-level features, which leads to
misunderstanding the conceptual, structural, or causal model (Kirschner
et al., 2009). An inductive-inquisitory strategy is, therefore, much more
demanding than a deductive-inquisitory strategy. In a deductive-inquis-
itory strategy, learners can generate concrete examples from (episodic)
memory that more or less fit the general information, but in an induc-
tive-inquisitory strategy, it is unlikely that learners already have useful
general information available in memory. Therefore, many authors argue
that pure inductive-inquisitory methods (also called discovery meth-
ods) are ineffective for learning and should be avoided (see Kirschner
et al., 2006).
Guided discovery methods provide a solution and use leading ques-
tions to guide learners in discovering the general information (Tawfik
et al., 2020). Useful leading questions that stimulate learners to acti-
vate their relevant prior knowledge may ask them to come up with
analogies and counter-examples or they may be constructed by add-
ing the prefix “ask the learners to . . .” to the methods discussed in
Section 7.2 (see Table 7.3 for examples of leading questions). Lead-
ing questions can be presented alongside case studies and modeling
examples, requiring learners to think critically about and thoughtfully
analyze the organization of the illustrated task domain and the dem-
onstrated problem-solving process. Scattered throughout the descrip-
tion, or possibly at the end, are questions requiring learners to examine
ideas, evidence and counter-evidence, and assumptions relevant to the
example. These questions help learners work from what they already
know to ‘self-explain’ the new information (Chiu & Chi, 2014; Renkl,
2002) and, especially, to process the relationships between pieces of
information illustrated in the examples to stretch their knowledge
toward a more general understanding. When using guided discovery
learning in a way that warrants learners’ articulation of the necessary
general information, it may build on what they already know. It may
offer good opportunities to help them further develop their elaboration
skills (McDaniel & Schlager, 1990). When used improperly or with-
out well-structured guidance, it can lead to excessive—extraneous—
cognitive load, ineffective cognitive strategies and mental models, and
poor or even no learning.

Table 7.3 Inquisitory methods that help learners activate their prior knowledge and establish meaningful relationships in presented supportive information.

Inquisitory method: ask the learner to . . . (with an example of a leading question)
1. Present a familiar analogy for a particular idea (“Can you think of something else that stores energy as warmth?”)
2. Present a counter-example for a particular idea (“Are there any fish that do not live in water?”)
3. Analyze a particular idea into smaller ideas (“Which elements may be part of an electric circuit?”)
4. Describe a particular idea in its main features (“What are the main characteristics of a mammal?”)
5. Present a more general idea or organizing framework for a set of similar ideas (“To which family do giraffes, whales, tree shrews, and human beings belong?”)
6. Compare and contrast a set of similar ideas (“What do human beings and whales have in common?”)
7. Explain the relative location of elements in time or space (“Why do most vehicles steer with their front wheels?”)
8. Re-arrange elements and predict effects (“Will this machine still function when we reverse the plus and minus poles?”)
9. Explain a particular state of affairs (“Why does water boil at a lower temperature in the mountains?”)
10. Predict future states (“Will this kettle of water still be boiling at a lower altitude?”)

Selecting a Strategy for the Presentation of Supportive Information

Figure 7.3 depicts the four basic strategies for presenting supportive information. The deductive-expository strategy (left upper box in Figure 7.3) represents a specific type of ‘direct instruction’—pure expository instruction—and is very time-effective. But it has some serious drawbacks. Learners with little or no relevant prior knowledge may have difficulties understanding the general information. And learners are neither invited nor required to elaborate on the presented information nor are they stimulated to connect it to what they already know. Therefore, it is best to use a deductive-expository strategy if instructional time is severely limited, if learners already have ample relevant prior knowledge, and/or if a deep level of understanding is not strictly necessary.

Figure 7.3 Four basic strategies for the presentation of supportive information and some suggestions for their use.

The inductive-inquisitory strategy (right lower box) is the opposite of a deductive-expository strategy and represents ‘discovery learning.’ Although
the Ten Steps discourages pure discovery learning, well-designed guided
discovery can sometimes be useful. Its early use of concrete examples works
well for learners with little prior knowledge, and leading questions promote
activating relevant prior knowledge and elaborating the presented informa-
tion. A drawback is that it is very time consuming and, if not properly con-
ducted, may lead to ineffective cognitive strategies and mental models. Thus,
an inductive-inquisitory approach should always be fully guided and only
be used if sufficient time is available for the instruction, if learners have lit-
tle prior knowledge, and/or if a deep level of understanding is necessary. It
should warrant that the learners reach this deep understanding of the general
information.
By default, the Ten Steps uses the strategies in between direct
instruction and guided discovery learning. Compared to the deductive-
expository strategy, a deductive-inquisitory strategy (right upper box
in Figure 7.3) will cost more time because learners must think of and
present the examples. An inductive-expository strategy (left lower box)
will also cost more time because it involves presenting more examples
to highlight commonalities central to the general information. Thus, in
terms of instructional time, the ‘in-between’ strategies will be roughly
similar. By default, the inductive-expository strategy may be used at the
beginning of an educational program when learners are still novices, and
the deductive-inquisitory strategy may be used later in the educational
program when learners have already acquired some knowledge of the
domain. A more extensive discussion of these and other strategies can be
found in Gorbunova et al. (2023).
The previous subsections assumed that the teacher or another intelligent
agent is responsible for providing supportive information and presenting it
to the learners in a ready-made form or by asking leading questions. Chap-
ter 2 called this approach planned information provision and contrasted
it with resource-based learning (Hill & Hannafin, 2001). The distinction
between planned information provision and resource-based learning is simi-
lar to that between adaptive learning and on-demand education. In adaptive
learning, the teacher or another intelligent agent selects the learning tasks;
in on-demand education, the self-directed learner selects the learning tasks.
Likewise, in planned information provision, an intelligent agent is respon-
sible for providing supportive information; in resource-based learning, it is
the self-directed learner who must search and find this information in books
and articles, the Internet, multimedia sources, and so forth (or, if it begins
with presenting basic resources which themselves are not sufficient, learners
must search for additional resources).
Thus, resource-based learning is the preferred strategy if we want self-directed learners to search and find the supportive information they need to carry out the learning tasks. But what if learners do not yet possess the information literacy skills necessary for searching and finding information? Then, we use an approach similar to the one discussed in Step 3, dealing with the sequencing of learning tasks. Explicitly teaching learners how to select learning tasks requires second-order scaffolding with a gradual shift from adaptive learning to on-demand education. Likewise, explicitly teaching learners how to search and find useful supportive information requires second-order scaffolding with a gradual shift from planned information provision to resource-based learning (Brand-Gruwel et al., 2005; Wopereis & van Merriënboer, 2011). Chapter 14 will further discuss the development of such information literacy skills.

7.5 Cognitive Feedback


In the Ten Steps, performance assessments provide information about
the quality of all aspects of performance, including constituent skills.
They serve as a valuable source of informative feedback for the learner
(as described in Step 2). Unlike recurrent constituent skills, nonrecurrent constituent skills rely on knowledge in the form of cognitive strategies and
mental models (cf. Figure 2.2, showing how knowledge is connected to
nonrecurrent constituent skills). Chapters 8 and 9 discuss analyzing cogni-
tive strategies and mental models. Therefore, when performance assess-
ments indicate that particular nonrecurrent constituent skills do not yet
meet the standards and that points of improvement exist, it will be helpful
to provide feedback concerning the knowledge underlying these aspects of
the complex skill. This is called cognitive feedback (Butler & Winne, 1995).
The Ten Steps considers this a type of supportive information because it
consists of information such as prompts, cues, and questions that help
learners construct or reconstruct their cognitive schemata in a process of
elaboration, aiming to improve future performance. Cognitive feedback
shares its focus on schema construction through elaboration with the other
instructional methods for presenting supportive information. The following subsections describe how to stimulate learners to reflect on the quality of their cognitive strategies and mental models and why the diagnosis of intuitive strategies and/or naïve mental models may be necessary when learners fail to improve on particular nonrecurrent constituent skills for a prolonged period.

Promoting Reflection

Cognitive feedback stimulates learners to reflect critically on the quality of their problem-solving processes and the solutions they have found so that they can develop more effective and efficient cognitive strategies and mental models. In contrast to corrective feedback (described in Chapter 10), the main function of cognitive feedback is not detecting and correcting errors, but rather, fostering reflection by the receiver (see Yuan et al., 2019). Guasch et al. (2013) and Popova et al. (2014) also refer to this as epistemic feedback. A basic method for promoting reflection is asking learners to critically compare and contrast their problem-solving processes and (intermediate) solutions with those of others (cf. Method 6 in Table 7.3). For example, learners can compare their own:

• Problem-solving processes with SAPs presented in the instructional materials, modeling examples of experts illustrating those SAPs, or problem-solving processes reported by other learners. This often requires learners to document their problem-solving process in a process report or video recording.
• Solutions—or intermediate solutions—with solutions presented in case
studies, expert solutions, the solutions of previously encountered prob-
lems, or solutions reported by other learners.

Comparing one’s models and solutions with the models and solutions of
experts may be especially useful in the early phases of the learning process
because it helps to construct a basic understanding of the task domain and
how to approach problems in this domain. Comparisons with models and
solutions provided by peer learners may be especially useful in later phases of
the learning process. Group presentations and discussions typically confront
learners with various alternative approaches and solutions, offering a degree of variability that can help them fine-tune their understanding of the task domain.
Collins and Ferguson (1993) proposed additional methods that promote reflection through ‘feedback by discovery,’ such as selecting counter-
examples, generating hypothetical cases, and entrapping learners (cf. the
inquisitory methods presented in Table 7.3). For example, if a student of
patent examination has applied a particular method to classify patent applica-
tions, they could be made aware of a counter-example of a situation in which
this method will not work well. Alternatively, they could receive a task in
which their strategy leads to a wrong decision. In another example, if a medi-
cal student decides that a patient has a particular disease because the patient
has particular symptoms, the instructor might present a hypothetical patient
with the same symptoms that have arisen as a side effect of medication and
not because of the diagnosed disease.

Diagnosis of Intuitive Strategies and Naïve Mental Models

Sometimes, a learner may receive regular notifications that the standards for a particular nonrecurrent aspect of performance were not met and desired improvements did not materialize, despite repeated cognitive feedback on relevant cognitive strategies and mental models. This situation indicates persistent problems with one or more aspects of performance over a prolonged period. In such a case, it becomes essential to conduct an in-depth diagnostic process to reveal possible intuitive cognitive strategies (see Section 8.3 for their analysis) or naïve mental models (see Section 9.3 for their analysis) that might explain the lack of progress. It is crucial to invite the learner to critically compare their problem-solving process and mental models with those intuitive strategies and naïve models—and to work towards more effective strategies and models in an effortful process of conceptual change. Chapters 8 and 9 offer more suggestions for dealing with intuitive strategies and naïve models. Artificial intelligence and advanced computer-based systems capable of learning are now developing capabilities to perform in-depth analyses of suboptimal problem-solving, reasoning, and decision-making processes to provide cognitive feedback. Until they become more sophisticated (at the time of this writing, that is not the case, but who knows where we will be in a few years), teachers or instructors must actively engage with the learner and their learning process to provide this diagnostic feedback.

7.6 Media for Supportive Information


Supportive information helps learners construct cognitive schemata in a process of elaboration, connecting new information to prior knowledge available in long-term memory. Traditional media for presenting supportive information are textbooks, teachers, and realia (i.e., ‘real’ things). Textbooks describe the ‘theory,’ the domain models that characterize a field of study, and, alas, often to a lesser degree, the SAPs that can help learners solve problems and perform nontrivial tasks in the domain. Teachers typically discuss the highlights in the theory in their lectures, demonstrate or provide expert models of SAPs, and provide cognitive feedback on learner performance. Realia, or descriptions of real entities, illustrate the theory.
As briefly discussed in Chapter 2, new technologies may take over some or even all of those functions. They can present theoretical models and concrete cases in a highly interactive way, explaining problem-solving approaches and illustrating them by showing expert models on video or with animated, lifelike avatars. They also allow learners to discuss the presented information and exchange ideas. The next subsections briefly discuss multimedia, hypermedia, microworlds, epistemic games, and social media.

Multimedia

Multimedia is simply defined as presentations containing words (such as printed or spoken text) and pictures (such as illustrations, photos, animations, or videos). Mayer (2014) presents many principles for instructional message design with multimedia. Principles that are important for the design of supportive information are, for example, the multimedia, self-pacing, segmentation, and redundancy principles (Van Merriënboer & Kester, 2014; see Table 7.4 for examples). The most basic multimedia principle indicates that a proper combination of texts and pictures is more beneficial to learning than just text or pictures alone. This is based on the dual-coding theory first put forth by Allan Paivio (1971), who posited that humans process and represent verbal and non-verbal information in separate, related systems. When learning, if information is encoded through both verbal and visual means, it is more likely to be remembered because it can be retrieved from two distinct—verbal and visual—channels. As a result, the information is easier to process for the learner, and more cognitive capacity will be available for elaborating on it. The self-pacing principle indicates that giving learners control over the pace of the presentation may facilitate elaboration. Especially ‘streaming’ or ‘transient’ information (e.g., video, dynamic animation) may not allow learners sufficient time for deep processing as important information can disappear before the learner can process it. Allowing learners to pause and replay the stream allows them to reflect on the new information and couple it with existing knowledge. The segmentation principle is
also relevant for transient information, such as an animation illustrating a dynamic domain model or a video of an expert modeling a particular problem-solving process. Here, it is important to divide the stream into meaningful segments because this helps learners perceive the structure underlying the process or procedure shown (i.e., the possible consecutive steps) and because the pauses between the segments may give learners extra time to elaborate on the segments (Spanjers et al., 2010). Finally, the redundancy principle holds that presenting redundant information (i.e., speaking the text while the same text is simultaneously available to read) negatively impacts learning. It is a counter-intuitive principle because most people think the presentation of the same information somewhat differently will have a neutral or positive effect on learning. However, learners must first process the information to determine whether the information from the different sources is redundant. They must also semantically decode both sources of information (even though they are in different modalities), thus increasing the load on working memory. Both may hamper elaboration and meaningful learning.

Table 7.4 Multimedia principles for the design of supportive information.

Multimedia principle: For students who learn how lightning develops, present pictures or an animation on how lightning develops together with an explanatory text or narration.
Self-pacing principle: For students in psychotherapy who learn to conduct intake conversations with clients suffering from depression, show video examples of real-life intake conversations and allow them to stop/replay the recording after each segment to reflect on this particular segment.
Segmentation principle: For cooks in training who need to specialize in molecular cooking, present the instruction video on how to make, for example, a ‘golden Christmas tiramisu’ in meaningful cuts.
Redundancy principle: For students in econometrics who learn to explain periods of economic growth, first present a qualitative model (allows them to predict if there will be any growth), and only then present a more encompassing quantitative model (laws that may help them to compute the amount of growth)—but without repeating the qualitative information as such.

Hypermedia

A step beyond multimedia is hypermedia, a nonlinear medium of information that includes graphics, audio, video, plain text, and hyperlinks linked to each other via nodes (Gerjets & Kirschner, 2009), allowing users to navigate
each other via nodes (Gerjets & Kirschner, 2009), allowing users to navigate
and interact with multiple types of media content in a nonlinear fashion. It
extends the concept of hypertext, where text-based links can lead to other text documents, by including diverse multimedia elements. This interconnected environment enhances user engagement and provides flexible pathways for information retrieval and exploration. This has become the basis of most online learning systems being used. By far, the most extensive hypermedia system is the World Wide Web. Most online learning environments make use of hypermedia, and the most modern try to model the learner’s dynamic neural network (Baillifard et al., 2023).
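As a minimal sketch of the node-and-link structure behind such systems (our own illustration, not tied to any specific platform), hypermedia can be modeled as nodes that hold content of some media type and link to other nodes; because links may form cycles, there is no fixed reading order.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One hypermedia node: a piece of content plus its outgoing links."""
    title: str
    media_type: str                    # e.g., 'text', 'video', 'audio'
    links: list["Node"] = field(default_factory=list)

overview = Node("Lightning: overview", "text")
strike_clip = Node("Slow-motion lightning strike", "video")
charge_page = Node("Charge separation in clouds", "text")

overview.links += [strike_clip, charge_page]
charge_page.links.append(overview)     # cycles are allowed: navigation
                                       # through the network is nonlinear
```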
Some authors argue that hypermedia may help learners elaborate and deeply process presented information because their structure, to a certain extent, reflects how human knowledge is organized in elements and non-arbitrary, meaningful relationships between them. But the opposite is also true. Salomon (1998) describes the butterfly defect: learners flutter across the information on the computer screen, click or do not click on nodes or pieces of information, to quickly flutter to the next piece of information, never knowing the value of it and without a plan. Learners often click on links, forgetting what they are looking for. This fluttering leads—at best—to a very fragile network of knowledge and, at worst, to a tangle of charming but irrelevant pieces of information (and not knowledge). A second problem, also signaled by Salomon, is that learners see hypermedia as being ‘easy’ and are inclined to hop from one information element to another without putting effort into deep processing and elaboration of the information. They may see hypermedia as an opportunity to relax, similar to watching television.
One approach to designing hypermedia that stimulates elaboration of the given information can be found in cognitive flexibility theory (Jonassen, 1992; Lowrey & Kim, 2009). This theory starts from the assumption that ideas are linked to others with many different relationships, enabling one to take multiple viewpoints on a particular idea. For instance, if a case study describes a particular piece of machinery, the description may be given from the viewpoint of its designer, a user, the worker who must maintain it, the salesperson who has to sell it, and so on. Comparing and contrasting the different viewpoints helps the learner better process and understand the supportive information.

Microworlds

Microworlds are a special category of multimedia in that they offer highly interactive case studies. A microworld is a simplified and controlled environment or domain designed to facilitate learning, experimentation, or exploration of specific concepts, ideas, or skills. Microworlds are typically small-scale, interactive, and tailored to a particular educational goal, where users can manipulate variables and observe outcomes. Often used in educational settings, microworlds allow learners to experiment within a specific domain, fostering deep understanding through hands-on experience and discovery. They help learners construct conceptual models, structural models, causal models,
or combinations of the three. To support the learning of conceptual models, for example, a microworld of the animal kingdom might present a taxonomy of different classes (e.g., mammals, birds, reptiles, fish, insects, etc.) to the learner, offering definitions along with the opportunity for studying examples of members of the different classes (e.g., by looking at pictures or watching videos). To support the learning of structural models, artificially designed objects might be represented as a set of building blocks or plans, enabling learners to experiment with designing and composing solutions from different building blocks and observe the effects of changing a particular design. For example, such a simulation might present a set of building blocks to the learner for designing integrated circuits, factories, plants, or even roller coasters (as in RollerCoaster Tycoon®). Finally, to support the learning of causal models, they enable learners to change the settings of particular variables and observe the effects of these changes on other variables. Such process simulations might be relatively simple, such as for exploring relations between wind speed, the number of turbines, and current output in a wind park (see the simulation in Figure 7.4, which was developed in Go-Lab; De Jong et al., 2014), but also extremely complex, such as a simulation of an advanced production process in the chemical industry or a delta ecosystem.
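To give a flavor of the causal model that might drive such a process simulation, here is a minimal sketch (our own simplification; the actual model behind the Go-Lab simulator is not specified here). It uses the standard idealization that a turbine’s power grows with the cube of wind speed; the constant k is hypothetical.

```python
def wind_park_output_kw(wind_speed_ms: float, n_turbines: int,
                        k: float = 0.5) -> float:
    """Simplified causal model: output grows linearly with the number of
    turbines and with the cube of wind speed; k lumps together rotor area,
    air density, and efficiency."""
    return k * n_turbines * wind_speed_ms ** 3

# Learners explore the causal model by varying one variable at a time:
for v in (4.0, 8.0, 16.0):
    print(f"{v:>5.1f} m/s, 10 turbines -> {wind_park_output_kw(v, 10):,.0f} kW")
```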
Some microworlds present fully immersive three-dimensional digital envi-
ronments where users feel as if they are inside and part of that microworld
through virtual reality (VR). For instance, a VR simulation where students
can explore molecular structures or historical sites could be considered an
immersive microworld.

Figure 7.4 Wind energy simulator for exploring relations between wind speed,
the number of turbines, and current output (for this and other
examples, see www.golabz.eu).

Epistemic Games

Epistemic games are somewhat related to microworlds (Collins & Ferguson, 1993). The term ‘epistemic’ refers to knowledge and knowing. The goals of epistemic games are to get players to learn to think like professionals by solving problems that mirror real-world challenges in those professions and to help players develop thought processes and problem-solving skills specific to those disciplines. Epistemic games are, thus, knowledge-generating activities that ask learners to structure and/or restructure information and provide new ways of looking at supportive information. Their goal is to make sense of complex phenomena and processes in the world. They use leading questions or other inquiry strategies that promote the activation of prior knowledge and the elaboration of new information. In addition, they involve a complex of rules, strategies, and moves associated with particular representations (i.e., epistemic forms such as conceptual, structural, and causal models). For example, in an epistemic game called Urban Science, students play the role of urban planning interns tasked with redeveloping a pedestrian mall. To accomplish this task, students receive a city budget plan, letters from community groups that want a say in the redevelopment process, a geographic information system model of the region to explore the effects of changes on various indicators, and so forth. Students must compromise and justify their decisions before making a final proposal to the city council (see Bagley & Shafer, 2009, 2011).
A common aspect of microworlds and epistemic games is that they pro-
vide a kind of ‘situated practice’ to the learners, which aims at attaining
a better or deeper understanding of the supportive information that may
eventually help perform the learning tasks. It may be called vicarious experi-
ence, as opposed to the direct experience that is provided by the learning
tasks, meaning that the main goal of microworlds and epistemic games is
not to help learners develop a complex skill or professional competency (as
for simulated task environments and serious games; see Step 1 in Chapter 4)
but to help them construct mental models of the organization of the world
through active exploration and experimentation.

Social Media

Communication with peers or collaborative learning helps learners to effectively elaborate supportive information (Beers et al., 2007). In problem-based
learning, for example, students typically work in small groups. Brainstorming
about a presented problem helps them activate their prior knowledge, mak-
ing it easier to connect newly presented information to what they already
know (Loyens et al., 2011). Group discussions confront learners with alter-
native ideas and viewpoints they would not have come up with, helping them
establish extra meaningful relationships between presented information elements. They also force learners to process information so that they can put it
into their own words. Group feedback sessions confront learners with other
learners’ problem-solving processes and solutions, making it possible to com-
pare and contrast those with their problem-solving processes and solutions.
Social media may support brainstorming, group discussion, feedback
gathering, etc. When properly used, social media may stimulate learners
to elaborate and deeply process newly presented information, but when
improperly used, they may also lead to superficial learning or no learning at
all (Kirschner, 2015). As for all educational media, in the end, it is not the
medium that counts but what is done with it.

7.7 Supportive Information in the Training Blueprint


The training blueprint specifies SAPs, domain models, modeling examples, and case studies per task class. They enable learners to perform the nonrecurrent aspects of the learning tasks at the level of complexity that is characteristic of the task class under consideration. For each subsequent task class, the supportive information is an extension or enrichment of the information presented for the previous task classes, allowing learners to perform more complex versions of the task under more difficult conditions. Thus, each new task class attempts to bring the learners’ knowledge, skills, and attitudes to a higher plane. This approach resembles Bruner’s (1960) conception of a spiral curriculum and Ausubel’s (1968) ideas about progressive differentiation.

Positioning General Information and Examples

The position of general information within a task class depends on the chosen presentation strategy. In a deductive strategy, learners first study the general information (in SAPs and domain models) by reading textbooks, attending lectures, studying multimedia materials, and so forth. This information is then illustrated in the first learning tasks, which will usually take the form of modeling examples and case studies (i.e., deductive-expository strategy), or the learners are tasked with generating examples themselves
(i.e., deductive-inquisitory strategy). Typically, the general information
will be available to the learners before they start working on the learning
tasks and while they work on those tasks. Thus, if questions arise during
practice, they may consult their textbooks, teachers, Internet sites, or any
other background materials containing relevant information for perform-
ing the tasks.
In an inductive strategy, learners begin by studying modeling examples
and case studies that lead them to the general information relevant to per-
forming the remaining learning tasks. Then, there are three options. First,
in an inductive-expository strategy, the general information is explicitly made available to the learners and can be consulted by them at any time.
In the Ten Steps, this is the default strategy for novice learners. Second,
an inductive-inquisitory strategy uses guided discovery, and learners receive
leading questions that guide them to the general information illustrated
in the examples. Third, a system of resource-based learning requires self-
directed learners to search for general information in the library, in a ‘study
landscape,’ or on the Internet. This approach may be particularly useful if
learners need to develop information literacy skills (see Chapter 14).
It is important to emphasize that selecting a presentation strategy is a decision that needs to be made for each task class. Commonly, learners have no relevant prior knowledge when they start a training program, which would be a reason to apply an inductive strategy in the first task classes (default is an inductive-expository strategy). As learners acquire some of the necessary relevant knowledge of the task domain during the training, it becomes possible to apply a deductive strategy in later task classes (default is a deductive-inquisitory strategy).

Positioning Cognitive Feedback

Cognitive feedback on nonrecurrent aspects of task performance is only provided after learners have finished one or more learning tasks. Because
there is often no clear distinction between correct and incorrect behaviors
or between correct and incorrect solutions, it is impossible to provide imme-
diate feedback while the learners are working on the learning tasks (note
that this is advisable for recurrent aspects of performance!). For problem-
solving, reasoning, and decision-making aspects of a task, learners must be
allowed to experience the advantages and disadvantages of applying particu-
lar approaches and rules-of-thumb. For instance, it is impossible to provide
feedback on ‘errors’ for learners constructing a storyboard for a video because
there is no single procedure or collection of drawings that will lead to a cor-
rect solution. Instead, many approaches are possible, and the learner might
apply many rules-of-thumb to reach a more or less adequate solution. Cog-
nitive feedback can thus only be given—and fully designed—retrospectively.
To conclude this chapter, Table 7.5 presents one task class out of a training blueprint for the complex skill ‘producing video content’ (you may refer to Table 6.2 for a description of the other task classes). A specification of the supportive information has been added to the task class and the learning tasks. As can be seen, part of the supportive information is available to the learners before they start to work on the learning tasks (a case study and an inquiry for conceptual and structural models), and part of it is presented after they finish (cognitive feedback). See Appendix 2 for the complete training blueprint.

Table 7.5 Preliminary training blueprint for the complex skill ‘producing video
content’. A specification of the supportive information has been
added to one task class.

Task Class 2: Learners produce videos for fictional clients under the
following conditions:
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to
work with)
Supportive Information (inductive strategy): Case study
Learners study three worked-out examples (i.e., case studies) of promotional
videos for a backpack with integrated solar panels, a virtual fitness
platform, and an urban art festival. In groups, a tutor guides them in
comparing and evaluating each example’s goals, scripts, camera use,
lighting, etc.
Supportive Information: Presentation of cognitive strategies
• SAP for developing a story for promotional videos
• SAPs for interacting with people and collaborating with the crew
• SAPs for shooting video (detailed strategies for creating compositions and
capturing audio)
Supportive Information: Inquiry for mental models: learners are asked to
identify examples of:
• Different types of cameras, microphones, and lights (conceptual models)
• Story arcs (structural models)
Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video
promoting a new coffee machine. They follow a process worksheet to
record footage and create the final video.
Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of
artificial intelligence. A tutor helps them work backward to explain critical
decisions in the production phase and develop a storyboard that fits the
video and meets the client’s requirements.
Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a
short social media advertisement video for a small online clothing store.
Learners remake the ad for a small online art store.

Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.3.
Learning Task 2.4
Support: Conventional task
Guidance: Tutoring
Under guidance from a tutor, learners create a promotional video
highlighting the products or services of a local store.
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.4.

7.8 Summary of Guidelines


• If you design supportive information, you need to distinguish between
general information, illustrations or examples of this general information,
and cognitive feedback.
• If you design general information, such as systematic approaches to
problem solving (SAPs) and domain models, you must use instructional
methods that stress meaningful relationships between elements to help
learners understand the information.
• If you design SAPs, then you need to take a prescriptive perspective
(what the task performer should do) and indicate the problem-solving
phases and the rules-of-thumb that may help the learner to complete
each phase.
• If you design domain models, then you need to take a descriptive per-
spective (how the domain is organized) and distinguish between concep-
tual, structural, and causal models.
• If you present general information, you must always illustrate this infor-
mation with modeling examples (for SAPs) and case studies (for domain
models).
• If you combine general information with modeling examples and/or
case studies, then choose between an inductive-inquisitory strategy (i.e.,
guided discovery), inductive-expository strategy (default for novice
learners), deductive-inquisitory strategy (default for more experienced
learners), and deductive-expository strategy (i.e., direct instruction).
• If you want your learners to develop information literacy skills, then work
toward resource-based learning.
• If you design cognitive feedback, then you need to ask learners to criti-
cally compare and contrast their problem-solving processes and solutions
with those of others.
• If you present supportive information, then consider the use of multi-
media combining words and pictures, hypermedia connecting pieces of
information with hyperlinks, microworlds or epistemic games providing a
‘vicarious’ experience, and social media evoking discussion and exchange
of ideas between learners.
• If you specify supportive information in the training blueprint, then you
need to ensure that the supportive information for a new task class builds
on the information for the previous task classes, extending or embellishing it.
• If you specify supportive information in the training blueprint, then you
need to remember that learners acquire more expertise during the train-
ing program, which might be a reason to shift from inductive to deduc-
tive presentation strategies.

Glossary Terms

Causal model; Cognitive feedback; Conceptual model; Deductive-
expository presentation strategy; Deductive-inquisitory presentation
strategy; Domain model; Epistemic game; Expository method; Genera-
tive learning activities; Guided-discovery learning; Inductive-expository
presentation strategy; Inductive-inquisitory presentation strategy; Inquis-
itory method; Microworlds; Multimedia principle; Multiple viewpoints;
Redundancy principle; Segmentation principle; Self-explanation princi-
ple; Self-pacing principle; Structural model; Systematic approach to prob-
lem solving (SAP)
Chapter 8

Step 5
Analyze Cognitive Strategies

8.1 Necessity
Analysis of cognitive strategies provides the basis for the design of sup-
portive information, particularly systematic approaches to problem solving
(SAPs). Only perform this step if this information is not yet available in
existing materials.

We all have previously encountered a problem or have had to carry out a task
that looks familiar to us and for which we think we are experienced. Unfortu-
nately, while solving the problem or carrying out the task, we encounter some
aspect we have never encountered before. At this point, our available routines
are insufficient, and we must use a different type of knowledge—strategic
knowledge—to solve the problem or carry out the task. Such strategic knowl-
edge helps us systematically approach new problems and efficiently marshal
the necessary resources to solve them. This chapter focuses on analyzing cog-
nitive strategies for dealing with unfamiliar aspects of new tasks. The results
of the analyses take the form of SAPs that specify how expert task-performers
organize their behaviors; that is, which phases they go through while solving
problems and which rules-of-thumb they use to complete each phase suc-
cessfully. Systematic descriptions of how to approach particular problems in a
subject matter domain are sometimes already available in the form of existing
job descriptions, instructional materials, or other documents. If this is the
case, there is no need to carry out the activities described in this chapter. In all
other cases, the analysis of cognitive strategies may be important for design-
ing problem-solving support for learning tasks (e.g., process worksheets), for
refining a chosen sequence of task classes, and, last but not least, for designing
an important part of the supportive information.
The structure of this chapter is as follows. Section 2 discusses the speci-
fication of SAPs, including identifying phases in problem solving and the
rules-of-thumb that may help complete each phase successfully. Section 3
discusses the analysis of intuitive cognitive strategies because the existence
of such strategies may interfere with the acquisition of more effective strate-
gies. Section 4 describes the use of SAPs in the design process because SAPs
help design problem-solving guidance, refne a sequence of task classes, and
design supportive information. For each activity, intuitive strategies may
affect the selection of instructional methods. The chapter concludes with a
summary of the main guidelines.

8.2 Specify SAPs


SAPs are prescriptive plans that specify the goals and subgoals to be reached
by learners when solving problems in a particular domain, plus the rules-
of-thumb that may help them reach those goals and subgoals. Thus, SAPs
describe the control structures that steer the performance of expert task-
performers. It is important to note that SAPs are always heuristic. An exam-
ple of a heuristic from everyday life is a car salesman haggling with a
prospective customer: they initially offer a high price and eventually
arrive at a fair value with the customer. There is
a general approach, but each customer is diferent, and the salesman cannot
just follow a checklist (i.e., an algorithm) that always works. Thus, though
heuristics may help a learner solve problems or carry out a task in the task
domain, their application guarantees neither a solution to the problem
nor correct completion of the task. Although they are less powerful than
algorithmic procedures, power is exchanged for flexibility because SAPs
may be helpful in many more situations than an algorithm. For designing
instruction, analyzing cognitive strategies for inclusion in SAPs serves three
goals; namely, they:

1. Provide the basis for developing task and problem-solving guidance such
as process worksheets or performance constraints (see Section 4.7).
2. Help refine a sequence of task classes, such as identifying a progression
from simple to more complicated cognitive strategies (see Section 6.2).
3. Provide the basis for developing an important part of the supportive
information (see Section 7.2 in the previous chapter).

When analyzing phases and rules-of-thumb, an analyst typically interviews
and observes experts who work on concrete, real-life tasks. It naturally
follows the analytical activities described in Step 2, especially skill decom-
position and the formulation and classification of performance objectives
(Sections 5.2 to 5.4). In Step 2, the analyst answers the question: What
are the constituent skills or subcompetencies necessary to perform real-life
tasks? This step described all constituent skills and performance objectives
relevant to the complex skill taught. The resulting documents may help
the analyst (i.e., the instructional designer) properly prepare the interviews
and observations for the follow-up analyses described in the current chapter
(Step 5). In Step 5, the analyst answers the question: How are the nonrou-
tine aspects of real-life tasks performed? The constituent skills and associated
performance objectives classified in Step 2 as nonrecurrent provide a good
starting point for analyzing cognitive strategies (cf. Table 5.2).
Asking expert task-performers to think aloud while they perform real-life
tasks may help identify the phases they go through and the rules-of-thumb
they apply. Alternatively, video recordings of task performance can serve
as ‘cues’ for retrospective reporting of the thinking processes (cued retro-
spective reporting; Van Gog et al., 2005). Here, the expert task-performer
watches, together with the analyst, a recording of how they carried out the task. The expert can see
what they did and explain why they did those things, but the analyst can also
intervene and ask the expert why they did something. Remember that an
expert task-performer often is not really aware of what they did (for them,
it is not extraordinary or worth mentioning) or does things automatically
without consciously thinking about the when, why, or how.
To keep the analysis process manageable, it is best to progress from
(a) relatively simple tasks from early task classes to more complex tasks from
later task classes and (b) general phases and associated rules-of-thumb to
more specific subphases and rules-of-thumb. According to this stepwise
approach, the analyst first confronts an expert task-performer with a rela-
tively simple task. If Step 3 results in task classes (i.e., sequenced learning
tasks), it is best to start with tasks from the first task class. The expert’s high-
level approach is described in terms of phases, with their related goals and
applied rules-of-thumb that may help reach the goals identified for each
phase. Then, each phase may be further specified into subphases, and again,
rules-of-thumb may be identified for each subphase. After completing the
analysis for tasks at a particular level of complexity, the analyst confronts
the task-performer with progressively more complex tasks. These tasks typi-
cally require additional phases and/or rules-of-thumb and, thus, relate to
nonrecurrent constituent skills and associated performance objectives that
were not dealt with before. This process repeats until the analysis of the
most complex tasks has been completed—by then, the analyst should have
dealt with all nonrecurrent constituent skills and associated performance
objectives. At each iteration, the analyst identifies phases in task completion
and the accompanying rules-of-thumb.

Identifying Phases in Problem Solving

SAPs describe the successive phases in a problem-solving process as an
ordered set of goals the task performer must reach. These phases help learn-
ers approach tasks by optimally sequencing actions and decisions in time
(i.e., temporally). The ordered set of phases with their associated goals and
subphases with their associated subgoals are also called plans or prescrip-
tive scripts. The phases will typically correspond with the (nonrecurrent)
constituent skills in a skill hierarchy (see Section 5.2), the subphases with
the constituent skills one level lower in the skill hierarchy, the sub-subphases
with the constituent skills again one level lower in the skill hierarchy, and so
forth. Some SAPs are ordered linearly, while others are ordered nonlinearly
by taking particular decisions made during task completion into account.
For linear SAPs, temporal relationships link the constituent skills in the cor-
responding skill hierarchy (i.e., the skill on the left-hand side is always per-
formed before the skill to the right of it; see Figure 5.1). A highly familiar
linear sequence of five phases in the field of instructional design is, for exam-
ple, the ADDIE model:

• Phase 1: Analyze—The goal of this phase is to analyze the context in
which the training takes place, the characteristics of the target group, and
the task to be carried out or the content to be taught and/or learned.
• Phase 2: Design—The goal of this phase is to devise a blueprint or lesson
plan for the training program.
• Phase 3: Develop—The goal of this phase is to develop or produce the
instructional materials to be used in the training program.
• Phase 4: Implement—The goal of this phase is to implement the training
program in the organization, taking the available resources and organiza-
tional structures into account.
• Phase 5: Evaluate—The goal of this phase is to evaluate the training pro-
gram and gather information that may be used to improve it.

These phases may be further specified into subphases. For instance, sub-
phases for the first phase include analyzing (a) the context, (b) the target
group, and (c) the task or content domain. Sometimes, the subphases need to
be further specified into sub-subphases, and so forth.
Figure 8.1 gives an example of a SAP for a training program for patent
examiners. This kind of SAP takes the form of a flowchart (see Section 11.2)
and may also be called a SAP chart (refer back to Figure 7.1 for another
example). The left part describes the problem-solving phases that can be
distinguished for one of the task classes in the skill cluster ‘preparing search
reports’ (refer back to Figure 6.4). This SAP shows that ‘writing a draft com-
munication’ occurs only if defects have been found in the patent application;
otherwise, the patent examiner has to ‘write a draft vote’ (i.e., a proposal
to the examining division to grant the patent). The right side of Figure 8.1
shows a further specification of the first phase (‘read application’) of the SAP
on the left and is divided into two subphases with rules-of-thumb.

Figure 8.1 SAP for examining patent applications. It describes phases in prob-
lem solving (see left part) as well as subphases and rules-of-thumb
that may help to complete each phase or subphase (see right part).

For nonlinear SAPs, simultaneous or transposable relationships link several
constituent skills in the corresponding skill hierarchy, indicating that they
are not always performed in the same order. Figure 8.2 provides an example
of a nonlinear sequence of phases for solving thermodynamics problems
(Mettes et al., 1981). Each action in a rectangle corresponds with a goal or
subgoal, and particular decisions made during problem solving are consid-
ered, as indicated by the hexagons. This form is typically used when reaching
particular goals depends on the success or failure of reaching other goals.
In this particular thermodynamics SAP, the phase ‘reformulate the problem’
is only relevant if the problem is not a standard problem. Furthermore, this
phase may be further specified into the subphases ‘identify key relationships,’
‘convert to standard problem,’ and ‘introduce alternate processes.’

Figure 8.2 SAP for solving thermodynamics problems, showing phases, subphases,
and examples of rules-of-thumb (grey rectangles) for one phase and
one subphase.

Identifying Rules-of-Thumb

Rules-of-thumb take the general form: ‘If you want to reach X, you may try
to do Y.’ Expert task-performers often use these rules to generate a solution
to a particular problem tailored to the situation and its particular circum-
stances. Rules-of-thumb are also called heuristics or prescriptive principles.
The basic idea is that some principles that apply in a particular domain may
be formulated in a prescriptive way to yield useful rules-of-thumb. For
example, one well-known principle in the field of learning and instruction is:

There is a positive relationship between the amount of task practice and
the level of task performance.

This principle can also take a prescriptive form as a rule-of-thumb:

If you want learners to reach a high level of performance, consider pro-
viding them with large amounts of practice.

If one thinks of a continuum with highly domain-specific, algorithmic rules
at one extreme and highly domain-general problem-solving methods at the
other, rules-of-thumb would typically be situated somewhere in the mid-
dle. While they are still linked to a particular domain, they merely indicate
a good direction to search for a solution (i.e., generate a procedure) rather
than algorithmically specifying a part of this solution (i.e., perform or carry
out a procedure; see Section 11.2). Some other examples of rules-of-thumb
are:

• If you are driving a car and have difficulties negotiating curves, try to
estimate the angle of the curve and turn the steering wheel appropriately
to that angle.
• If you are defending in a soccer game, try to look at the ball rather than
the person with the ball.
• If you are controlling air traffic at an airport and are overwhelmed by the
task, try to control the aircraft by making as few corrections as possible.
• If you are presenting at a conference, try to adapt the amount of informa-
tion you will present to your audience’s prior knowledge.
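
Because all such heuristics share the condition-action form ‘If you want to reach X, you may try to do Y,’ an analyst can record them in a uniform structure during a SAP analysis. The following sketch is purely illustrative; the class and field names are our own and not part of the Ten Steps:

from dataclasses import dataclass, field

@dataclass
class RuleOfThumb:
    """A heuristic in condition-action form: 'If <goal>, try <action>.'"""
    goal: str    # the 'IF-side': the goal the performer wants to reach
    action: str  # the action worth trying; success is not guaranteed
    conditions: list = field(default_factory=list)  # optional extra conditions

    def as_text(self) -> str:
        extra = ''.join(f' and {c}' for c in self.conditions)
        return f'If you want to {self.goal}{extra}, try to {self.action}.'

# One of the example heuristics above, recorded in this form:
defending = RuleOfThumb(
    goal='defend well in a soccer game',
    action='look at the ball rather than the person with the ball',
)
print(defending.as_text())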

As part of a SAP analysis, rules-of-thumb are analyzed for each phase and
subphase. Each specified goal may then define a category of rules-of-thumb
dealing with similar causes and effects. The ‘IF-sides’ of the rules-of-thumb
typically refer to the general goal of the phase under consideration but may
include additional conditions. For instance, the grey rectangles in Figure 8.2
list rules-of-thumb that may help learners understand the problem (the first
main phase) and help introduce alternate processes (one of the subphases
within the main phase, ‘reformulate the problem’). Three guidelines for the
specifcation of such rules-of-thumb are:

1. Provide only rules-of-thumb not yet known by the learners and limit
them to those rules-of-thumb necessary for performing the most impor-
tant tasks.
2. Formulate rules-of-thumb in a readily understandable way for learners,
and use the imperative to make clear that they are directions for desired
actions.
3. Make the text as specific as possible, but at the same time, remain general
enough to ensure the appropriateness of the rules-of-thumb for all situa-
tions to which they apply.

8.3 Analyzing Intuitive Cognitive Strategies


The Ten Steps focuses on the analysis of effective cognitive strategies in
SAPs. This is a ‘rational’ analysis because it describes how the tasks should be
performed. In addition, the designer may analyze the target learners’ cogni-
tive strategies. This is an ‘empirical’ analysis because it describes how the
tasks are actually carried out. There may be large differences between SAPs
that describe an effective approach as applied by expert task-performers and
intuitive or naïve cognitive strategies initially used by learners and identified
in an empirical analysis.
For example, a common intuitive strategy for solving design problems
follows a top-down, depth-first approach. When novice researchers write a
scientific article or novice computer programmers write a computer pro-
gram, they typically decompose the problem into subproblems and deal with
each in separate sections or subroutines. The detailed solution is then fully
implemented for each subproblem before continuing with the next sub-
problem. In the two problems just presented, the novice researcher writes
the complete text for one section (e.g., the introduction) or the novice pro-
grammer develops full programming code for one subroutine. As a result,
novices easily lose track of the relationships between sections or subroutines
(e.g., that the variables measured in the methods section should also appear
in the introduction) and spend large amounts of time linking pieces of text
or code, repairing suboptimal texts or codes, and rewriting pieces of text or
code afterward. Experts, on the other hand, follow a top-down, breadth-
first approach. They decompose the problem into subproblems (sections,
subroutines), then decompose all subproblems into sub-subproblems, and
so forth, until they arrive at the final solution at the text or code level. For
example, expert researchers will first generate an outline for the scientific
article, listing the main contents for the introduction, methods, results, and
discussion sections. Next, they will outline each paragraph within each sec-
tion, ensuring a good flow of argumentation and consistency between dif-
ferent parts of the text. Only in the end will they produce the final detailed
text. Iteratively developing extensive and detailed outlines generates the
solution in its full breadth.
Concerning rules-of-thumb, there may also be large differences between
the intuitive heuristics learners use and those identified in a SAP analysis. To
clarify this point, we refer to the rules-of-thumb presented in the previous sec-
tion. The intuitive counterpart for the rule-of-thumb for negotiating curves
with an unfamiliar vehicle by estimating the angle is to steer by lining up the
hood ornament (if available) to the road stripe. This strategy requires many
more decisions and leads to a higher cognitive workload. The intuitive coun-
terpart for the rule-of-thumb for defending in a soccer game by closely watching
the ball is to focus on the person you are defending. This strategy negatively
affects performance because feints easily set the defender on the wrong foot.
The intuitive counterpart of the rule-of-thumb to control air traffic by making
corrections only if this is necessary to prevent dangerous situations is to make
many small corrections, a strategy that may work when only a few aircraft
are on the radar screen (e.g., at a small regional airport) but which is not
suitable for handling large numbers of aircraft in busy airports. Finally, the
intuitive counterpart of the rule-of-thumb to adapt the amount of presented
information to the audience’s prior knowledge is to provide the audience with
as much information as possible in the available time. This strategy does not
work because the audience becomes cognitively overloaded or bored.

8.4 Using SAPs to Make Design Decisions


Only carry out Step 5 if the information on SAPs is not available in the
existing instructional materials. If the step is carried out, the results of the
analysis provide the basis for several design activities. In particular, speci-
fied SAPs may help set up guidance for task performance, further refine an
existing sequence of task classes, and design part of the supportive informa-
tion. Furthermore, the identification of intuitive strategies may affect several
design decisions.

Designing Guidance for Task Performance

SAPs can help design problem-solving guidance in the form of process work-
sheets (see Section 4.7 and Table 4.5 for examples). Such worksheets indicate
the phases to go through when carrying out the task. The rules-of-thumb
can be presented to the learners as statements (e.g., “If you want to improve
your understanding of the problem, then you might try to draw a figure
representing the problem situation”) or as guiding questions (“What activi-
ties could you undertake to improve your understanding of the problem?”).
Moreover, worksheets can help determine performance constraints that can
be applied to ensure that learners cannot perform actions irrelevant to or
detrimental to the phase they are working on and/or cannot continue to
the next phase before successfully completing the current one. For example,
based on the SAP presented in Figure 8.2, a performance constraint might
be that learners must first submit their scheme of characteristics of the prob-
lem and its system boundaries to the teacher and are only allowed to con-
tinue solving the problem after receiving approval of this scheme.

Refining Task Classes Through a Progression of SAPs

SAPs can help refine an existing sequence of task classes using a method
known as knowledge progression (see Section 6.2). The simplest version of
the task (i.e., the first task class) will usually be equivalent to the simplest
or shortest path through a SAP chart. Subsequently, during a process
known as path analysis (Merrill, 1987), more intricate paths can be iden-
tified, corresponding to more complex task classes. More complex paths
contain more decisions and/or goals than simpler ones. In addition,
more complex paths usually contain the steps of simpler paths, allowing
for the organization of a hierarchy of paths to refne a sequence of task
classes. For example, the SAP chart shown in Figure 8.2 encompasses
three paths:

1. The shortest path occurs for a standard problem. Thus, the question “Is
this a standard problem?” is answered with “Yes”. Consequently, the first
task class contains standard problems in thermodynamics.
2. The next-shortest path occurs for a nonstandard problem that can be
transformed into a standard problem if identifying key relations yields
a solvable set of equations. Thus, the question “Is this a standard prob-
lem?” is answered with “No,” and the question “Is this a solvable set of
equations?” is answered with “Yes”. Consequently, the second task class
contains nonstandard problems that can easily be converted to standard
problems.
3. The longest path occurs for a nonstandard problem that cannot be trans-
formed into a standard problem by identifying key relations. In this situ-
ation, the question “Is this a standard problem?” is answered with “No,”
and the question “Is this a solvable set of equations?” is also answered
with “No”. Consequently, the third and final task class contains non-
standard problems that can only be transformed into standard problems
via reformulations, special cases, or analogies.
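
A minimal sketch of this hierarchy of paths, with the two decisions from the SAP chart encoded as booleans (the function name and parameters are our own, purely illustrative):

def task_class_for(is_standard: bool, solvable_equations: bool) -> int:
    """Assign a thermodynamics problem to a task class via path analysis.

    The two decisions mirror the hexagons in the SAP chart of Figure 8.2;
    shorter paths through the chart correspond to simpler task classes.
    """
    if is_standard:
        return 1  # shortest path: standard problems
    if solvable_equations:
        return 2  # next-shortest path: convertible to a standard problem
    return 3      # longest path: reformulations, special cases, or analogies

# A nonstandard problem whose key relations yield a solvable set of equations:
print(task_class_for(is_standard=False, solvable_equations=True))  # prints 2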

Designing Supportive Information

SAPs also provide the basis for designing the part of the supportive informa-
tion related to cognitive strategies. First, SAPs can explicitly be presented
to learners because they tell them how to best approach problems in a par-
ticular task domain (see Section 7.2). An instructional specification is often
needed after the analysis to ensure that the phases and the rules-of-thumb
are understandable for the learners. Second, SAPs can drive the search for
or design of modeling examples that give concrete examples of their appli-
cation (see Section 7.3). Such modeling examples may be seen as learning
tasks with maximum process-oriented support or illustrations of cognitive
strategies (see Figure 7.2). Finally, SAPs can help provide cognitive feedback
to learners (see Section 7.5). For instance, learners can be asked to compare
their problem-solving process with a presented SAP or with modeling exam-
ples illustrating this SAP.

Dealing with Intuitive Strategies

Identifying intuitive strategies may affect decision making for the three
design activities discussed. Concerning designing problem-solving guidance,
intuitive strategies may be a reason to provide extra guidance to learners
working on the learning tasks. This extra guidance prevents unproductive
behaviors and mistakes that result from using the usually far-from-optimal
intuitive strategy. Providing particular aids such as process worksheets and
structured answer forms can help learners stay on track and apply useful
rules-of-thumb. Performance constraints can block using intuitive strate-
gies and can force learners to apply a more effective systematic approach.
In a course on scientific writing, for example, learners could be forced or
coerced into fully decomposing their manuscript into ideas and subideas
with associated headings and subheadings (i.e., to use a top-down breadth-
first approach) before they are allowed to start writing the text. All word
processors have this function (i.e., the outline function or view) built into
them (see, for example, De Smet et al., 2011).
Concerning refining task classes, intuitive strategies can be a reason for
providing additional task classes; that is, to slowly work from simpler learn-
ing tasks toward more complex learning tasks. This allows learners to care-
fully compare and contrast their intuitive approach to problem solving at
each level of complexity with more effective approaches. Ideally, the intui-
tive approach will gradually be replaced, although some intuitive strategies
(often in the form of misconceptions) are highly resistant to change.
Concerning designing supportive information, intuitive strategies can
influence the choice of instructional methods. For the coupling of SAPs and
modeling examples, an inductive-expository strategy, which involves study-
ing modeling examples before discussing the general SAP, is recommended
as the default strategy for novice learners (see Figure 7.3). Extra modeling
examples could be provided to address intuitive strategies, and the inductive-
expository strategy might be replaced with an inductive-inquisitory strategy
(i.e., guided discovery). This approach allows learners to connect the newly
presented phases and rules-of-thumb to their existing intuitive ideas. For
using cognitive feedback (see Section 7.5), intuitive strategies underscore
the importance of learners carefully comparing and contrasting their intui-
tive approaches and the resulting solutions with the provided SAPs, mod-
eling examples, and expert solutions.

8.5 Summary of Guidelines


• If you analyze cognitive strategies, then observe and interview expert task-
performers to identify both the phases and subphases in a SAP process
and the rules-of-thumb that may help complete each phase successfully.
• If you identify phases and subphases in a SAP, then specify an ordered set
of (sub)goals that the task performer should reach and, if necessary, the
decisions they must make because particular goals depend on the success
or failure of reaching previous goals.
• If you identify rules-of-thumb that may help complete one problem-
solving phase successfully, then list the conditions under which the
rule-of-thumb may help solve the problem and the action or actions the
learner could try out.
• If you analyze intuitive cognitive strategies, then focus on the discrepan-
cies between the problem-solving phases and rules-of-thumb applied by
expert task-performers and those by a naïve learner.
• If you use SAPs to design problem-solving guidance, then design pro-
cess worksheets or performance constraints so that the learner is guided
through all relevant problem-solving phases and prompted to apply use-
ful rules-of-thumb.
• If you use a SAP to refine a sequence of task classes, then identify simple
to increasingly more complex paths in the SAP charts and define associ-
ated task classes.
• If you use SAPs to design supportive information, then formulate the
phases and rules-of-thumb so that they are understandable for your target
group, and select or design modeling examples to illustrate them.
• If you teach a cognitive strategy to learners inclined to use an ineffec-
tive intuitive strategy, then provide extra problem-solving guidance, let
task complexity progress slowly, and let the learners critically compare
and contrast their intuitive problem-solving strategy with more effective
strategies.

Glossary Terms

Cognitive strategy; Cued retrospective reporting; Empirical analysis; Intui-
tive cognitive strategy; Process-oriented support; Rational analysis; Ret-
rospective reporting; Rule-of-thumb; Think aloud
Chapter 9

Step 6
Analyze Mental Models

9.1 Necessity
Analyzing mental models into conceptual, structural, and causal models
provides the basis for designing supportive information, particularly domain
models. Only carry out this step if this information is not yet available in
existing materials.

What we know determines what we see and not the other way around
(Kirschner, 1992, 2009). A geologist walking in the mountains of France will
see geological periods and rock formations. A bicyclist in those same mountains
will see gear ratios and climbing percentages. Each of them sees the same thing
(in terms of their sensory perception) but interprets what they see in very dif-
ferent ways (in terms of how they understand what they see). In this respect,
these two people have very different mental models of the mountains of France.
Mental models help task performers understand a task domain, reason
in this domain, give explanations, and make predictions (Van Merriën-
boer et al., 2002). This chapter focuses on analyzing mental models that
represent the organization of a domain. The result of such an analysis is a
domain model, which can take the form of a conceptual model (What is
this?), a causal model (How does this work?), and a structural model (How
is this built?). Mental models specify how expert task-performers mentally
organize a domain to reason about it and support their problem solving
and decision making. Extensive descriptions of relevant domain models are
often available in existing instructional materials, study books, and other
documents. If this is the case, then there is no need to carry out the activi-
ties described in this chapter. In all other cases, analyzing mental models is
important to refine a chosen sequence of task classes and, in particular, to
design an important part of the supportive information.
The structure of this chapter is as follows. Section 2 discusses the specifi-
cation of domain models, including the identification of conceptual, struc-
tural, and causal models. Section 3 briefly discusses the empirical analysis of
intuitive mental models because the existence of such models may interfere
with the learner’s construction of more effective and scientific models, as
was the case in the previous chapter on analyzing cognitive strategies. Sec-
tion 4 discusses the use of domain models for the design process. Domain
models help refine a sequence of task classes and design an important part
of the supportive information. For both activities, intuitive mental models
may affect the selection of instructional methods. To conclude, we briefly
compare the analysis of mental models with the analysis of cognitive strate-
gies and present the main guidelines.

9.2 Specify Domain Models


Domain models are rich descriptions of how the world is organized in a
particular task domain. They allow the interconnection of facts and concepts
by defining meaningful relationships. This process often results in highly
complex networks representing rich cognitive schemata that enable learn-
ers to interpret unfamiliar situations in terms of their general knowledge
or ‘understand new things.’ For the design of instruction, the analysis of
mental models into domain models:

1. Helps refine a sequence of task classes; for example, by identifying a
progression from simple toward more complicated mental models
underlying the performance of increasingly more complex learning tasks
(see Section 6.2).
2. Provides the basis for developing an important part of the supportive
information for each task class (see Section 7.2).

The analysis of mental models will typically be based on extensive docu-
ment study and interviewing expert task-performers who explain which
models they use when working in a particular domain (e.g., talking about
the domain, classifying and analyzing things, giving explanations, making
predictions). As for the analysis of cognitive strategies in Step 5 (analyze
cognitive strategies), the analysis of mental models naturally follows the
analytical activities described in Step 2 (design performance assessments),
especially skill decomposition and the formulation and classification of per-
formance objectives (Sections 5.2 to 5.4). In Step 2, the main question
was: What are the constituent skills or subcompetencies necessary to carry
out real-life tasks? The resulting documents may help the analyst prepare
document study and expert interviews for the follow-up analyses described
in the current chapter. Now, the main question is: What do learners need
to know to carry out the nonroutine aspects of real-life tasks? The con-
stituent skills and associated performance objectives classified as nonrecur-
rent in Step 2 provide a good starting point for analyzing mental models
(cf. Table 5.2).
For many professional and scientific fields, domain models are extensively
described in available study books, reports, articles, and other documents.
If that is the case for you, then there is no need for you to perform Step 6.
Yet, especially in advanced domains, learners may need to be prepared for
new jobs or tasks and/or for dealing with new tools or machinery. If this is
the case, then interviewing experts or pioneers in the task domain is neces-
sary to develop relevant domain models for the simple reason that written
documentation is not yet available. To keep the analysis process manageable,
it is best to progress from analyzing the mental models that underlie carrying
out simple tasks from early task classes to the mental models that underlie
carrying out more complex tasks from later task classes. Thus, the analyst
first confronts a task performer with relatively simple tasks and assists them
in describing a mental model helpful for performing those tasks.
This process repeats for each new task class, with tasks at increasingly higher
levels of complexity.
At each level of task complexity, the analysis of mental models is an associ-
ative process. The analyst establishes meaningful relationships or associations
between facts and/or concepts that could possibly help carry out problem
solving, reasoning, and decision making at the given level of complexity—
but this does not necessarily need to be the case. Simply said, this reflects
the commonly held belief that “the more you know about a domain and
more or less related domains, the more likely it is that you will be able to
effectively solve problems or carry out tasks in this domain”. A great risk
here is to proceed with the associative analysis process for too long. In a
sense, everything is related to everything, and thus, an analyst can build
seemingly endless networks of interrelated pieces of knowledge. Therefore,
it is essential not to introduce new relationships if an expert task-performer
cannot clearly explain why newly associated facts or concepts improve their
performance.
However, this is certainly not an easy decision to make. For instance,
should students in information science know how a computer works to pro-
gram a computer, and if so, then to what extent? Should art students know
about the chemistry of oil paints to be able to paint, and if so, how much
should they know? Should students in educational technology know how
people learn to produce an effective and efficient instructional design, and
if so, to what level of specificity should they know this? When is enough
enough? The number of relationships in domain models is theoretically
unlimited. According to the prominent types of relationships, the earlier
discussed three basic kinds of models may be distinguished: conceptual,
structural, and causal.

Identify Conceptual Models

The basic elements in conceptual models are concepts, which represent a
class of objects, events, or other entities by their characteristic features (also
called attributes or properties). A concept can be viewed as a node with links
to propositions or ‘facts’ that enumerate the features of the concept (see
Step 9—analyze prerequisite knowledge—in Chapter 12). Concepts enable
a person to identify or classify concrete things as belonging to a particular
class. Most words in a language identify concepts, and most concepts are
arbitrary, meaning that things can be grouped or classified in many differ-
ent ways. For instance, a computer can be classifed according to its nature
(‘notebook’), color (‘black’), processor (‘Intel Core i9’), operating system
(‘Windows 11’), and so on. However, in a particular task domain, some
concepts are more useful for carrying out a task than others. Classifying
computers by their color is not very useful to a service engineer, while clas-
sifying them by type of processor is. But for an interior designer, classifying
them by color makes sense.
Conceptual models interrelate concepts with each other. They allow one
to answer the basic question: What is this? Conceptual models are particu-
larly important for categorizing, describing, and qualitative reasoning tasks
because they allow the person carrying out the task to compare things with
each other, analyze things in their parts or kinds, search for examples and
analogies, and so on.

Many different types of relationships can be used in constructing a con-
ceptual model. A particularly important relationship is the kind-of relation-
ship that indicates that a particular concept is a member of another, more
abstract, or more general concept. For instance, both the concepts ‘chair’
and ‘table’ have a kind-of relationship with the concept ‘furniture’ because
they both belong to the same general class. Kind-of relationships often define
a hierarchy of concepts called a taxonomy. In a taxonomy, more abstract, gen-
eral, or inclusive concepts are called superordinate. Concepts at the same level
of abstraction, generalization, or inclusiveness are called coordinate, and con-
cepts that are more concrete, specific, or less inclusive are called subordinate.
Superordinate concepts provide a context for discussing ideas lower in the
hierarchy; coordinate concepts provide a basis for comparing and contrasting
ideas at the same level in the hierarchy; and subordinate concepts provide a
basis for analyzing an idea in its kinds. Table 9.1 provides examples of super-
ordinate, coordinate, and subordinate kind-of relations between concepts.
Another important association is the part-of relationship, which indicates
that a particular concept is part of another concept. The concepts ‘keyboard’
and ‘monitor,’ for example, have a part-of relationship with the concept
‘desktop computer’ because both are parts of a desktop computer. Part-of
relationships often define a hierarchy of concepts that is called a partonomy.

Table 9.1 Examples of superordinate, coordinate, and subordinate kind-of
relationships (taxonomy) and part-of relationships (partonomy).

Superordinate (provide context)
• Kind-of: the concept ‘animal’ is superordinate to the concept ‘mammal’;
the concept ‘non-fiction’ is superordinate to the concept ‘cookbook’
• Part-of: the concept ‘body’ is superordinate to the concept ‘organ’;
the concept ‘book’ is superordinate to the concept ‘chapter’

Coordinate (compare and contrast)
• Kind-of: the concept ‘mammal’ is coordinate to the concept ‘bird’;
the concept ‘study book’ is coordinate to the concept ‘travel guide’
• Part-of: the concept ‘stomach’ is coordinate to the concept ‘heart’;
the concept ‘chapter’ is coordinate to the concept ‘preface’

Subordinate (analyze)
• Kind-of: the concept ‘human’ is subordinate to the concept ‘mammal’;
the concept ‘self-study guide’ is subordinate to the concept ‘study book’
• Part-of: the concept ‘chamber’ is subordinate to the concept ‘heart’;
the concept ‘paragraph’ is subordinate to the concept ‘chapter’
Table 9.1 also provides examples of superordinate, coordinate, and
subordinate part-of relationships between concepts.
Taxonomies and partonomies are examples of hierarchically ordered
conceptual models. Alternatively, each concept in heterarchical models may
have relationships with one or more other concepts, yielding network-like
structures. A concept map is a heterarchical model in which the relation-
ships are not labeled. In such a map, specifed relationships mean nothing
more than ‘concept A is in some way related to concept B,’ ‘concept A is
associated with concept B,’ or ‘concept A has something to do with concept
B.’ Sometimes, a distinction is made between unidirectional relationships
(indicated by an arrow from concept A to concept B) and bidirectional rela-
tionships (indicated by either a line without an arrowhead or a line with a
double-headed arrow between the concepts A and B). Figure 9.1 provides
an example of a concept map indicating relationships between concepts that
may help to reason about the pros and cons of being a vegetarian.

Figure 9.1 Example of a concept map.

One problem with the unlabeled relationships in a concept map is that
their meaning may be misinterpreted or obscure. An alternative is a semantic
network (see Figure 9.2), similar to a concept map in which the relation-
ships or links are explicitly labeled.

Figure 9.2 Example of a semantic network.

In addition to the kind-of and part-of relationships discussed, other
meaningful relationships that serve to label the links in a semantic network
include:

• Experiential relationship, which connects a new concept to a concrete
example that is already familiar to the learners: a ‘car alternator’ exempli-
fies the concept ‘generator.’
• Analogical relationship, which connects a new concept to a similar, famil-
iar concept outside of the task domain: the ‘human heart’ is said to be
similar to a ‘HS pump.’
• Prerequisite relationship, which connects a new concept to another, famil-
iar concept that enables the understanding of that new concept: under-
standing the concept ‘prime number’ is enabled by understanding the
prerequisite concept ‘division.’

Experiential, analogical, and prerequisite relationships permit a deeper
understanding of the task domain because they explicitly relate the con-
ceptual model to what is already known by the learner. They are particu-
larly important when inductive and/or inquisitory instructional methods
are used for novice learners (see Section 7.4) because these relationships
may help the learners construct general and abstract models from their prior
knowledge. Other meaningful relationships include:

• Location-in-time relationship, indicating that a particular concept has
a particular relation in time with another concept (i.e., before, during,
after): In a military context, the concept ‘debriefing’ has an after-relation
with the concept ‘operation.’

• Location-in-space relationship, indicating that a particular concept has a
particular relation in space with another concept (e.g., in, on, under, above,
etc.): In this book, the concept ‘figure caption’ is located in space under
the ‘figure,’ while the concept ‘table caption’ is located above the ‘table.’
• Cause-effect relationship, indicating that changes in one concept (the
cause) are related to changes in another concept (the effect): The concept
‘demand’ usually has a cause-effect relationship with the concept ‘price’
because an increase in the one will cause an increase in the other if the
supply remains constant.
• Natural-process relationship, indicating that one concept typically coin-
cides with or follows another concept. However, this does not imply
causality: The concept ‘evaporation’ has a natural-process relation with
‘condensation’ (in a distillation cycle, you cannot say that evaporation
causes condensation or the other way around).
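
In computational terms, a semantic network with labeled links is often stored as a set of (concept, relationship, concept) triples. The sketch below reuses relationships mentioned in this section; the variable and function names are our own illustration:

# A semantic network stored as labeled (concept, relationship, concept) triples.
triples = [
    ('car alternator', 'is-example-of', 'generator'),     # experiential
    ('division', 'is-prerequisite-for', 'prime number'),  # prerequisite
    ('chair', 'is-kind-of', 'furniture'),                 # kind-of
    ('keyboard', 'is-part-of', 'desktop computer'),       # part-of
    ('demand', 'causes-change-in', 'price'),              # cause-effect
]

def links_for(concept: str) -> list:
    """Return every labeled link in which the given concept participates."""
    return [t for t in triples if concept in (t[0], t[2])]

print(links_for('generator'))
# [('car alternator', 'is-example-of', 'generator')]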

Which of the previously mentioned relationships to use in a conceptual
model depends on the characteristics of the task domain, especially the learn-
ing tasks that must be carried out in this domain. To keep the analysis process
manageable, conceptual models that facilitate qualitative reasoning in a task
domain are most effective when they use a parsimonious (i.e., concise) set of
relationships. Furthermore, one may explicitly focus on location relation-
ships in structural models or cause-effect and natural-process relationships in
causal models. The following subsections discuss these special models.

Identify Structural Models

Structural models are domain models where location-in-time and/or loca-
tion-in-space relations between concepts are dominant and together form
plans, allowing learners to answer the question “How is this built?” or
“How is this organized?” Plans organize concepts in time or space. Plans
that organize concepts in time are also called scripts that describe a stereo-
typed sequence of events or activities using location-in-time relationships.
For instance, in biology, the following script is seen as typical for the mating
behavior of the male stickleback (a family of fish). The male stickleback:

• develops a red breast, followed by
• constructs a nest from weeds held together by secretions from their kid-
neys, followed by
• attracts a female stickleback to the nest, followed by
• fertilizes the eggs that the female stickleback has laid in their nest, fol-
lowed by
• guards the eggs until they hatch.

This script allows a biologist to interpret the observation of a male stickle-
back acting in a certain way at the start of a stickleback mating ritual. It
allows the biologist to understand what is going on because the mating
behavior of other fish is quite different from that of the stickleback, and the
behavior of non-mating sticklebacks is quite different from that of mating
sticklebacks. Furthermore, scripts allow for predicting future events or find-
ing a coherent account for disjointed observations. For example, if a biolo-
gist observes a male stickleback with a red breast, they can predict that the
stickleback will soon start constructing a nest.
Plans that organize concepts in space rather than time are also called
templates, which describe a typical spatial organization of elements using
location-in-space relationships. Early research in the field of chess, for exam-
ple, showed that expert chess players have better memory for meaningful
problem states than novices because they have templates available that refer
to specific patterns of chess pieces on the board, but for random chess con-
figurations on the board, there is no difference in memory between novices
and experts (De Groot, 1966; Chase & Simon, 1973). As a more practical
example, in the feld of scientifc writing, the following template is seen as
typical for an empirical journal article:

• Abstract, which comes before the
• Introduction, which comes before the
• Method, which comes before the
• Results, which come before the
• Discussion, which comes before the
• References

This template helps researchers understand empirical articles quickly because
they all adhere to the same basic structure. It also helps them write such
articles because the templates steer the writing process. In the same way,
templates in other fields help task performers design artifacts: computer
programmers use stereotyped patterns of programming code, architects use
typical building-block solutions for designing buildings, and chefs develop
their menus from standard dinner courses.
Structural models often do not consist of just one plan, but rather, an
interrelated set of plans that helps understand and design artifacts. Differ-
ent kinds of relationships might be used to associate plans with each other.
Figure 9.3 provides a structural model helpful for writing scientific articles.
As another example in computer programming, more general and abstract
plans may refer to the basic outline of a program (e.g., heading, declaration,
procedures, main program). These are related to less abstract plans that
generally represent basic programming structures such as procedures, loop-
ing, and decision structures. These, in turn, are related to concrete plans
providing a representation of structures that are close to the actual program-
ming codes, such as specific templates for looping structures (e.g., WHILE-
loops, FOR-loops, REPEAT-UNTIL-loops), conditions (e.g., IF-THEN,
CASE), and so on.

Figure 9.3 Structural model for writing scientific articles.
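
Such an interrelated set of plans can be pictured as a hierarchy in which each plan names the ordered subplans it is built from. The sketch below paraphrases the journal-article template discussed earlier; the subplans of ‘introduction’ and ‘method’ are our own assumed examples, and the dictionary-based representation is purely illustrative:

# A structural model as an interrelated set of plans: each plan lists the
# subplans it is composed of, in their typical order.
plans = {
    'empirical article': ['abstract', 'introduction', 'method',
                          'results', 'discussion', 'references'],
    'introduction': ['problem statement', 'prior work', 'hypotheses'],  # assumed
    'method': ['participants', 'materials', 'procedure'],               # assumed
}

def expand(plan: str, depth: int = 0) -> None:
    """Print a plan and, recursively, the subplans it is composed of."""
    print('  ' * depth + plan)
    for subplan in plans.get(plan, []):
        expand(subplan, depth + 1)

expand('empirical article')  # prints the nested outline of the template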

Identify Causal Models


Causal models are domain models in which cause-effect and natural-process
relations between concepts dominate, forming principles that allow learners
to answer questions such as “How does this work?” or “Why doesn’t this
work?” Principles relate changes in one concept to changes in another, with
a cause-effect or a natural-process relationship. Cause-effect relationships
may be deterministic, indicating that one change always implies another
change. For instance, “a decrease in the volume of a vessel holding a gas
always yields an increase in gas pressure if the temperature remains con-
stant” or “a greater amount of carbon dioxide (CO2) and/or ozone (O3) in
the atmosphere leads to more smog”. The relationships may also be proba-
bilistic, indicating that one change sometimes implies another change. For
instance, “working hard can lead to success” or “sunbathing can lead to skin
cancer”. Natural-process relationships are used when one event typically
coincides with another event (A occurs simultaneously with B; A occurs
before B; or B follows A)—without causality implied. The relationship is,
thus, merely correlational. For instance, “the sun rises each morning” (is it
morning because the sun rises, or does the sun rise because it is morning?),
or “overweight people exercise little”.
Causal models typically do not consist of one principle but an inter-
related set of principles that apply to a particular domain. They allow task
performers to understand the workings of natural phenomena, processes,
and devices and to reason about them. Given a cause, the model enables
making predictions and drawing implications (i.e., given a particular state,
predict what efect it will have), and given an efect, the model enables
giving explanations and interpreting events (i.e., given a particular state,
explain what caused it). If a causal model describes the principles that apply
to natural phenomena, it is called a theory. For instance, a theory of elec-
tricity may be used to design an electric circuit that yields a desired type
of output, given a particular input (i.e., a large current running
through a thin resistor makes the resistor very hot: this is how a light
bulb or a toaster functions). If a causal model describes the principles that
apply in engineered systems, it is called a functional model. Such a model
describes the behavior of individual components of the system by stating
how they change in response to changes in input and how this afects their
output and describes the behavior of the whole system by describing how
the outputs of one device connect to the inputs of other devices (i.e., ‘how
it works’).
Given a desired effect, a well-developed functional model of an engineered system allows someone performing a task to identify and arrange the causes that bring about a desired or undesired effect. This approach may be particularly helpful for carrying out operating tasks. Given an undesired effect (e.g., a fault, failure, error, or disease), a well-developed functional model allows a task performer to identify the causes that might have brought about the undesired effect (i.e., make a diagnosis) and, eventually, to rearrange those causes to reach a desired effect (i.e., make a repair or plan a treatment). This may be particularly helpful when performing troubleshooting tasks. And/or-graphs are suitable representations for linking effects to multiple causes, and fault trees are a specific type of and/or-graph that
may help the user to perform troubleshooting tasks because they identify all
of the potential causes of system failure. Figure 9.4 provides a simple exam-
ple of a fault tree for diagnosing projector lamp outages. It indicates that the
lamp’s failure may result from a power outage (the circle on the second level
indicates that this is a basic event not developed further), unresolved lamp
failure, accidental shutdown, or wiring failure. An unresolved lamp failure,
in turn, results from a basic lamp failure and the absence of a spare lamp.
It should be clear that fault trees for large technical systems might become
extremely complex.

Figure 9.4 Fault tree for projector lamp outage.
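The and/or structure of such a fault tree is easy to capture in a small program. The following sketch is our own minimal illustration (in Python) of the projector-lamp tree just described; it evaluates whether a given combination of basic events explains the outage:

# The projector-lamp fault tree of Figure 9.4 as a small and/or-graph.
def lamp_out(power_outage, basic_lamp_failure, no_spare_lamp,
             accidental_shutdown, wiring_failure):
    # AND-gate: an unresolved lamp failure requires both a failing lamp
    # and the absence of a spare lamp.
    unresolved_lamp_failure = basic_lamp_failure and no_spare_lamp
    # OR-gate: any one of these causes suffices for the top event.
    return (power_outage or unresolved_lamp_failure
            or accidental_shutdown or wiring_failure)

# A failing lamp alone does not explain the outage if a spare is at hand ...
print(lamp_out(False, True, False, False, False))  # False
# ... but it does when no spare lamp is available.
print(lamp_out(False, True, True, False, False))   # True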

Combining Different Types of Models

Structural and causal models are special kinds of conceptual models that
provide a particular perspective on a task domain. Complex domain mod-
els may combine them into semantic networks that try to represent the
whole mental model, enabling the performance of a complex cognitive
skill. However, focusing first on only one type of model might be worth-
while to keep the analysis process manageable. Different task domains have
different dominant structures: Structural models are particularly impor-
tant for domains that focus on analysis and design, such as mechanical
engineering, instructional design, or architecture. Causal models, on the
other hand, are particularly important for domains that focus on explana-
tion, prediction, and diagnosis, such as the natural sciences or medicine.
Finally, general conceptual models are particularly important for domains
that focus on description, classification, and qualitative reasoning, such as
history or law. The analyst (i.e., the instructional designer) should start
with analyzing the dominant type of model or ‘organizing content’ in
the domain of interest (Reigeluth, 1992). In later stages of the analysis
process, other models forming part of the mental model may be linked to
this organizing content.
9.3 Analyzing Intuitive Mental Models


The Ten Steps focuses on analyzing effective mental models that help solve problems and carry out tasks in the domain. This is a ‘rational’ analysis because the identified domain models rest on generally accepted conceptions, plans, and laws in the domain. In addition to this rational analysis, the instructional designer can and should analyze the mental models of novice learners in the field. This is an ‘empirical’ analysis because it describes the actual models the target group uses. There are often considerable differences between the domain models that describe the effective mental models used by expert task-performers and the intuitive or naïve mental models of novice learners in that domain. Such intuitive or naïve mental models are often fragmented, inexact, and incomplete; they may reflect misunderstandings or
misconceptions, and the learners are typically unaware of the underlying rela-
tionships between the elements. Figure 9.5, for example, provides examples
of a novice learner’s intuitive conceptual models of the Earth (Vosniadou &
Brewer, 1992). An example of an intuitive structural model is the Internet
as a centralized system (which it is not!) in which all computer systems are
connected to one central server. An example of an intuitive causal model is
that the tides are exclusively caused by the moon’s revolution.

Figure 9.5 Examples of a child’s naïve mental models of the Earth.

Intuitive mental models are often very hard to change. One approach
to achieving this change is beginning instruction with the existing intui-
tive models (i.e., using inductive teaching methods) and slowly progressing
toward increasingly more effective models in a process of conceptual change (Mills et al., 2016). Another approach is using instructional methods that help learners question the effectiveness of their intuitive model, such as contrasting it with more accurate models or taking multiple perspectives on it. The next section briefly discusses these approaches.
9.4 Using Domain Models to Make Design Decisions


As stated, we recommend only carrying out Step 6 if existing documentation
and/or instructional materials do not provide information about domain
models. When performed, the analysis results provide the basis for several
design activities. In particular, specified domain models may help refine an existing sequence of task classes by using a progression of mental models and designing an important part of the supportive information. In addition, identifying intuitive mental models may affect several design decisions.

Refining Task Classes through a Progression of Mental Models

Domain models can help refine an existing sequence of task classes via mental model progression (see Section 6.2). The first task class is a category of
learning tasks that can be correctly carried out based on the simplest domain
model. This model already contains the most representative, fundamental,
and concrete concepts and should be powerful enough to formulate non-
trivial learning tasks that learners may work on. Increasingly more complex
task classes correspond one-to-one with increasingly more complex domain
models. In general, more complex models contain more—different types
of—elements and/or more relationships between those elements than ear-
lier models that are less complex (Mulder et al., 2011). They either add
complexity or detail to a part or aspect of the previous models and become
elaborations or embellishments of them or provide alternative perspectives
on solving problems in the domain. In mental model progression, all mod-
els, thus, share the same essential characteristics: Each more complex model
builds upon the previous models. This process continues until a level of
elaboration and a set of models offering different perspectives are reached that underlie the final exit behavior.
Table 9.2 provides an example of a mental model progression in trouble-
shooting electrical circuits (White & Frederiksen, 1990). Each model ena-
bles learners to carry out tasks that may also occur in the post-instructional
environment. In this domain, causal models describe the principles govern-
ing the behavior of electrical circuits and their components, such as batter-
ies, resistors, capacitors, etc. Three simple-to-complex causal models are:

• Zero-order models, containing principles relating the mere presence or absence of resistance, voltage, and current to the behavior of a circuit. They serve to answer a question like “Will the light bulb in this circuit be on or off?”
• First-order models, containing principles that relate changes in one thing
to another. They serve to answer a question like “Is there an increase in
amperage in this circuit when the resistance is lowered?”
• Quantitative models, containing principles that express the laws of electricity, such as Kirchhoff’s and Ohm’s laws. They serve to answer a question like “What is the amperage across the points X and Y in this circuit?” (a worked example follows Table 9.2).

Table 9.2 Mental model progression in the domain of electronics troubleshooting.

Model progression | Content of model | Corresponding task class
Zero-order model | Basic circuit principles; types of conductivity; current and absence/presence of resistance | Learning tasks requiring an understanding of how voltages, current flows, and resistances are related
First-order model | Concept of feedback; analog circuits; relating voltage, current, and resistance | Learning tasks requiring detecting and understanding feedback
Quantitative model | Kirchhoff’s law; Ohm’s law; Wheatstone bridges | Learning tasks requiring computing voltages, currents, and resistances across points
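As a worked illustration of what only the quantitative model affords (the numbers are our own, not White and Frederiksen’s), consider computing the current in a simple series circuit with Ohm’s law:

# What current flows in a series circuit with a 22 V source and two
# 100-ohm resistors? Only the quantitative model can answer this.
V = 22.0             # source voltage in volts
R_total = 100 + 100  # resistances in series simply add (ohms)
I = V / R_total      # Ohm's law: I = V / R
print(f"{I:.2f} A")  # 0.11 A, the same at every point in a series circuit

Zero-order and first-order models can say whether current flows and in which direction it changes, but only the quantitative model yields the value itself.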

Designing Supportive Information

Domain models provide the basis for designing the part of the supportive
information related to mental models. First, they may be explicitly presented
to learners because they tell them how things are labeled, built, and work in
a particular domain (see Section 7.2). In many cases, an educational specification is necessary to ensure that the domain model is presented in a manner that learners can readily understand. Second, domain models may help the instructional designer find case studies that give concrete examples of the classification, organization, and working of things (see Section 7.3). Those case studies may be seen as learning tasks with maximum product-oriented support or as illustrations of mental models (refer back to Figure 7.2). Third, domain models may help provide cognitive feedback, offering learners opportunities to compare their solutions with a presented domain model (see Section 7.5). For example, if learners have written a scientific article, they may compare the structure of their article with a given structural model (cf. Figure 9.3) or with a specific article from a scientific journal in that domain. If a learner has reached a diagnosis of a particular error, they can then check the credibility of their diagnosis with a given fault tree (cf. Figure 9.4). Such reflective activities help learners elaborate on the sup-
portive information and construct more accurate mental models.
Dealing with Intuitive Mental Models

Identifying intuitive mental models that novice learners possess may affect decision making for the design activities discussed. Concerning refining task classes, the existence of intuitive models may be a reason to start from those existing models and to provide a relatively large number of task classes; that is, to work slowly from the intuitive, ineffective models via more effective but still incomplete and fragmented models, toward more effective, more complete, and more integrated models. This allows learners to carefully compare and contrast the new and more powerful models at each level of complexity with previous, less powerful models. Ideally, expert scientific models then
gradually replace the novice intuitive models (cf. the naïve models of the
Earth in Figure 9.5 that gradually become more accurate from left to right).
However, intuitive mental models may be highly resistant to change (e.g.,
Steinberg et al., 1990).
Concerning designing supportive information, strong intuitive mental
models might be a reason to select instructional methods that explicitly focus
on elaborating new information. First, concerning case studies, it is desirable
to present a relatively large number of case studies to illustrate the domain
model and present them from multiple viewpoints. For instance, when teach-
ing a model of the Earth, the supportive information may be in the form
of satellite images from different perspectives (always showing a sphere but with different continents). When teaching the workings of the tides, case studies may show that the tides are stronger in the Atlantic Ocean than in the Mediterranean Sea and are, thus, not only affected by the relative location of the moon and the sun to the Earth but also by the shape, size, and boundaries of the body of water and, for the Mediterranean Sea, by the fact that the Strait of Gibraltar hinders the flow of water. Second, inductive-expository and guided discovery methods should take priority over deductive methods. The former two methods help learners refine their existing models and construct more effective models. Leading questions such as “Why do ships disappear beyond the horizon?” and “What happens if you start walking in a straight line and never stop?” may help refine a model of the Earth. Questions such as “When is it spring tide?” and “When is it neap tide?” may help build a model of how the tides work. Third, feedback by discovery should stimulate learners to critically compare and contrast their models with more effective or scientific models. Here, learners could compare their model of
the Earth with a globe, and they could compare their predictions of the tides
with the actual measurements provided by the coast guard.

Analyzing Mental Models Versus Analyzing Cognitive Strategies

Mental models describe how the world is organized, while cognitive strate-
gies (Chapter 8) describe how task performers’ actions in this world are
organized. The Ten Steps assumes a reciprocal relationship between mental
models and cognitive strategies, indicating that one is of little use without
the other. The better organized a learner’s knowledge about a particular
domain is in terms of mental models, the more likely they will use cognitive
strategies leading to carrying out the task properly. The reverse of this is also true: The more powerful a learner’s cognitive strategies are, the more likely it is that their mental models will help them carry out the task. Therefore, well-designed
instruction should always include both SAPs and domain models.
According to the Ten Steps, the difference between cognitive strategies
and mental models is primarily in their use rather than in the way they are
represented in human memory. For example, cognitive strategies use loca-
tion-in-time relationships to describe task performers’ actions, while men-
tal models use the same relationships to describe particular events in the
world. Cognitive strategies use cause-effect relationships to describe rules-
of-thumb, while mental models use the same relationships to describe prin-
ciples that apply in a particular domain. In essence, this entails using the
same representation in different ways. A geographical map provides a useful
metaphor. On the one hand, it provides an impression of what a geographi-
cal area looks like (cf. a mental model), while, on the other hand, it can be
used to plan a route for traveling from A to B (cf. a cognitive strategy).
The Ten Steps suggests analyzing cognitive strategies (Step 5) before
analyzing mental models (Step 6). This is because SAPs are often under-
specified or even completely absent in existing documentation and instructional materials. Traditional instruction commonly focuses first on the ‘what-is’ questions and only then on ‘how-to’ questions. The Ten Steps reverses this. Nevertheless, no fixed order exists for analyzing cognitive
strategies and mental models. If the learning tasks under consideration are
described by applying SAPs (e.g., design skills), it is best to start with an
analysis of cognitive strategies, but if they are described as reasoning with
domain models (e.g., diagnostic skills), it is best to start with an analysis of
mental models.

9.5 Summary of Guidelines


• If you analyze domain models, then study relevant documents and inter-
view expert task-performers to identify the dominant knowledge elements
and relationships between those elements that may help to qualitatively
reason in the task domain.
• If you identify conceptual models that allow for description and clas-
sification, then focus on kind-of relationships and part-of relationships
between concepts.
• If you want to relate new conceptual models to your learners’ prior knowl-
edge, then use experiential, analogical, and prerequisite relationships.
• If you identify structural models, then focus on location-in-time and/or location-in-space relationships.
• If you identify causal models, then focus on cause-effect and/or natural-
process relationships.
• If you analyze intuitive mental models, then focus on the discrepancies
between the elements and relationships between elements distinguished
by an expert task-performer and those distinguished by a naïve learner.
• If you use a domain model to refine a sequence of task classes, then identify simple to increasingly more complex models and define correspond-
ing task classes.
• If you use a domain model to design supportive information, then for-
mulate the domain model so that it is understandable for your target
group and select or design case studies to help illustrate the model.
• If you are teaching a domain model to learners inclined to use an inef-
fective intuitive mental model, then present case studies from different
viewpoints, use inductive-expository or guided discovery strategies, and
provide feedback by discovery.
• If you need to analyze cognitive strategies and mental models, start with
the easiest analysis to perform in your task domain.

Glossary Terms

Concept map; Fault tree; Intuitive mental model; Mental model; Product-
oriented support; Semantic network
Chapter 10

Step 7
Design Procedural Information

10.1 Necessity
Procedural information is one of the four principal design components and
enables learners to perform the recurrent aspects of learning tasks and part-
task practice items. We strongly recommend carrying out this step.

Much that we do in our lives is based upon fairly fixed ‘routines.’ We
have routines when we get up in the morning, go to work or school during
the week, do our weekly shopping at the supermarket, cook certain dishes,
and even study for exams. We have these routines and use them to carry
out those task aspects or perform those acts that are the same every time. In
other words, we follow fixed procedures.
This chapter presents guidelines for designing procedural information. It
concerns the third blueprint component, which specifies how to carry out
the recurrent aspects of learning tasks (Step 1) or how to carry out part-task
practice items (Step 10), which are always recurrent. Procedural information
refers to (a) just-in-time (JIT) information displays providing learners with
the rules or procedures that describe the performance of recurrent aspects
of a complex skill as well as information prerequisite for correctly carrying
out those rules or procedures, (b) demonstrations of the application of those
rules and procedures as well as instances of the prerequisite knowledge, and
(c) corrective feedback on errors. All instructional methods for presenting
procedural information promote rule formation, a process of converting
new knowledge into task-specific cognitive rules. Cognitive rules can drive
the recurrent aspects of performance without the need to interpret cognitive
schemata. After extensive training, which may sometimes take the form of
part-task practice (see Step 10 in Chapter 13), the rules can even become
fully automated (i.e., routines) and drive the recurrent aspects of an expert
task-performer’s performance without the need for conscious control.
The structure of this chapter is as follows. Section 2 discusses the design
of JIT information displays. These displays should be modular, use simple
language, and prevent split-attention effects. Section 3 describes the use of
demonstrations and instances. The presented rules and procedures are best
demonstrated in the context of whole learning tasks. Section 4 discusses
three presentation strategies for procedural information; namely, unsolicited
JIT information presentation by an instructor or other intelligent pedagogi-
cal agent during task performance, unsolicited information presentation in
advance so that learners can memorize the information before they start
working on the learning tasks, and solicited JIT information presentation
where learners consult checklists, manuals, or other resources during task
performance. Section 5 gives guidelines for providing corrective feedback on
errors in the recurrent aspects of performance. Section 6 discusses suitable
media for presenting procedural information, including the teacher acting
as an ‘assistant looking over your shoulder,’ job aids, and various electronic
tools. Section 7 discusses the positioning of procedural information in the
training blueprint. The chapter concludes with a summary of guidelines.

10.2 Providing Just-In-Time Information Displays


Learners need procedural information to carry out the recurrent aspects
of learning tasks. Just-in-time (JIT) information displays specify how to
carry out those tasks at a level of detail that can be immediately understood
by all learners (Kester et al., 2006). It is often called ‘how-to instruction,’
‘rule-based instruction,’ or ‘step-by-step instruction.’ It is usually either pre-
sented by an instructor or made available in the form of manuals, (online)
help systems, job aids, quick reference guides, help apps on tablets and/or
smartphones, etc. Because the procedural information is identical for many,
if not all, learning tasks that invoke the same recurrent constituent skills, it
is typically provided in the first learning task for which the recurrent aspect
is relevant. For subsequent learning tasks, it is diminished as learners gain
more expertise and no longer need help. This principle is called fading.
JIT information combines two things: rules and how to use them. First,
it concerns the cognitive rules that allow one to carry out particular recur-
rent aspects of performance in a correct, algorithmic fashion. The analysis
of those rules, or of procedures that combine those rules, is discussed in
Chapter 11 (Step 8: Analyze cognitive rules). Second, it concerns things the
learner should know to apply those rules correctly (i.e., prerequisite knowl-
edge). The analysis of those prerequisite knowledge elements, such as facts,
concepts, plans, and principles, is discussed in Chapter 12 (Step 9: Ana-
lyze prerequisite knowledge). There is a unidirectional relationship between
cognitive rules and prerequisite knowledge: Prerequisite knowledge is pre-
conditional to correctly using the cognitive rules, but not vice versa.
Both rules and prerequisite knowledge are best presented during prac-
tice, precisely when the learner needs them (see Feinauer et al., 2023). When
learning to play golf and making your first drives on the driving range, your
instructor will probably tell you how to hold your club (a rule), what a ‘club’
is (a concept prerequisite to the use of the rule), how to take your stance
(a rule), how to swing the club and follow through with your swing (a rule),
and what ‘following through’ is (a concept prerequisite to the use of the
rule). Although it is possible to present all of this information beforehand
in a classroom lesson or a textbook, it makes more sense and is more effec-
tive to present it exactly when it is needed (i.e., temporal contiguity, pre-
venting temporal split-attention) because its activation in working memory
during task performance helps learners construct appropriate cognitive rules
in their long-term memory—a basic learning process that is known as rule
formation (see Box 10.1).

Box 10.1 Rule Formation and Procedural Information

Well-designed procedural information enables learners to carry out—
and learn to carry out—the recurrent aspects of learning tasks. The
presentation of procedural information should simplify embedding
it in cognitive rules that directly steer behavior and evoke particular
actions under particular conditions. Together with strengthening
(see Box 13.1), rule formation is the major process responsible for
schema or rule automation. John R. Anderson’s Adaptive Control of
Thought (ACT-R) theory is one of the most comprehensive theories
describing the learning processes responsible for forming cognitive
rules.

Weak Methods
In the early stages of learning a complex skill, the learner may receive
information about the skill by reading textbooks, listening to lec-
tures, studying examples, etc. The general idea is that this information
may be encoded in declarative memory and be interpreted by weak
methods to generate behavior. Weak methods are problem-solving
strategies independent of the particular problem; they are generally
applicable and include methods such as means-ends analysis, forward-
chaining search, subgoaling, hill climbing, analogy, trial-and-error,
and so forth. According to ACT-R, weak methods are innate (i.e.,
they are biologically primary; Geary, 2008) and can be used to solve
problems in any domain. However, this process is very slow, takes up
many cognitive resources, and is prone to errors. Learning, on the
one hand, involves the construction of cognitive schemata through
induction (see Box 4.1) and elaboration (see Box 7.1), which makes
performance much more efficient and effective because acquired cog-
nitive strategies and mental models may be interpreted to guide the
problem-solving process. On the other hand, it also involves the for-
mation of rules that eventually directly steer behavior—without the
need for interpretation. This process often starts by following how-to
instructions, watching demonstrations, and/or studying worked-out
examples, after which cognitive rules are further refined in a process of
production compilation.

Production Compilation
Production compilation creates new, task-specific cognitive rules
by combining more general rules and eliminating conditions so
that one rule never has more than one retrieval from declarative
memory. Taatgen and Lee (2003) provide an example where there
are three rules (also called ‘productions’—that is why this process
is called production compilation) for computing the sum of three
numbers:
Rule 1:
IF
The goal is to add three numbers
THEN
Send a retrieval request to declarative memory for the sum of the first
two numbers

Rule 2:
IF
The goal is to add three numbers AND the sum of the first two num-
bers is retrieved
THEN
Send a retrieval request to declarative memory for the sum that has
just been retrieved and the third number

Rule 3:
IF
The goal is to add three numbers AND the sum of the first two and
the third number is retrieved
THEN
The answer is the retrieved sum

In production compilation, learners construct new rules by specializing and combining rules that fire in sequence while maintaining the constraint of performing only one retrieval from declarative memory. This involves eliminating the retrieval request in the first rule and the retrieval condition in the second rule. Suppose that the three numbers that the learner is adding are 1, 2, and 3. This would first produce two new rules,
a combination of Rules 1 and 2 and a combination of Rules 2 and 3:

Rule 1&2:
IF The goal is to add 1, 2, and a third number
THEN Send a retrieval request to declarative memory for the sum of
3 and the third number

Rule 2&3:
IF The goal is to add three numbers and the third number is 3 AND
the sum of the first two numbers is retrieved and is equal to 3
THEN The answer is 6


With more practice, learners can combine each of these two rules with
one of the original three rules to form a new rule that combines all
three rules:

Rule 1&2&3:
IF The goal is to add 1, 2, and 3
THEN The answer is 6

Compared with the original three rules, the new rule is highly task-specific—it will only work for the numbers 1, 2, and 3. With production compilation, people learn new rules only if the new rules are more specific than the rules they already possess. Thus, they will learn new, task-specific rules only for frequently occurring situations (as is typical for recurrent constituent skills!). For example, people probably learn the rule that allows them to automatically answer the question of what the sum of 1, 2, and 3 is but not the rule that allows them to automatically answer the question of what the sum of 5, 9, and 7 is.

Further Reading
Anderson, J. R. (2007). How can the human mind occur in the physical
universe? Oxford University Press.
https://doi.org/10.1093/acprof:oso/9780195324259.001.0001
Taatgen, N. A., & Lee, F. J. (2003). Production compilation: A simple
mechanism to model complex skill acquisition. Human Factors, 45,
61–76. https://doi.org/10.1518/hfes.45.1.61.27224
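To see the mechanism of production compilation at work, the sketch below gives a deliberately simplified simulation in Python—our own illustration, not ACT-R itself—of how a general procedure that makes two retrievals from declarative memory is compiled into a task-specific rule that makes none:

# Declarative memory: stored addition facts, here generated in advance.
declarative_memory = {(a, b): a + b for a in range(100) for b in range(100)}

def add_three_general(x, y, z):
    partial = declarative_memory[(x, y)]     # Rule 1: first retrieval
    return declarative_memory[(partial, z)]  # Rules 2 and 3: second retrieval

def compile_rule(x, y, z):
    # Build a task-specific rule with the answer 'compiled in'.
    answer = add_three_general(x, y, z)      # done once, during practice
    def specialized(a, b, c):
        if (a, b, c) == (x, y, z):           # fires only for this exact task
            return answer                    # no declarative retrieval needed
        return add_three_general(a, b, c)    # otherwise fall back
    return specialized

rule_1_2_3 = compile_rule(1, 2, 3)           # like Rule 1&2&3 in the box
print(rule_1_2_3(1, 2, 3))                   # 6, answered directly

As in the box, the compiled rule is highly task-specific: It answers ‘1 + 2 + 3’ directly but falls back on the slower, retrieval-based procedure for any other combination of numbers.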

Partitioning Procedural Information in Small Units

JIT information has a modular structure and is organized in small displays
where each display corresponds with one rule or one procedure for reaching
a meaningful goal. The principle of closure is relevant here, which indicates
that the displays are distinct from each other and self-contained (i.e., they
are fully understandable without consulting additional information sources).
Organization of the information in small units is essential because only by
presenting relatively small amounts of new information can we prevent cog-
nitive overload when learners are simultaneously working on the learning
tasks and processing the how-to instructions.
Figure 10.1 provides a familiar example of a JIT information display
presented on a computer screen. The display is oriented toward one goal:
‘changing the document’s page orientation’ between portrait and landscape.
It provides the procedure for reaching this goal, and each instruction cor-
responds with one procedural step. Some steps may require prerequisite
knowledge, such as knowledge of the concepts ‘page style,’ ‘portrait,’ and
‘landscape.’ In the example, these are hot, clickable words on a computer
screen with a different color (here, gray-toned and underlined) than the main text and linked to certain concept-specific or action-specific information (hyperlinks). The box in Figure 10.1 shows the concept definition that appears if the learner or user clicks the link ‘Page Style.’ Clicking the other links will yield similar concept definitions. Finally, this example shows that
some goals may refer to lower-level subgoals with their procedures; for
instance, applying the page style to a single page or applying the page style
to all subsequent pages.

Changing Page Orientation

OpenOffice.org uses page styles to specify the orientation of the pages in a document. For example, to change the page orientation of one or more pages in a document from portrait to landscape, you need to create a page style that uses the landscape orientation, and then apply the page style to the pages.

To Change the Page Orientation to Landscape or Portrait

To change the page orientation for all pages that use the current page style:
1. Choose Format - Page.
2. Click the Page tab.
3. Under Paper format, select Portrait or Landscape.
4. Click OK.

To change the page orientation only for the current page, you first need to create a page style, then apply that style:
1. Choose Format - Styles and Formatting.
2. Click the Page Styles icon.
3. Right-click, and choose New.
4. On the Organizer tab page, type a name in the Name box, for example, “My landscape.”
5. In the Next Style box, select the page style that you want to apply to the next page.
   • To only apply the new page style to a single page, select “Default” as the next page style.
   • To apply the new page style to all subsequent pages, select the name of the new page style.
6. Click the Page tab.
7. Under Paper format, select Portrait or Landscape.
8. Click OK.

[Pop-up definition shown when clicking the ‘Page Style’ link: “Use page styles to determine page layouts, including the presence of headers and footers.”]

Figure 10.1 Example of a JIT information display presenting the procedure for
changing document orientation.
Source: Based on OpenOffice.org Writer.

Formulating JIT Information

The most important requirement for formulating JIT information is that each
rule or procedural step is specified at the entry level of the learners. Ideally,
even the lowest-level ability learners must be able to apply the presented rules
or carry out the presented actions without making errors—under the very
important assumption that they already possess the prerequisite knowledge.
This requirement is directly related to the main distinguishing characteristic
between a systematic approach to problem solving (SAP) and a procedure:
In a SAP, success is not guaranteed because the phases merely guide the
learner through a heuristic problem-solving process, but in a procedure, suc-
cess is guaranteed because the steps provide an algorithmic description of
how to reach the goal. However, note that it is always possible to apply a rule
or carry out an action incorrectly and have the procedure fail. Knowing the
rule does not mean that it is carried out correctly! Because each step or rule
is directly understandable for each learner, the learner does not have to make
a particular reference to related knowledge structures in long-term memory
during the presentation. Elaboration is important for understanding sup-
portive information, but it is superfluous if the information is immediately
understandable, as should be the case for procedural information.
It is important to use an action-oriented writing style for JIT informa-
tion displays, which entails using the active voice, writing in short sentences,
and not spelling everything out. If a procedure is long and/or has many
different branches, it is often better presented graphically. Moreover, when
procedures specify the operation of complex machinery, it may be helpful to
depict physical models of those devices in exploded views or via other graphi-
cal representations. Section 12.2 will briefly discuss analyzing tools and
objects into physical models. There are many more guidelines for micro-
level message design and technical writing that will not be discussed in this
book (e.g., Alred et al., 2012; Carroll, 2003).

Preventing Split Attention

When making use of JIT information, learners must divide their attention
between the learning task that they are working on and the JIT information
presented to specify how to carry out the recurrent aspects of the task. In
this situation, learners must continuously switch their attention between
the JIT information and the learning task to mentally integrate the two.
This continuous switching between mental activities (i.e., carrying out the
task and processing the JIT information) may increase the extraneous cog-
nitive load (see Box 2.1) and hamper learning. Compare this with trying
to hit a golf ball while, at the same time, the golf pro is giving you all
types of pointers on your stance, your grip, the angle of your arms, and so
forth. This split-attention effect has been well documented in the literature (for a review, see Ginns, 2006). To prevent the split-attention effect, it is of utmost importance to fully integrate the JIT information into the task environment and to replace multiple information sources with a single, integrated information source. This physical integration removes the need for mental integration, reduces extraneous cognitive load, and positively affects learning.
Figure 10.2 shows a task environment where students learn to troubleshoot electrical circuits with a computer simulation. In the upper figure, the diagram of the electrical circuit on the left side of the screen is separated from the JIT information on the right side, causing split attention because the learner must constantly move back and forth between the circuit and its components on the left-hand side of the screen and the information about the components on the right. In the lower figure, the same JIT information is fully integrated into the diagram of the electrical circuit such that the learner does not have to split their attention between a component in the circuit and the information about that component. The integrated format positively affects learning (Kester et al., 2004, 2005).
A special type of split attention occurs with paper-based or mobile
device-based manuals (or checklists, quick reference guides, etc.) contain-
ing procedural information that relates to a real-life task environment. In
such a situation, it may be impossible to integrate the information into the
task environment. For instance, if a medical student is diagnosing a patient
or a process operator is starting a distiller, it is impossible to integrate the
relevant procedural information into the task environment. One option to
prevent learners from dividing their attention between the task environ-
ment and the procedural information here is to include representations of
the task environment in the manual. For instance, if the task environment is
computer-based, you can include screen captures in the manual. If the task
environment is an operating task for a particular machine, you can include
pictures of the relevant parts of the machine in the manual (Chandler &
Sweller, 1996). After all, splitting the learner’s attention between the task
environment and the manual creates the problem, not the use of the manual
per se. This split-attention problem, thus, can be solved by either integrat-
ing the JIT information in the task environment or, vice versa, by integrat-
ing relevant parts of the task environment in the manual. New modes of
presentation, such as augmented reality, are making this problem smaller
(you might refer back to Figure 2.4). To use the example previously men-
tioned, the process operator could wear a pair of augmented reality glasses
that present the JIT information at the time and place that the operator is
looking.
[Figure 10.2 shows the same troubleshooting problem (“What’s wrong with this circuit?”) twice. In the upper screen, explanations of the components—voltage source, switch, voltmeter, lamp, resistor, ammeter—and the principles governing series and parallel circuits are listed in a panel beside the circuit diagram. In the lower screen, the same JIT information is placed directly next to the corresponding components within the diagram.]
Figure 10.2 An electrical circuit with nonintegrated JIT information (above) and
integrated JIT information (below).
10.3 Exemplifying Just-In-Time Information


It is best to exemplify JIT information using concrete examples in the con-
text of whole tasks. Demonstrations exemplify the use of procedures and
the application of rules, whereas instances exemplify prerequisite knowledge
elements.

Demonstrations

The rules and procedures presented in JIT information displays can be dem-
onstrated to the learners. For example, it is not uncommon that online dis-
plays like the one presented in Figure 10.1 contain a ‘show me’-link, which
animates the corresponding cursor movements and menu selections, allow-
ing the learner or user to observe how the rules are applied or how the steps
are carried out in a concrete situation. This is the equivalent of a colleague
or teacher saying, “Let me show you how to do it!” The Ten Steps strongly
suggests providing such demonstrations in the context of whole, meaning-
ful tasks. Thus, demonstrations of recurrent aspects of a skill ideally coincide
with modeling examples or other suitable types of learning tasks. This allows
learners to see how a particular recurrent aspect of a task fits within mean-
ingful whole-task performance.
Going back to students learning to produce video content, at some point
in their training, they will receive JIT information on how to operate the
video editing software. A particular JIT information display might present
the step-by-step procedure for ‘color correcting’: adjusting and enhanc-
ing things like exposure, contrast, and saturation of the video clips. This
procedure is preferably demonstrated in the context of the whole learning
task, showing how to correct colors relevant to the video at hand instead
of dreaming up a demonstration for an imaginary situation. Another situ-
ation is where a complex troubleshooting skill requires executing a stand-
ard procedure to detect when a specifc value is out of range. It is best to
demonstrate this standard procedure as part of a modeling example for the
troubleshooting skill and focus the learner’s attention on those recurrent
aspects spotlighted in the demonstration.

Instances

In addition to demonstrating procedures or rules (‘show me’), it may be
helpful to give concrete examples or instances of possible knowledge ele-
ments (i.e., facts, concepts, plans, and principles) prerequisite to correctly
using the rules or carrying out the procedural steps. Just like demonstrations
of rules and procedures, it is best to present instances in the context of the
learning tasks. Thus, when presenting a concrete page layout as an instance
of the concept ‘page style’ (see the box in Figure 10.1), this should be a set
of specifications that is as relevant as possible for the specific task of ‘changing the page orientation.’ In the case where students learn to produce video content, a JIT information display on how to correct colors would provide concept definitions of exposure, contrast, saturation, etc. Again, when
showing a concrete instance of one particular type of color correction, this
is preferably a manipulation relevant to the video at hand.
Van Merriënboer and Luursema (1996) describe CASCO (Comple-
tion ASsignment COnstructor), an intelligent tutoring system for teaching
computer programming in which all procedural information is presented
and demonstrated just-in-time. The system uses completion tasks in which
learners must complete partially written computer programs. When using
a particular piece of programming code for the first time in part of a to-
be-completed program, an online JIT information display presents how-to
instructions and prerequisite knowledge for using this particular pattern of
code. At the same time, the instantiated code is highlighted in the given part
of the computer program, offering a concrete instance exemplifying the use
of the code in a realistic computer program.

Combining JIT Information Displays with Demonstrations and Instances

It is important to always specify JIT information at the entry level of the
learners. It can, thus, be immediately understood, suggesting a deductive-
expository strategy for information presentation (cf. Figure 7.3). This means
that the instructional designer works from the general procedural steps
and rules in the JIT information displays to the concrete examples of those
steps and rules (i.e., demonstrations) and from the general prerequisite
information to the concrete examples of this prerequisite information (i.e.,
instances). This strategy is time-efective and takes maximum advantage of
the fact that learners already possess the prior knowledge necessary to easily
understand the given how-to information. Using an inductive or inquisitory
strategy to promote elaboration is superfluous here. Ideally, JIT informa-
tion displays are presented simultaneously with those demonstrations and
instances that are part of the same learning task as the displays. This is vis-
ible in the bottom part of Figure 10.2, where the JIT information displays
are fully integrated with the learning task in space and in time (i.e., there is
maximal spatial and temporal contiguity, thus minimizing the split-attention
effect). This approach best positions the JIT information in the context of
the whole task.
In conclusion, it might be desirable to include more demonstrations of
a rule or procedure and/or more instances of its prerequisite knowledge
elements to fully exemplify a JIT information display. These demonstrations
and instances must be divergent for all situations the JIT information applies
to. A demonstration will typically show the application of only one version of
a procedure. For instance, the procedure for changing the page orientation
(Figure 10.1) is useful for changing the orientation of one page or the whole
document. Therefore, the procedure may be demonstrated by changing the
layout of one page, but it may also be demonstrated by changing the layout
of the whole document. Ideally, the whole set of demonstrations given to
the learner is divergent and representative of all situations that can be han-
dled with the procedure. Likewise, an instance only concerns one example of
a concept, plan, or principle. When presenting an instance to exemplify the
concept of ‘page style,’ the instance may show different footers, headers, and
orientations. As for demonstrations, the whole set of instances should ideally
represent all entities covered by the concept, plan, or principle.

10.4 Strategies for Presenting Procedural Information


Procedural information should preferably be active in learners’ working
memory when carrying out learning tasks so that it can be easily converted
into task-specific cognitive rules in a process of rule formation (refer back
to Box 10.1). Thus, ensuring the optimal availability of procedural infor-
mation during practice is important. To this end, the timing of the informa-
tion presentation is important because the new information must be active
in working memory when needed to carry out the task. The presentation of
JIT information can be unsolicited, meaning that it is explicitly presented to
the learner when this is deemed necessary by the instructor or other intel-
ligent pedagogical agent, or solicited, meaning that the learner consults it
when they need it. Unsolicited information presentation can be given either
before or during task performance, yielding three presentation strategies:

1. Unsolicited JIT information presentation. Procedural information is spontaneously presented to the learners, providing step-by-step instructions
that are concurrently acted upon by the learners. The learners have no
control over the presentation. The relevant procedural steps are, thus,
directly activated in their working memory.
2. Unsolicited information presentation in advance. Procedural information
is presented in advance, and learners need to memorize it before they use
it when carrying out a learning task for the first time. During task perfor-
mance, the procedural steps are easily accessible in the learners’ long-term
memory and, thus, readily activated in working memory.
3. Solicited JIT information presentation. Learners consult the procedural
information precisely when needed and, thus, have control over the pres-
entation. This directly activates the relevant procedural steps in the learn-
ers’ working memory.
Unsolicited JIT Information Presentation

The most familiar type of unsolicited JIT information presentation comes
from an instructor acting like an ‘Assistant Looking Over Your Shoulder’
(named ALOYS) who gives specific directions on how to carry out recurrent
aspects of learning tasks. This form of contingent tutoring (Wood & Wood,
1999) is exhibited, for example, by the teacher, who closely watches individ-
ual students while they work on learning tasks in a laboratory and who gives
a specifc student directions like “No, you should point the test-tube away
from you . . .” or “All right, and now you calibrate the meter this way . . .”,
or by the coach who observes her players from the side of the playing field
and shouts directions like “Sarah, remember to bend your knees . . .” or
“John, keep your shoulder down . . .”. These how-to instructions involve the
recurrent aspects of performance rather than the more complex conceptual
and strategic issues (i.e., supportive information). You always point a test
tube away from you, and you always keep your shoulder down while batting.
Nevertheless, it is extremely difficult to determine when to present what type
of JIT information. The instructor must continuously monitor the whole-
task performance of individual learners and interpret this performance to
‘predict’ when particular JIT information is needed by a particular learner
in a particular situation. There have been attempts to automate unsolicited JIT information presentation in the field of intelligent tutoring and intelligent help systems, but this has proven to be very difficult if learners perform complex and open-ended learning tasks. Recent research on using artificial intelligence appears to be progressing with this problem (Baillifard et al., 2023). Full automation is generally only feasible if learners perform just one recurrent aspect of a complex task as part-task practice (see Step 10 in Chapter 13: Design part-task practice).
Because an instructor is often not available and because the automation
of contingent tutoring is extremely difficult, the Ten Steps suggests explicitly presenting JIT information displays together with the first learning task
for which they are relevant as a default presentation strategy. This is also
called system-initiated help (Aleven et al., 2003). The JIT information is
connected to a learning task that requires the use of the new rule or proce-
dure. Figure 10.3 provides an example. A doctor-in-training is conducting
minimally invasive surgery and is looking at two displays. One display shows
what is happening in the surgery that is taking place at that very moment.
The other is a demonstration video presenting the correct use of surgical
tools during a successful operation in a JIT manner. Thus, the doctor-in-
training can imitate the demonstration to correctly carry out the recurrent
aspects of the surgical intervention.
Figure 10.3 A doctor-in-training watching a demonstration video while conducting a minimally invasive surgical operation.

Unsolicited Information Presentation in Advance

Another traditional approach to presenting procedural information is hav-
ing learners memorize it before working on the learning tasks. By doing
this, learners store information in long-term memory to activate it in work-
ing memory when needed for performing the recurrent aspects of learning
tasks. The procedural information is already specified at a level immediately
understandable by all learners, so mindful elaboration of the information
(i.e., deliberately connecting it to what is already known) is not critical.
Instead, instructional methods typically encourage learners to maintain the
new information in an active state by repeating it aloud or mentally to store
it in memory. This repetition of information is called rehearsal. Using mne-
monics (e.g., phrases, acronyms, visual imagery, other tricks) and forming
relatively small, meaningful information clusters may facilitate
memorization. Two of the better-known mnemonics are “i before e, except
after c, or when sounded as ‘ay’ as in neighbor and weigh” and “Roy G Biv”
(red, orange, yellow, green, blue, indigo, violet, for remembering the colors
of the visible-light spectrum and their sequence). As an example of cluster-
ing, suppose that knowledge of shorthand international airport destinations
(JFK for New York, AMS for Amsterdam, etc.) is necessary for handling
airfreight; useful clusters would be ‘destinations in Europe,’ ‘destinations in
South America,’ and so forth.
The Ten Steps does not recommend prior memorization for two reasons.
First, such memorization makes it impossible to demonstrate a presented
rule or procedure or to give instances of its prerequisite knowledge elements
in the context of performing whole learning tasks. For instance, if learners
need to memorize that a particular software package uses function keys F9
to synchronize folders, Alt-F8 to edit macros, and Alt-F11 to go to Visual
Basic before they work with the package, what these function keys actually do
cannot be simultaneously demonstrated in the context of using the package.
This hinders the development of an integrated knowledge base in which the
nonrecurrent and recurrent aspects of working with the software are inter-
related (Van Merriënboer, 2000). In other words, memorization in advance
can easily lead to fragmented knowledge that is hard to apply in real-life
tasks. Second, memorization is dull for learners and adds nothing to more
active approaches, as described in the previous and the next subsections.

Solicited JIT Information Presentation

Although unsolicited JIT information presentation is probably the most effective way to help learners form cognitive rules, it may not always be
desirable and is also not always possible. It is not always desirable because,
sometimes, we want learners to develop self-directed learning skills—specifically, skills aimed at the successive refinement and automation of recurrent
aspects of performance (‘deliberate practice’; Ericsson, 2015). Learners can
only acquire these skills if they have some control over the necessary pro-
cedural information. Chapter 14 discusses the development of deliberate
practice skills in a process of second-order scaffolding.
It is also not always possible to apply unsolicited JIT information presentation. First, contingent tutoring is difficult to implement if no human tutor is available. Second, system-initiated help may also be difficult to implement if
the designer has no control over the learning tasks that the learners will work
on. Then, it may not be feasible to connect the JIT information displays to
the learning tasks. For instance, if training takes place on the job, it is not
uncommon for the learner to encounter authentic but arbitrary problems
that the instructional designer cannot foresee. A good example is when a
medical student is training at a hospital. In this situation, there is no real con-
trol over the order in which they will encounter diferent patients with dif-
ferent symptoms and diseases. This makes system-initiated help impossible.
In these situations, procedural information can be actively solicited by the
learners and provided by the instructor (who is then acting as an ‘assistant
on demand’ rather than an ‘assistant looking over your shoulder’) or by
specialized instructional materials (e.g., manuals, help and decision support
systems, job aids, quick reference guides—all of which are increasingly being
made available on mobile devices such as smartphones and tablets). Thus,
whereas the JIT information is not presented directly and explicitly when it
is needed for the learning tasks, it is at least easily available and readily acces-
sible for the learners during practice. There are three basic guidelines for solicited JIT information presentation: (a) present small modular units,
(b) write in an action-oriented style, and (c) minimize split attention.
Concerning modular structure, well-designed to-be-solicited materials should ideally support random access. Thus, JIT information displays should be as independent of one another as possible so that learners can jump around in any direction. Whereas complete independence may be impossible to realize, displays should provide closure, meaning they can be understood without consulting 'outside' information because they are written for the lowest-ability learner.
Concerning action-oriented writing, the text must invite the learner to carry out the recurrent aspects of the learning task. Van der Meij (2003) gives an example of an uninviting and ineffective invitation to act: a display that opens with the word 'Note' and merely describes what could be done. Such writing is uninviting because no words prompt the learner to act, stimulating the user to read rather than act. The word 'note' refers to a remark on the side—an addendum. The learner cannot really act or explore because there are no alternatives to try out. In an action-oriented style, the words clearly prompt the learner to act, and the invitation comes just at the right moment. There are no side notes but, rather, true alternatives that can be tried out, such as a display inviting learners to browse a text.

Concerning preventing split attention, instructional materials for solicited JIT information presentation should take into account that information-seeking activities bring about additional cognitive load that may negatively affect learning. Even if the information is useful, simultaneously dealing with the learning task and the procedural information may impose too much cognitive load on the learners and reduce learning. Consulting additional information when cognitive load is already high due to the characteristics of the learning task easily becomes a burden for learners, as indicated by the saying, "When all else fails, consult the manual."
In the field of minimalism (Carroll, 2003), additional guidelines have been developed for the design of minimal manuals that provide procedural information on demand. Minimalism explicitly focuses on supporting learners—or users, in general—who are working on meaningful tasks. The three main guidelines for this approach pertain to:

1. Goal directedness. Use an index system allowing learners to search for recognizable goals they may be trying to reach rather than functions of the task environment. In other words, design in a task-oriented rather than system-oriented fashion. Learners' goals, thus, provide the most important entrance point to the JIT information displays (see Table 10.1 and the sketch that follows it).
2. Active learning and exploration. Allow learners to continue their work on whole, meaningful learning tasks to explore things. Let them try out different recurrent task aspects for themselves.
3. Error recovery. When learners try out the recurrent task aspects, things may go wrong. Error recognition and error recovery must be supported on the spot by including a section—'What to do if things go wrong?'—in the JIT information display (Lazonder & van der Meij, 1995).

Table 10.1 Goal-oriented vs. system-oriented titles of JIT information displays.

Goal-oriented titles (correct)       | System-oriented titles (incorrect)
Starting up the car                  | The ignition system
Using italics and bold               | The Format menu
Alphabetizing a list                 | Sorting at paragraph level
Using BCC                            | Mail headers and fields
Search for an exact word or phrase   | Booleans, conditionals, and strings
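To make the goal-directedness guideline concrete, the sketch below shows what a goal-directed index might look like in code. It is a hypothetical illustration, not part of the Ten Steps; the goal phrases follow Table 10.1, and the display file names are invented.

```python
# Hypothetical goal-directed help index: entries are keyed by the goals
# learners try to reach (task-oriented), not by system functions.
HELP_INDEX = {
    "use italics and bold": "jit_display_emphasis.txt",      # not: 'Format menu'
    "alphabetize a list": "jit_display_alphabetize.txt",     # not: 'Sorting'
    "search for an exact word or phrase": "jit_display_search.txt",
}

def lookup_display(goal: str) -> str:
    # The learner's goal is the entrance point; fall back to a general
    # index display when the goal is not recognized.
    return HELP_INDEX.get(goal.lower(), "jit_display_index.txt")

print(lookup_display("Alphabetize a list"))  # -> jit_display_alphabetize.txt
```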

10.5 Corrective Feedback


As described in Chapter 5, performance assessments (Step 2) yield information on the quality of all aspects of performance or constituent skills and, thus, provide informative feedback to learners. For nonrecurrent constituent skills, assessments may indicate shortcomings in knowledge (i.e., cognitive strategies, mental models), and cognitive feedback then stimulates learners to reflect on their knowledge and to expand and improve it in a process of elaboration (see Section 7.5). For recurrent constituent skills, in contrast, assessments indicate errors in applying rules or performing procedures (or lack of speed or inadequate time-sharing capabilities; see Chapter 13), and corrective feedback helps learners to recognize these errors, recover from them, and form accurate rules and procedures. Thus, in contrast to cognitive feedback, the main function of corrective feedback is not to foster reflection but, rather, to detect and correct errors and to help learners form correct cognitive rules.
In the Ten Steps, corrective feedback is seen as one type of procedural information because it consists of information that helps learners automate their cognitive schemata in a process of rule formation (refer back to Box 10.1)—it shares this aim with the presentation of all procedural information. If the rules or procedures that algorithmically describe effective performance are not correctly applied, the learner is said to make an error. The following subsections describe the design of effective corrective feedback and the diagnosis of malrules and misconceptions that may be necessary when learners fail to improve on particular recurrent constituent skills for a prolonged period.

Promoting Learning from Errors

Well-designed feedback should inform the learner that there was an error and why there was an error, but without simply saying what the correct action is (Wiliam, 2011). It should consider the goals that learners may be trying to reach. If the learner makes an error that conveys an incorrect goal, the feedback should explain why the action leads to the incorrect goal and provide a suggestion or hint on how to reach the correct goal. Such a hint will often take the form of an example or demonstration, for instance: "When trying to solve that acceleration problem, applying this formula does not help you to compute the acceleration. Try using the same formula used in the previous example." If the learner makes an error that conveys a correct goal, the feedback should only provide a hint for the correct step or action and not simply give away the correct step or action, because learning-by-doing is critical to forming cognitive rules. An example of such a suggestion or hint is: "When trying to solve that acceleration problem, you might want to consider substituting certain quantities for others." Simply telling the learner what to do is ineffective: The learner must execute the correct action while the critical conditions for performing this action are active in working memory.
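The decision just described (explain the wrong goal versus only hint at the correct step) can be summarized in a few lines of code. This is a hypothetical sketch; the function name and message strings are illustrative, not from the book.

```python
def corrective_feedback(goal_is_correct: bool, why_wrong_goal: str, hint: str) -> str:
    # If the error still conveys the correct goal, give only a hint for the
    # correct step; never give the answer away, because learning-by-doing
    # is what forms the cognitive rule.
    if goal_is_correct:
        return f"Hint: {hint}"
    # Otherwise, explain why the action leads to the incorrect goal and
    # suggest (often via an example) how to reach the correct goal.
    return f"This leads you away from the goal: {why_wrong_goal} Hint: {hint}"

print(corrective_feedback(
    goal_is_correct=False,
    why_wrong_goal="This formula does not compute the acceleration.",
    hint="Try the formula used in the previous example.",
))
```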
If a learner has made an error, it may be necessary to give information on how to recover from the results of this error before giving any other information. This will be relatively straightforward if an instructor gives the feedback, but including this type of error information in manuals or help systems is more difficult. One aspect of minimalist instruction (Carroll, 2003) is to support learners' error recovery by giving error information on the spot by combining the JIT information displays with a section on 'What to do if things go wrong?' To include such a section, it is necessary to analyze the typical errors made by the target learners to support the recognition and recovery of the most frequent errors (see Section 11.3 in the next chapter for the analysis of typical errors). The error recovery information should contain the following:

• a description of the situation that results from the error so that the learner
can recognize it;
• information on the nature and the likely cause or causes of the error so
that it can be avoided in the future; and
• action statements for correcting the error.

Van der Meij and Lazonder (1993) give an example of well-designed error information for the situation in which a learner is not successful in 'selecting' a sentence in a word processor.
Concerning the timing of feedback on errors, it should preferably be presented immediately after an incorrect step is carried out or an incorrect rule is applied (Hattie & Timperley, 2007). The learner must preserve information about the conditions for applying a particular rule or procedural step in working memory until they obtain feedback (correct/incorrect). Only then can a cognitive rule that attaches the correct action to its critical conditions be formed. Any delay of feedback may hamper this process, but, as was the case for unsolicited JIT information presentation, the presentation of immediate feedback typically requires an instructor who is closely monitoring the learner's whole-task performance, detecting errors in its recurrent aspects, correcting the errors, and providing hints for the formation of correct rules.
For detecting and correcting errors and providing hints, instructors
typically use their pedagogical content knowledge (PCK; e.g., Jüttner &
Neuhaus, 2012; Kirschner et al., 2022), including knowledge about the
typical errors learners make. When learners carry out complex and open-
ended learning tasks, this approach is still hard to realize in computer-based
systems. However, it is easier to realize for part-task practice, where learners
practice only one recurrent constituent skill. Then, model tracing provides a
technique for diagnosing errors and giving corrective feedback to learners.
In model tracing, the learner’s behavior is traced rule by rule. No feedback
is provided as long as correct rules in the system can explain the learners’
behavior, and corrective feedback is provided if the learners’ behavior can
be explained by so-called ‘buggy rules’ that represent typical errors (see
Section 13.4 for further explanation).
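Model tracing can be pictured as trying to explain each learner step first with correct rules and then with buggy rules. The sketch below is a hypothetical illustration of this idea (the Rule type and all names are our own); real model-tracing tutors are considerably more elaborate (see Section 13.4).

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    matches: Callable[[dict], bool]   # IF-side: does the rule apply to this state?
    predict: Callable[[dict], str]    # THEN-side: the step the rule would produce
    feedback: str = ""                # corrective feedback (for buggy rules only)

def trace_step(state: dict, learner_step: str,
               correct_rules: list[Rule], buggy_rules: list[Rule]) -> Optional[str]:
    # No feedback as long as a correct rule explains the learner's behavior.
    for rule in correct_rules:
        if rule.matches(state) and rule.predict(state) == learner_step:
            return None
    # Corrective feedback if a buggy rule (a typical error) explains it.
    for rule in buggy_rules:
        if rule.matches(state) and rule.predict(state) == learner_step:
            return rule.feedback
    return "Unrecognized step: compare your action with the worked example."
```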

Diagnosis of Malrules and Misconceptions

A learner may receive frequent notifications about not meeting the standards for a particular recurrent aspect of performance and not showing improvements in performance accuracy, despite repeated corrective feedback on errors. This situation requires an in-depth diagnosis to reveal possible malrules, which are incorrect cognitive rules leading to persistent errors (see Section 11.3 for their analysis), or misconceptions, which are pieces of misinformed prerequisite knowledge leading to incorrect application of rules (see Section 12.3 for their analysis). It is important to note that not all errors signify the existence of malrules: Most errors based on learners' prior experience and intuition can be easily corrected and will never lead to the formation of malrules. We only talk about malrules when learners continue to make the same type of error; thus, when they form incorrect or suboptimal cognitive rules. For example, some people may always switch off their desktop computer by pressing its power button, which may be seen as the result of applying a malrule rather than applying the correct rule: "If you want to shut down the computer in Windows 11, then click <Windows icon>, then click <Power icon>, and then click <Shut Down>." A related misconception is that the <Windows icon> only applies to starting up new programs, whereas it also applies to changing settings, searching documents, and shutting down the computer. The existence of malrules and/or misconceptions may affect the presentation of procedural information in the following ways:

1. Focus the learners' attention on steps and prerequisite knowledge elements for which malrules or misconceptions are likely to form. Unsolicited JIT information presentation, relatively slow fading of presented information, and the use of many demonstrations and instances can help.
2. Use, if possible, multiple representations in JIT information displays (e.g., text and pictures). Graphical representations of procedures and physical models of tools and objects that the learners must manipulate can help.
3. Stimulate learners to compare and contrast correct steps and prerequisite knowledge elements with their incorrect counterparts; that is, with their malrules and misconceptions.
4. Include error recovery information in presented JIT information displays so that learners can undo and repair the undesired consequences after making errors.

10.6 Media for Procedural Information


Procedural information helps learners automate their cognitive schemata in a process of rule formation, constructing new task-specific cognitive rules in long-term memory. The traditional media for presenting procedural information are the teacher and all kinds of paper-based job aids and learning aids. The teacher's role is to walk through the classroom, laboratory, or workplace, peer over the learner's shoulder (remember that his name is ALOYS), and give directions for performing the routine aspects of learning tasks (e.g., "No—you should hold that instrument like this . . ."; "Watch, you should now select this option . . ."). Job aids may be posters with frequently used software commands hanging on the walls of computer classes, quick reference guides adjacent to a piece of machinery, or booklets with instructions on the house style for interns at a company.
In computer-based environments, online job aids and help systems, wizards,
and intelligent pedagogical agents can present procedural information. Mobile
devices such as smartphones and tablets have become important tools for
presenting procedural information in simulated- and real-task environments.
Such devices are particularly useful for presenting small displays of information
during task performance that tell learners how to correctly carry out the recur-
rent aspects of the task. The procedural information must be readily available
for learners when needed and presented in small, self-contained information
units. Relevant principles in this respect are the temporal split-attention princi-
ple, the spatial split-attention principle, the signaling principle, and the modal-
ity principle (Van Merriënboer & Kester, 2014; see Table 10.2 for examples).

Table 10.2 Multimedia principles for the design of procedural information.

Temporal split-attention principle: For students of web design who learn to develop web pages in a new software environment, a pedagogical agent tells them how to use the different functions of the software environment precisely when they need them to implement a particular aspect of their design—instead of discussing all available functions beforehand.

Spatial split-attention principle: For social science students who learn to conduct statistical analyses on their data files with SPSS, present procedural information describing how to run a particular analysis on the computer screen and not in a separate manual.

Signaling principle: For students of automotive engineering who learn to disassemble an engine block, animate the disassembly process in a step-by-step fashion and always spotlight the loosened and removed parts.

Modality principle: For students of instructional design who learn to develop training blueprints by studying a sequence of more and more detailed blueprints, explain the blueprints with narration or spoken text instead of visual on-screen text.

The split-attention principle has already been described in previous sections. It comes in two forms. First, the temporal split-attention principle indicates that learning from mutually referring information sources is facilitated if these sources are not separated in time but presented simultaneously or as close together in time as possible. Therefore, how-to instructions for performing the routine aspects of learning tasks and corrective feedback are best presented just in time, precisely when the learner needs them. Second, the spatial split-attention principle suggests physically integrating information sources in space to prevent learners from splitting their attention over two information sources, with negative effects on learning (refer back to Figure 10.2).
The signaling principle indicates that learning can improve if the learner focuses on the critical aspects of the learning task or the presented information. Signaling reduces the need for visual search and frees up cognitive resources for processing procedural information. For instance, if a teacher instructs a learner how to operate a piece of machinery, it is useful to point a finger at those parts that must be controlled, or when using a video-based example to demonstrate particular routine aspects of performance, it is helpful to focus the learners' attention on precisely those aspects through signaling (e.g., by spotlighting hand movements or blurring nonrelevant areas).
The modality principle indicates that dual-mode presentation techniques using auditory text or narration to explain diagrams, animations, or demonstrations result in better learning than equivalent single-mode presentations that only use visual information (for a review, see Ginns, 2005)—provided that the auditory and visual information are not redundant (see Section 7.6) but complement each other. The positive effect of dual-mode presentation is typically attributed to an expansion of effective working-memory capacity because, for dual-mode presentations, both the auditory and visual subsystems of working memory can be used rather than either subsystem alone.

10.7 Procedural Information in the Training Blueprint


In the training blueprint, procedural information is specified per learning task, enabling learners to perform the recurrent aspects of those tasks. We can distinguish between (a) JIT information displays that spell out appropriate rules or procedures and their prerequisite knowledge elements, (b) examples of the information given in the displays (i.e., demonstrations and instances), and (c) corrective feedback. At the very least, JIT information displays are coupled to the first learning task for which they are relevant, though they may also be coupled to subsequent tasks that involve the same recurrent aspects. The JIT information displays are best exemplified by elements of the learning task itself so that learners see how demonstrations and/or instances fit into the context of the whole task. This requires the use of learning tasks that present a part of the problem-solving process or of the solution (e.g., modeling examples, case studies, completion tasks) and indicates the use of a deductive-expository strategy where the JIT information displays are illustrated by simultaneously presented examples. Corrective feedback on recurrent aspects of task performance is best given immediately after the misapplication of a rule or procedural step. Whereas JIT information displays and related examples can often be designed before learners participate in an educational program, this may be impossible for corrective feedback because it depends on the behaviors of each learner. Nevertheless, some preplanning might be possible if typical errors of the target group are analyzed beforehand.
If procedural information is coupled to the first learning task for which it is relevant and to subsequent tasks, this is done via a process of fading. Fading ensures that the procedural information repeats, in ever-decreasing amounts, until learners no longer need it. For instance, a help system may first systematically present relevant JIT information displays, including a description of procedural steps and a demonstration thereof, as well as prerequisite information and instances exemplifying this information. In a second stage, the help system may only present the procedural steps and prerequisite information (i.e., leaving out the examples). In a third stage, it may only present the procedural steps (i.e., also leaving out the prerequisite information). And in a final stage, no information might be provided at all. As another example, a particular task environment may first explain why there is an error, then present only right/wrong feedback, and finally, provide no corrective feedback at all. Fading ensures that learners receive procedural information as long as they need it and provides better opportunities to present a divergent set of demonstrations and/or instances. This is an example of one of the five desirable difficulties discussed by Bjork (Kirschner et al., 2022). Ideally, the whole set of examples should represent all situations the presented rules or procedures can handle.
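The staged help system described above can be captured in a small data structure. The sketch below is a hypothetical illustration; the stage contents mirror the four stages in the text, and all names are our own.

```python
# Hypothetical fading schedule for a help system: each stage presents
# fewer components of the JIT information display than the one before.
FADING_STAGES = [
    ["procedural steps", "demonstration", "prerequisite information", "instances"],
    ["procedural steps", "prerequisite information"],   # examples faded out
    ["procedural steps"],                               # prerequisites faded out
    [],                                                 # no information at all
]

def jit_components(times_practiced: int) -> list[str]:
    # Move one stage further each time the learner has practiced the
    # recurrent aspect; stay at the final (empty) stage afterwards.
    stage = min(times_practiced, len(FADING_STAGES) - 1)
    return FADING_STAGES[stage]

print(jit_components(0))  # full display on the first relevant learning task
print(jit_components(3))  # nothing: the learner no longer needs support
```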
Concluding this chapter, Table 10.3 presents one task class out of the training blueprint for the complex skill 'producing video content' (you may refer back to Table 6.2 for a description of the other task classes). A specification of the procedural information has been added to the specification of the task class, the learning tasks, and the supportive information. As you can see, procedural information now appears when learners work on the first learning task for which it is relevant. See Appendix 2 for the complete training blueprint. One final remark: Procedural information is not only relevant to performing the learning tasks but may also be relevant to part-task practice. Chapter 13 discusses special considerations for connecting procedural information to part-task practice.

Table 10.3 Preliminary training blueprint for the complex skill 'producing video content'. For one task class, a specification of the procedural information has been added to the learning tasks.

Task Class 1: Learners produce videos for fictional clients under the
following conditions.
• The video length is 1–3 minutes
• The clients desire aftermovies or event recaps, summarizing the
atmosphere at an event
• Locations are indoors
• There is plenty of time for the recording
• No interaction with other on-camera participants
Supportive Information (inductive strategy): Modeling example
Learners shadow a professional video team while they produce an aftermovie
of the yearly local cultural festival. Learners can interview the video team
during and after the project.
Supportive Information: Presentation of cognitive strategies
• Global SAP for preproduction, production, and postproduction phases
• SAP for shooting video (e.g., basic strategies for creating compositions and
capturing audio)
• SAPs for basic video editing (e.g., selecting footage and editing the video)
Supportive Information: Presentation of mental models
• Conceptual models of basic cinematography, such as composition and lighting
• Structural models of cameras
• Causal models of how camera settings affect the image and how audio (music, effects) affects mood
Learning Task 1.1
Support: Worked-out example
Guidance: Performance constraints
Learners receive a production plan, intermediate footage, and the final video of an existing aftermovie. They evaluate the quality of each aspect, but their evaluations must be approved before they can continue with the next aspect.
Learning Task 1.2
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and intermediate footage. They must select the footage and edit the video into the final product. A tutor guides learners in studying the given materials and using the postproduction software.
Procedural Information (unsolicited)
• How-to instructions for using postproduction software
• How-to instructions for exporting the video

Learning Task 1.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert created a recap video for an (indoor) automotive show. In groups, students imitate this but for a local exposition.
Procedural Information (unsolicited)
• How-to instructions for operating cameras, microphones, and equipment
• How-to instructions for using postproduction software (fading)

Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 1.3.

Learning Task 1.4
Support: Conventional task
Guidance: None
Learners create an individual recap video for an indoor event of their choosing.
Procedural Information (solicited)
• Manuals for operating cameras, microphones, and equipment
• Manuals for using postproduction software

Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 1.4.

10.8 Summary of Guidelines


• If you design procedural information, then you need to make a distinction between JIT information displays, demonstrations and instances, and corrective feedback.
• If you design JIT information displays, then you need to use small, self-contained units that spell out one rule or procedure and its prerequisite knowledge elements, use simple language, and physically integrate the displays with the task environment to prevent split attention.
• If you exemplify JIT information displays with demonstrations of rules or procedures or instances of prerequisite knowledge elements, then you need to give those examples in the context of the whole learning tasks and make sure they are divergent.
• If you present procedural information and there is an instructor available, then you may use unsolicited JIT information presentation in which the instructor spells out how to perform the recurrent aspects during whole-task performance (i.e., contingent tutoring).
• If you present procedural information and there is no instructor available, then you may use unsolicited information presentation where JIT information displays are explicitly presented in the first learning task for which they are relevant (i.e., system-initiated help).
• If you present procedural information and there is no control over the learning tasks, then you need to use solicited information presentation and make JIT information displays available in a (minimal) manual, help system, or job aid that the learners can readily consult during task performance.
• If you design corrective feedback, then first help the learner recognize the error, then explain its cause, and finally hint at applying the correct rule.
• If you use new media to present JIT information displays, then consider media that are easy to consult during task performance, such as smartphones, tablets, and other mobile technologies (e.g., augmented reality).
• If you specify procedural information in the training blueprint, then you need to couple it to the first learning task for which it is relevant and fade it away for subsequent learning tasks.

Glossary Terms

Contingent tutoring; Corrective feedback; Demonstration; Fading; Instance; Just-in-time (JIT) information display; Malrules; Minimal manual; Modality principle; Signaling principle; Spatial split-attention principle; Split-attention effect; Temporal split-attention principle
Chapter 11

Step 8
Analyze Cognitive Rules

11.1 Necessity
Analysis of cognitive rules provides the basis for the design of procedural
information and, if applicable, part-task practice. Only perform this step if
this information is not yet available in existing materials.

When multiplying or dividing two numbers, if both numbers have the same sign, then the product or quotient is positive, whereas if the numbers have different signs, the product or quotient is negative. When simplifying an equation, we first multiply the numbers adjacent to the multiplication sign before adding (e.g., 3 × 6 + 2 = 20 (18 + 2) and not 24 (3 × 8)). These are examples of cognitive rules we all learned for arithmetic. Such cognitive rules enable us to carry out recurrent aspects of learning tasks correctly. We do not have to multiply or divide all of the numbers that there are or simplify all equations; we can apply the multiplication/division rule to any two numbers we encounter in a problem and the precedence procedure to the equation.
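Written as code, the sign rule looks like this. This is a hypothetical illustration (ignoring zero for simplicity); the function name is our own.

```python
def sign_of_product_or_quotient(a: float, b: float) -> str:
    # IF both numbers have the same sign THEN the result is positive;
    # IF the numbers have different signs THEN the result is negative.
    # (Zero is ignored for simplicity.)
    return "positive" if (a > 0) == (b > 0) else "negative"

print(sign_of_product_or_quotient(-3, -6))  # -> positive
print(sign_of_product_or_quotient(-3, 6))   # -> negative
```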
This chapter focuses on analyzing cognitive rules for solving recurrent
aspects of new tasks. The results of the analyses take the form of IF-THEN
rules and/or procedures. They specify how competent task performers carry
out those aspects of real-life tasks that are identical from task to task and
problem to problem. Sometimes, relevant IF-THEN rules and/or proce-
dures for a particular task domain are already available in existing job aids,
instructional materials, and other documents. If this is the case, analyzing
the cognitive rules is unnecessary. In all other cases, the analysis of cogni-
tive rules yields input for analyzing prerequisite knowledge (Chapter 12).
Together with the analysis results for prerequisite knowledge, it provides the
basis for designing the procedural information, in particular JIT information
displays. If applicable, analyzing cognitive rules also provides a basis for the
design of part-task practice (Chapter 13).
The structure of this chapter is as follows. Section 2 explains the specification of IF-THEN rules in a rule-based analysis and the specification of procedures in an information-processing analysis. Both analysis methods stress that the final specification should be at the level of the target group's lowest-ability learner. Section 3 discusses the analysis of typical errors and so-called 'malrules' because these incorrect rules and procedural steps can interfere with acquiring their correct counterparts. Section 4 describes using IF-THEN rules and procedures in the instructional design process. The identified rules and procedures provide input for analyzing prerequisite knowledge, designing procedural information, and designing part-task practice. For these design activities, typical errors and malrules may affect the selection of instructional methods. The chapter concludes with a summary of the main guidelines.

11.2 Specify IF-THEN Rules and Procedures


IF-THEN rules and procedures describe how the recurrent aspects of a complex task, or recurrent constituent skills, are correctly performed. Like SAPs (Chapter 8), rules and procedures organize the task performers' actions in the domain of interest. In contrast to the heuristic nature of SAPs, rules and procedures are algorithmic, indicating that using the applicable rules or performing the procedural steps in the specified order guarantees correctly performing the task and reaching its goal. The rules and procedures discussed in this chapter are thus examples of 'strong methods.' Their strength, however, is counterbalanced by their limited flexibility: Highly domain-specific rules ensure that familiar task aspects can be correctly performed, but they are not at all useful for unfamiliar task aspects in new problem situations. For the design of instruction, the analysis of cognitive rules into IF-THEN rules or procedures serves three goals:

1. Analyzing cognitive rules yields input for analyzing prerequisite knowledge, which describes what learners need to know to correctly apply the rules or carry out the procedural steps (see Step 9 in Chapter 12: Analyze prerequisite knowledge).
2. Along with the analysis results of prerequisite knowledge, analyzing cognitive rules provides the basis for the design of procedural information (JIT information displays, demonstrations, and corrective feedback; see Step 7 in Chapter 10: Design procedural information).
3. If part-task practice is necessary, analysis of cognitive rules provides the basis for its design (see Step 10 in Chapter 13: Design part-task practice).

Analyzing cognitive rules is a laborious and time-consuming task with much in common with computer programming. It naturally follows the analytical activities described in Step 2 (Design performance assessments), especially skill decomposition and the formulation and classification of performance objectives (Sections 5.2–5.4). In Step 2, the main question was: What are the constituent skills or subcompetencies necessary to perform professional tasks? Now, the main question is: How are the routine aspects of professional tasks performed? The constituent skills and associated performance objectives classified in Step 2 as recurrent or to-be-automated recurrent (cf. Table 5.2) provide a good starting point for analyzing cognitive rules and procedures.
When analyzing rules or procedures, the analyst usually lets one or more competent task performers carry out the task and either simultaneously or retrospectively talk about what they are doing or did while performing the task ('think aloud') or mentally walk through the task. All actions and decisions are recorded in a table or task outline. This process, then, is repeated for all different versions of the task involving different decisions to ensure that all possible IF-THEN rules or all different procedural paths are included in the analysis results. Moreover, the analysis of recurrent constituent skills is always a hierarchical process: The analyst repeats it until it reaches an elementary level where the lowest-ability learners from the target group can apply the rules or carry out the steps—assuming they have already mastered the prerequisite knowledge.
All recurrent constituent skills can be described as the application of cognitive rules and can be analyzed into IF-THEN rules (Anderson, 2007). However, many different analysis methods take advantage of specific characteristics of the skill under consideration to simplify the analysis process (for an overview of methods, see Jonassen et al., 1999). Behavioral task analysis, for example, is particularly useful if the majority of conditions and actions specified in the rules are observable, or 'overt' (cf. Figure 10.1, showing a procedure for changing the page orientation of a document). Cognitive task analysis (Clark et al., 2008; for examples, see Brinkman et al., 2011; Tjiam et al., 2012) is more powerful because it can also deal with skills for which the conditions and actions are not observable, or 'covert.' This chapter discusses two popular methods for cognitive task analysis of recurrent skills. The first, rule-based analysis, is a highly flexible method for analyzing recurrent skills lacking a temporal order of steps. The second, information-processing analysis, is somewhat simpler to perform but is only useful for analyzing recurrent skills characterized by a temporal ordering of steps into a procedure (e.g., conducting cardiopulmonary resuscitation, adding digits, repairing a flat tire).

Rule-Based Analysis

Rule-based analysis is appropriate if most of the steps involved in performing a task do not show a temporal order, such as using a keyboard or control panel, typing or editing texts, having a conversation, and operating a software program. Rules specify under which conditions the task performer should take particular actions. They have a condition part, which is called the IF-side, and an action part, which is called the THEN-side:

IF condition(s)
THEN action(s)

The IF-side specifies the conditions in terms of 'states,' representing objects from the outside world or particular mental states while performing the task. The THEN-side specifies the actions to take when the rule applies. When this occurs, the rule is said to 'fire.' Actions can change either objects in the outside world or mental states. Rules, thus, reflect cognitive contingencies: If a certain condition is met, then a certain action is taken. A set of rules can model the performance of a task; rules from this set fire, by turns, in so-called recognize-act cycles. One rule from the set recognizes a particular state because its IF-side matches; it acts by changing the state according to the actions specified in its THEN-side; another rule recognizes this new state because it matches its IF-side; it acts by changing this state; yet another rule recognizes this new state and changes it; and so on, until the task is completed.

A simple set of rules will illustrate the working of rule-based analysis. The rules describe the recurrent skill of stacking buckets so that smaller buckets always go into larger ones. The first IF-THEN rule specifies when the task is finished:

1.
IF there is only one visible bucket
THEN the task is finished.

Another IF-THEN rule specifies what to do to reach this goal:

2.
IF there are at least two buckets
THEN use the two leftmost buckets and put the smaller one into the larger one.

In this rule, the term bucket might refer to a single bucket or a stack of buckets. As shown in the upper part of Figure 11.1, these two rules already describe the performance of this task for a great deal of all possible situations. However, the middle part shows a situation where another rule is needed. An impasse occurs here because a larger bucket is placed on a smaller one. The following rule may help to overcome this impasse:

3.
IF a smaller bucket blocks a larger bucket
THEN put the larger bucket on the leftmost side and the smallest bucket on the left.

As shown in the bottom part of Figure 11.1, this additional rule helps overcome the impasse. The three identified IF-THEN rules are actually sufficient to stack up buckets for all situations one might think of. Moreover, the rules are independent of one another, meaning that their order is unimportant. This makes it possible to add or delete IF-THEN rules without upsetting the behavior of the whole set of rules. For example, stacking up buckets can be made more efficient by adding the following rule:

4.
IF the largest and the smallest bucket are both on the leftmost side
THEN put the smallest bucket to the rightmost side.

If you try out the rule set with this new rule, it becomes clear that fewer cycles are necessary to stack up the buckets in many situations. Moreover, this has been reached by simply adding one rule to the set without having to bother about the position of this new rule in the whole set.

Figure 11.1 Working of a set of IF-THEN rules describing the task of stacking
buckets.

Performing a specific task may be a function of not only the identified rules but also how so-called higher-order rules handle those rules. For instance, it may be true that more than one rule has an IF-side that matches the given state (actually, this is also the case for rules 2, 3, and 4!). In this situation, a conflict must be solved by selecting precisely one rule from among the candidates. This process is called conflict resolution. Common approaches to conflict resolution are to prioritize more specific rules over more general rules (e.g., prioritize rule 3 over rule 2), to prioritize rules that match more recent states over rules that match older states (e.g., prioritize rules 3 and 4 over rule 2), and to prioritize a rule that has not been selected in the previous cycle over a rule that has been selected before.
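A recognize-act cycle like the one described above can be sketched in a few dozen lines of code. The sketch below is a hypothetical illustration, not from the book: a pile is a list of bucket sizes, outermost (largest) first, and rule 3, which resolves the impasse where neither pile fits inside the other, is omitted for brevity, so the start state must not produce that impasse. Conflict resolution is reduced to trying the more specific stop rule first.

```python
def fits_inside(small, large):
    # A whole pile fits inside another pile if its largest (outermost)
    # bucket is smaller than the receiving pile's innermost bucket.
    return max(small) < min(large)

def nest_two_leftmost(piles):
    # THEN-side of rule 2: put the smaller of the two leftmost piles
    # into the larger one.
    a, b = piles[0], piles[1]
    small, large = (a, b) if fits_inside(a, b) else (b, a)
    return [large + small] + piles[2:]

RULES = [
    # (name, IF-side, THEN-side); listing rule 1 before rule 2 implements
    # a simple conflict resolution: the more specific rule is tried first.
    ("finished", lambda p: len(p) == 1, lambda p: p),
    ("nest", lambda p: len(p) >= 2 and (fits_inside(p[0], p[1]) or
                                        fits_inside(p[1], p[0])),
             nest_two_leftmost),
]

def recognize_act(piles):
    while True:
        name, _, act = next(r for r in RULES if r[1](piles))  # the rule 'fires'
        if name == "finished":
            return piles
        piles = act(piles)

print(recognize_act([[2], [3], [5]]))  # -> [[5, 3, 2]]
```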

Information-Processing Analysis

Information-processing analysis can be an alternative to rule-based analysis if the steps involved in performing a task show a temporal order (Van Merriënboer, 1997). The ordered sequence of steps is called a procedure. Examples are procedures for multiplication, starting up a machine, and traversing a fault tree when troubleshooting a device. Information-processing analysis focuses on the overt and/or covert decisions and actions made by the task performer and yields a procedure typically represented as a flowchart. A typical flowchart uses the following symbols:

• Rectangle—represents an action to take. In most flowcharts, this will be the most frequently used symbol.
• Hexagon—represents a decision to make. Typically, the statement in the symbol will require a 'yes' or a 'no' response and will branch to different parts of the flowchart, accordingly.
• Circle—represents a point at which the flowchart connects with another process or flowchart. The name or reference for the other process should appear within the symbol.

Including hexagons makes it possible to pay attention to decisions that affect the sequence of steps. It enables reiterating parts of the procedure or following distinct paths through it. Figure 11.2 provides an example of a flowchart representing the recurrent skill of adding two-digit numbers.

Figure 11.2 Flowchart resulting from an information-processing analysis of adding two-digit numbers.

In this flowchart, some actions are covert (e.g., add, decrease), and some are overt (e.g., write result). To make it easier to refer to the different elements in the flowchart, the actions are indicated with the numbers 1 through 6 and the decisions, with the letters A and B.
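In code, a procedure of this kind might look as follows. This is a hypothetical rendering; Figure 11.2 itself is not reproduced here, and the correspondence to its numbered actions and lettered decisions is indicative only.

```python
def add_two_digit_numbers(a, b):
    ones = a % 10 + b % 10        # action: add the digits in the ones column
    carry = 0
    if ones >= 10:                # decision: is the ones result 10 or more?
        ones -= 10                # action: keep only the ones digit ...
        carry = 1                 # action: ... and carry 1 to the tens column
    tens = a // 10 + b // 10 + carry  # action: add the tens digits plus carry
    return tens * 10 + ones       # action: write the result

assert add_two_digit_numbers(47, 38) == 85
assert add_two_digit_numbers(21, 34) == 55
```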
Before ending the analysis process, the analyst should validate and verify the flowchart to ensure that it includes all actions and decisions (i.e., mental/covert and physical/overt) and all possible branches from decisions. Professional task performers and instructors with experience teaching the task can provide important information on a flowchart's quality and completeness. Furthermore, asking learners from the target group to perform the task as specified in a given flowchart is helpful. This should be done for all versions of the task, requiring the learner to follow all different paths. If learners have difficulties carrying out the task, it might be necessary to specify the steps further because they are not yet formulated at the entry level of the target group.

Specification at the Entry Level of the Target Group

As indicated, the description of task performance in a rule-based or information-processing analysis is highly specific and algorithmic. Thus, using the applicable rules, making the specified decisions, and performing the actions in the given order should guarantee that all learners carry out the task correctly. This raises the question: At which level of detail should you give the prescriptions? Theoretically, the analysis might continue to a level of mental operations, such as recalling an item from long-term memory or temporarily storing an item in working memory, or perceptual-motor operations, such as directing attention to a particular object or moving specific fingers. This, however, would yield a cumbersome analysis.
Therefore, the answer to the question of how specific the prescriptions should be is relative to the entry level of the target group. That is, the analysis continues until a level where the lowest-ability learner should be able to master the steps. One severe risk is stopping this reiterative process too early; analysts often overestimate learners' prior knowledge. Therefore, the analyst may consider reiterating the process one or two levels beyond the expected entry level of the target group. This analysis of the recurrent skill is complemented by an analysis of the prior knowledge of the target group by repeatedly asking the question: Can my least proficient learners carry out this step correctly, assuming they possess the necessary prerequisite knowledge (i.e., concepts used in the step)? This guarantees that the developed instruction is appropriately detailed: It disregards steps already mastered by the learners but remains specific enough to be correctly performed by all of them.
The specificity of the steps taken clearly sets flowcharts that result from an information-processing analysis apart from SAP charts (compare Figures 7.1 and 8.1). A SAP analysis leads to the description of a general systematic approach. It merely describes the goals that must be reached and the heuristics that may facilitate reaching them. However, it can never guarantee problem resolution because the goals are not at the entry level of the target group, and the heuristics are merely rules-of-thumb that might aid in reaching those goals. Learners instructed according to the SAP receive 'direction' in problem solving, potentially reducing the long and challenging process of developing this general plan themselves—but it does not guarantee it. In summary, SAPs serve as a heuristic guide to the nonrecurrent task aspects, while IF-THEN rules and flowcharts resulting from an information-processing analysis yield an algorithmic prescription for accurately executing recurrent aspects of the task.

11.3 Analyzing Typical Errors and Malrules


The Ten Steps focuses on a rational analysis of correct IF-THEN rules and procedures. This provides the basis for telling learners how the recurrent aspects of the task should be performed. In addition, an analyst can and should perform an empirical analysis to determine which typical errors learners make when applying particular rules or performing particular procedural steps. At the start of a training program, learners often make these typical errors intuitively or based on their prior experiences. Analyzing typical errors is essential because it enables the specification of error recovery information, telling the learner how to recover from an error before applying the correct rule. Table 11.1 provides some examples of typical errors.

Table 11.1 Examples of typical errors, which might evolve into malrules if not properly corrected.

Typical error/malrule: If you want to switch the values of variables A and B, then state A = B and B = A.
Correct rule: If you want to switch the values of variables A and B, then state C = A, A = B, and B = C.

Typical error/malrule: If you want to switch off the computer, then press the power button.
Correct rule: If you want to shut down the computer, then click <Windows icon>, then click <Power>, and then click <Shut Down>.

Typical error/malrule: If you want to expand the expression (x + y)², then just square each term (i.e., x² + y²).
Correct rule: If you want to expand the expression (x + y)², then multiply x + y by x + y (i.e., x² + 2xy + y²).
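The first row of Table 11.1 can be run directly as code; the snippet below shows why the malrule fails. (An illustration of the table, not from the book; in Python one would normally write a, b = b, a.)

```python
a, b = 3, 7
a = b          # malrule: A = B destroys A's old value ...
b = a          # ... so B = A just copies it back
print(a, b)    # -> 7 7 (not swapped)

a, b = 3, 7
c = a          # correct rule: save A in a temporary variable C
a = b
b = c
print(a, b)    # -> 7 3 (swapped)
```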

Typical errors often reflect the intention to apply the correct rule but still have things go wrong. It is especially important to identify errors related to rules or procedural steps that are difficult to apply, dangerous to perform, or easily omitted by the learners in the target group. For instance, for the rule "If you want to select a word, then place the cursor on it and double-click the mouse," it is important to stress that the two clicks quickly follow each other because especially young children and elderly learners have difficulties with rapid double-clicking. For the rule "If you need to release the fishing net, cut the cord by moving the knife away from you," it is important to stress the movement of the knife because of the risk of injuries. For the step "Decrease the next left column by 1," which is part of the procedure for subtracting two-digit numbers with borrowing, it is important to stress this step because novice learners who have just borrowed ten often omit it.

In contrast to typical errors, which are often rooted in intuition or prior experience and typically fade away with corrective feedback, malrules present an incorrect alternative to correct cognitive rules and tend to persist when learners repeatedly use them. For example, if a child learning the subtraction of two-digit numbers with borrowing does not correctly apply the step "Decrease the next left column by 1," leading to consistently wrong answers such as 23 − 15 = 18, 62 − 37 = 35, and 47 − 18 = 39, they will develop a malrule that might be difficult to unlearn afterwards. This underscores the importance of providing immediate and consistent feedback on errors. An ounce of prevention is worth a pound of cure.
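The contrast between the correct procedure and this malrule can be made explicit in code. This is a hypothetical illustration; the function names are our own, and the malrule version reproduces exactly the wrong answers cited above.

```python
def subtract_correct(a, b):
    ones = a % 10 - b % 10
    borrow = 0
    if ones < 0:
        ones += 10                 # borrow ten for the ones column ...
        borrow = 1                 # ... and decrease the next left column by 1
    tens = a // 10 - b // 10 - borrow
    return tens * 10 + ones

def subtract_malrule(a, b):
    ones = a % 10 - b % 10
    if ones < 0:
        ones += 10                 # borrows ten ...
    tens = a // 10 - b // 10       # ... but omits decreasing the left column
    return tens * 10 + ones

assert subtract_correct(23, 15) == 8
assert [subtract_malrule(*p) for p in [(23, 15), (62, 37), (47, 18)]] == [18, 35, 39]
```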

11.4 Using Cognitive Rules to Make Design Decisions


Step 8 is only necessary if information on rules and procedures is unavailable in existing instructional materials. When performed, the results provide the basis for several different activities; namely, analyzing prerequisite knowledge, designing an important part of the procedural information, and, when applicable, designing part-task practice. Furthermore, the identification of typical errors and malrules might affect design decisions.

Analyzing Prerequisite Knowledge

Analyzing cognitive rules exclusively focuses on how recurrent aspects of tasks are carried out, resulting in a lack of information about the knowledge prerequisite to correctly carrying out the rules or steps. Here, the key question is: What knowledge enables learners to carry out the recurrent task aspects as specified in the rules and/or procedures? For example, learners can only correctly apply the rule "If you want to select a word, then place the cursor on it and double-click the mouse" if they are familiar with concepts such as 'cursor' and 'mouse.' If these concepts are unfamiliar, they should be taught because they are 'prerequisites' that are essential for properly applying the rule. Note that there is a unidirectional relationship between rules and their prerequisite knowledge. Prerequisite knowledge enables the correct use of the rules, but the reverse does not make sense. This is distinct from the bidirectional relationship between cognitive strategies and mental models. This distinction is why the schematic overview of the Ten Steps depicts the analyses of cognitive strategies and mental models side by side, while the analyses of cognitive rules and prerequisite knowledge are depicted one below the other. Due to the unidirectional relationship, you should start with analyzing cognitive rules and then proceed to analyzing prerequisite knowledge. Analyzing prerequisite knowledge is also called instructional analysis, or, when performed in direct combination with the analysis of cognitive rules, combination analysis (Dick et al., 2014). These analyses will be discussed in the next chapter. Together, the analysis results for cognitive rules and prerequisite knowledge provide the main input for designing procedural information, particularly JIT information displays (Chapter 10).

Designing Procedural Information

In a psychological sense, recurrent constituent skills are analyzed as if they are automatic psychological processes. This is because instructional methods for presenting procedural information and part-task practice must directly promote rule formation (see Box 10.1) and the subsequent strengthening of cognitive rules for skills requiring a very high level of automaticity (see Box 13.1). Rule formation is facilitated by explicitly presenting the applicable rules or procedures as part of JIT information displays precisely when learners need them. Well-designed displays are self-contained units that spell out the rule or procedure at the level of the lowest-ability learner, use simple language, and physically integrate the display with the task environment to prevent split attention. Furthermore, rules and procedures that have been identified help develop demonstrations that give learners concrete application examples. These demonstrations point out that the correct application of the rules or procedural steps always yields the desired solution: Due to its algorithmic nature, applying the rules or performing the steps simply is the desired solution; there is no distinction between the problem-solving process and the solution (cf. Figure 4.4). Finally, identified rules and procedures can help provide corrective feedback to learners. If a learner makes an error, the feedback should point out that there is an error, explain how to recover from the error, and give a hint as to how to reach the correct goal. The hint will often have the form of a reference to the relevant JIT information display and/or a demonstration.

Designing Part-Task Practice

Designers should include part-task practice for one or more recurrent constituent skills if and only if those skills require a very high level of automaticity (i.e., they are classified as to-be-automated recurrent constituent skills; see Table 5.2) through a learning process called strengthening (see Box 13.1). Part-task practice increases the fluency of final whole-task performance and helps learners pay more attention to the problem-solving, reasoning, and decision-making aspects of learning tasks because it frees up processing resources, as fully automated processes no longer require conscious processing (for example, see Hopkins & O'Donovan, 2021). Facilitating strengthening requires providing many practice items that require learners to repeatedly apply the identified rules or perform the procedures. For long and/or multibranched algorithms, the identified rule sets or procedures play a crucial role in sequencing practice items, progressing from simple to complex using part-whole techniques. Subsequently, these rules and procedures guide the selection of suitable methods for drill and overlearning. Further details on the design of part-task practice can be found in Chapter 13, which addresses the last activity in the Ten Steps.

Dealing with Typical Errors and Malrules

Identifying typical errors and malrules can impact decision making in the
discussed design activities. Concerning the design of procedural informa-
tion, the presence of typical errors or malrules may necessitate particular
instructional methods (see Section 10.4). First, learners should focus on
rules or procedural steps susceptible to errors or mistakes. This may involve
providing unsolicited JIT information, a slow fading of presented informa-
tion, and using many divergent demonstrations for rules and steps that are
error-prone. Second, the JIT information should include error recovery
information to help learners undo or repair the undesired or unintended
consequences of errors once they have occurred. Finally, especially when
malrules are in play, learners should be stimulated to critically compare and
contrast the malrules they use with (demonstrations of) correct rules and
procedural steps.

11.5 Summary of Guidelines


• If you analyze cognitive rules, then observe thinking-aloud task performers to identify IF-THEN rules and/or procedures that algorithmically specify the correct performance of recurrent task aspects.
• If you conduct a rule-based analysis for recurrent skills that do not show a temporal order, then specify a set of IF-THEN rules that describe desired task performance at the level of the lowest-ability learner. Higher-order rules may describe how to select among several applicable rules.
• If you conduct an information-processing analysis for recurrent skills that show a temporal order, then specify a procedure that describes desired task performance at the level of the lowest-ability learner. The procedure may be presented as a flowchart with actions (rectangles) and decisions (hexagons).
• If you analyze typical errors and malrules, then focus on behaviors shown by naïve learners and difficult, dangerous, or easily omitted steps.
• If you use rules and procedures to analyze prerequisite knowledge, then answer the question "What knowledge is needed to correctly apply this rule or carry out this step?" for each rule or procedural step.
• If you use rules and procedures to design procedural information, then include them in JIT information displays and take them as a starting point for identifying useful demonstrations and providing corrective feedback on performance.
• If you use rules and procedures to design part-task practice, then use them to specify practice items, sequence practice items for highly complex algorithms, and select methods for drill and overlearning.
• If you use typical errors and malrules to design procedural information, then focus the learners' attention on rules and steps that are liable to errors, include related error recovery information in JIT information displays, and ask them to critically compare and contrast correct rules with malrules.

Glossary Terms

Cognitive rule; Cognitive Task Analysis (CTA); IF-THEN rules; Information-processing analysis; Procedure; Rule-based analysis; Typical error
Chapter 12

Step 9
Analyze Prerequisite Knowledge

12.1 Necessity
Analyzing prerequisite knowledge provides input for designing procedural information. Only carry out this step if the procedural information is not yet specified in existing materials and if you have already analyzed cognitive rules in Step 8.


Prerequisite knowledge enables learners to correctly apply IF-THEN


rules or carry out procedural steps. Prerequisite knowledge is the knowl-
edge that must be acquired to be able to carry out the recurrent aspects of
a complex task. Acquiring prerequisite knowledge involves incorporating
the new knowledge into the cognitive rules learners develop. This chapter
focuses on analyzing prerequisite knowledge in the form of concepts, plans,
and principles, which may consist of facts and physical models. Sometimes,
relevant IF-THEN rules, procedures, and associated prerequisite knowledge
are already available in existing job aids, instructional materials, or other
documents. In such cases, there is no need to analyze it. In all other cases,
the analyst should first analyze cognitive rules (Step 8) before analyzing
prerequisite knowledge (this step). The analysis of prerequisite knowledge
is typically known as instructional analysis, or, when performed in direct
combination with an analysis of rules or procedures, as combination analysis
(Dick et al., 2014). The results of analyzing cognitive rules and prerequisite
knowledge provide the basis for designing procedural information.
The structure of this chapter is as follows. Section 2 discusses the specifi-
cation of prerequisite knowledge, which encompasses identifying concepts,
plans, and principles. In addition, it explores their further analysis into
facts and, if applicable, physical models. This analysis is hierarchical and is
repeated until the prerequisite knowledge is specified at the entry level of
the target group’s lowest-ability learner. Section 3 discusses the empirical
analysis of misconceptions because they may interfere with acquiring pre-
requisite knowledge. Section 4 describes the use of prerequisite knowledge
for the design process. Together with the analysis results of cognitive rules,
the analysis of prerequisite knowledge yields the input for designing proce-
dural information. Misconceptions may affect the selection of instructional
methods for presenting this information. The chapter concludes with a sum-
mary of the main guidelines.

12.2 Specify Concepts, Facts, and Physical Models


Conceptual knowledge can be described at different levels (see Figure 12.1).
At the highest level, conceptual knowledge can be described in terms of
domain models, which offer rich descriptions of how the world is organized
within a particular domain. These models have concepts, plans, and princi-
ples as their building blocks. Step 6 (Chapter 9) dealt with the dissection of
mental models into these domain models, drawing distinctions between con-
ceptual models (What is this?), structural models (How is this built or organ-
ized?), and causal models (How does this work?). Domain models allow for
problem solving, reasoning, and decision making in a task domain. However,
they are less relevant for carrying out the recurrent aspects of a skill, as these
can be algorithmically described in terms of applying rules and procedures.

Figure 12.1 Three levels to describe conceptual knowledge.

This chapter primarily addresses the two lowest levels depicted in
Figure 12.1. It starts by examining concepts, plans (that relate two or more
concepts by location-in-time or location-in-space relationships), and prin-
ciples (that relate two or more concepts by cause-effect or natural-process
relationships). Concepts are the basic building blocks at this level and can
be subject to further analysis into facts and, for concrete concepts referring
to tools and objects, physical models.
Analyzing prerequisite knowledge starts from the IF-THEN rules and the
procedural steps identified in the analysis of cognitive rules. Thus, analyzing
cognitive rules (Step 8, Chapter 11) must always precede analyzing prereq-
uisite knowledge (Step 9). The analysis of IF-THEN rules and procedures
focuses on how a task is performed. Consequently, after Step 8, understand-
ing what knowledge is a prerequisite to correctly performing a rule or pro-
cedural step remains incomplete. To analyze prerequisite knowledge, the
analyst asks a basic question for each identified rule and procedural step:
Which concepts, plans, and/or principles does the learner need to know
to learn and be able to correctly apply the rule or carry out the procedural
step? It is crucial that the learner embeds this knowledge into the developing
cognitive rules in the learning process called rule formation (see Box 10.1).
The answers to the basic question might introduce other concepts that the
learners do not yet know; thus, the analysis is hierarchical and should iterate
until it reaches a level where the learners are familiar with all the concepts
264 Step 9: Analyze Prerequisite Knowledge

involved. These concepts may be defined at the lowest level by stating the
facts or propositions that apply to them. Such propositions are typically seen
as the smallest building blocks of cognition and, thus, further analysis is
impossible.

Identify Concepts, Plans, and Principles

Concepts allow the description and classification of objects, events, and pro-
cesses (Tennyson & Cocchiarella, 1986). They enable us to give the same
name to different instances that share common characteristics (e.g., poo-
dles, terriers, and Chihuahuas are all dogs; dogs, cats, and humans are all
mammals). Concepts are important for all kinds of tasks because they allow
task performers to talk about a domain using appropriate terminology and
classify the elements within this domain. For the analysis of prerequisite
knowledge, the relevant question is: Are there any concepts the learners
have not yet mastered but need to understand to correctly apply a specific
rule or carry out a particular procedural step? For example, in photography,
a procedural step for repairing cameras might be “Remove the lens from the
camera”. Presenting the concept ‘lens’ might be a prerequisite for correctly
applying this step. Whether this really is the case depends on the learners’
prior knowledge. It is only prerequisite for the program if the least profi-
cient learner does not yet know what a lens is. Another example in database
management is the rule: “If you want to delete a field permanently, then
choose Clear Field from the Edit Menu”. Here, the concept of ‘field’ might
be unfamiliar to the learners and thus could be a prerequisite to correctly
using the rule.
Plans relate concepts to each other in space to form templates or build-
ing blocks or in time to form scripts. Plans are often important prerequisite
knowledge for tasks involving the understanding, designing, and repairing
of artifacts such as texts, electronic circuits, machinery, and so forth. For
analyzing prerequisite knowledge, the relevant question is: Are there any
plans the learners have not yet mastered but need to understand to correctly
apply a specific rule or to correctly carry out a particular procedural step?
For example, a rule in the domain of statistics might state, “If you present
descriptive statistics for normally distributed data sets, then report means
and standard deviations”. A simple template prerequisite to the correct appli-
cation of this rule may describe how scientifc texts typically present means
and standard deviations; namely, as “M = x.xx; SD = y.yy,” where x.xx is the
computed mean and y.yy is the computed standard deviation, with M and
SD capitalized and italic. As another example, a rule in text processing might
be “If you want to change a text from a Roman typeface to italic, then open
the Context Menu, click Style, and click italic” (see Figure 12.2). A simple
script that is prerequisite to correctly applying this rule may describe that,
for formatting a text, first, the text needs to be selected, and only then can
the formatting option be keyed in or selected from the toolbar. This script
might also contain one or more new concepts that the learners do not yet
know. For instance, the concept of ‘Context Menu’ might be new to them
and, thus, require further analysis.

Italic
Makes the selected text italic. If the cursor is in a word, the entire word is made italic. If
the selection or word is already italic, the formatting is removed.
If the cursor is not inside a word, and no text is selected, then the font style is applied to
the text that you type.
To access this command...
Open context menu - choose Style - Italic
A Italic

Figure 12.2 Example of a JIT information display presenting the procedure for
making text italic.
Source: Taken from OpenOffice.org Writer.

Principles relate concepts to each other with cause-effect or natural-
process relationships. They describe how changes in one thing are related to
changes in another. Principles are often important prerequisite knowledge
for tasks involving explaining and making predictions. They help learners
understand why particular rules are applied or why particular procedural
steps are carried out. The relevant question for the analysis of prerequisite
knowledge is: Are there any principles learners have not yet mastered but
need to understand to correctly apply a specific rule or perform a particular
procedural step? For instance, the steps related to borrowing in the pro-
cedure for subtracting two-digit numbers from each other in the form ab
are “Decrease a by 1” and “Add 10 to b”. A principle that may help per-
form these steps correctly is “Each column to the left indicates a tenfold
increase of the column directly to its right”. This principle indicates that the
rightmost column denotes units, the column left of that column denotes
tens, the next column to the left denotes hundreds, and so forth (in a deci-
mal number system). It explains why particular steps are performed and is,
thus, important to their effective application. To a certain degree, provid-
ing such principles distinguishes rote learning of a procedure from learning
it with understanding. The principles allow the learner to understand why
particular steps are performed or rules are applied. Be that as it may, prin-
ciples yield some limited (thus, shallow) comprehension or understanding,
at best, because deep understanding requires far more elaborated mental
models. Furthermore, identified principles might contain new concepts not
yet known by the learners. For instance, the concept ‘column’ in the identi-
fied principle may be new to the learners and require further analysis.
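To see how the principle supports the procedure, the borrowing steps can be written out as a short program. The following Python fragment is only an illustrative sketch (the function name and the restriction to a single borrow between units and tens are assumptions made here, not part of the analysis):

    def subtract_two_digit(top, bottom):
        a, b = divmod(top, 10)     # split into tens (a) and units (b)
        c, d = divmod(bottom, 10)
        if b < d:                  # borrowing is needed
            a -= 1                 # step: decrease a by 1 (take away one ten ...)
            b += 10                # step: add 10 to b (... and add ten units)
        return (a - c) * 10 + (b - d)

    print(subtract_two_digit(43, 29))  # prints 14

The trade of one ten for ten units is fair precisely because each column to the left indicates a tenfold increase of the column directly to its right.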

Identify Facts in Feature Lists

The analyst may deconstruct plans and principles into their constituting
concepts. Concepts, in turn, may be further analyzed into facts and/or
physical models that apply to instances of the concept. One common way
to specify a concept is to list all facts that apply to its instances in a feature
list. The features or facts that apply to instances of the concept take the form
of propositions. A proposition consists of a predicate or relationship and at
least one argument. Examples of propositions or facts that characterize the
concept ‘column’ are, for instance:

• A column is elongated—This proposition has one argument (column,
which is the subject) and the predicate, ‘elongated.’
• A column organizes items—This proposition has two arguments (col-
umn, which is the subject; items, which are the objects) and the predi-
cate, ‘organize.’
• A text processor may construct columns using the table function—This
proposition has three arguments (text processor, which is the subject;
column, which is the object; and table function, which is the tool) and
the predicate, ‘construct.’

Figure 12.3 represents propositions graphically. The basic format is sim-
ple. From a propositional node, one link points to the predicate, and one or
more links point to the argument or arguments. The links can be labeled as
subject, object, etc. Propositions are typically seen as the smallest building
blocks of cognition. There are no facts that enable learning other facts. The
factual relationship conveys meaningless, arbitrary links (A predicates B). In
a sense, learners cannot understand facts or propositions but only memorize
them. This does not mean learners must learn all facts identified as pre-
requisite knowledge by heart. On the contrary, well-designed instruction
repeatedly presents knowledge prerequisite to performing specific proce-
dural steps or rules precisely when needed. Then, the prerequisite knowl-
edge is available in the learners’ working memory at the right time, allowing
it to be incorporated into the cognitive rules that develop through practice.
Propositions can be associated with a concept node, constituting a fea-
ture list for that specific concept. In Figure 12.3, the three propositions are
linked to the concept node ‘column’ because they are part of a feature list
that defines this concept. A textual definition of a concept typically incor-
porates the most salient features. For instance, we could define a column as
“one of the elongated sections of a vertically divided page, used to organize
a set of similar items”. In the graphical representation, multiple concept
nodes may be interrelated to each other to form higher-level conceptual
models that allow for problem solving, reasoning, and decision making in a
domain (see top of Figure 12.1 and Step 6 in Chapter 9).

Figure 12.3 Three propositions connected to a conceptual node.
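For illustration, the same feature list can also be written down in machine-readable form. The Python sketch below encodes each proposition as a predicate with labeled arguments; this particular notation is an assumption chosen for readability, not a format prescribed by the Ten Steps:

    # The three propositions from Figure 12.3, attached to 'column.'
    propositions = [
        {"predicate": "elongated", "subject": "column"},
        {"predicate": "organize", "subject": "column", "object": "items"},
        {"predicate": "construct", "subject": "text processor",
         "object": "column", "tool": "table function"},
    ]

    def feature_list(concept, props):
        # A feature list groups all propositions that mention the concept.
        return [p for p in props if concept in p.values()]

    for p in feature_list("column", propositions):
        print(p)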

Identify Physical Models


Acquiring concrete concepts that refer to tangible and visible things may
require learning their features and physical images. For example, it is impor-
tant to identify physical models for those objects and tools specified in the
performance objectives (see Section 5.3). Physical models describe the
appearance or ‘external stimulus pattern’ of a tool or object and its parts in
pictures, drawings, graphics, schematic diagrams, and so forth (Figure 12.4
provides an example of a physical model of a resistor). They help learners
develop mental images that enable them to look at the world with ‘expert
eyes’ and to act according to this different view. Generally, it is best to
develop a physical model of the complete tool or object first. Then, the
model may be detailed by adding parts and subparts necessary to perform
particular procedural steps or apply IF-THEN rules. Exploded views and
3D models may help show these relevant parts (and subparts) of the object
or tool, their location in relation to each other (i.e., the topology), and, if
necessary, their appearance by sound, smell, or other senses. The model
should not contain irrelevant details for carrying out the identified proce-
dural steps and the IF-THEN rules. Thus, there should be a one-to-one
mapping between the analysis of rules and procedures and the analysis of
their related physical models.
In conclusion, analyzing prerequisite knowledge results in feature lists
and definitions characterizing concepts, plans, and principles made up of
those concepts and, if appropriate, physical models that may help to clas-
sify things as belonging to a particular concept. Thus, the concept ‘resis-
tor’ specifies the main attributes of a resistor in some kind of feature list (it
impedes the flow of electric current, it looks like a rod with colored stripes,
etc.), thereby enabling a task performer to classify something as either being
a resistor or not. In addition, a physical model (see Figure 12.4) might help
a task performer recognize something as being a resistor or not. This exam-
ple may also illustrate the difference between analyzing prerequisite knowl-
edge and analyzing mental models into domain models. In analyzing mental
models, a conceptual model of ‘resistors’ (cf. top of Figure 12.1) would not
only focus on the resistor itself but would also include comparisons with
other components (e.g., transistors, capacitors, etc.); their function in rela-
tion to voltage, current, and resistance (including Ohm’s law); a description
of different kinds (e.g., thermistors, metal oxide varistors, rheostats) and
parts of resistors; and so on. The physical model would then no longer depict
one or more isolated resistors but would be replaced by a functional model
illustrating the working of resistors in larger circuits.

Figure 12.4 Physical models of resistors.

Specification at the Entry Level of the Target Group

Both analyzing cognitive rules (Step 8) and analyzing prerequisite knowl-
edge (this step) are hierarchical, meaning that the analysis iterates until it
reaches a level where the prerequisite knowledge is already available to the
lowest-ability learners in the target group before the instruction. The previ-
ous examples already introduced this hierarchical approach:

• Starting from the procedural steps for subtracting two-digit numbers
(e.g., Decrease a by 1; Add 10 to b), the analyst might identify the fol-
lowing principle as prerequisite knowledge: “Each column to the left
indicates a tenfold increase of the column directly to its right”.
• Newly introduced concepts in this principle are ‘column,’ ‘left,’ ‘tenfold,’
and ‘right.’ The analyst might identify ‘left,’ ‘tenfold,’ and ‘right’ as con-
cepts familiar to the target group’s lowest-ability learner, indicating that
further analysis is unnecessary. However, the concept ‘column’ may not
yet be familiar to the learners and, thus, require further analysis.
• Analyzing the concept ‘column’ in its main features may yield the defini-
tion: “One of the elongated sections of a vertically divided page, used
to organize a set of similar items”. Newly introduced concepts in this
definition are, among others, ‘section,’ ‘vertical,’ ‘page,’ and ‘item.’ The
analyst might identify some of those concepts as known by the target
group’s lowest-ability learners, indicating that further analysis is unneces-
sary. Again, one or more concepts (e.g., ‘vertical’) may not yet be familiar
to the learners and thus require further analysis.
• Analyzing the concept ‘vertical’ in its main features yields, again, a defini-
tion that may contain both familiar and unfamiliar concepts. This process
iterates until all identified concepts are at the entry level of the target
group.

The analyst should not stop this iterative, hierarchical process too early.
Learners’ prior knowledge is often overestimated by professionals in a task
domain and, to a lesser degree, by teachers of that domain who, to a certain
extent, are also experts. This is because experienced task performers are very
familiar with their domain, which makes it difficult to put themselves in the
position of novices (i.e., the curse of knowledge). Therefore, the analysis
process should typically go one or two levels beyond the entry level indi-
cated by proficient task performers or teachers.
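The iteration itself is mechanical enough to sketch in a few lines of Python. All names and data below are hypothetical: ‘introduced_by’ records which new concepts each definition introduces, and ‘entry_level’ holds what the lowest-ability learner is assumed to know already:

    introduced_by = {
        "column":   ["section", "vertical", "page", "item"],
        "vertical": ["direction", "top", "bottom"],
    }
    entry_level = {"left", "right", "tenfold", "section", "page", "item",
                   "direction", "top", "bottom"}

    def concepts_to_analyze(start):
        # Expand concepts until everything rests on the entry level.
        frontier = {c for c in start if c not in entry_level}
        analyzed = set()
        while frontier:
            concept = frontier.pop()
            analyzed.add(concept)
            for new in introduced_by.get(concept, []):
                if new not in entry_level and new not in analyzed:
                    frontier.add(new)
        return analyzed

    print(concepts_to_analyze({"column", "left", "tenfold", "right"}))
    # -> {'column', 'vertical'}: only these need feature lists and definitions

In practice this bookkeeping is done by hand, but the stopping criterion is the same: every remaining concept must be familiar to the lowest-ability learner.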

12.3 Analyzing Misconceptions


Analysts can carry out an empirical analysis to identify misconceptions that
may hinder acquiring prerequisite knowledge, in addition to carrying out
a rational analysis of the knowledge prerequisite to learning to carry out
particular rules or procedural steps. The term ‘misconception’ often broadly
refers to erroneous concepts, naïve or buggy plans (in the case of knowl-
edge of plans), and misunderstandings (in the case of knowledge of princi-
ples). Quite often, misconceptions arise from diferences in linguistic usage.
For instance, whereas people in the United States and Great Britain speak
approximately the same language, some words have different meanings. In
the United States, the concept ‘tip’ refers to the amount of money that one
adds to a restaurant bill for good service, whereas, in Great Britain, it refers
to a garbage dump. In other words, the concept of ‘refuse tip’ can have
two completely different meanings. For a designer, the concept of ‘elegant’
refers to how attractive something is. In contrast, for a computer program-
mer, ‘elegant’ refers to how parsimonious the program is (i.e., how few lines
are needed for the program). In other words, the concept of ‘elegant appli-
cation’ can have two completely different meanings. One should be aware of
such differences in teaching to prevent misconceptions that hinder learning.
Concerning plans, an example of a common naïve plan in scientific writing
concerns the misuse of ‘et al.’ for all citations in the body text with more than
two authors. For three, four, or five authors, the correct plan (according to
the Publication Manual of the American Psychological Association, 7th edition)
is to cite only the first author, followed by ‘et al.’, in every citation, even the
first, unless doing so would create ambiguity. For example, sometimes multiple
works with three or more authors and the same publication year shorten to the
same in-text citation. To avoid ambiguity, when the in-text citations of this kind
shorten to the same form, write out as many names as needed to distinguish the
references, and abbreviate the rest of the names to “et al.” in every citation.
Concerning principles, an example of a common misunderstanding (also
referred to as a misconception) is that heavy objects fall faster than light
objects. This misunderstanding might easily interfere with correctly applying
the procedures for computing forces, masses, and acceleration. Actually, in the
absence of air resistance (i.e., in a vacuum), all objects fall at the same rate (i.e.,
they accelerate equally), regardless of their mass. In other words, a feather and
a bowling ball fall at the same rate in a vacuum. The analysis of misconceptions
best starts with describing all prerequisite knowledge for one particular recur-
rent task aspect. Then, the question for each concept, plan, or principle is: Are
there any misconceptions or misunderstandings for my target group that may
interfere with acquiring this concept, plan, or principle? Experienced teachers
are often the best sources of information to answer this question.

12.4 Using Prerequisite Knowledge to Make Design Decisions
Step 9 is only necessary if prerequisite knowledge is not yet available in
existing instructional materials, job aids, or help systems, and it must always
follow Step 8. The results of this analysis provide the basis for designing an
important part of the procedural information. Identification of misconcep-
tions and misunderstandings may affect design decisions.

Designing Procedural Information

The results of analyzing prerequisite knowledge, together with describ-
ing IF-THEN rules or procedures from Step 8, yield the main input for
designing procedural information; in particular, JIT information displays.
Procedural information may be presented to learners while carrying out
their learning tasks (Step 1), where it is only relevant to the recurrent
task aspects, and when they carry out part-task practice (Step 10), where
it is relevant to the entire recurrent task they practice. As a result of the
analysis previously described, this information is organized in small units
according to the rules and procedural steps that describe the correct per-
formance of recurrent task aspects. Thus, all concepts, plans, and princi-
ples that are prerequisites for applying one particular rule or performing
one particular procedural step are connected to this rule or step. This is an
optimal form of organization because JIT information displays are best
presented precisely at the moment that learners must apply a new rule or
carry out a new procedural step. Then, at the level of the lowest-ability
learner, the displays present the necessary rule or step (i.e., how-to informa-
tion), together with the information prerequisite for correctly performing
this rule or step (refer back to Figure 10.1 for an example). As learners
acquire more expertise, the JIT information displays fade. This type of
information presentation facilitates rule formation (see Box 10.1) because
it helps learners embed presented prerequisite knowledge into developing
cognitive rules.
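As a sketch of this organization, the fragment below attaches prerequisite knowledge directly to the rule it supports and fades the display as expertise grows. The rule name, display texts, and fading threshold are all illustrative assumptions:

    jit_displays = {
        "make-italic": {
            "how_to": "Open the Context Menu, click Style, and click Italic.",
            "prerequisites": ["Context Menu: the menu that opens when you "
                              "right-click the selected object."],
        },
    }

    def present(rule_id, successful_applications):
        if successful_applications >= 5:       # fading threshold (assumed)
            return None                        # display has fully faded
        display = jit_displays[rule_id]
        lines = [display["how_to"]]
        if successful_applications == 0:       # prerequisites at first use only
            lines.extend(display["prerequisites"])
        return "\n".join(lines)

    print(present("make-italic", successful_applications=0))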

Dealing with Misconceptions

Identifying misconceptions can impact decision making when designing
procedural information. First, it is important to focus the learners’ atten-
tion on those concepts, plans, and principles susceptible to misconcep-
tions. Unsolicited JIT information presentation, relatively slow fading
of presented information, and using many instances that give concrete
examples of concepts, plans, and principles prone to misconceptions
may help. Second, when dealing with a misconception, it may be helpful
to use multiple representations. In addition to presenting feature lists
and verbal concept definitions, presenting images of physical models
and instances may help learners develop an accurate cognitive schema
that builds on verbal and visual encodings (i.e., dual coding; Paivio,
1986; see Figure 12.5). Finally, learners should be stimulated to com-
pare and contrast prerequisite concepts, plans, and principles with their
ineffective counterparts; that is, with misconceptions, buggy plans, and
misunderstandings.

Figure 12.5 Dual coding of the concept ‘resistor.’

12.5 Summary of Guidelines


• If you analyze prerequisite knowledge, then start from IF-THEN rules
or procedural steps that describe correct task performance and ask which
concepts, plans, and/or principles the learner needs to know to (learn to)
correctly apply these rules or perform these steps.
• If you identify prerequisite plans, then describe the constituting concepts
and the location-in-time or location-in-space relations that interrelate
those concepts.
• If you identify prerequisite principles, then describe the constituting
concepts and the cause-effect or natural-process relations that interrelate
those concepts.
• If you identify prerequisite concepts, then describe the facts or prop-
ositions that characterize the concept in a feature list and/or concept
definition.
• If you identify prerequisite concepts and a concept refers to a tool or
object necessary for task performance, then depict the tool or object as a
physical model.
• If you identify prerequisite concepts, plans, or principles, then be aware
that their descriptions may introduce other concepts unfamiliar to your
target group; thus, reiterate the analysis hierarchically until reaching con-
cepts already mastered by the lowest-ability learner.

• If you analyze misconceptions, including buggy plans and misunderstand-
ings, then question experienced teachers about common misconceptions
within the target group that may interfere with acquiring prerequisite
knowledge.
• If you design procedural information, then include a description of pre-
requisite knowledge in JIT information displays so that the learner has it
available when the connected rule or procedural step needs to be applied.
• If there are known misconceptions when you design procedural informa-
tion, then focus the learners’ attention on those concepts that are liable
to the misconceptions, use multiple representations (verbal and visual),
and stimulate learners to compare and contrast the misconceptions with
more accurate conceptions.

Glossary Terms

Concept; Feature list; Misconception; Physical model; Plan; Prerequisite
knowledge; Principle
Chapter 13

Step 10
Design Part-Task Practice

13.1 Necessity
Part-task practice is the last of the four design components. The other three
design components are always necessary, but part-task practice is not. You
should only carry out this step if the additional practice of recurrent task
aspects is strictly necessary to reach a high level of automaticity of these
aspects.

DOI: 10.4324/9781003322481-13

Suppose you are taking a course in Semi-Micro Qualitative Analysis, a
chemistry course where the goal is to start from an unknown aqueous solu-
tion and, through a series of steps, determine what the solution is and/or
what is in the solution. To achieve its goals, the course was designed accord-
ing to the Ten Steps, where you work on meaningful, whole tasks arranged
in task classes with all the trimmings. Within this course, you also have to
carry out titrations that involve adding a reagent to the solution by turning
a valve to let the reagent drip into the solution (or releasing your finger from
the top of a pipette). This critical process requires a great deal of manual
dexterity, which is only achievable after much practice. Unfortunately, each
determination takes 15 minutes to 2 hours, so there is little room for the
amount of practice needed to learn to titrate properly. Also, if you do it wrong
(adding too much reagent, for example), the analysis fails, and you have
‘wasted’ two hours. Enter part-task practice to save the day.
This chapter presents guidelines for the design of such part-task practice.
In general, an overreliance on part-task practice is not helpful for complex
learning. On top of this, part-task practice is often pointless because the
learning tasks themselves provide sufficient opportunity to practice both the
nonrecurrent and the recurrent aspects of a complex skill. After all, good
information presentation can often take care of the different nature of under-
lying learning processes for recurrent and nonrecurrent constituent skills.
Presenting just-in-time procedural information aims to automate cognitive
rules through rule formation, whereas presenting supportive information
aims to construct and reconstruct schemata through elaboration. However,
if very high automaticity of particular recurrent aspects is required, the total
number of learning tasks may be too small to provide the necessary repeti-
tive practice. Then—and only then—is it necessary to include additional
part-task practice for one or more selected recurrent aspects in the training
program.
The structure of this chapter is as follows. Section 2 describes prac-
tice items as the building blocks for part-task practice. Different types of
practice items provide different levels of support, and the complete set of
practice items should be so divergent that they cover all variants of the pro-
cedure. Section 3 presents several methods of sequencing part-task practice
items. Section 4 discusses special characteristics of procedural information
presented during part-task practice, including techniques for demonstrating
large rule sets or long and/or multibranched procedures, realizing contin-
gent tutoring, and providing corrective feedback through ‘model tracing.’
Section 5 describes instructional techniques explicitly aimed at ‘overlearn-
ing,’ such as changing performance criteria, compressing simulated time,
and distributing practice sessions over time. Section 6 contrasts dependent
part-task practice with independent part-task practice, where learners are not
explicitly offered part-task practice but must independently identify part-
tasks to improve their whole-task performance. Section 7 discusses suitable
media for providing part-task practice. Section 8 discusses the positioning
of part-task practice in the training blueprint. The chapter concludes with a
summary of guidelines.

13.2 Practice Items


If the instructional design prescribes part-task practice, it means that
learners require extensive numbers of practice items. The rules and/or
procedures resulting from analyzing cognitive rules in Step 8 provide
the basis for designing practice items. It applies, however, only to a small
subset of these rules and procedures; namely, aspects of the whole task
classifed as to-be-automated recurrent constituent skills in Step 2, or even
more exceptionally, double-classifed constituent skills (cf. Table 5.2). For
other recurrent constituent skills, providing procedural information dur-
ing the performance of learning tasks sufces, and part-task practice is
unnecessary. Thus, performance objectives for the parts practiced in isola-
tion describe fully automated routines, and sometimes also, the ability to
recognize when these automated routines do not work in the context of
whole tasks.
Well-known examples of part-task practice are drilling addition, subtrac-
tion, and multiplication tables, practicing musical scales, or practicing a ten-
nis serve or free kick—all skills requiring speed and accuracy. In training
design, part-task practice is often also applied for recurrent constituent skills
critical to safety because their incorrect performance may endanger life or
cause loss of materials or damage to equipment. For instance, part-task
practice may be provided for executing emergency shutdown procedures in
a chemical factory or for giving intravenous injections to patients (Mushary-
anti et al., 2021; Vanfleteren et al., 2022). Moreover, if instructional time
allows, part-task practice may also be used for recurrent constituent skills
with relations in the skill hierarchy indicating that they:

• Enable performing many other skills higher in a skill hierarchy. Addi-
tional practice with the order of letters in the alphabet with elementary
school children enables the search skills needed for using dictionaries,
telephone books, and other alphabetically ordered materials.
• Are performed simultaneously with other coordinate skills in the hierar-
chy. Detecting dangerous situations on a radar screen in air traffic control
would receive additional practice because it is performed simultaneously
with communicating with pilots and labeling new aircraft entering the
airspace.

Types of Practice Items


Specifying practice items for part-task practice is more straightforward than
specifying learning tasks. To specify practice items for part-task practice, the
key criterion is the existence of a single, relevant, recurrent constituent skill
for which effective performance is algorithmically described as a procedure
or set of IF-THEN rules, as outlined in Step 8. A distinction between a
problem-solving process and a solution—or between guidance and task sup-
port—is irrelevant because correctly carrying out the procedure or apply-
ing the IF-THEN rules (cf. the problem-solving or task-execution process)
simply is the solution! For this reason, some people do not call it problem
solving, but it is equally justified to call it the most effective type of problem
solving.
Practice items should invite learners to repeatedly carry out the recurrent
constituent skill, the procedural steps, or the IF-THEN rules. The saying
‘practice makes perfect’ is actually true for part-task practice because exten-
sive numbers of practice items will lead to routines that can be performed
quickly, accurately, and—most importantly—without conscious effort.
Unlike learning tasks, part-task practice will typically not occur in a real or
simulated high-fidelity task environment but in a simplified environment
such as a ‘skills lab’ or a drill-and-practice computer program (e.g., a batting
cage or a computer multiplication program). A conventional practice item,
sometimes called a produce item (Gropper, 1983), confronts the learner with
a given situation and a goal and requires the learner to execute a procedure
(see the top row in Table 13.1 and Figure 11.2 for an explanation of the
algorithm). For instance, a practice item may ask the learner to compute the
product of 3 times 4, label a situation as dangerous or safe by observing air-
craft on a radar screen, play a musical scale, or give an intravenous injection.
For part-task practice, the general recommendation is to use conventional
practice items as quickly as possible. Only consider special practice items
if (a) learners are prone to making errors in specific procedural steps or
IF-THEN rules, (b) procedures are long and multi-branched or the set of
IF-THEN rules is very large, or (c) learners have difficulties recognizing
which procedural step or IF-THEN rule to use because there are different
alternatives for highly similar situations or highly similar alternatives for dif-
ferent situations. Table 13.1 includes two special types of practice items for
learning to add two-digit numbers: edit items and recognize items.
Edit practice items ask learners to correct an incorrect solution by identi-
fying the faulty step or steps or the incorrect IF-THEN rule or rules and then
provide the correct ones. They are especially useful to practice error-prone
procedures when ‘typical errors’ are introduced that need to be detected
and corrected by the learners. The example given in Table 13.1 illustrates
the typical error that learners forget to carry the 10 when adding numbers.
Recognize practice items require learners to select a correct procedure
from a set of possible procedures. Such items are especially useful if it is dif-
ficult to recognize which procedures or IF-THEN rules to use for a particu-
lar situation or goal. Pairing similar procedures may also focus the learners’
attention on the conditions underlying the application of each procedure.

Table 13.1 Examples of conventional, edit, and recognize practice items for
adding two-digit numbers.

Produce/conventional item (solution or procedure: ?execute)
    Compute 43 + 29 = ?

Edit practice item (solution or procedure: ?edit)
    Step 1: add 9 to 3 = 12
    Step 2: decrease 12 by 10 = 2
    Step 3: add 1 to 4 = 5
    Step 4: write 2 in right column
    Step 5ᵃ: add 2 to 4 = 6
    Step 6: write 6 in the left column
    Which step is incorrect and why?

Recognize practice item (solution or procedure: ?name)
    a. Addition without carrying
       Step 1: add 9 to 3 = 12
       Step 2: write 12 in the right column
       Step 3: add 2 to 4 = 6
       Step 4: write 6 in the left column
    b. Addition with carrying
       Step 1: add 9 to 3 = 12
       Step 2: decrease 12 by 10 = 2
       Step 3: add 1 to 4 = 5
       Step 4: write 2 in right column
       Step 5: add 2 to 5 = 7
       Step 6: write 7 in the left column
    Which of the two procedures is correct and why?

Note:
ᵃ This step is incorrect because the carried 10 (carry the 10) is neglected.

Fading Support and Training Wheels

Edit and recognize practice items provide learners with task support (refer
back to Figure 4.4) because they give them (a part of) the solution. If part-
task practice cannot immediately start with conventional items because they
are too difficult for the learner, the best option is to start with items that
provide high support and work as quickly as possible toward items without
support. A well-known fading strategy is the recognize-edit-produce sequence
(Gropper, 1983), which starts with items that require learners to recognize

which steps or IF-THEN rules to apply, continues with items where learners
have to edit incorrect steps or incorrect IF-THEN rules, and ends with con-
ventional items for which learners have to apply the steps or rules on their
own to produce the solution.
Problem-solving guidance is irrelevant for practice items because carrying
out the procedure correctly always yields the right solution. This process is
algorithmic rather than heuristic, so the learner does not need to try differ-
ent mental operations to find an acceptable solution. This makes providing
modeling examples, process worksheets, or other heuristic aids superfluous.
For part-task practice, the procedural information should specify a straight-
forward way to perform the procedure or apply the rules (see Section 13.4
later in this chapter).
Performance constraints, however, may be useful to support the learner
in carrying out long procedures or applying large sets of rules because such
constraints impede or prevent ineffective behaviors. Performance constraints
for part-task practice often take the form of what is known as training wheels
interfaces (Carroll & Carrithers, 1984), a term indicating a resemblance to
using training wheels on a child’s bicycle. At the beginning of learning to
ride a bicycle, the wheels are on the same plane as the rear wheel, which
makes the bicycle very stable and prevents it from falling over. As the child’s
sense of balance increases, the wheels are moved above the plane so that the
bicycle still cannot fall over, but if the child is in balance, the wheels will not
touch the ground. The child is riding on two wheels (the front and back
wheels), except when negotiating curves, where the child has to slow down,
tends to lose balance, and is prone to falling. Ultimately, the child can ride
the bicycle, and the training wheels are removed, often replaced by a parent
running behind the child to ensure they do not fall.
Thus, the basic idea behind using training wheel interfaces is to ensure
that actions related to ineffective procedural steps or rules are unreachable
for learners. For instance, a training wheels interface in a word-processing
course would first present only the minimum number of toolbars and menu
options necessary for creating a document (i.e., the basics such as create,
save, delete, et cetera). All other options and toolbars would be unavailable
to the learner. New options and toolbars (e.g., for advanced formatting,
drawing, and making tables) only become available to the learners after they
have mastered using the previous, basic ones. Another example is a touch-
typing course, where it is common to cover the keyboard keys so the learner
cannot see the key symbols. The ineffective or even detrimental behavior of
‘looking at the keys’ to find and use the correct key is blocked. The key cov-
ers are removed later in the training program. A final example is the teaching
of motor skills, where an instructor may hold or steer the learner in such a
way (see Figure 13.1) that it forces the learner to make a particular body
movement. Again, holding the learner prevents them from making ineffec-
tive, undesired, or dangerous body movements.

Figure 13.1 Skydiving instructor providing training wheels to his student.

Divergence of Practice Items


To conclude this section, note that the complete set of practice items used
in part-task practice should be divergent, meaning that the items used rep-
resent all situations relevant to the procedure or IF-THEN rules. Diver-
gence of practice items is necessary to develop a broad set of cognitive rules,
allowing optimal rule-based processing in future problem situations. For
instance, part-task practice for adding two-digit numbers should include
both practice items with carrying the 10 and items without carrying, and
part-task practice for spelling words should include a broad set of practice
items (i.e., words) requiring the learner to use all different letters from the
alphabet in all different orders. Divergence of practice items is somewhat
similar to the variability of learning tasks, but divergence of practice items
only claims that practice items must represent all situations that can be han-
dled by the procedure or by the set of IF-THEN rules. The divergent items
never go beyond those rules. Variability of practice, in contrast, claims that
learning tasks must be representative of all situations that may occur in the
real world, including unfamiliar situations without known approaches. In
other words, divergence helps learners use rules or procedures; variability
helps learners find rules or procedures.
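What ‘divergent but never beyond the rules’ means for the addition example can be sketched as follows; the number ranges and counts are arbitrary illustrative choices, and the sum is capped at 99 on the assumption that the taught procedure does not cover carrying into a hundreds column:

    import random

    def needs_carry(a, b):
        return (a % 10) + (b % 10) >= 10

    def divergent_items(n_per_variant, seed=1):
        rng = random.Random(seed)
        items = {"carry": [], "no_carry": []}
        while any(len(v) < n_per_variant for v in items.values()):
            a, b = rng.randint(10, 89), rng.randint(10, 89)
            if a + b > 99:            # beyond the procedure: skip the item
                continue
            key = "carry" if needs_carry(a, b) else "no_carry"
            if len(items[key]) < n_per_variant:
                items[key].append((a, b))
        return items["no_carry"] + items["carry"]

    for a, b in divergent_items(2):
        print(f"Compute {a} + {b} = ?")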

13.3 Part-Task Sequencing for Part-Task Practice


Thus far, we have only discussed practice items that require the learner to
carry out the complete, recurrent constituent skill. However, long, multi-
branched procedures or large sets of rules may require decomposing the pro-
cedure or rule sets into parts and training learners extensively in separately
carrying out parts of the procedure or subsets of rules before they begin
practicing the complete recurrent skill. For whole learning tasks, part-task
sequencing should only be applied under highly exceptional circumstances
(see Section 6.4). An important difference is that part-task sequencing for
learning tasks preferably uses a backward chaining approach. Hence, the
learners receive useful examples and models right from the beginning of
the program. In contrast, forward-chaining approaches are more effective
for part-task practice because carrying out each step or applying each rule
creates the conditions that prompt the next step or action. This approach
facilitates rule automation because learners must repeatedly carry out a step
or action under the appropriate conditions.
Table 13.2 presents three traditional sequencing techniques suitable for
part-task practice: segmentation, simplification, and fractionation. These
techniques use a forward-chaining approach based on a natural-process
order. If instructional time is severely limited, you may train the parts sepa-
rately. However, it is better to use forward chaining in combination with
snowballing if possible. For training a task consisting of the parts A, B, and
C, practice items would first practice A, then A plus B, and finally, A plus B
plus C.
The forward-chaining approaches listed in Table 13.2 all result in a low
contextual interference of practice items: Practice items are grouped or
‘blocked’ for one part of the task so that learners practice only one set of
(more-or-less) similar practice items at the same time. This differs for learn-
ing tasks, for which a random order yielding a high contextual interference
works best (i.e., interleaving and contextual interference; Birnbaum et al.,
2013; Bjork, 1994). Then, each learning task differs from its surround-
ing learning tasks on dimensions that also differ in the real world, which is
believed to facilitate the construction of mental models and cognitive strate-
gies. Yet, for practice items, each item should be similar to its surrounding
items because the repeated practice of the same things facilitates the desired
automation of cognitive rules in a process of strengthening (see Box 13.1).

Table 13.2 Techniques for sequencing practice items for procedures with many
steps/decisions or large sets of rules

Segmentation
    Description: Break the procedure down into distinct temporal or spatial
    parts.
    Example: For repairing a flat tire, first give practice items to remove the
    tire, then to repair the puncture, and finally, to replace the tire.
    Easy to use for: Linear order of steps resulting from a behavioral task
    analysis.

Simplification
    Description: Break the procedure down into parts that represent
    increasingly more complex versions of the procedure.
    Example: For subtracting numbers, first give practice items without
    borrowing, then with borrowing, and finally, with multiple borrowing.
    Easy to use for: Branched order of steps and decisions (a flow chart)
    resulting from an information-processing analysis.

Fractionation
    Description: Break the procedure down into different functional parts.
    Example: For touch-typing, first give practice items for the index fingers,
    then for the middle fingers, and so on.
    Easy to use for: Set of IF-THEN rules resulting from a rule-based analysis.

Source: Wightman & Lintern, 1985.

Box 13.1 Strengthening and Part-Task Practice

Well-designed part-task practice makes it possible for learners to carry
out a recurrent or routine aspect of a complex skill after it has been
separately trained at a very high level of automaticity. Part-task practice
should provide the repetition needed to reach this. Together with rule
formation (see Box 10.1), which always precedes it, strengthening is
a major learning process responsible for rule or schema automation.

Accumulating Strength
It is usually assumed that each cognitive rule has a strength associated
with it that determines the chance it applies under the specified condi-
tions and how rapidly it then applies. While rule formation leads to
domain-specific rules that are assumed to underlie the accurate perfor-
mance of the skill, those rules are still not strong. Thus, performance
is not fully stable because weak rules may simply fail to apply under
appropriate conditions. Moreover, while it is fairly fast compared to
weak-method or schema-based problem solving, it could still bene-
fit from large speed improvements. Strengthening is a straightfor-
ward learning mechanism that simply assumes that rules accumulate
strength each time they are successfully applied. Only after further
practice will the skill become fully automatic.

The Power Law of Practice


The improvement that results from strengthening requires long peri-
ods of training. The Power Law of Practice characterizes strengthening
and the development of automaticity. This law predicts that the log of
the time to complete a response will be a linear function of the log of
the number of successful executions of that particular response. For
instance, the Power Law predicts that, if the time needed to add two
digits decreased from 3 seconds to 2 seconds over the first 100 prac-
tice items, it will take 1.6 seconds after 1,000 items, 1.3 seconds after
10,000 items, and about 1 second to add two digits after 100,000
trials! In addition to adding digits, the Power Law is a good predictor
for various recurrent skills such as editing texts, playing card games,
performing choice reaction tasks, detecting letter targets, and rolling
cigars.
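The arithmetic behind these predictions is easy to check. The sketch below fits a power function T(N) = T1 · N^(−b) through the two data points given above, treating 3 seconds at the first item and 2 seconds at item 100 as exact values (a simplifying assumption):

    import math

    T1 = 3.0                                  # seconds at practice item 1
    b = math.log(3.0 / 2.0) / math.log(100)   # log-log slope, about 0.088

    for n in (100, 1_000, 10_000, 100_000):
        print(f"after {n:>7,} items: {T1 * n ** -b:.1f} s")
    # after     100 items: 2.0 s
    # after   1,000 items: 1.6 s
    # after  10,000 items: 1.3 s
    # after 100,000 items: 1.1 s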

Rule Formation versus Strengthening


Both rule formation and strengthening are elementary cognitive pro-
cesses not subject to strategic control but are mainly a function of the
quantity and quality of practice. The Power Law clarifies that the time
it takes to compile a rule (which may even be a one-trial process) is
very modest compared to the time needed to reach full skill automa-
ticity. Strengthening may account for the continued improvement of
a skill long after the point of reaching accurate performance. In an
old study, Crossman (1959) reported on skill development in rolling
cigars. While it typically takes only a few hundred trials to accurately
carry out this skill, his participants still showed improvement after
3 million trials and 2 years! It should thus be clear that ample amounts
of overlearning are necessary to reach full automaticity.

Further Reading
Crossman, E. R. F. W. (1959). A theory of the acquisition of speed-
skill. Ergonomics, 2, 153–166.
https://doi.org/10.1080/00140135908930419
Palmeri, T. J. (1999). Theories of automaticity and the power law
of practice. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 25, 543–551.
https://doi.org/10.1037/0278-7393.25.2.543

To conclude, sequencing techniques for long, multibranched procedures
or large rule sets can be combined with fading support strategies. Suppose
learners practice a procedure in parts A, B, and C. Then, practice items
for part A can proceed through a recognize-edit-produce sequence before
starting practice on part B. For part B, the recognize-edit-produce sequence
can repeat (or parts A plus B, in a snowballing approach). And so on, until
conventional items are provided for the complete procedure: parts A plus B
plus C. Such complicated sequencing techniques are only required to teach
procedures with many steps/decisions or large sets of rules.
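Spelling out the resulting schedule makes the combination concrete. In the Python sketch below, the part names A, B, and C are placeholders: the schedule snowballs through the parts and runs a recognize-edit-produce fading sequence within each snowball step:

    PARTS = ["A", "B", "C"]
    SUPPORT_LEVELS = ["recognize", "edit", "produce"]

    def practice_schedule(parts):
        schedule = []
        for i in range(1, len(parts) + 1):
            snowball = "+".join(parts[:i])    # A, then A+B, then A+B+C
            for level in SUPPORT_LEVELS:      # fade support within each step
                schedule.append((snowball, level))
        return schedule

    for snowball, level in practice_schedule(PARTS):
        print(f"{level:>9} items for part(s) {snowball}")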

13.4 Procedural Information for Part-Task Practice


Step 7 discussed designing procedural information, focusing on its presenta-
tion while the learners carry out the whole learning task. For whole tasks,
the presentation of procedural information only pertains to the recurrent
aspects of those tasks. Procedural information is also relevant to part-task
practice, where only the recurrent aspects are practiced, and all the informa-
tion given to the learner pertains to the recurrent skill being practiced. The
same principles for designing procedural information apply to both situa-
tions; namely, presenting rules and procedures in small information units,
using simple and active language at the level of the lowest-ability learner,
and preventing split-attention effects (see Step 7 in Chapter 10). However,
due to the nature of part-task practice, there are also some shifts in empha-
sized techniques, namely:

• Demonstrations of procedures with many steps/decisions or large rule
sets may be isolated from the whole task.
• Contingent tutoring is easier to realize.
• The model-tracing paradigm may be used to generate immediate feed-
back on errors.

Demonstrating Procedures and Rules

If learners work on whole learning tasks, it is best to demonstrate proce-
dural information in the context of those whole tasks. For instance, if stu-
dent hairstylists learn to cut curly hair, how the scissors should be held (i.e.,
a recurrent aspect of the task) is best demonstrated in the context of the
whole haircutting task. During part-task practice, however, demonstrations
of the part-task will often be isolated from the whole task. This may be espe-
cially helpful for teaching recurrent constituent skills characterized by pro-
cedures with many steps/decisions or large sets of rules. For instance, pilots
in training may receive a demonstration of a particular emergency procedure
separately from their normal task of flying an airplane from one airport to
another. Such demonstrations should clearly indicate the given situation,
the desired goal or outcome, the materials and equipment to manipulate,
and the actual execution of the procedures or application of rules using
these materials and equipment. A good instructional design should allow
the learners to integrate the part-task behaviors into the whole task, typically
by introducing the part-task only after introducing the whole task (i.e., there
is a fruitful cognitive context) and by ‘intermixing’ with whole-task practice.
It is advisable to pay special attention to procedural steps or rules that
are difficult or dangerous for learners when incorrectly applied. Such steps
or rules are often associated with typical errors resulting from malrules or
buggy rules and/or misconceptions identified during an empirical analysis
(see Sections 11.3 and 12.3). Table 13.3 presents four instructional meth-
ods to help learners deal with difficult aspects of a recurrent constituent skill:

• Subgoaling forces learners to identify the goals and subgoals reached with
particular procedural steps or rules.
• Attention focusing ensures that learners pay more attention to the difficult
aspects than the easy ones.
• Multiple representations help learners process the given information in
more than one way.
• Matching allows learners to critically compare and contrast correct and
incorrect task performance. In this case, it is critical to clearly point out
which of the two demonstrations is incorrect and why.

Contingent Tutoring

Contingent tutoring requires an instructor or computer system to closely
monitor and interpret the learner’s performance and present procedural
information precisely when the learner needs it (i.e., just-in-time). Con-
tingent tutoring may be realized for whole learning tasks by one-on-one
tutoring (i.e., the assistant looking over your shoulder—ALOYS), but this

Table 13.3 Four techniques for dealing with difficult aspects of a recurrent
constituent skill.

Subgoaling
    Description: Ask learners to specify the goal or subgoal that is reached
    with a particular procedure or rule.
    Example: Partly demonstrate a procedure, and ask the learners what the
    next subgoal is (also called milestone practice; Halff, 1993).

Attention focusing
    Description: Focus attention on those procedural steps or rules that are
    difficult or dangerous.
    Example: Present a graphical demonstration of the procedure or rules,
    and color-code the dangerous steps in red.

Multiple representations
    Description: Use multiple representation formats, such as texts and
    visuals, for presenting difficult procedures or rules.
    Example: Present a demonstration in real life and as a simulated
    animation that the learner can control.

Matching
    Description: Compare and contrast correct demonstrations of procedures
    or rules with their incorrect counterparts.
    Example: Demonstrate a correct and an incorrect version of carrying out
    a procedure, and use the contrast to point out steps or decisions that are
    error-prone.

is very hard to realize if no human tutor is available. Therefore, the Ten
Steps prescribed another type of unsolicited information presentation as a
default strategy; namely, system-initiated help, where JIT information dis-
plays appear in the first learning task or tasks for which they are relevant.
In contrast, contingent tutoring is the preferred and default strategy for
presenting procedural information for part-task practice.
A human tutor or a computer system may realize contingent tutoring during part-task practice. Because only one recurrent skill is involved, it is relatively easy to trace the learner's actions to particular procedural steps or rules. Then, a step-by-step approach can present a specific action to be taken (and, if necessary, its related prerequisite knowledge) to the learner exactly at the moment it has to be carried out (JIT). The instruction may be spoken by a tutor or presented visually or graphically (Figure 13.2 provides an example). For example, this can be a graphic of a procedure with many steps/decisions, highlighting the relevant next step at the exact moment it needs to be carried out. Relevant prerequisite knowledge could be included in or connected to this highlighted part (refer to Figure 10.1, which shows hyperlinks to present prerequisite information).

Figure 13.2 Step-by-step instructions for how to tie a bow tie; ideally, each
step is provided or highlighted at the moment the learner has to
carry out that step.
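To make this concrete, the following minimal Python sketch presents the steps of a fixed procedure one at a time, revealing each instruction only at the moment the learner must carry it out. The step texts and the simple console interaction are illustrative assumptions, not part of the Ten Steps.

# Minimal sketch of contingent, step-by-step presentation of procedural
# information. The procedure and its step texts are illustrative only.

BOW_TIE_STEPS = [
    "Drape the tie around the collar with one end slightly longer",
    "Cross the long end over the short end",
    "Pull the long end up through the neck loop",
    "Fold the short end horizontally into a bow shape",
    "Drop the long end over the front of the folded bow",
    "Push the long end through the loop behind the bow",
    "Tighten the knot by pulling on the folded ends",
]

def contingent_tutor(steps):
    """Show each instruction only at the moment it must be carried out."""
    for number, instruction in enumerate(steps, start=1):
        print(f"Step {number} of {len(steps)}: {instruction}")
        input("Press Enter when you have completed this step... ")
    print("Procedure completed.")

if __name__ == "__main__":
    contingent_tutor(BOW_TIE_STEPS)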

Model Tracing and Corrective Feedback

Like contingent tutoring, providing immediate corrective feedback requires an instructor or computer system to monitor and interpret learner performance. If learners work on whole tasks, a human tutor may give immediate corrective feedback, which is too hard to realize by other means. However, if learners work on part-tasks, it is easier to provide immediate feedback because we can trace their behavior step-by-step or rule-by-rule. To achieve this, the IF-THEN rules or procedural steps identified in the analysis of cognitive rules (Step 8) can serve as diagnostic tools in a model-tracing paradigm (e.g., Chu et al., 2014). Model tracing is often applied in drill-and-practice computer programs and intelligent tutoring systems and entails the following steps:

1. Each learner action is traced back to a specific IF-THEN rule or procedural step that describes or models the recurrent constituent skill taught.
2. As long as the tracing process succeeds and observed behavior is consistent with the rules or steps, the learner is on track, and no feedback is necessary.
3. If the tracing process fails and the observed behavior cannot be explained by the rules or steps, a deviation from the model trace must have appeared, and the following feedback is given:
   • The learner is told that an error has occurred.
   • If possible, an explanation (why there is an error) is provided for the learner's deviation from the model trace. This may be based on available malrules or misconceptions identified in an empirical analysis of the recurrent skill under consideration (see Sections 11.3 and 12.3).
   • The learner is told how to recover from the results of the error and is given a hint about the next step or action to take.

Suppose a learner is writing a computer program in which they must switch the values of two variables, and they make a typical error; namely, stating A = B and then stating B = A to exchange the values. This typical error can be traced back to a malrule for switching the values of two variables (refer to Table 11.1). Consequently, corrective feedback tells the learner that an error has occurred, explains why (after A = B, both variables hold the old value of B, so B = A cannot restore the old value of A), and gives a hint for recovery (first save the value of A in a temporary variable).
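The following minimal Python sketch illustrates how a drill-and-practice program might implement this model-tracing check for the variable-swap part-task. The rule model, the malrule, and the feedback wording are illustrative assumptions, not taken from an existing tutoring system.

# Minimal model-tracing sketch for the variable-swap part-task described
# above. Rule model, malrule, and feedback wording are illustrative.

CORRECT_TRACE = ["TEMP = A", "A = B", "B = TEMP"]  # swap via a temporary variable
MALRULE_TRACE = ["A = B", "B = A"]                 # typical malrule: value of A is lost

def trace_actions(history):
    """Trace the learner's actions so far; return feedback or None."""
    n = len(history)
    if history == CORRECT_TRACE[:n]:
        return None  # consistent with the model: learner is on track, no feedback
    if history == MALRULE_TRACE[:n]:
        # Deviation explained by a known malrule: signal the error,
        # explain it, and give a hint for recovery.
        return ("Error: after A = B, both variables hold the old value of B, "
                "so B = A cannot restore A. Start over and first save A in a "
                "temporary variable (TEMP = A).")
    return "Error: this step does not match any step of the swap procedure."

print(trace_actions(["TEMP = A"]))  # None: on track, no feedback necessary
print(trace_actions(["A = B"]))     # immediate feedback based on the malrule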

13.5 Overlearning

The instructional methods discussed in the previous sections are sufficient to teach a recurrent skill to a level where the learner can accurately carry it out. However, the goal of part-task practice typically extends beyond mere accurate performance, focusing instead on reaching a very high level of automaticity and speed. Accurate performance is only the first step. To attain full automaticity, overlearning is essential. This involves extensive practice with divergent, conventional practice items representing all situations in which the performer can apply the procedure or set of rules. The slow learning process underlying overlearning is strengthening (refer back to Box 13.1). Three instructional strategies that explicitly aim at strengthening through overlearning are changing performance criteria, compressing simulated time, and distributing practice.

Changing Performance Criteria

Performance objectives for a recurrent constituent skill specify the standards for carrying it out (Chapter 5). They often include criteria for accuracy, speed, and time-sharing (i.e., carrying out the skill simultaneously with other skills). For most to-be-automated recurrent skills, the ultimate goal is not to reach the highest possible accuracy but to obtain satisfactory accuracy, combined with high speed and the ability to carry out the skill together with other skills (i.e., without conscious cognitive processing) and, ultimately, in the context of the whole task. Three phases of overtraining that involve changing performance criteria lead to this goal (a brief sketch in code follows the list):

• In Phase 1, typically completed before overtraining starts, the learner trains until they reach an acceptable level of accuracy.
• In Phase 2, the learner trains under (moderate) speed stress while maintaining the accuracy criterion. Speed stress makes it impossible for learners to consciously follow the steps in a procedure and thus forces them to automate the skill.
• In Phase 3, the skill is trained together with an increasing number of other skills while maintaining the accuracy and speed criteria. This continues until the learner can carry out the skill at a very high level of automaticity in the context of the whole task.
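A minimal Python sketch of how such phase-based criteria might be checked in a drill-and-practice program follows; the phase names and thresholds are illustrative assumptions, not prescribed values.

# Sketch of phase-based overtraining criteria. Names and thresholds are
# illustrative assumptions, not values prescribed by the Ten Steps.

PHASES = [
    {"name": "Phase 1: accuracy",         "min_accuracy": 0.95, "max_seconds": None, "secondary_tasks": 0},
    {"name": "Phase 2: accuracy + speed", "min_accuracy": 0.95, "max_seconds": 5.0,  "secondary_tasks": 0},
    {"name": "Phase 3: + time-sharing",   "min_accuracy": 0.95, "max_seconds": 5.0,  "secondary_tasks": 2},
]

def phase_met(result, phase):
    """Check one practice block against the current phase's criteria."""
    if result["accuracy"] < phase["min_accuracy"]:
        return False
    if phase["max_seconds"] is not None and result["seconds_per_item"] > phase["max_seconds"]:
        return False
    return result["secondary_tasks"] >= phase["secondary_tasks"]

# Example: a learner who is accurate and fast but not yet time-sharing.
result = {"accuracy": 0.97, "seconds_per_item": 4.2, "secondary_tasks": 0}
for phase in PHASES:
    status = "met" if phase_met(result, phase) else "not met"
    print(f"{phase['name']}: {status}")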

Compressing Simulated Time

Thousands of practice items may be necessary to reach full automaticity of recurrent skills. For slow processes such as weather prediction and other natural processes, or steering large sea vessels and other slowly responding systems, the time required for practicing the skill under normal conditions becomes enormous. Compressing simulated time by a factor of 10 to 100 can drastically reduce the necessary training time and, at the same time, facilitate automation due to increased speed stress and reduced latency of feedback. In an old study, Schneider (1985) provides an example of air traffic control. Making judgments about where an aircraft should turn and seeing the results of this decision normally takes about 5 minutes, but the simulated time for this maneuver was compressed by a factor of 100 to enable completing a practice item in a few seconds. Consequently, practicing more items in 1 day than in months of normal training becomes possible, while the associated speed stress promotes overlearning.
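The arithmetic of time compression is simple, as the following Python sketch shows; the 5-minute maneuver and the factor of 100 follow Schneider's example, while the simulation loop itself is an illustrative stand-in.

# Sketch of compressing simulated time in a practice simulation.
# The 100x factor follows Schneider's air traffic control example.

import time

REAL_MANEUVER_SECONDS = 5 * 60   # judging a turn normally takes ~5 minutes
COMPRESSION_FACTOR = 100         # compress simulated time by a factor of 100

def run_practice_item(compression=COMPRESSION_FACTOR):
    """Run one simulated maneuver at compressed speed; return its duration."""
    simulated = REAL_MANEUVER_SECONDS / compression  # a few seconds per item
    time.sleep(simulated)  # stand-in for advancing the simulation clock
    return simulated

elapsed = run_practice_item()
print(f"One item now takes {elapsed:.0f} s; about {3600 / elapsed:.0f} items "
      f"per hour instead of {3600 / REAL_MANEUVER_SECONDS:.0f}.")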

Distributing Practice Over Time

Relatively short periods of part-task practice distributed over time (i.e., spaced practice; Benjamin & Tullis, 2010; Bjork, 1994) give better results than long, concentrated drill periods (massed practice). In an old study by Bray (1948), participants had to practice using Morse code for 4 or 7 hours a day. The study found no difference in the effectiveness of the two practice schedules: The learners in the 7-hour-a-day group were, thus, wasting the additional 3 hours of practice. Other studies (for an overview, see Rohrer & Taylor, 2006) indicate that, the longer the spacing between practice sessions and the more they are alternated with other learning activities, the more effective the practice is. This suggests that it is best to intermix part-task training sessions with practicing whole learning tasks and to end the part-task practice sessions when the learner reaches the standards of the part-task.

13.6 Independent Part-Task Practice


The previous sections assumed that the teacher or other intelligent agent is responsible for selecting to-be-automated recurrent task aspects, which are then explicitly provided as part-task practice after introducing them in the context of whole, meaningful learning tasks. For independent part-task practice (see Section 2.6), the self-directed learner decides which recurrent aspects of the learning tasks need additional practice for successive refinement and automation and when to practice them. Independent part-task practice is a form of deliberate practice (Ericsson, 2015) that is quite common in many educational settings. Even in elementary school, children who want to practice recurrent skills related to math (e.g., counting, addition, multiplication), language (e.g., spelling, use of punctuation, sayings, etc.), or other subjects can often use drill-and-practice computer programs when they want to. For a teacher, independent part-task practice is relatively easy to implement because it:

• Concerns only one well-defined recurrent skill or routine, with no need to organize the contents of the program for each learner.
• Often takes the form of individual practice, with no need to form groups of learners on the fly.
• Can often be supported with off-the-shelf drill-and-practice computer programs, with no need to schedule teachers or instructors.

Usually, a learner's desire to improve performance on whole learning tasks will trigger the initiative to carry out additional part-task practice and consult procedural information beneficial to this part-task practice. For example, a junior surgeon in training might experience difficulties with a particular part of an operation and decide to practice a selected recurrent skill on a part-task trainer (e.g., an endoscopic part-task training box in a skills lab). The junior doctor's key problem is determining which recurrent aspects of whole-task performance need additional part-task practice to improve whole-task performance.
A digital development portfolio as described in Chapter 5 may help learners identify those aspects. Such a portfolio gathers assessments for all aspects of performance, including the recurrent to-be-automated constituent skills (i.e., skills that might require additional part-task practice). The standards for these skills will indicate a high level of automation; for example, the criterion for 'operating camera and equipment' (i.e., skillfully using the camera's settings, controls, and features) as a constituent skill of 'creating the composition' is that camera operation is done faultlessly, very quickly, and almost effortlessly (refer back to Table 5.3). If a learner has carried out a series of learning tasks, and the standard-centered assessment on the constituent skill 'operating camera and equipment' indicates that the standards for this skill have not yet been met, this might be a reason for the learner to do additional part-task training in camera operation. Figure 13.3 shows a JIT information display in the form of a 'cheat sheet' that learners might consult when practicing configuring aperture, shutter speed, and ISO settings.
Independent part-task practice, thus, requires deliberate practice skills from the learners to find valuable practice opportunities along with procedural information that helps them profit from those opportunities, just like on-demand education does for selecting suitable learning tasks (see Section 6.5) and resource-based learning does for searching for supportive information. Sometimes, it may be desirable to explicitly help learners develop these deliberate practice skills in a process of second-order scaffolding. This will be further discussed in Chapter 14.

Figure 13.3 Pocket guide (JIT information display) for setting a camera's aperture, shutter speed, and ISO.

13.7 Media for Part-Task Practice


Traditional media for part-task practice include paper-and-pencil for doing small exercises (e.g., simple addition, verb conjugation), part-task trainers in skills labs for practicing perceptual-motor skills (e.g., operating machinery, conducting physical examinations in medicine), and the real task environment (e.g., marching on the street, taking penalty kicks on the soccer field). The main reason for applying part-task practice is the component fluency hypothesis (Carlson et al., 1990), which indicates that drill-and-practice on one or more routine aspects of a task may positively affect learning and carrying out the whole task. A very high level of automaticity for routine aspects frees up cognitive capacity for other processes because these automated aspects no longer require resources for conscious processing (Van Merriënboer, 2013). As a result, all available cognitive capacity can be allocated to the problem-solving, reasoning, and decision-making aspects of whole-task performance.
The computer has proved its worth in the last decades for part-task practice. Drill-and-practice computer-based training (CBT), often in the form of adaptive programs that respond to exhibited task performance, is undoubtedly the most successful type of educational software. Such CBT programs are sometimes criticized for their use of 'drill-and-practice,' calling it drill-and-kill and thinking that this is the entirety of the instruction. These critiques, however, miss the point. They contrast drill-and-practice CBT with traditional teaching or educational software focusing on rich, authentic learning tasks. According to the Ten Steps, however, part-task practice never replaces meaningful whole-task practice. It merely complements work on rich learning tasks and is applied only when the learning tasks themselves cannot provide enough practice to reach the desired level of automaticity for selected routine aspects. If such part-task practice is necessary, the computer is a highly suitable medium because it can make drill effective and appealing. It can present procedural support, compress simulated time to allow more exercises than real-time practice allows, give knowledge of results and immediate feedback on errors, and use multiple representations, gaming elements, and sound effects. It also never gets tired of the constant repetition.
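As a minimal illustration, the following Python sketch shows the core of such an adaptive drill-and-practice loop: items answered incorrectly receive immediate feedback and are re-queued more often, while mastered items fade. The item set and the weighting scheme are illustrative assumptions.

# Minimal sketch of an adaptive drill-and-practice loop. Items the learner
# misses are drilled more often; item content and weights are illustrative.

import random

items = {"7 x 8": "56", "6 x 9": "54", "8 x 12": "96"}
weights = {q: 1.0 for q in items}  # higher weight = drilled more often

def drill(rounds=10):
    for _ in range(rounds):
        question = random.choices(list(weights), list(weights.values()))[0]
        answer = input(f"{question} = ")
        if answer.strip() == items[question]:
            print("Correct!")
            weights[question] = max(0.2, weights[question] * 0.5)  # fade mastered items
        else:
            print(f"Incorrect; the answer is {items[question]}.")  # immediate feedback
            weights[question] *= 2.0  # re-queue error-prone items more often

if __name__ == "__main__":
    drill()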

13.8 Part-Task Practice in the Training Blueprint


Two guidelines help connect part-task practice to the training blueprint. First, part-task practice should always be provided in a fruitful cognitive context, meaning learners must already be able to relate the part-task to, and integrate it with, the required whole-task performance. This could be reached by first presenting modeling examples or other learning tasks that allow learners to understand how the part-task fits into the whole task. Part-task practice without an appropriate cognitive context, such as extensive part-task practice in advance of whole-task training or drill courses before the actual training program, is likely ineffective. This principle is clearly illustrated in a study by Carlson et al. (1990), who found no positive effect of 8,000 practice items for Boolean functions when practiced before the whole task of troubleshooting logical circuits; the same items did have a positive effect on the fluency of whole-task performance when practiced after exposure to a simple version of the whole task.
Second, part-task training is best distributed over time in sessions alternating with learners working on learning tasks in intermixed training (see Figure 13.4). Further, if part-task practice is provided for more than one recurrent aspect of the whole skill, the different part-task practice sessions and working on the whole learning tasks are best intermixed to promote integration (Schneider, 1985); a scheduling sketch follows Figure 13.4. Intermixed training might also benefit double-classified skills (see Section 5.4). Here, extensive part-task practice is provided, recognizing that the developed routines are not powerful enough to address all situations the learner might encounter. The learning tasks are then strategically used to create occasional impasses, forcing learners to deal with situations where the routines do not work. This trains them to switch from an automatic execution to a problem-solving mode and cultivates their ability to 'switch cognitive gears' (Louis & Sutton, 1991) or 'cognitively slow down' when the situation demands it (Moulton et al., 2010).

Figure 13.4 Intermixing practice on learning tasks and part-task practice for
two selected recurrent task aspects.
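Following up on the intermixing guideline above, the following minimal Python sketch generates such an intermixed schedule; the session labels and the one-in-three ratio are illustrative assumptions, not a prescription.

# Sketch of an intermixed schedule: sessions of part-task practice for two
# recurrent aspects alternate with work on whole learning tasks
# (cf. Figure 13.4). Session labels and the ratio are illustrative.

from itertools import cycle

def intermixed_schedule(n_sessions=9):
    part_tasks = cycle(["part-task A", "part-task B"])
    schedule = []
    for i in range(n_sessions):
        # Every third session is part-task practice; the rest are whole tasks.
        if i % 3 == 2:
            schedule.append(next(part_tasks))
        else:
            schedule.append("whole learning task")
    return schedule

print(intermixed_schedule())
# ['whole learning task', 'whole learning task', 'part-task A',
#  'whole learning task', 'whole learning task', 'part-task B', ...]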

To conclude this chapter, Table 13.4 presents one of the task classes from a training blueprint for the complex skill 'producing video content' (you may refer to Table 6.2 for a description of the other task classes). A specification of part-task practice has been added to this task class, indicating that part-task practice of sketching scenes for a storyboard starts parallel to a learning task for which it is relevant. Part-task practice continues until learners reach the standards for acceptable performance. See Appendix 2 for the complete training blueprint.

Table 13.4 Preliminary training blueprint for the complex skill 'producing video content.' For one task class, a specification of part-task practice has been added to the blueprint.

Task Class 2: Learners produce videos for fictional clients under the following conditions:
• The video length is 3-5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to work with)

Supportive Information (inductive strategy): Case study
Learners study three worked-out examples (i.e., case studies) of promotional videos for a backpack with integrated solar panels, a virtual fitness platform, and an urban art festival. In groups, a tutor guides them in comparing and evaluating each example's goals, scripts, camera use, lighting, etc.

Supportive Information: Presentation of cognitive strategies
• SAP for developing a story for promotional videos
• SAPs for interacting with people and collaborating with the crew
• SAPs for shooting video (detailed strategies for creating compositions and capturing audio)

Supportive Information: Inquiry for mental models; learners are asked to identify examples of:
• Different types of cameras, microphones, and lights (conceptual models)
• Story arcs (structural models)

Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video promoting a new coffee machine. They follow a process worksheet to record footage and create the final video.
Procedural Information (unsolicited):
• How-to instructions for lighting and selecting lenses and microphones

Part-task practice (starts parallel to Learning Task 2.2):
• Sketching a storyboard

Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of artificial intelligence. A tutor helps them work backward to explain critical decisions in the production phase and develop a storyboard that fits the video and meets the client's requirements.
Procedural Information (unsolicited):
• How-to instructions for sketching a storyboard

Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a short social media advertisement video for a small online clothing store. Learners remake the ad for a small online art store.
Procedural Information (solicited):
• How-to instructions for lighting and selecting lenses and microphones
• Platform with how-to videos for using postproduction software

Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.3.

Learning Task 2.4
Support: Conventional task
Guidance: Tutoring
Under guidance from a tutor, learners create a promotional video highlighting the products or services of a local store.
Procedural Information (solicited):
• Platform with how-to videos for using postproduction software

Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.4.

13.9 Summary of Guidelines


• If you are considering using part-task practice for recurrent aspects of a complex skill, then only apply part-task practice if a very high level of automaticity is required and/or if the recurrent skill is critical, enables performing many other skills, or is simultaneously carried out with many other skills.
• If you design part-task practice, then use conventional practice items in a process of overlearning as quickly as possible.
• If you design part-task practice for a recurrent skill that is error-prone or easily confused with other skills, then consider using edit items, recognize items, and training wheels; in these cases, first provide support through recognize or edit items and/or through training wheels and then quickly fade the support as learners acquire more expertise.
• If you design part-task practice, then ensure that the whole set of practice items used is divergent; the items must represent all situations that can be handled with the procedure or the set of IF-THEN rules.
• If you design part-task practice for procedures with many steps/decisions or large sets of rules, then apply part-task sequencing techniques such as segmentation, simplification, and fractionation; these techniques apply a forward-chaining approach and yield low contextual interference (which is the opposite of sequencing techniques for learning tasks!).
• If you design procedural information that is to be presented during part-task practice, then (a) provide demonstrations that focus the learners' attention on difficult or dangerous actions; (b) apply contingent, step-by-step tutoring; and (c) use a model-tracing paradigm to give immediate corrective feedback.
• If you design part-task practice for overlearning, then change performance criteria from accuracy, via accuracy plus speed, to accuracy plus speed plus time-sharing; if applicable, compress simulated time by a factor of 10 to 100 and distribute practice sessions over time.
• If you design part-task practice, then consider using drill-and-practice computer-based training programs or part-task trainers to increase the fluency of whole-task performance.
• If you include part-task practice in a training blueprint, then intermix part-task practice for a selected recurrent skill with the work on whole learning tasks and, if applicable, with part-task practice on other recurrent skills.

Glossary Terms

Attention focusing; Component fluency hypothesis; Divergence of practice items; Fractionation; Intermixed training; Matching; Model tracing; Multiple representations; Overlearning; Practice item; Segmentation; Simplification; Subgoaling; Training wheels approach
Chapter 14

Domain-General Skills

DOI: 10.4324/9781003322481-14

The Ten Steps focuses, in the first place, on training domain-specific complex skills or professional competencies. Yet, training programs based on the Ten Steps also provide good opportunities for training domain-general skills; that is, skills not bound to one particular domain. We encountered one example in Step 3—the sequencing of learning tasks. In on-demand education, self-directed learners can select learning tasks that best fit their needs, yielding individualized learning trajectories based on the results of frequent self-assessments. Thus, task selection is a domain-general self-directed learning skill that can be well trained in programs based on the Ten Steps, provided that the learner is given a certain amount of freedom to select learning tasks and is also capable of effectively using that freedom. This chapter discusses the training of domain-general skills in programs based on the Ten Steps, including self-regulated learning skills, self-directed learning skills (in addition to task-selection skills, also information literacy skills and deliberate practice skills), and other domain-general skills (also mistakenly called 21st century skills).
Although domain-general skills are not 'bound' to one particular domain, they must always be learned and trained in one or more domains. When learners select learning tasks, these tasks concern learning in a particular domain. When learners regulate their learning, they regulate their acquisition of knowledge, skills, and attitudes in a particular domain. When learners search for necessary learning resources, these resources contain information about a particular domain. Thus, domain-general skills must always be learned and trained in a learning program directed at developing domain-specific complex skills, but the design should also allow for the acquisition and practicing of the domain-general skills. For example, learners can only practice task-selection skills in a program in which they can choose their learning tasks. Likewise, they can only practice information literacy skills in a program in which they can search for their learning resources. They can only practice collaboration skills in a program in which they can collaborate with others, and so forth. This is a strict requirement because attempts to teach domain-general skills outside of domains consistently fail (Tricot & Sweller, 2014).
The structure of this chapter is as follows. Section 1 describes the mechanisms underlying self-regulated and self-directed learning and the implications for their teaching. The following sections discuss the training of two important self-directed learning skills: information literacy skills in Section 2 and deliberate practice skills in Section 3. The focus is on intertwining the training of domain-specific skills with the training of self-directed learning skills. Section 4 discusses the popular but mistaken term 21st-century skills. They include learning and literacy skills, which are the focus of this chapter, but also thinking and social skills. In the Ten Steps, intertwining the training of domain-specific skills and the training of domain-general skills always follows the same approach. The chapter ends with a summary.

14.1 Self-Regulated and Self-Directed Learning


Two complementary subprocesses in Self-Regulated Learning (SRL) and
Self-Directed Learning (SDL) are monitoring and control (Nelson & Nar-
ens, 1990). Monitoring refers to the (metacognitive) thoughts learners
have about their learning. For example, learners reading a study text should
monitor their comprehension. Control refers to how learners respond to
the environment or adapt their behavior based on their thoughts. Thus, if
comprehension monitoring leads learners to think that a text is not yet well
understood, they may decide to restudy one or more parts of it. Monitor-
ing and control are closely linked in the same learning cycle: One is useless
without the other. Figures 2.3 and 6.6 provided examples of learning cycles.
When applying this cycle to self-regulated or self-directed learning, monitor-
ing will take the form of self-assessments, and control will refer to the self-
directed learner using these self-assessments to select their future learning
tasks (i.e., on-demand education; see Section 6.5).
Because monitoring and control are closely linked to each other in the same learning cycle, it makes no sense to ask learners to monitor their learning (e.g., assess their learning or reflect on their learning processes) if they have no opportunity to control it. To illustrate this, suppose you are in the passenger seat of a car and must monitor the traffic in the rearview mirror. This would feel like a pointless exercise because it does not help to drive the car more safely, and you would probably ignore the request. Looking in the rearview mirror only makes sense when you are in the driver's seat, when you are in control and can use the information on the traffic behind you to drive more safely. The same is true in education: It only makes sense to ask learners to monitor or reflect on their performance when they can use their thoughts to control or plan future actions.
Monitoring and control can take place at different levels. First, at the task or topic level, learners monitor how well they carry out a particular learning task, which affects how and how long they continue practicing it, or they monitor how well they comprehend, for example, a piece of text, animation, or video, which then affects how, how long, and how often they engage in studying it. Second, at the instructional-sequence level, learners monitor how well they performed on one or more learning tasks after completing them, affecting their selection of the next suitable tasks and/or other learning resources. In the Ten Steps, we limit the term SRL to the level of tasks and topics and reserve the term SDL for the instructional-sequence level.

Learning and Teaching SRL Skills

When students monitor their learning and estimate how well they learned something (often called judgments of learning or JOLs), their metacognitive thoughts are typically based on cues that more or less predict their future performance (Koriat, 1997). Unfortunately, learners are not good at this, often overestimating their knowledge and skills; especially low-performing students have an overly optimistic view of what they know—a phenomenon known as 'unskilled and unaware,' or the Dunning-Kruger effect (Kruger & Dunning, 1999). They often base their JOLs on invalid cues. One striking example of an invalid cue that learners often use is the ease of recall of information immediately after study, which is not predictive of future performance. The information is then easily recallable because it is still active in working memory, but this does not mean it is readily retrievable from long-term memory. Thus, a much better cue is whether the information is easily recallable a few hours after study (Van Loon et al., 2013). Unfortunately, there is an overall tendency for learners to use invalid and/or superficial cues, which may also explain their overconfidence when predicting future performance. When learners use invalid cues and are overconfident, this has negative consequences for their control decisions; for example, they use surface rather than deep study strategies, they terminate practice or study too soon, or they skip particular elements during practice or study—all of which have negative effects on learning outcomes (Bjork et al., 2013).
Accurate monitoring must, thus, use valid cues. In the Ten Steps, what valid cues are will differ for learning tasks, supportive information, procedural information, and part-task practice. When learners work on learning tasks and are involved in schema construction through inductive learning, they should monitor whether their learning activities help construct schemata in long-term memory that allow for transfer of learning. Valid cues are whether they can carry out alternative approaches to the task or explain how their approach differs from others' approaches. Unfortunately, learners often use invalid cues. For example, they may solely monitor the accuracy and fluency of their current performance. Yet, being able to perform a task smoothly does not predict future performance on transfer tasks (cf. the 'transfer paradox,' described in Section 2.4). Instruction that helps learners use more valid cues may take the form of metacognitive prompts that explicitly help them focus on more valid cues (i.e., improve monitoring) and undertake learning activities that promote schema construction (i.e., improve control; see the first row of Table 14.1 for examples).
Similarly, giving learners metacognitive prompts may help them use bet-
ter cues for monitoring and controlling their learning of supportive infor-
mation, procedural information, and part-task routines. For supportive
information, the ease of immediate recall or the ease of studying the infor-
mation are not valid cues for the desired construction of schemata through
elaboration (they yield an ‘illusion of understanding’; Paik & Schraw, 2013).
Instead, learners should ask themselves whether they can generate keywords,
summaries, or diagrams of the studied information or answer test ques-
tions about it (see second row in Table 14.1 for examples). For procedural
information, the ability to carry out the current task (i.e., learning task or
part-task practice item) with procedural information and corrective feedback
at hand is not a valid cue for the desired automation of schemata through
rule formation. Instead, learners should ask themselves whether they can
carry out the same task without consulting the procedural information or
without receiving immediate feedback on errors (see third row of Table 14.1
for examples). Finally, for part-task practice, the ability to carry out the task
accurately and without errors is not a valid cue for the desired automation
of schemata through strengthening. Instead, learners should ask themselves

whether they can carry out the task faster and/or together with other tasks (see bottom row of Table 14.1 for examples).

Table 14.1 Metacognitive prompts for monitoring and control in self-regulated learning.

Learning tasks
  Monitor: Would you be able to perform this task in an alternative fashion? How well do you expect to perform on future tasks that are different from the current one?
  Control: Can you carry out alternative approaches to this task? Can analogies, worked-out examples, or task solutions of others help you perform this task?

Supportive information
  Monitor: Can you self-explain the information you just studied? Can you answer test questions on the gist of the studied information?
  Control: Can you paraphrase, summarize, or build a diagram for the information you just studied? Which parts do you want to restudy to increase your understanding?

Procedural information
  Monitor: Would you be able to perform this part-task without the just-in-time instructions? Is your performance still dependent on corrective feedback?
  Control: Can you perform the task again without consulting the procedural information? If you make an error, are you able to recover from this error without asking for help?

Part-task practice
  Monitor: Does it cost you any mental effort to perform this task? Would you be able to perform it faster or simultaneously with other tasks?
  Control: Should you continue practicing or better plan another practice session (i.e., distributing practice)? Can you now perform the task under time-sharing conditions?

Source: Van Merriënboer, 2016.

Learning is always self-regulated: It is impossible for learners to work on
learning tasks without monitoring their approach and adapting it accord-
ingly, studying supportive information without monitoring comprehension
and adapting reading or viewing strategies accordingly, etc. The Ten Steps
fully acknowledges the importance of (teaching) SRL skills on the task and
content level, but a full discussion falls beyond the scope of this book (see
De Bruin & van Merriënboer, 2017). Instead, we focus on SDL skills on the
instructional-sequence level because the Ten Steps provides unique oppor-
tunities for teaching domain-general skills at this level.

Learning and Teaching SDL Skills

One important SDL skill, the selection of new learning tasks, was discussed in Step 3 because it sequences learning tasks into individualized learning trajectories. Moreover, we described how we can support learners in developing their task-selection skills (Section 6.4). As shown in Table 14.2, teaching task-selection skills takes place in the context of on-demand education, where learners can select their learning tasks from a set of available tasks (cf. Figure 6.6). There needs to be a form of shared control where the teacher or other intelligent agent provides support and/or guidance to the learner for assessing progress, identifying learning needs, and selecting learning tasks that can fulfill these needs. Support and guidance decrease in a process of second-order scaffolding, meaning that there is a gradual transition from a situation where a teacher/system decides on which learning tasks the learner should work to a situation where the learner decides on the next task or tasks to work on. Thus, the learner gains increasing control over task selection as their task-selection skills develop. In Chapter 6, an electronic development portfolio was described as a useful tool to help learners develop both domain-specific skills and domain-general task-selection skills because it keeps track of all performed tasks, gathers assessments of those tasks, and provides overviews that indicate points of improvement or learning needs. In coaching meetings, learners and teachers can then use the information from the portfolio to reflect on progress and points of improvement and plan future learning tasks (see Van Meeuwen et al., 2018).
Table 14.2 describes two other types of SDL skills highly relevant for educational programs based on the Ten Steps. First, information literacy skills enable a learner to search, scan, process, and organize supportive information from various learning resources to fulfill information needs resulting from the work on learning tasks. Learners can develop such information literacy skills in the context of resource-based learning (Section 7.4). Second, deliberate practice skills enable a learner to identify recurrent aspects of a skill that can be successively refined and automated through part-task practice to improve whole-task performance and to use procedural information from various learning resources to support this part-task practice. Learners can develop such deliberate practice skills in the context of independent part-task practice (Section 13.6) and solicited information presentation (Section 10.4). Information literacy skills and deliberate practice skills will be further discussed in the next sections.

Table 14.2 SDL skills relevant for the four components.

Learning tasks
  Learning context: On-demand education
  SDL skills: Task-selection skills, including self-assessment of own performance and identification of points of improvement
  Second-order scaffolding: Learners receive increasingly less support and guidance for selecting suitable learning tasks as their task-selection skills develop

Supportive information
  Learning context: Resource-based learning
  SDL skills: Information literacy or information problem solving (IPS) skills
  Second-order scaffolding: Learners receive increasingly less support and guidance for searching and studying learning resources as their information-literacy skills develop

Procedural information and part-task practice
  Learning contexts: Solicited information presentation (procedural information); independent part-task practice (part-task practice)
  SDL skills: Deliberate practice for routine aspects of behavior
  Second-order scaffolding: Learners receive increasingly less support for identifying opportunities for part-task practice and related procedural information as their deliberate-practice skills develop

14.2 Training Information Literacy Skills


In resource-based learning (Section 7.4), it is not the teacher or the system but the self-directed learner who is responsible for searching for helpful supportive information from all available learning resources. Typically, learners start searching for supportive information because learning needs arise from their work on learning tasks; while working on a task, learners become aware that they need to study domain models, case studies, SAPs, or modeling examples to successfully complete the task. This is similar to the self-study phase in problem-based learning (PBL; Loyens et al., 2011). There, small groups of learners work on learning tasks called 'problems.' Their main aim is to come up with a solution, which often takes the form of a general explanation for the particular phenomena described in the problem. To do so, they cycle through three phases (Wood, 2003). In the first orientation phase, learners come together in a tutor-led small-group meeting to discuss the problem. They clarify unknown concepts, define the problem, offer tentative explanations, draw up an inventory of explanations, and formulate learning issues. In the second self-study phase, learners attempt to reach the learning objectives by finding relevant learning resources and collecting supportive information in the 'study landscape,' which includes the library (e.g., books, articles) and other learning resources (multimedia, Internet, human experts, etc.). In the third evaluation phase, the learners come together again in a second small-group meeting in which they report their findings, synthesize the collected information, and evaluate and test it against the original problem. In this way, PBL not only helps learners acquire knowledge about the learning domain but, if carried out in a well-structured way, can also help them develop information literacy or problem-solving skills related to systematically searching for relevant learning resources.
The teaching of information literacy skills, and, actually, the teaching of all domain-general skills, should follow exactly the same design principles as the Ten Steps prescribes for domain-specific skills or professional competencies (Argelagós et al., 2022). Thus, for the design of information literacy learning tasks, critical principles to take into account are variability of practice, providing problem-solving guidance, providing task support, and scaffolding support and guidance as learners' information literacy skills develop (cf. Step 1 in Chapter 4). The 'primary' training blueprint for the complex skill or professional competency (i.e., domain-specific skill) taught should enable the learners to practice both the domain-specific and the domain-general skills. For example, if learners must develop task-selection skills, the primary training blueprint should allow them to practice selecting learning tasks (i.e., use on-demand education). If learners must develop information literacy skills, the primary training blueprint must permit them to practice searching for relevant learning resources (i.e., use resource-based learning). The designer can then develop a 'secondary' training blueprint for the domain-general skill in a recursive fashion, using the same design principles they used for the primary training blueprint, and, finally, intertwine the primary and secondary training blueprints so that the learners can simultaneously develop domain-specific and domain-general skills (Frèrejean et al., 2019). This will be explained in the next subsections.

Variability

If learners develop information literacy skills, they do so by working on information literacy learning tasks. To facilitate a process of schema construction by inductive learning, these tasks should show high variability of practice. Real-life information literacy tasks indicate the dimensions on which tasks differ from each other, such as types of learning resources (e.g., study books, journal articles, reports, websites, blogs, educational multimedia, e-learning applications, pod- and vodcasts, human experts, popular writings, etc.), reliability and credibility of learning resources (i.e., relatively high for peer-reviewed articles and study books published by renowned publishers; potentially low for websites, popular writings, etc.), presentation formats used (e.g., written text, spoken text, static visualizations, dynamic visualizations such as video and animation, etc.), and so forth. The entire set of information literacy learning tasks in the educational program should thus be representative of the information literacy problems in learners' future professional or daily lives.

Task Support

For task support, Step 1 of the Ten Steps distinguished between the given situation, the desired goal situation, and the solution, transforming the given situation into the goal situation (refer back to Figure 4.4). For information literacy learning tasks, the given situation refers to an information problem that arises from working on one or more domain-specific learning tasks, the desired goal situation refers to the availability of learning resources that contain the supportive information necessary to carry out these domain-specific learning tasks, and the solution refers to an organized set of relevant learning resources. In traditional instruction with planned information provision, the teacher/system guarantees that the learning resources are available for the learners when needed and that their quality is high. However, when information literacy skills are taught in resource-based learning, learners must learn to find relevant learning resources (or, if basic resources are already provided, additional or alternative resources) themselves.
Second-order scaffolding should then be used to gradually decrease the given support. PBL can provide an example of second-order scaffolding of given task support. During an educational program, support can be decreased by:

• First, giving the learners a limited list of relevant resources (e.g., books, articles, video lectures, websites, educational multimedia, etc.) that they should consult to explain the phenomenon introduced in a particular problem but asking them to analogously expand the list.
• Then, giving the learners a long list of relevant resources—for example, all resources relevant (and possibly not relevant) for the range of problems presented in one particular course—so that the learners must choose the resources relevant for the problem.
• Finally, giving the learners no list of resources at all. In this case, they must independently search for the resources in the 'study landscape' and/or the Internet.

Guidance

It is generally necessary to supplement built-in task support with problem-solving guidance. The guidance for solving information literacy problems often relies on a SAP, which specifies the phases to go through and the rules-of-thumb that may be helpful to complete each phase. Brand-Gruwel et al. (2005) describe a SAP containing five phases for solving information problems: (1) defining the information problem that arises from the domain-specific learning task (cf. Phase 1 in PBL), (2) searching information that is relevant to performing the domain-specific learning task, (3) scanning the information to determine its relevance and credibility, (4) processing the information by elaborating on its content, and (5) organizing the information so that it is readily available for later use (see Figure 14.1). Rules-of-thumb can be described for completing each phase (e.g., a rule-of-thumb for Phase 3, credibility, is that articles in peer-reviewed scientific journals or reports from government agencies are usually credible information sources). Together with modeling examples, this SAP may be presented to learners to demonstrate the information problem-solving process; it may serve as the basis for a process worksheet guiding the learners through the process; or it may be used to define performance constraints (e.g., after Phase 3, the learning resources found by the learner might need the teacher's approval before the learner is allowed to process them).
Teachers may also use the SAP to shape their tutoring actions, helping students proceed through the information problem-solving process. Second-order scaffolding can be used to gradually decrease the guidance that they give to the learners. For example, the tutor in PBL meetings can guide the learners on how to follow the SAP shown in Figure 14.1. In the course of an educational program, guidance can then gradually diminish in the following way:

• In a first phase, the tutor gives the learners explicit advice on how to define an information problem arising from the domain-specific learning task (in each first group meeting), on how to search, scan, and process relevant learning resources (in the self-study phase), and on how to organize the information for others and for their later use (in each second group meeting).
• In a second phase, the tutor might no longer give explicit advice but, at the end of each first group meeting, ask the learners how they plan to search for relevant resources and provide them with cognitive feedback on their intended search strategies and, at the end of each second group meeting, give learners feedback on processing and organizing the information.
• In a third phase, the tutor may not guide at all or even be absent because the group should, at that point, be ready and able to function as a self-managed group.

Figure 14.1 Systematic approach to problem solving for information literacy skills.

Intertwining Training Blueprints


Table 14.3 combines a primary training blueprint for the domain-specific
skill of patent examination (identical to Table 6.1 in Chapter 6) and a sec-
ondary training blueprint for the domain-general skill of information liter-
acy. For the sake of simplicity, procedural information and part-task practice
have not been included (for complete training blueprints for teaching
310 Domain-General Skills

Table 14.3 Intertwining training blueprints for domain-specific skills (patent examination) and domain-general information literacy skills.

Primary blueprint (patent examination skills):

Task Class 1: Learning tasks that require learners to handle a clear patent application involving a single independent claim with one clear and complete reply from the applicant and no need for intermediate revision during the examination process. Learners must prepare the search report and carry out the substantive examination.
  Supportive information -> secondary learning task 1.1
  Learning task 1.1—To be specified
  Learning task 1.2—To be specified
  Learning task 1.3—To be specified

Task Class 2: Learning tasks that require learners to handle a clear patent application involving a single independent claim with many unclear and incomplete replies from the applicant and a need for intermediate revisions during the examination process. Learners must prepare the search report and carry out the substantive examination.
  Supportive information -> secondary learning task 1.2
  Learning task 2.1—To be specified
  Learning task 2.2—To be specified

Task Class 3: Learning tasks that require learners to handle an unclear patent application involving several claims with multiple dependencies and many unclear and incomplete replies from the applicant but no need for intermediate revisions during the examination process. Learners must prepare the search report and carry out the substantive examination.
  Supportive information -> secondary learning task 1.3
  Learning task 3.1—To be specified (additional learning tasks may be added)

Secondary blueprint (information literacy skills):

Task Class 1: Learning tasks that ask learners to search and use supportive information necessary for examining patent applications. All information is available in the Patent Office library, which makes all search tasks more or less equally complex. Therefore, no further task classes are distinguished.

Supportive information:
• SAP for systematically searching relevant supportive information in the library of the Patent Office (cf. Figure 14.1)
• Video-based modeling example in which a tutor shows how to systematically search for relevant supportive information in the Patent Office's library

Learning task 1.1—imitation + process worksheet: Learners must search for supportive information necessary to perform learning tasks 1.1–1.3 in the Patent Office's library (What are independent claims, applicant replies, and intermediate revisions? How does one prepare search reports and conduct substantive examinations?). They imitate the video-based modeling example given to them, are guided by the leading questions in a process worksheet, and receive cognitive feedback from their tutor.

Learning task 1.2—conventional + process worksheet: Learners must search for supportive information necessary to perform learning tasks 2.1–2.2 in the Patent Office's library (How does one deal with unclear/incomplete applicant replies, and how does one conduct intermediate revisions?). They may use the process worksheet and receive cognitive feedback from their tutor.

Learning task 1.3—conventional: Learners must independently search for supportive information necessary to perform learning tasks 3.1–3.x in the Patent Office's library (How does one deal with claims with multiple dependencies?). They receive no help or feedback from their tutor.
Note that what is (unspecified) supportive information in the primary blueprint is a learning task in the secondary blueprint! After all, the main goal of the information literacy learning tasks in the secondary blueprint is to identify the supportive information needed to carry out the domain-specific learning tasks in the primary blueprint.
Intertwining training blueprints for domain-specific and domain-general skills is largely uncharted territory in instructional design. For information literacy skills, the basic principle is to replace supportive information in the primary blueprint with learning tasks in the secondary blueprint, thus requiring learners to search for precisely this supportive information. Then, supportive information, procedural information, and part-task practice can be added to this secondary blueprint. Table 14.3 includes supportive information at three levels of complexity for the domain-specific skills. This means there are three task classes for patent-examination learning tasks, but the three information literacy learning tasks connected to this supportive information are treated as being all on the same level of complexity, leading to only one task class for information literacy learning tasks.
Does Table 14.3 reflect the optimal way of intertwining the two blueprints? We have no definitive answer to this question. It might be better to distinguish task classes for both the patent-examination and information literacy tasks. This would have made the development of information literacy skills 'smoother' (due to a more gradual increase in task complexity), but the intertwining of the two blueprints would have been more complex. In the combined blueprint in Table 14.3, learners must also search for the supportive information needed to carry out the first patent-examination learning tasks (tasks 1.1–1.3 in the primary blueprint). Thus, patent-examination and information literacy skills develop in parallel right from the start of the training program. However, this approach might lead to a great deal of cognitive load and thus impede the learning of the primary task, the secondary task, or both. If this is the case, it might be preferable to start developing information literacy skills only after learners master the patent-examination skills on a basic level of complexity. In short, there are many open questions about intertwining blueprints that can only be answered by future research.

14.3 Deliberate Practice for Building Routines


Ericsson and Lehman (1996; see also Ericsson, 2015) define the concept of deliberate practice as "the individualized training activities specially designed by a coach or teacher to improve specific aspects of an individual's performance through repetition and successive refinement" (pp. 278–279; italics added), and they add that "to receive maximal benefit from feedback, individuals have to monitor their training with full concentration, which is effortful and limits the duration of daily training" (p. 279; italics added). In the Ten Steps, deliberate practice is primarily concerned with the automation of recurrent aspects of performance, which relates to the presentation of procedural information and the provision of part-task practice. Furthermore, it aims to help learners monitor and control their performance, which relates to second-order scaffolding so that learners gradually take over the regulation of their learning. This will be further explained in the following subsections.

Deliberating the Need for Procedural Information

If we want learners to become involved in deliberate practice, they must learn


to make optimal use of procedural information that can help them improve
recurrent aspects of their task performance. This procedural information can
be in the form of JIT information displays with how-to instructions, but
more often, it will have the form of demonstrations that are carefully stud-
ied. For example, in sports, athletes will study video recordings of their com-
petitors to find out how particular routine behaviors are precisely performed
by them, with the main aim of learning from them and successively refining
their routine behaviors, or they may study video recordings of themselves,
which they then analyze with their coaches to determine (a) what they are
doing right or wrong, (b) how to correct what they are doing wrong or
perfect what they are doing right, and (c) how to create a training regime to
strengthen what they are doing right and correct what they are doing wrong.
In Chapter 7, we described how the golfer Tiger Woods meticulously studies
videotapes of his opponents to refine his techniques; that is, specific aspects
of his performance. As another example, surgeons in training will observe
expert surgeons to study how they operate particular tools, with the main
aim of learning from them and practicing operating the tools the same way
as they do. To scaffold the learner’s development of deliberate practice skills
related to consulting procedural information, the support and guidance
given to the learner must gradually decrease in a process of second-order
scaffolding. This entails a gradual shift from unsolicited to solicited informa-
tion presentation (see Section 10.4). The following serves as an example (a
schematic sketch follows the list):

• In a first phase of learning, the teacher provides demonstrations, JIT infor-
mation displays, and feedback precisely when the learner needs it (i.e.,
unsolicited information presentation through ‘contingent tutoring’).
• In a second phase, the learner is free to consult demonstrations and/or
JIT information displays when needed, but the teacher closely observes
the learner, provides corrective feedback, and helps to fnd relevant pro-
cedural information when this might help to improve performance (i.e.,
solicited information presentation with guidance).
• In a third phase, the learner is free to consult demonstrations and/or
JIT information displays when needed, but there is no more guidance or
feedback from the teacher (i.e., solicited information presentation with-
out guidance).
• In a fourth phase, the learner may be required to perform the tasks with-
out the option to consult demonstrations and/or JIT information dis-
plays. This final phase only makes sense if the learner is expected to fully
automate the recurrent task aspects under consideration. Yet full auto-
mation through part-task practice will typically be an important part of
deliberate practice.
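
To make this fading scheme concrete, the four phases can be thought of as
a small lookup structure that a designer or an adaptive learning environment
might use to decide how procedural information is offered at any moment.
The sketch below is purely illustrative; the class and field names are our own
labels, not terminology from the Ten Steps.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProceduralInfoPhase:
        phase: int
        presentation: str     # 'unsolicited' or 'solicited'
        guidance: bool        # teacher observes, corrects, helps search
        info_available: bool  # demonstrations/JIT displays may be consulted

    # Hypothetical encoding of the four phases of second-order scaffolding
    # for consulting procedural information (support fades from 1 to 4).
    PHASES = [
        ProceduralInfoPhase(1, "unsolicited", guidance=True,  info_available=True),
        ProceduralInfoPhase(2, "solicited",   guidance=True,  info_available=True),
        ProceduralInfoPhase(3, "solicited",   guidance=False, info_available=True),
        ProceduralInfoPhase(4, "solicited",   guidance=False, info_available=False),
    ]

Reading the table row by row reproduces the fading: presentation shifts from
unsolicited to solicited, guidance is then withdrawn, and finally the information
itself may be withheld once full automation is expected.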

Deliberating the Need for Part-Task Practice

If we want learners to become involved in deliberate practice, we should
also make them aware of how part-task practice can help to successively
refine and fully automate recurrent aspects of their performance (Van Gog,
Ericsson et al., 2005). Such deliberate practice can sometimes even lead to
temporary drops in performance. For example, some years ago, Tiger Woods
said he wanted to improve one swing technique. He predicted a temporary
drop in his position in the world rankings because ‘unlearning’ the old swing
technique and learning the new one would take a lot of time-consuming
part-task practice, resulting in a temporary drop in whole-task performance,
as reflected in these rankings. This is precisely what happened! As another
example, a surgeon receiving new operating tools (e.g., a new laparoscope
or hysteroscope) may decide to extensively practice the use of these tools on
a part-task trainer available in a simulation lab before trying it on patients
because part-task practice will help improve whole-task performance in the
operating room. To scaffold the learner’s development of deliberate practice
skills related to independent part-task practice, the support and guidance
given to the learner must again gradually decrease in a process of second-
order scaffolding. This entails a gradual shift from dependent to independ-
ent part-task practice (see Section 13.6). The following serves as an example:

• In a first phase, the teacher provides part-task practice to the learners
and explains the most important principles underlying it (e.g., distribute
practice sessions over time, practice under speed stress, intermix part-task
practice with whole-task practice).
• In a second phase, it is up to the learners to decide whether or not to do
part-task practice, but the teacher provides an overview of all opportuni-
ties for part-task practice and gives advice and feedback on the learners’
choices.
• In a third phase, the teacher leaves it up to the learner when and how
to use part-task practice, although in many professional environments
(professional sports, medicine, aviation, etc.) supervisors or coaches will
remain available for consultation.

Intertwining Blueprints

Secondary training blueprints for teaching deliberate practice skills can be
combined with primary training blueprints for teaching domain-specific skills
in the same way as illustrated in Table 14.3 for information literacy skills and
domain-specific skills. For deliberate practice skills, part-task practice in the
primary training blueprint is replaced with learning tasks in the secondary
blueprint. These (secondary) learning tasks require the learner to search for
forms of part-task practice that may help to successively refine and/or fully
automate recurrent aspects of performance. Similarly, procedural informa-
tion in the primary training blueprint can be replaced with learning tasks in
the secondary blueprint; these (secondary) learning tasks require the learner
to search for procedural information (e.g., superb demonstrations) that may
help improve recurrent aspects of performance. Although the intertwining
of blueprints for training domain-specific skills and training deliberate prac-
tice skills is essentially a rather straightforward process, the use of different
techniques for scaffolding and simple-to-complex sequencing in both blue-
prints creates a broad range of possibilities from which it is difficult to make
a final selection.

14.4 So-Called 21st-Century Skills


The popular, though mistaken, term 21st-century skills is used to refer to the
domain-general skills people need to live, work, and realize their potential in
the contemporary world. One problem with this term is that most of these
skills, such as the ability to solve a problem, work with others, communicate
with others, or be creative, were also extremely necessary throughout his-
tory. To solve the problem of fresh meat rotting without refrigeration, creative
paleolithic ‘cave people’ began smoking it, and the earliest known practice
of drying dates to around 12,000 B.C. among inhabitants of the modern Mid-
dle East and Asia regions. The second problem, of course, is that these skills,
like the SDL- and SRL-skills discussed earlier in this chapter, can only be
acquired and carried out when the necessary domain-specifc knowledge
and skills have already been acquired. You cannot effectively communicate
with someone else, solve a problem, be creative, or collaborate with others
without the prerequisite domain-specifc knowledge and skills.
The domain-general skills just described have been the drivers of our
advancement as a civilization since the Age of Enlightenment. The major
differences are, first, that the tools for doing those things have changed (but
have also changed before, e.g., from post to telegraph to telephone to fax
to email to shared workspaces). A second major change, possibly the only
true 21st-century skill, relates to information literacy and management. The
growth of available information and the growth of the number of reputable
but also disreputable information sources require skills that were not really
needed before. There are many frameworks describing different types of
these skills. A common distinction can be made between (a) learning skills,
(b) literacy skills, (c) thinking skills, and (d) social skills (see Figure 14.2).
This chapter focused on learning skills, which relate to self-regulation, select-
ing one’s learning tasks, and deliberate practice; and literacy skills, which
relate to skills such as searching for learning resources and using ICT and
new technologies. The Ten Steps sees information literacy skills as a type of
self-directed learning skill, not a distinct category.

Figure 14.2 Types of domain-general skills.

The Ten Steps also provides excellent opportunities for developing learn-
ers’ thinking and social skills. The use of learning tasks based on real-life
tasks and the focus on transfer of learning naturally stresses the develop-
ment of problem-solving, reasoning, and decision-making skills. Tasks are
typically ill-structured or even wicked (Rittel & Webber, 1973), asking for
innovation and creativity. Moreover, learning tasks will often require team-
work and interprofessional work, providing good opportunities for practic-
ing communication, cooperation, and interpersonal and cross-cultural skills
(e.g., Claramita & Susilo, 2014; Susilo et al., 2013). Instructional methods
for having learners study supportive information will also often be collabo-
rative (e.g., group discussion, brainstorming, peer learning), giving even
more opportunities for developing social skills.
Domain-General Skills 317

To summarize, the Ten Steps provides three general guidelines for devel-
oping domain-general skills:

1. Develop a ‘primary’ training blueprint for teaching the domain-specific
skill so that it also enables practicing desired domain-general skills. For
example, if you want your learners to develop learning skills, let them
select the tasks/topics they work on; if you want your learners to develop
literacy skills, let them select their learning resources; if you want your
learners to develop thinking skills, let them work on tasks that require
problem solving, critical thinking, and creativity; and if you want your
learners to develop social skills, let them work on tasks that require team-
work, communication, and cooperation. This is a critical requirement
because the isolated training of domain-general skills will not work.
2. Develop a ‘secondary’ training blueprint for teaching the domain-general
skill(s). The design principles underpinning the secondary blueprint are
identical to those underpinning the primary one. Thus, it has a back-
bone of learning tasks, and the work on the learning tasks is sustained by
supportive information, procedural information, and, if applicable, part-
task practice. Learning tasks show high variability, and learners receive
support and guidance, which gradually decrease in a process of second-
order scaffolding. It may also be necessary to sequence learning tasks for
domain-general skills in simple-to-complex task classes.
3. Finally, the primary training blueprint for teaching the domain-specific
skill and the secondary training blueprint for teaching the domain-
general skill must be intertwined. For example, if students need to
develop task-selection skills, learning tasks in the primary blueprint are
replaced by (second-order) learning tasks in the secondary blueprint
that ask students to select suitable (first-order) learning tasks; if students
need to develop information literacy skills, supportive information in the
primary blueprint is replaced by learning tasks in the secondary blue-
print that ask students to search for learning resources. If students need
to develop deliberate practice skills, part-task practice in the primary
blueprint is replaced by learning tasks in the secondary blueprint that
ask learners to identify routine behaviors that need further practice to
improve whole-task performance. Yet, there are still many open ques-
tions on how to best intertwine the teaching of domain-specific and
domain-general skills.

When using the Ten Steps to design educational programs to develop
domain-specific and domain-general skills, performance assessment of both
types of skills will be required. One intertwined training blueprint is the
basis for the educational program’s design; ideally, one integrated assessment
instrument will be used to assess learner performance and monitor progress.

In Chapter 6, the section on individualized learning trajectories described
using an electronic development portfolio to reach that goal. First, the port-
folio contained an overview of all performed tasks and assessments, allow-
ing the coach and the learner to discuss domain-specifc performance and
progress. Second, it contained overviews of points of improvement formu-
lated by the learner and tasks selected by the learner, allowing the coach
and learner to discuss performance and progress on SDL skills. Similarly,
an electronic development portfolio can also include assessments of literacy,
thinking, and social skills. The next chapter will further discuss assessment
in programs based on the Ten Steps.

14.5 Summary
• Domain-general skills are not bound to one particular domain but can
only be taught in domains. They include self-regulated and self-directed
learning skills (i.e., task selection, information literacy, deliberate practice).
• Self-regulated learning (SRL) and self-directed learning (SDL) include
the metacognitive processes of monitoring and control. Monitoring
refers to learners’ thoughts about their learning (Am I able to carry out
this task? Do I understand this text?). Control or regulation refers to
what learners do to improve their performance (e.g., continue practicing)
or understanding (e.g., restudy a text).
• In the Ten Steps, SRL relates to the task or topic level, while SDL relates
to the instructional-sequence level.
• Metacognitive prompts may help learners use better cues to monitor and
control their learning. The prompts are different for each of the four
blueprint components.
• Training information literacy skills requires a form of second-order scaf-
folding, from planned information provision to resource-based learning,
so that learners become more and more responsible for searching and
using their learning resources.
• Training deliberate practice skills requires a form of second-order scaf-
folding, from unsolicited information presentation to solicited informa-
tion presentation and from dependent part-task practice to independent
part-task practice, so that learners become more and more responsible for
successively refning and automating recurrent aspects of their whole-task
performance.
• The design principles for training domain-general skills are identical to
those for training domain-specifc skills. In the Ten Steps, a ‘secondary’
training blueprint for the domain-general skill is developed that can then
be intertwined with the ‘primary’ training blueprint for the domain-
specific skill or professional competency.

• The domain-general skills can be categorized into learning, literacy,
thinking, and social skills. Educational programs based on the Ten Steps
provide excellent opportunities for teaching such skills.

Glossary Terms

21st-century skills; Control; Deliberate practice; Information literacy skills;
Monitoring; Primary training blueprint; Secondary training blueprint;
Self-regulated learning (SRL)
Chapter 15

Programs of Assessment

DOI: 10.4324/9781003322481-15

Whole tasks as intended in the Ten Steps are virtually absent in many educa-
tional programs. For example, in the traditional lecture-based curriculum in
higher education, courses strongly focus on transmitting supportive infor-
mation and mainly provide procedural information in the context of part-
task practice, which takes place in practicals or skills labs. In some settings,
the only whole task provided to students is their final project. Consequently,
learners are expected to integrate the entirety of their acquired knowledge
and skills into this final whole task at the end of the program. Unsurpris-
ingly, then, transfer of learning is often low. Except for the final project,
student assessments in such an educational program predominantly focus on
acquired knowledge and part-task performance.
This is completely opposite to assessment in an educational program
based on the Ten Steps. In the Ten Steps, the program’s backbone consists
of learning tasks, and performance assessments are gathered in a develop-
ment portfolio (see Step 2) to measure learners’ whole-task performance
at particular points in time as well as their gradual progression toward the
program’s final attainment levels. The Ten Steps assumes that, when learners
demonstrate they can carry out tasks in a way that meets all of the standards,
they must also have mastered the underlying—supportive and procedural—
knowledge and routine skills. Thus, performance-based whole-task assess-
ment is the only type of assessment that is an integral and required part of
the Ten Steps. This is sufficient for most situations!
However, there might be reasons for assessing learners not only on the
level of whole-task performance but also on the levels of acquired knowledge
(i.e., remembering and understanding) and part-task performance. External
authorities may require that learners are not only assessed on reaching per-
formance objectives (i.e., describing the acceptable performance of tasks; Step
2) but also on reaching learning objectives (i.e., describing what learners must
learn to be able to perform those tasks). Especially when learners are not fre-
quently assessed on whole-task performance, being assessed on part-tasks and
acquired knowledge might stimulate them to invest time and effort in learning
(Reeves, 2006). In that case, a good match between these assessments and the
organization of the curriculum, also called constructive alignment, is necessary
to effectively promote learning (Carr & Harris, 2001). This chapter discusses
what a complete program of assessment might look like in a whole-task cur-
riculum based on the Ten Steps (Torre et al., 2020). The focus is on summa-
tive assessment; that is, assessment to make pass/fail and certification decisions.
The structure of this chapter is as follows. Section 1 describes Miller’s
pyramid as a framework to distinguish four assessment levels related to the
four blueprint components. Section 2 revisits the assessment of learning
tasks, now focusing on summative assessment. Section 3 discusses the assess-
ment of supportive information, distinguishing between assessing cognitive
strategies and assessing mental models. It describes the ‘progress testing’
approach, which fits the Ten Steps nicely. Section 4 discusses the assessment
of part-task performance and procedural information. Assessments of part-
tasks should focus on to-be-automated recurrent skills; the assessment of all
other types of part-tasks has serious drawbacks. In Section 5, assessments of
domain-general skills central in the Ten Steps, such as task selection, infor-
mation literacy, and deliberate practice, are discussed. The chapter ends with
a summary.

15.1 Miller’s Pyramid and the Four Components


Miller’s pyramid (Miller, 1990) makes a distinction between four levels in a program
of assessment: (1) knows, (2) knows how, (3) shows how, and (4) does. The
lowest level (knows) assesses the learner’s factual and conceptual knowledge.
The next level (knows how) assesses whether a learner can explain how to
work with this knowledge and how to apply it to problems or authentic
tasks. The next level (shows how) assesses whether a learner can carry out
complex tasks in a simulated task environment. Typically, this happens in
so-called assessment centers: The tasks are complex, but the situation is not
entirely authentic. Only when assessing task performance in a real-life set-
ting do we speak of the final and highest level of Miller’s pyramid (does).
Figure 15.1 links the Ten Steps’ four blueprint components to the four
levels of Miller’s pyramid. Learning tasks and part-task practice are on the ‘shows
how’ and ‘does’ levels, and procedural information and supportive informa-
tion are on the ‘knows’ and ‘knows how’ levels. Furthermore, the basic dis-
tinction between recurrent and nonrecurrent task aspects splits the pyramid
into two halves:

• Procedural information is located in the left, recurrent half of the pyra-
mid on the ‘knows’ level (i.e., prerequisite knowledge—Where is the on/
off button located on this piece of equipment?) and the ‘knows how’
level (i.e., cognitive rules—Can you tell me how to start up this piece of
equipment?).
• Supportive information is in the right, nonrecurrent half of the pyra-
mid; also on the ‘knows’ level (i.e., mental models—Can you explain the
internal workings of this piece of equipment?) and the ‘knows how’ level
(i.e., cognitive strategies—How could you use this piece of equipment to
produce product X?).
• Part-task practice, like procedural information, is located in the left,
recurrent half of the pyramid, but it is on the ‘shows how’ level. It can
never be on the ‘does’ level because carrying out only one part of the
whole task in real life would not make sense.
• Learning tasks are located on the ‘shows-how’ and ‘does’ levels. Here,
the distinction between recurrent and nonrecurrent task aspects is no
longer relevant because learning tasks appeal, by definition, to both of
them.
• In addition, supported/guided learning tasks performed in a simulated
task environment are on the ‘shows how’ level, while unsupported/
unguided learning tasks performed in a real-life task environment (the
workplace or daily life) are on the ‘does’ level.

Figure 15.1 Four components in Miller’s pyramid.

The Ten Steps, as described in this book, worked from the top to the bot-
tom of Miller’s pyramid. It began in Step 1 with the identification of real-life
tasks as a basis for the design of learning tasks, followed, in Step 2, by the
formulation of performance objectives, including standards of acceptable
performance for both unsupported/unguided tasks (does) and supported/
guided tasks (shows how). Thus, assessment was limited to performance
assessment of whole tasks and was only formative, aiming to improve learn-
ing. In most educational programs, this is the only thing needed: When
learners can demonstrate they can carry out the learning tasks, including
unsupported/unguided learning tasks, up to all the standards, one might
reasonably assume that they have also mastered the underlying—support-
ive and procedural—knowledge and routine skills. But, as indicated, there
might be reasons for implementing a more complete program of (summa-
tive) assessment. The next section describes the elements of such a program
for assessment of learning tasks, of supportive information, and of part-task
practice in combination with procedural information.

15.2 Summative Assessment of Learning Tasks


When formatively assessing learning tasks to identify points of improvement
and support further learning, the distinction between tasks with support
and guidance and tasks without support and guidance is largely irrelevant.
Therefore, in the Ten Steps, both types appear in the same standards-tasks
matrix (cf. Figure 5.4; Table 6.6) and the same development portfolio (cf.
Figure 5.6). Yet, especially for summative assessment purposes, it might be
necessary to focus on tasks that are carried out without any support and
guidance in real-life situations (see Figure 15.2; it concerns the ‘empty’ cir-
cles that may also be called ‘test tasks’). If unsupported/unguided tasks on
the does-level of Miller’s pyramid occur in each task class or on each level
of complexity (which is true in many double-blended learning programs),
summative assessments may help to define so-called ‘entrustable profes-
sional activities’ (EPAs; Ten Cate, 2013).

Figure 15.2 Summative assessment based on test tasks in the backbone of an
educational program set up according to the Ten Steps.

Assessment of Tasks at the Does Level

On the does-level, learners will often carry out tasks in a professional setting
(e.g., internships, placements, clerkships), which are then professional tasks
not designed beforehand. Whereas the guidelines for assessment discussed
in Step 2 (Chapter 5) also apply here, the professional setting has some
important additional implications for the assessment process. Concerning
the nature of the standards, the generic aspects of professional competen-
cies will often be especially important on the does-level. For example, in the
health-professions domain, standards related to ‘collaborating’ and ‘com-
municating’ become prevalent: When things go wrong in medical practice,
shortcomings in these aspects are often at stake. But exactly those aspects
of performance are the most difficult ones to grasp. Attempting to define,
for example, ‘professionalism’ in detail and to measure it with quantitative
checklists easily risks trivialization (Norman et al., 1991). Yet, we all have
an intuitive notion of what professionalism means, especially when we see it
(or, more often, do not see it) in actual performance. Narrative expert judg-
ments are, thus, required to assess generic aspects at the does-level.
Concerning the function of assessment, it should be clear that, in an
educational setting (shows-how level), performance assessment is typically
seen as an integral part of the learning process, but in a professional set-
ting (does-level), this is—unfortunately—not always the case. This creates
an interesting paradox. On the one hand, summative assessment is best
done at the does-level because “the proof of the pudding is in the eating”.
But on the other hand, these summative assessments must also fulfill a
strong formative function because, otherwise, they become trivialized and
will not work. For example, if learners prepare and submit self-assessments
in their development portfolio primarily to please the assessment commit-
tee or meet a formal requirement, then the self-assessments will have no
significance to the learner, and the members of the committee will make
their judgments without much information and quickly return to their
daily work routine. Especially in a professional setting, summative assess-
ments will only work if they also succeed in driving the learning process,
become part of existing routines, and ultimately, appear indispensable for
learning.

Entrustable Professional Activities

Workplace supervisors traditionally judge the maturity of learners by their
ability to bear responsibility and to safely carry out their professional tasks
without supervision. These tasks are called entrustable professional activities
(EPAs) and are defined as responsibilities entrusted to a learner to execute,
unsupervised, once they have obtained adequate competence (Ten Cate,
2013). EPAs closely resemble task classes in the Ten Steps. The main dif-
ference is that task classes define a set of learning tasks at a particular level
of complexity that must help the learner reach particular standards. In con-
trast, EPAs define a set of professional tasks at a particular level of complex-
ity for which the learner has already reached the standards—as determined
through summative assessment. The learner is then allowed to carry out
these tasks without supervision.
EPAs are often coupled to graded workplace supervision; for example (see
the sketch after this list):

• Observing the activity carried out by others.
• Acting with direct supervision available in the room.
• Acting with direct supervision available in a few minutes.
• Acting without any supervision (EPA).
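
One way to picture this coupling is a portfolio record that stores, per task
class, the supervision level a learner is currently entrusted with. The encoding
below is a hypothetical illustration; the enum and key names are ours, not
standard EPA terminology.

    from enum import IntEnum

    class SupervisionLevel(IntEnum):
        """Graded workplace supervision, from observation to entrustment."""
        OBSERVES_ONLY = 1   # observing the activity carried out by others
        DIRECT_IN_ROOM = 2  # direct supervision available in the room
        DIRECT_NEARBY = 3   # direct supervision available in a few minutes
        UNSUPERVISED = 4    # acting without any supervision (EPA)

    # Hypothetical snapshot at one point in time (cf. the vertical line in
    # Figure 15.3): entrusted at a lower level of complexity, supervised
    # more closely at higher levels.
    entrustment = {
        "task class 1": SupervisionLevel.UNSUPERVISED,
        "task class 2": SupervisionLevel.DIRECT_IN_ROOM,
        "task class 3": SupervisionLevel.OBSERVES_ONLY,
    }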

Figure 15.3 shows how EPAs and levels of supervision can be placed in a
training blueprint that is developed according to the Ten Steps and imple-
mented in the workplace. The advantage of this approach is that a training
program has ‘milestones,’ making progress at different levels of complexity
visible: At a particular point in time (i.e., the vertical line in Figure 15.3),
the learner can be fully responsible for carrying out tasks at one particular
level of complexity while still being supervised when carrying out tasks at
a higher level of complexity and only observing others carrying out tasks
at even higher levels of complexity. Thus, the educational blueprint is used
flexibly: The learner can be working on tasks from more than one task class
at the same time but with different levels of workplace supervision.

Figure 15.3 Entrustable professional activities (EPAs) at different levels of com-
plexity and supervision.

15.3 Summative Assessment of Supportive Information
Assessment of supportive information at the two bottom layers of Miller’s
pyramid relates to assessing strategic and conceptual knowledge. This type
of assessment has a very long history in education and is dominant in
many educational sectors (Shumway & Harden, 2003). While it mainly
uses written assessments, it can take other forms, such as an oral exam.
A distinction can be made between assessments measuring cognitive strat-
egies (What would you do in situation X?) and assessments measuring
mental models (How can X be described? How is X structured? How
does X work?). Finally, progress testing is consistent with the Ten Steps
because it assesses the growth of multidisciplinary knowledge throughout
the program.

Assessing Cognitive Strategies

Assessing cognitive strategies may involve a combination of cases and open-
ended questions. The cases will describe real-life situations, including all
relevant information and details necessary to answer the open-ended ques-
tions. Questions may include: What would you do in this situation? Why
would you do this? Are there particular rules-of-thumb that might help you
reach an acceptable solution in this situation? What are common mistakes
made in this situation? Learners can encounter these questions alongside the
case description or at the end. In general, it is better to use a large number
of varied, short cases with a limited number of questions dealing with essen-
tial decisions than to use only one large case with many questions. Using
more cases with ‘critical’ questions increases the reliability and validity of the
assessment, and varying the cases allows for the assessment situations to vary
enough to reliably determine whether the learner has learned all that they
should. It will be necessary to develop a scoring system for all questions, and
ideally, multiple raters should score the answers to the questions.
The use of closed rather than open questions may help simplify scoring.
Examples are the situational judgment test and the script concordance test.
In a situational judgment test, the learner receives a set of real-life scenarios.
After each scenario is explained, several possible reactions are given (usually
about five). The learners must evaluate the scenario and select one or more
appropriate reactions. Situational judgment tests can, for example, be used
to measure professional behavior (Schubert et al., 2008). Script concordance
tests place learners in written but authentic clinical situations where they must
interpret data to make decisions (Charlin et al., 2000). They are designed to
measure reasoning and decision making in ambiguous and uncertain situa-
tions, probing the multiple judgments made by the learner in this process.
Scoring reflects the degree of concordance of the judgments made by the
learner to the judgments made by a panel of reference experts. Script con-
cordance tests can, for example, be used in the health-professions domain to
measure clinical reasoning and decision making (Dory et al., 2012).
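
For illustration, script concordance tests are often scored with ‘aggregate
scoring’: a learner’s answer earns credit in proportion to how many panel
experts chose that same answer, with the modal answer earning full credit.
The sketch below is a minimal illustration of that idea; the panel data are
invented.

    from collections import Counter

    def sct_item_score(learner_answer: str, panel_answers: list) -> float:
        """Credit = number of panelists choosing the learner's answer,
        divided by the number choosing the modal (most popular) answer."""
        counts = Counter(panel_answers)
        modal_count = max(counts.values())
        return counts.get(learner_answer, 0) / modal_count

    # Hypothetical item: ten experts rated a diagnostic hypothesis on a
    # scale from -2 (ruled out) to +2 (strongly supported).
    panel = ["+1"] * 6 + ["0"] * 3 + ["+2"] * 1
    print(sct_item_score("+1", panel))  # 1.0, modal answer, full credit
    print(sct_item_score("0", panel))   # 0.5, partial credit
    print(sct_item_score("-2", panel))  # 0.0, no panelist chose it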

Assessing Mental Models

Essays and open-ended questions are most commonly used for assessing
mental models. The open questions here do not ask the learner how to
approach a particular situation, as was the case for cognitive strategies, but
rather, to describe phenomena or ideas (conceptual models); how things
are organized, structured, or built (structural models); or how processes
or machines work or function (causal models). The focus is, thus, not on
assessing factual knowledge but on knowledge of how things are interre-
lated; the methods described in Tables 7.1 and 7.2 highlighted the kinds of
relationships the learner must be able to explain. We can distinguish short-
answer questions, long-answer questions, short-essay questions, and full
essays. An advantage of full essays and short-essay questions, as opposed
to short- and long-answer, open-ended questions, is that they can easily be
combined with information sources so that not only domain knowledge but
also domain-general academic skills can be assessed (e.g., writing, analyz-
ing, synthesizing, reflective thinking, etc.). Yet another method for assessing
mental models is asking the learner to draw a concept map of a knowledge
domain. This is a time-efficient alternative to writing essays or answering
short-essay questions, and the quality of concept maps may also be easier to
score (see, e.g., Turns et al., 2000).
When assessing mental models, using closed rather than open questions
may help simplify the scoring, but developing good closed questions is far
from easy. The most common format is the multiple-choice test, where learn-
ers must choose one correct answer from several possible answers (typically
three or four); another is the extended-matching-questions test, where learn-
ers must choose more correct answers from several possible answers (typically
10–25). The two main problems with developing closed questions for meas-
uring conceptual knowledge are formulating questions that truly measure
understanding and comprehension rather than factual knowledge and formu-
lating incorrect answer alternatives (distractors) that are still credible. This is also why
using true/false questions is not recommended for assessing mental models.

Progress Testing

Progress testing is a form of assessing supportive information that fits the
Ten Steps nicely. It was first introduced in the context of problem-based
learning (Van der Vleuten et al., 1996). A progress test is a comprehensive
test sampling knowledge across all subjects or disciplines, reflecting the final
attainment level of the whole curriculum and not just a part of it (e.g.,
a block or a course). This aligns with the idea of supportive information
defined in the Ten Steps: the whole, multidisciplinary body of knowledge
allowing learners to carry out real-life tasks. For example, in an engineer-
ing curriculum that includes mathematics, mechatronics (a combination of
electronics and mechanical engineering), thermodynamics, energetics, and
material sciences because it helps students work on their engineering prob-
lems (i.e., the learning tasks), a progress test would include questions or
items from all these different domains.
The progress test is then periodically given to all students in the curricu-
lum regardless of their year of training. For instance, in a four-year curricu-
lum, the test could be given four times per year to all students, meaning that
each student would have to take the test 16 times. For each assessment, all
students receive an identical test; thus, a first-year student is doing precisely
the same test as a fourth-year student. The tests are equivalent for subse-
quent assessments, meaning that items are randomly drawn from a very
large item pool for each new test. For one learner, an expected minimum
score might be, for example, 6% for the first test, 12% for the second test,
18% for the third test, 24% for the fourth test (at the end of the first year),
and so on until 96% for the 16th, final test at the end of the four-year curric-
ulum. Thus, progress test results make the growth of knowledge through-
out the curriculum visible, like a development portfolio makes the growth of
complex skills or competencies throughout the curriculum visible. Ideally,
points of improvement identified in a development portfolio can be related
to progress test results, meaning that a student shows particular weaknesses
in performance because of a lack of knowledge in one or more subjects.
Then, the student can be advised to restudy or further study these par-
ticular subjects. The format of the progress test has some other advantages
over traditional end-of-course tests. For example, it precludes students from
preparing themselves specifically for the test, thereby preventing memoriza-
tion and superficial learning (for a further discussion of progress testing, see
Wrigley et al., 2012).
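
Note that the expected-minimum-score schedule in this example is simply
linear in the number of tests taken (6 percentage points per test, reaching
96% after test 16). A minimal sketch of how such a schedule could be computed
and used to flag students who fall behind (function names and data are
illustrative assumptions):

    def expected_minimum_score(test_number: int, total_tests: int = 16,
                               final_level: float = 96.0) -> float:
        """Linear schedule from the example: 6% after test 1 up to 96%
        after test 16 (final_level / total_tests points per test)."""
        return test_number * final_level / total_tests

    def below_expectation(scores: dict) -> list:
        """Return the test numbers where a student scored below the line."""
        return [n for n, s in scores.items()
                if s < expected_minimum_score(n)]

    # Hypothetical student: on track for tests 1 and 2, behind on test 3
    # (expected at least 18% but scored 15%).
    print(below_expectation({1: 8.0, 2: 13.5, 3: 15.0}))  # [3]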

15.4 Summative Assessment of Part-Tasks and Procedural Information
According to the Ten Steps, teaching and assessment of part-tasks on the
shows-how level should be limited to to-be-automated recurrent skills. Then,
a summative assessment of those part-tasks can add value. The assessment
of all other types of part-tasks, as well as acquired procedural information,
is discouraged by the Ten Steps. Though these assessments are not uncom-
mon in existing educational programs, they suffer from the same problems
as teaching based on part-tasks rather than whole tasks. The following sub-
sections explain this further.

Assessment of Part-Tasks in the Ten Steps

When providing part-task practice in the Ten Steps, the standards for to-
be-automated, recurrent constituent skills will focus mainly on accuracy,
speed, and time-sharing capabilities. Thus, when summatively assess-
ing part-tasks, not only accuracy counts but also speed and the ability to
perform the skill together with other skills (i.e., time-sharing). This guideline
is often neglected in educational practice. For example, when children in
primary school learn the multiplication tables, assignments like 3 × 4 and
7 × 5 should primarily be assessed on speed rather than only on accuracy
because multiplication of numbers smaller than 10 is typically classified as
a to-be-automated recurrent constituent skill; assignments like 23 × 64 and
573 × 12, in contrast, should be assessed on accuracy because multiplica-
tion of numbers greater than 10 is typically classifed as a normal, recurrent
constituent skill. Another example is when nurses- and doctors-in-training
learn cardiopulmonary resuscitation (CPR), which will typically be classi-
fed as a to-be-automated recurrent skill, they should primarily be assessed
on speed and time-sharing capabilities (assuming that they do it properly),
such as being able to keep an eye on the environment and give directions to
bystanders while doing the CPR.
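
As a design rule of thumb, the classification of the constituent skill thus
determines which standards dominate the summative part-task assessment.
A minimal sketch of this decision rule (the labels mirror the terminology
above; the function itself is our own illustration):

    def parttask_standards(skill_type: str) -> list:
        """Which standards dominate summative part-task assessment,
        given how the recurrent constituent skill is classified."""
        if skill_type == "to-be-automated recurrent":
            # e.g., multiplication tables, CPR: assess speed and
            # time-sharing on top of (assumed) accuracy.
            return ["accuracy", "speed", "time-sharing"]
        if skill_type == "normal recurrent":
            # e.g., multi-digit multiplication such as 573 x 12:
            # accuracy is what counts.
            return ["accuracy"]
        raise ValueError("nonrecurrent part-tasks are not separately assessed")

    print(parttask_standards("to-be-automated recurrent"))
    # ['accuracy', 'speed', 'time-sharing']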
The separate assessment of part-tasks, which in the Ten Steps always appeal
to to-be-automated recurrent constituent skills, has some added value over
only assessing whole-task performance because it might indi-
cate when a learner has reached the standards and may stop practicing the
part-task. In other words, summative assessment of part-tasks might be used
as an entry requirement for whole-task practice. For example, the nurses-
and doctors-in-training learning CPR might not be allowed to work in the
emergency department of a hospital, where they are confronted with criti-
cal whole tasks before they have successfully reached the speed and time-
sharing standards of the part-task CPR training. Yet, this does not negate
the need for assessing to-be-automated recurrent aspects of whole-task per-
formance! Carrying out a part-task in isolation differs from carrying it out
in the context of a whole task because the latter requires coordination of
the part-task with the other aspects of the whole task. Thus, an assessment
of whole-task performance on the does-level, including its to-be-automated
recurrent aspects, will also be required.

Superfluity of the Assessment of Procedural Information

In the Ten Steps, procedural information is relevant to to-be-automated
recurrent constituent skills, which may be practiced as part-tasks and aspects
of whole tasks, and to normal, recurrent constituent skills solely practiced in
the context of whole tasks. For to-be-automated recurrent constituent skills,
‘knowing’ the prerequisite knowledge and ‘knowing how’ to apply the rules
does not predict speed and time-sharing capabilities. Consider the example
of multiplication: one might be perfectly able to explain the algorithm for
multiplication without being able to immediately say what 36 × 87 is—the
answer (3,132) still has to be ‘computed.’ Similarly, for normal, recurrent
constituent skills, ‘knowing’ the prerequisite knowledge and ‘knowing how’
to apply the rules does not translate into accurate performance. For exam-
ple, take a soccer fan watching a game on the couch at home. They can often
precisely explain what a player must do and how to do it while not being
able to carry out those actions themselves. Thus, the summative assessment of
procedural information has no added value over assessing whole-task perfor-
mance and/or part-task performance; good performance assessments make
the assessment of procedural information fully superfluous. This is not to say
it has no value to formatively assess the quality of acquired procedural
information; for example, when typical errors (Section 11.3) and/or mis-
conceptions (Section 12.3) may explain the occurrence of errors.

Problems with the Assessment of Nonrecurrent Part-Tasks

According to the Ten Steps, only part-tasks classified as ‘to-be-automated
recurrent’ are separately practiced and assessed. All other part-tasks are only
trained in exceptional cases (see Chapter 6, where part-tasks were used for
backward chaining with snowballing and whole-part sequencing), but even
then, they are not more than stepping stones toward whole-task practice
and will thus not be assessed in a summative sense. Yet, in many existing
curriculums, practice and assessment on the shows-how level takes place with
part-tasks that can, according to the Ten Steps, not be classifed as to-be-
automated, recurrent. For example, curricula in the health professions use
objective, structured clinical examinations (OSCE; Harden et al., 1975) as
an approach based on objective testing and direct observation of student
performance during planned clinical encounters. The typical OSCE includes
‘test stations’ where examinees carry out specific clinical tasks within a speci-
fied period (see Figure 15.4). To complete the examination, students rotate
through a series of stations (as few as two or as many as 20), each measuring
only particular aspects of whole-task performance. OSCE stations are often
planned clinical encounters in which a student interacts with a standardized
simulated patient. A trained observer scores the learner’s performance for
part-tasks such as taking a patient history, carrying out a physical examination
or diagnostic procedure, or advising a patient. A standardized rating form or
checklist specifes each station’s evaluation criteria and scoring system. The
more items marked as ‘completed’ on the checklist, the higher the score.

Figure 15.4 The objective, structured clinical examination (OSCE) with a series
of test stations.

OSCEs suffer from at least three problems, similar to the problems of
using nonrecurrent part-tasks for teaching: a lack of authenticity, a lack of
context, and a lack of variability. Concerning authenticity, OSCEs tend to
focus more on what the learner must do (i.e., the response format) than
on the nature of the task given to the learner (i.e., the stimulus format).
However, what one is measuring—or the validity of the assessment—is more
determined by the stimulus format than the response format (Schuwirth &
van der Vleuten, 2004). Like good learning tasks, good assessment tasks
should be authentic and based on real-life tasks. The classic OSCE in health
professions curriculums is not successful in this because it consists of short-
duration stations assessing skills in a fragmented way (e.g., station 1: exami-
nation of the abdomen; station 2: communication with the patient; etc.)
and in a context that is educational rather than professional. This is far from
authentic to the real situation the OSCE intends to emulate.
This brings us to the second, somewhat related, problem of context. As
just described, carrying out a part-task in isolation differs from carrying it
out in context and, in particular, from carrying it out together with other
tasks and other people such as colleagues. Suppose a learner in the OSCE
example successfully completed station 1 (examination of the abdomen) and
station 2 (communication with the patient). Would this mean the learner
can communicate with the patient and can explain to the patient what they
are doing while they are doing it during the examination of the abdomen?
The simple answer is no. Doing both tasks at the same time requires coordi-
nation and increases complexity; thus, cognitive load. If this is what doctors
must be able to do (i.e., if this is the whole task), then the tasks should be
practiced together and assessed together. If the curriculum does not prop-
erly realize this, students might do well in communication skills courses
and assessments in school while still failing to properly communicate with
patients or clients when they have to combine it with other tasks during
internships.
Third, concerning variability, a troubling finding is that a learner’s per-
formance on only one assessment task is a very poor predictor of their
performance on another similar assessment task. This is called the ‘con-
tent specifcity’ problem, the dominant source of unreliability of OSCEs
(Petrusa, 2002). The original motivation of implementing OSCEs—namely,
to increase assessment reliability by objectifying and standardizing (the O
and the S in the acronym) the assessment—thus did not pan out. To increase
the assessment reliability, one should increase variability, thus assessing a
range of tasks differing from each other on all dimensions on which real-life
tasks also differ. Think of examining the abdomen and, if applicable, simultane-
ously explaining what you are doing (stations 1 and 2) to the parents of a
newborn or baby, a toddler, an adolescent, an elderly person, a pregnant
woman, someone with edema, etc. Just as variability of practice in a set of
learning tasks is essential to develop competence, variability in a set of assess-
ment tasks is essential to measure this competence reliably. In more sim-
plistic terms, ‘one measure is no measure.’ We should always be extremely
careful with single-point assessments—not only because they are unreliable
but also because learners will quickly learn about the assessments and start to
memorize checklists, making the assessments trivial (Van Luijk et al., 1990).
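
The gain in reliability from sampling more varied assessment tasks can be
made concrete with the classic Spearman-Brown prophecy formula from
psychometrics; the sketch below uses it only to illustrate the ‘one measure is
no measure’ point, and the single-task reliability of .25 is an invented value.

    def spearman_brown(single_task_reliability: float, n_tasks: int) -> float:
        """Predicted reliability of an assessment lengthened to n_tasks,
        given the reliability of one task (Spearman-Brown prophecy)."""
        r = single_task_reliability
        return n_tasks * r / (1 + (n_tasks - 1) * r)

    # With a hypothetical single-task reliability of .25, one task is
    # nearly useless, but sampling many varied tasks helps considerably:
    for n in (1, 4, 10, 20):
        print(n, round(spearman_brown(0.25, n), 2))
    # 1 0.25 / 4 0.57 / 10 0.77 / 20 0.87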
The three problems connected with assessing nonrecurrent part-tasks (as
opposed to to-be-automated recurrent ones) are solved in a whole-task cur-
riculum in which summative assessments are coupled to the unsupported/
unguided learning tasks in a development portfolio, as described in Step 2.

15.5 Summative Assessment of Domain-General Skills


Finally, a program of assessment should also assess any domain-general skills
purposefully taught in the program. In the Ten Steps, this may relate to
task-selection skills, information literacy skills, deliberate practice skills, or
other domain-general (often metacognitive) skills such as SDL and SRL
(see the previous chapter). Let us assume that an educational or training
program based on the Ten Steps implements individualized learning trajec-
tories for learners and applies second-order scaffolding to teach learners how
to select their learning tasks. Two critical domain-general constituent skills
to be taught and assessed are ‘self-assessing performance’ and ‘determining
desired complexity and available support/guidance of next tasks’ (Kostons
et al., 2012). In coaching meetings, a development portfolio (like STEPP
for student hairstylists in Figure 5.6) can then help learners reflect on their
performance and their progress, formulate points of improvement, and plan
future learning tasks. The same development portfolio can also be used for
summative assessment of both domain-specific skills and task-selection skills
when it contains:

• Assessments of domain-specific hairstyling tasks. For summative assess-
ment, these should be unsupported/unguided hairstyling tasks, assessed
by a teacher or workplace supervisor on standards related to ‘washing
and shampooing,’ ‘haircutting,’ ‘communicating with the client,’ and so
forth.
• Assessments of domain-general task-selection tasks. For summative assess-
ment, these should be unsupported/unguided tasks to select hairstyl-
ing tasks for improving overall hairstyling performance. The discrepancy
between self-assessments and assessments made by teachers or workplace
supervisors can indicate the quality of the learner’s self-assessments (‘self-
assessing performance’). The discrepancy between actually selected tasks
and ‘ideal’ selections, as can be computed with, for example, the protocol
portfolio scoring, can indicate the quality of the learner’s task selections
(‘determining desired complexity and available support/guidance of next
tasks’; see the sketch after this list).
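
A minimal sketch of the first of these discrepancy measures, the gap between
self-assessments and supervisor assessments across portfolio tasks (the 1–5
rubric scale and all names are illustrative assumptions, not the protocol
portfolio scoring itself):

    def self_assessment_discrepancy(self_scores: list,
                                    supervisor_scores: list) -> float:
        """Mean absolute gap between a learner's self-assessments and a
        supervisor's assessments of the same portfolio tasks; lower
        values suggest better-calibrated self-assessment."""
        assert len(self_scores) == len(supervisor_scores)
        gaps = [abs(s - t) for s, t in zip(self_scores, supervisor_scores)]
        return sum(gaps) / len(gaps)

    # Hypothetical rubric scores (1-5) for four hairstyling tasks:
    print(self_assessment_discrepancy([4, 3, 5, 4], [3, 3, 4, 4]))  # 0.5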

The previous chapter explained that developing a training blueprint for
domain-general skills follows exactly the same steps as developing a training
blueprint for domain-specific skills. The primary blueprint for training the
domain-specific skill and the secondary blueprint(s) for training the domain-
general skill(s) can then be intertwined in one combined blueprint (refer
back to Table 14.3), which is the basis for an educational program in which
the learner develops both the domain-specific and the domain-general skills.
The hairstyling example illustrates that the same ‘intertwined approach’ can
be followed to develop assessment instruments. However, recursivity is a
requirement for both teaching and assessing domain-general skills—mean-
ing that designing an educational program for teaching/assessing domain-
general skills is only possible based on an educational program for teaching/
assessing domain-specific skills that enables the performance of the domain-
general skills (i.e., one is ‘nested’ in the other). Simply said: We can only
teach/assess task-selection skills in a program offering learners the oppor-
tunity to select their learning tasks; we can only teach/assess information
literacy skills in a program asking learners to search for their own learning
resources; we can only teach/assess creativity skills in a program requiring
learners to be creative when carrying out learning tasks; and so forth.
If the criterion of recursivity is met, developing comprehensive assess-
ment instruments for domain-general skills follows the same process as
applied to domain-specific skills (i.e., according to Step 2). Thus, it starts
with drawing up a skill hierarchy, then formulating performance objectives,
including standards for all constituent skills in this hierarchy, and finally,
developing scoring rubrics for assessing performance on all its relevant
aspects. Figure 15.5 depicts a skill hierarchy for information literacy skills in
resource-based learning, as discussed in the previous chapter. When using
this skill hierarchy to develop an assessment instrument for information lit-
eracy skills, the standards and scoring rubrics will relate to all the constituent
skills in this hierarchy. Finally, constructing a development portfolio allows
monitoring the development of information literacy skills across learning
tasks, and this portfolio can be intertwined with the portfolio assessing per-
formance and progress on the domain-specific learning tasks.

Figure 15.5 Skill hierarchy for information literacy skills in resource-based
learning, which can serve as the basis for developing an assessment
instrument.
Source: Brand-Gruwel et al., 2005.

Due to the recursivity involved, this chapter must end where it began:
The Ten Steps assumes that, when learners demonstrate that they can carry
out domain-general tasks (task selection, information literacy, deliberate
practice, teamwork, etc.) in a way that meets all of the standards, they must
also have mastered the underlying (supportive and procedural) knowledge
and routine skills. Thus, for both domain-specific and domain-general skills,
performance-based, whole-task assessment is the only type of assessment
that is an integral and required part of the Ten Steps. There might, however,
for the domain-general skills, also be reasons for assessing learners not only
on their level of whole-task performance but also on the levels of acquired
knowledge (i.e., remembering and understanding) and part-task perfor-
mance. Then, you might develop a whole program of assessment for the
domain-general skills in the same way described for domain-specific skills in
this chapter.

15.6 Summary
• Miller’s pyramid can be used to distinguish between assessments on the
‘knows,’ ‘knows how,’ ‘shows how,’ and ‘does’ levels.
• For summative assessment of learning tasks, one should only use unsup-
ported/unguided tasks (the ‘empty’ circles in the schematic training
blueprint).
• For summative assessment of tasks in a professional workplace setting,
include narrative expert judgments and give the assessments a strong,
formative function.
• Entrustable professional activities (EPAs) are responsibilities learners are
allowed to perform without supervision after being summatively assessed
on unsupported/unguided tasks at a particular level of complexity (i.e.,
task class).
• A combination of case descriptions and open-ended what-to-do ques-
tions can be used to assess cognitive strategies; essays, open questions,
and assignments to draw up concept maps can be used to assess mental
models.
• Progress testing can be used for assessing supportive information.
It measures how a student’s multidisciplinary knowledge develops
throughout the educational program and, therefore, nicely fits the Ten
Steps.
• Summative assessment of part-tasks should only be used for to-be-
automated recurrent skills and focus not only on accuracy but also on
speed and time-sharing capabilities. Training and assessing other types of
part-tasks is discouraged by the Ten Steps.

• Performance assessments for domain-general skills such as self-directed
learning, information literacy, and deliberate practice need to be specified
in the same way as for domain-specific skills (i.e., according to the guide-
lines in Step 2).
• If domain-general skills are taught and assessed, the educational program
aimed at developing domain-specific skills must be purposefully designed
so that recursivity allows for practicing and assessing domain-general
skills.

Glossary Terms

Entrustable Professional Activity (EPA); Miller’s pyramid; Objective, Struc-
tured Clinical Examination (OSCE); Progress testing; Recursivity; Script
concordance test; Situational judgment test; Summative assessment
Chapter 16

Closing Remarks

This chapter concludes the Ten Steps to Complex Learning. The introduc-
tory Chapters 1–3 discussed the main aims of the model, the four blue-
print components, and the Ten Steps for developing educational blueprints
based on the four components. Chapters 4–13 discussed each of the steps
in detail. Chapters 14 and 15 discussed, in order, the teaching of domain-
general skills and programmatic assessment in educational programs based
on the Ten Steps. This final chapter briefly discusses the position of the Ten
Steps in the current field of instructional design and education and sketches
some directions for the model’s further development.

DOI: 10.4324/9781003322481-16

16.1 Positioning the Ten Steps


The Ten Steps shares its focus on real-life tasks as the basis for designing
learning tasks with established educational models such as problem-based
learning (Norman & Schmidt, 2000), project-based learning (Blumenfeld
et al., 1991; Peng et al., 2019), and the case method (Barnes et al., 1994).
But, unlike the Ten Steps, these established educational models provide a
template for organizing education rather than a systematic design approach.
As a result, the educational programs are inflexible, do not take the growing
competencies of learners into account, and, after some time, are perceived
as boring because learners may be working in the same pattern for years
(Moust et al., 2005). Models for task-centered learning, including the Ten
Steps, offer a systematic approach to designing educational programs that
are much more flexible, take learners’ growing competencies into account,
and offer a varied use of instructional methods and technologies. The next
subsections discuss the Ten Steps as an exponent of task-centered learning
models, with ‘toppling the design approach’ as its cornerstone and changing
teacher roles as its most important practical consequence.

Task-Centered Learning

Overviews of instructional design models (e.g., Göksu et al., 2017; Reigeluth et al., 2017; Van Merriënboer & Kirschner, 2018; Wasson & Kirschner, 2020) place the Ten Steps in an increasingly popular and growing category of instructional design models under the heading 'task-centered learning.' In addition to the Ten Steps or 4C/ID, they include models such as elaboration theory, cognitive apprenticeship learning, and learning by doing (Francom & Gardner, 2014). Merrill (2020) carefully analyzed and compared models for task-centered learning and formulated five 'first principles of instruction' that are shared by all of them:

1. Learners are engaged in carrying out real-life tasks or solving real-world problems.
2. Existing knowledge is activated as a foundation for new knowledge.
3. New knowledge is demonstrated to the learner.
4. New knowledge is applied by the learner.
5. New knowledge is integrated into the learner's world.

It is interesting to note that different models share these five fundamental principles. However, they are based on very different theoretical assumptions and use different conceptual descriptions and practical methods. When research in different research paradigms (Van Merriënboer & de Bruin, 2014) yields similar conclusions, this provides extra support to the credibility and validity of the claims made. Moreover, there is increasing empirical evidence that applying the five principles helps improve transfer of learning; that is, the ability to apply what has been learned to new tasks and in real-life contexts (Francom, 2017; Van Merriënboer & Kester, 2008).

Toppling the Design Approach

When teachers or novice designers use the Ten Steps for the first time, they
often find the model difficult to apply. We think this is mainly caused by
their prior experiences in education—either as students or as teachers—
where teaching typically starts from presenting theoretical information, and
then, practice tasks are coupled to the information presented. The Ten Steps
replaces this knowledge-first approach with a task-first approach, which
might initially feel counter-intuitive. It reflects a toppling, where practice
tasks are no longer coupled to the information presented but, in contrast,
where helpful information is coupled to learning tasks that are specified first
(see Figure 16.1). Together, these changes may appear to reflect a (moderate) constructivist view of learning and instruction because the learning tasks primarily drive a process of active knowledge construction by the learner. However, in contrast to most current constructivist approaches to learning, the Ten Steps places a strong accent on support and guidance by the teacher and/or available instructional materials (Van Merriënboer & Kirschner, 2018).

Figure 16.1 Toppling the design approach.
The toppling of the design approach is a recurring theme in education. Nowadays, it can, for example, be found in discussions on the flipped classroom, an approach that is well in line with the Ten Steps (see, for example, Marcellis et al., 2018). The flipped classroom is a blended approach to education where theoretical information, traditionally presented in the classroom, is now presented outside the classroom, often, though not necessarily, online. In turn, the learning tasks wherein the theoretical information is applied, typically assigned as homework, are now carried out in the classroom with support and guidance from the teacher (O'Flaherty & Phillips, 2015). This method reallocates valuable face-to-face time with a knowledgeable teacher from information presentation to flexible guidance and support for learners engaging in carrying out meaningful learning tasks. Teachers transitioning to the flipped classroom experience the same challenges as those applying the Ten Steps. The conventional mindset involves planning theoretical information presentation in the classroom first and subsequently coupling (homework) assignments to this information. In the flipped classroom approach, they must first think about a learning task for in-class work and then link the theoretical information for home study to this task. This switch requires a new mindset and is not easy to make.

Changing Teacher Roles

In educational programs based on the Ten Steps, the roles of teachers will change—both with regard to preparing and realizing lessons or educational programs. Concerning preparing lessons, the flipped classroom, as described earlier, makes clear that teachers need to fulfill their role as instructional designer—the 'teacher as designer' (Kali et al., 2015). Traditionally, the contents of study books used for a particular course largely defined the contents of the lessons. But in a flipped classroom or a program based on the Ten Steps, the lessons are based on learning tasks that students work on. Often, these learning tasks will not be available in existing instructional materials but must be designed and developed. Important considerations for teachers who act as designers are the following:

• Teacher design teams are strongly preferred over individual teachers designing their learning tasks. Learning tasks based on real-life tasks typically appeal to knowledge from different subjects or disciplines, and these different subjects and disciplines should ideally be represented by different design team members (Dolmans et al., 2013).
• Design teams can be strengthened by adding professional instructional designers, media specialists, and, most importantly, practitioners from the professional field relevant to the team (Hoogveld et al., 2005). These practitioners will often have a better overview of new developments in the field than teachers, which helps develop up-to-date and professionally relevant learning tasks.
• Design teams can also be strengthened by including learners. In participatory design, learners—as stakeholders in the teaching/learning process—also participate in the design team and help shape their education. This is important because discrepancies between teachers' and learners' perspectives may inhibit the intended implementation of instructional methods (Könings et al., 2014; Sarfo & Elen, 2007, 2008).

After the development of the program, for realizing lessons, teacher roles in a program based on the Ten Steps will typically include those of (a) tutor, (b) presenter, (c) assistant looking over your shoulder (ALOYS), (d) instructor, and (e) coach. As a tutor, the teacher's main role is guiding learners in carrying out the learning tasks and giving them cognitive feedback (Wolterinck et al., 2022). As a presenter, teachers will stick to their traditional role of explaining how a learning domain is organized but will also increasingly fulfill the role of expert model, showing how to approach real-life tasks systematically and how the application of rules-of-thumb may help overcome difficulties (Van Gog et al., 2004, 2005). As ALOYS, the teacher's role is to present JIT how-to information on routine aspects of learning tasks or part-task practice to learners and give them corrective feedback (Kester et al., 2001). As instructor, the teacher will provide part-task practice to learners and apply principles such as changing performance criteria and distributing practice. Finally, teachers will increasingly fulfill the role of coach, helping learners develop domain-general skills related to task selection, deliberate practice, information literacy, creativity, and/or teamwork (i.e., provide second-order scaffolding). All of these new teacher roles pose new requirements for the form and content of future teacher training programs, with more intensive use of new technologies (Kirschner & Selinger, 2003; Yan et al., 2012), more attention to the complex skill of providing differentiated instruction (Frèrejean et al., 2021; Van Geel et al., 2019), new physical learning spaces (Van Merriënboer et al., 2017), and knowledge communities that allow for the exchange of learning experiences (Kirschner & Wopereis, 2003).

16.2 Future Directions


The Ten Steps has a strong basis in research conducted since the late 1980s. The main aim of this book is to provide a practical description of the Ten Steps, not to discuss the research providing evidence for the instructional methods it prescribes. Nevertheless, a sketch of future directions must inevitably start from research questions because they will drive the further development of the Ten Steps as a research-based model (Table 16.1). The five main questions relate to different 'blends' of learning, mass customization and big data, intertwining the training of domain-general and domain-specific skills, the role of motivation and emotion, and computer-based tools to support the instructional design process.

Table 16.1 Five research questions driving the further development of the Ten Steps.

Question: How can we best integrate new educational technologies in programs based on the Ten Steps?
Topics: Double-blended learning, game-facilitated curricula, artificial intelligence

Question: How can we deal with (very) large groups of learners in programs based on the Ten Steps?
Topics: Mass customization, big data, learning analytics

Question: How can we integrate the teaching of metacognitive, domain-general skills in programs based on the Ten Steps?
Topics: Intertwining training blueprints, recursivity in design

Question: How can we maintain learner motivation and prevent negative emotions in programs based on the Ten Steps?
Topics: Self-Determination Theory and Ten Steps, impact of emotions on learning

Question: How can we support designers in developing programs based on the Ten Steps?
Topics: Computer-based ID tools, specification of individualized learning trajectories

Blended Learning and Game-Facilitated Curricula

Many well-developed and attractive educational multimedia (e.g., simulations, games, animations, online video, etc.) and hypermedia (e.g., websites and e-learning applications including multimedia) are underused in education because it is difficult to integrate them into existing educational programs. Teachers are usually required to adapt their teaching to the media rather than adopt the media in their teaching. Therefore, the Ten Steps promotes the integral design of educational programs in which technology-enhanced and face-to-face activities strengthen each other. Chapter 2 introduced the concept of 'double-blended learning' as a promising approach, combining online and face-to-face activities and learning in the educational setting and the workplace. Vandewaetere et al. (2015) describe such a program for general practitioners (GPs) in training; the program consists of courses dealing with the real-life tasks a GP is confronted with (courses are: 'patients with diabetes,' 'the tired patient,' 'sick children,' 'patients with lower-back pain,' etc.). All courses include an e-learning application for lifelike learning that sets three types of learning tasks: (a) tasks that learners carry out in the e-learning environment itself, either individually or in small groups; (b) tasks that they need to prepare for face-to-face meetings at the educational institute; and (c) tasks that they need to carry out in the workplace; that is, in the GP's practice where they do their internships. The group meetings at the educational institute play a connecting role: Here, both the tasks that learners prepared online and the tasks they conducted in the workplace are discussed and evaluated in face-to-face group meetings, and, in a similar vein, the tasks to conduct in the future period are planned and discussed. This double-blended approach forces learners to transfer what they learn to the workplace.
A similar example in senior vocational education is CRAFT, a game-facilitated curriculum in mechatronics based on the Ten Steps (Van Bussel et al., 2014). Mechatronics is a multidisciplinary field of science that combines mechanical engineering, electronics, computer engineering, telecommunications engineering, systems engineering, and/or control engineering. CRAFT contains a simulated workplace with virtual machines that students can use to build a variety of mechatronic products (see upper part of Figure 16.2) and an amusement park where they can build attractions from these products (see bottom part of Figure 16.2); these attractions can be shared with peer learners, friends, and family. The machines in the simulated workplace replicate the real machines they can use in the school or workplace (i.e., they have high functional fidelity). CRAFT, however, is not solely used as a serious game but primarily as a tool to run a game-facilitated curriculum because it sets learning tasks that students must carry out: (a) in the simulated workplace in the game, (b) on real machines in the school setting, and (c) as interns at the workplace. These learning tasks are either automatically assessed by the game, by the teacher in the school setting, or by the supervisor at the workplace. All assessments are collected into a development portfolio in the game. This allows for monitoring student progress and, based upon this, adapting the provision of learning tasks to individual student needs, yielding a highly flexible curriculum. It should be clear that research on double-blended learning and game-facilitated curricula is still in its infancy and raises many specific questions for future research.

Figure 16.2 CRAFT—a game-facilitated curriculum for mechatronics. In the game part, students can construct products in a simulated workplace (top) and then use these products to build attractions in an amusement park (bottom).

Mass Customization and Big Data

Due to societal changes, there is an increasing need for flexible educational programs to serve heterogeneous target groups with learners who differ greatly in age, prior education and experience, and cultural background. The Ten Steps allows such programs to be developed because it offers unique opportunities for sequencing learning tasks in individualized learning trajectories. Moreover, these programs offer the opportunity to help learners develop lifelong learning skills by giving them increasingly more control over the selection of learning tasks and other resources as their self-directed learning skills develop. However, this requires a form of second-order scaffolding, such as a teacher/coach who uses a development portfolio for each learner to monitor their performance and progress, identify points of improvement, and advise on points to improve and new tasks to select. The teacher/coach will give the learner increasing control and responsibility as they progress. This approach works well for relatively small groups of learners, but the question arises: How can we deal with (very) large groups of learners?
The first part of the answer to this question may be mass customization, which uses computer-aided systems that combine the low unit costs of mass production with the flexibility of individual customization (Schellekens et al., 2010a, 2010b). The basic principle is that, in a group of ten learners, it may be very costly to offer one particular task to only one of the ten students, but in a group of 1,000 learners, there will be many more students needing this one task, which makes it much more economically feasible to provide it. Thus, mass customization identifies subgroups of students to group together because they have the same needs and, thus, can work on the same learning task or receive the same information (i.e., it follows a service-oriented approach). A second part of the answer lies in big data and learning analytics (Edwards & Fenwick, 2016). If big data are available on learners' background characteristics and performance assessments on a series of selected learning tasks (cf. development portfolio), learning analytics could make it possible to give individual learners advice on the best tasks to select or even to decide on the level of control that can be safely given to them. Future research should aim to develop the algorithms that make this possible and relieve the tasks of a human coach.
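To make this grouping idea concrete, consider the following minimal sketch (in Python; the data and names are our own illustration, not part of any existing system). It shows how a service-oriented approach might batch learners who need the same task, so that one task offering serves a whole subgroup:

```python
from collections import defaultdict

# Hypothetical portfolio records: (learner, most urgent learning need),
# as they might be derived from performance assessments in each
# learner's development portfolio.
portfolio = [
    ("ann", "lighting"), ("ben", "storyboarding"),
    ("cas", "lighting"), ("dee", "audio"),
    ("eli", "lighting"), ("fay", "storyboarding"),
]

def customize(records):
    """Group learners who share the same need, so the same learning
    task can be offered once to each subgroup (mass customization)."""
    groups = defaultdict(list)
    for learner, need in records:
        groups[need].append(learner)
    return groups

for need, learners in customize(portfolio).items():
    print(f"Task addressing '{need}' -> offered to {learners}")
```

With ten learners, a task on 'lighting' might serve only one of them; with a thousand, each subgroup becomes large enough to make offering that task economically feasible.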

Intertwining Domain-General Skills in the Training Blueprint

The importance of teaching domain-general skills has been acknowledged in education for a very long time. Nowadays, they are often erroneously called 21st-century skills (see Chapter 14), but the teaching of metacognitive skills such as learning skills, literacy skills, thinking skills, and social skills (Figure 14.2) has been debated for as long as formal education has existed. Until today, the teaching of domain-general skills has not been very successful. One approach is to train the domain-general skills outside a particular domain in an isolated fashion. Think of creativity courses claiming to teach creative skills; library courses claiming to teach information literacy skills; study-strategy courses claiming to teach self-directed learning skills; problem-solving courses claiming to teach general problem-solving skills; and the latest—courses claiming to teach learners thinking skills! Research shows, at best, very limited success of those courses and very limited or no transfer of learning (Tricot & Sweller, 2014). Another, more dangerous, approach is to require learners to apply these skills without explicitly teaching them and allowing them to practice the skills. Think of courses that ask learners to select their learning tasks and provide them with tasks that are much too easy or much too difficult or courses that ask learners to search their learning resources, only for them to end up with essays on Baconian science with texts about the 20th-century British artist Francis Bacon and about the problems that Martin Luther King Jr. had with Pope Leo X and Holy Roman Emperor Charles V (Kirschner & van Merriënboer, 2013).
The Ten Steps assumes that teaching domain-general skills can only be effective when meeting three requirements. First, domain-general skills are only taught in the context of acquiring domain-specific skills. Thus, the 'primary' blueprint for training the domain-specific skill or competency should enable learners to practice the domain-general skill: If learners must develop information literacy skills, they must be allowed to search for their learning resources; if learners must develop team skills, they must be required to work in teams on complex team tasks; if learners must develop deliberate practice skills, they must be able to decide which routine behaviors they want to automate further; and so forth ('recursivity'; see Chapter 14). Second, domain-general skills must be trained according to exactly the same principles as domain-specific skills. Thus, the 'secondary' blueprint for training domain-general skills contains learning tasks based on real-life tasks, shows variability of practice, and applies second-order scaffolding of support and guidance—if necessary—on increasingly higher levels of complexity. In addition, learners must receive the necessary supportive information (including cognitive feedback), procedural information, and part-task practice. Third, the 'primary training blueprint' and the 'secondary training blueprint' need to be intertwined so learners can simultaneously develop domain-specific and domain-general skills. These three requirements have the status of hypotheses and not proven principles at the time of this writing; future research needs to establish how successful this approach to teaching domain-general skills really is. One piece of research on this by Noroozi et al. (2017) concerns the presentation, scaffolding, support, and guidance for achieving the domain-general skill of argumentation. Furthermore, there are many ways to intertwine blueprints, and research is also needed to investigate the effects of different options.

Motivation and Emotion

An important question is how to maintain learners' motivation and deal with negative emotions in educational programs based on the Ten Steps. Systematic research on motivation and the Ten Steps is largely missing, but self-determination theory (SDT; Ryan & Deci, 2000) might offer a good starting point for doing such research. SDT distinguishes three basic human needs concerning intrinsic motivation; namely, a feeling of competence, of relatedness, and of autonomy (Figure 16.3). Competence refers to feeling the need to be effective in dealing with the environment (i.e., self-efficacy). In the Ten Steps, it directly relates to the complexity of learning tasks as well as available support and guidance. Challenging learning tasks that can, nevertheless, be completed successfully thanks to available support and guidance will invite learners to invest effort in learning (Paas et al., 2005) and positively affect feelings of competence. In contrast, tasks that are far too difficult and constantly lead to failure will harm feelings of competence. Thus, one might expect positive effects on intrinsic motivation when individualized learning trajectories adapt complexity and available support and guidance of learning tasks to individual needs.

Figure 16.3 Self-determination theory and three basic needs.

Relatedness is feeling the need for close relationships with others, including teachers and peer learners. It stresses the importance of using learning tasks that require group work and instructional methods for the study of supportive information that use collaborative learning. For educational programs largely realized online, it also stresses the importance of learning networks: "online social networks through which users share knowledge and jointly develop new knowledge. This way, learning networks may enrich the experience of formal, school-based learning and form a viable setting for professional development" (Sloep & Berlanga, 2011, p. 55). They do this through tools and processes such as instant messaging, email and listservs, blogs, wikis, feeds (RSS, Atom), podcasting and vodcasting, open educational learning resources, tags and social bookmarking, etc. A learning network is, thus, an online social network of individuals and information that uses tools and processes to stimulate and promote the learning of those within the network. A strong learning network will probably increase feelings of relatedness and, thus, increase intrinsic motivation.
The third need affecting intrinsic motivation is the feeling of autonomy: feeling that one has control over the course of one's life. Here it is important to note that feeling autonomy is not the same as having autonomy. The learner is given the opportunity (i.e., the autonomy) to make meaningful choices in what they are studying, but these choices are not limitless and are preprogrammed by the teacher. In the Ten Steps, it is directly related to offering opportunities for self-directed learning, such as is done in on-demand education (task-selection skills), resource-based learning (information literacy skills), and solicited information presentation in combination with independent part-task practice (deliberate practice skills; see Chapter 14). Giving learners some autonomy will increase their intrinsic motivation and is, thus, a condition sine qua non for learning. However, giving them too much autonomy will lead to frequent failures in learning and performance, decreasing feelings of competence, and lower intrinsic motivation. In short, there is a delicate balance between autonomy on the one hand and support and guidance on the other. This balance needs to be carefully maintained in a process of second-order scaffolding. Often, the teacher or other intelligent agent needs to maintain the balance for each learner, but learning analytics and learning networks, as described here, may also contribute to it.
In this light, one thing needs to be noted. Many people assume that motivation and success are reciprocal and, thus, that motivation will lead to success and that success will lead to motivation. Garon-Carrier et al. (2016) conducted a longitudinal study on the relationship between intrinsic motivation and achievement in mathematics. They found that mathematics achievement had a significant positive effect on intrinsic motivation, but not the other way around; intrinsic motivation did not affect mathematics achievement. The Ten Steps, in its use of task classes going from least to most complex, along with its use of support and guidance, ensures success and thus positively affects motivation.
Finally, when students work on learning tasks based on real-life tasks, this may also affect their emotions, which may in turn affect or mediate both cognitive outcomes (e.g., learning and cognitive load; Young et al., 2021) and noncognitive outcomes (e.g., motivation; Choi et al., 2014; Zwart et al., 2022). For example, Fraser et al. (2014) report research on the emotional and cognitive impact of unexpected patient death in simulation-based training of medical emergency skills. They found that the unexpected death of the mannequin yielded more negative emotions, higher cognitive load, and poorer learning outcomes. They hypothesized that negative emotions may limit human cognitive processing capacity, negatively affecting learning. These findings directly affect the design of learning tasks; for example, one solution might be the online monitoring of physiological measures of cognitive load and adapting tasks accordingly (Gerjets et al., 2014). In any event, more research is needed on how to deal with adverse emotional experiences—and emotions, in general—during learning in simulated and real task environments.

Instructional Design Tools

Compared to other design fields, few computer-based design tools are available for instructional designers (Van Merriënboer & Martens, 2002). The available tools are either very general (e.g., for making flowcharts or concept maps) or focus on the ADDIE phases of development (e.g., producing slide shows, authoring e-learning applications) and implementation (e.g., learning management systems, MOOC platforms). For the analysis and design phases, De Croock et al. (2002) describe a prototype design tool that supports the flexible application of the Ten Steps, allowing for zigzag design approaches. Its main function is to support the construction of a training blueprint consisting of the four components. The design tool provides functions for entering, editing, storing, maintaining, and reusing analysis and design products, providing templates for easily entering information in a way consistent with the Ten Steps. Furthermore, it provides functions to check whether the analysis and design products are complete, internally consistent, and in line with the Ten Steps.
Future research is needed to develop more powerful computer-based tools that offer functionalities additional to the ones already mentioned. First, such tools should support the construction of a training blueprint along with the standards and scoring rubrics necessary for assessing learner performance and combine both in some kind of tasks-standards matrix (cf. Figure 5.4). Second, tools should support the realization of individualized learning trajectories; thus, they must specify how information on learner performance and learner progress is either used to select tasks or to generate advice to learners on how to select their tasks. Finally, the tools should support intertwining blueprints for domain-specific and domain-general skills. This complex task would greatly benefit from computer support and will likely become more important given the current focus in education on metacognitive skills.
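As a thought experiment only, the following minimal sketch (in Python) suggests how such a tool might represent a blueprint and check its internal consistency; every class, field, and rule here is invented for illustration and does not correspond to the prototype described above:

```python
from dataclasses import dataclass, field

@dataclass
class LearningTask:
    name: str
    support: str  # e.g., 'worked-out example', 'completion task', 'conventional task'
    procedural_info: list = field(default_factory=list)

@dataclass
class TaskClass:
    level: int             # simple-to-complex position (Step 3)
    supportive_info: list  # SAPs and mental models (Steps 4-6)
    tasks: list            # learning tasks within this task class

def check_blueprint(task_classes):
    """Flag simple consistency problems, e.g., a task class without
    supportive information, or one whose scaffolding never fades to an
    unsupported (conventional) task."""
    issues = []
    for tc in task_classes:
        if not tc.supportive_info:
            issues.append(f"Task class {tc.level}: no supportive information.")
        if not any(t.support == "conventional task" for t in tc.tasks):
            issues.append(f"Task class {tc.level}: support never fades to a conventional task.")
    return issues
```

A real tool would, of course, need far richer representations (standards, scoring rubrics, part-task practice), but even simple checks like these illustrate how the completeness and consistency checks mentioned above could be automated.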

Artificial Intelligence

When we wrote the third edition of the Ten Steps, artificial intelligence was like some exotic animal. A few people had heard about it, fewer had seen it, and only a privileged few had ever experienced it. It was something people dreamed of: a computer application that could think. Now, when writing this fourth edition, there have been several rapid technological advances in this field, not the least of which is the rise of large language models used in tools such as ChatGPT. These models are trained to learn patterns and relationships in language data to capture grammar rules, vocabulary, and contextual understanding from the texts that they have been exposed to. Once trained, they can perform various language-related tasks like generating answers to questions or carrying out tasks based on their 'learned' knowledge. As a consequence, we now have a new generation of students surrounded by AI tools, creating headaches for teachers and educational institutions who rely on assessments of essays, open-ended questions, and term papers for grading and certification. It is clear that education will have to deal with this and must be prepared.

We briefly sketch some opportunities these AI tools offer instructional designers applying the Ten Steps:

• When designing learning tasks, we can ask AI applications to generate the contents and scenarios for many types of learning tasks. For example, the case presented in Chapter 4 of an elderly man with a lung condition was partly generated by ChatGPT 4.0 (a minimal sketch of this way of working follows this list). Such case developments can significantly speed up the work of the instructional designer when generating tasks with different levels of support (e.g., worked-out examples) and guidance (e.g., process worksheets). However, it is important to be careful. Because we know that this and other AI programs can also 'hallucinate'—that is, make things up that are not true—we ran the case by an experienced pulmonologist to evaluate its quality. He determined that the generated case was adequate but could be improved and signaled how and where. While AI tools are helpful, they cannot replace either design expertise or subject-matter expertise in a design team. Regarding sequencing learning tasks, we can foresee much more intelligent electronic development portfolios with extensive 'knowledge' of the learning program and performance objectives that can automatically retrieve and analyze student assessment data to advise each learner on the ideal sequence of learning tasks—or even generate that sequence.
• For designing supportive information, we can use large language models to quickly write instructional texts and even adapt those to different levels of understanding. A recent study (Baillifard et al., 2023) demonstrated an AI tutor automatically generating questions from existing course materials and then developing a dynamic neural network model of students' grasp of key concepts, enabling the personalization of the tutoring to each student's level and abilities. The results indicate that students who actively engaged with the AI tutor achieved significantly higher grades than those who did not. Besides using AI for producing text or generating tests to stimulate elaboration, it can also potentially take on the role of study coach, helping learners monitor and control their learning process.
• For designing procedural information, AI might help provide JIT information displays. For example, when the learner is wearing augmented reality glasses, AI could help the learner carry out procedures by highlighting real-life objects (e.g., tools, parts, equipment), indicating where to look, or displaying step-by-step instructions directly in the field of vision. Other applications might involve recording and recognizing a learner's posture, location, or actions and providing immediate corrective feedback. Its verbal capabilities could help narrate instructions or respond to the learner's questions, avoiding the split-attention effect often created by manuals. In addition, AI's generative capabilities might be useful for creating the numerous practice items required for part-task practice.
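As an illustration of the first point, here is a minimal sketch of AI-assisted task drafting, assuming the OpenAI Python client is available; the prompt wording, model name, and support levels are our own illustrative choices, and any generated case must still be reviewed by a subject-matter expert because models hallucinate:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Illustrative levels from the Ten Steps' scaffolding continuum.
SUPPORT_LEVELS = ["worked-out example", "completion task", "conventional task"]

def draft_learning_task(real_life_task: str, support: str) -> str:
    """Ask a large language model for a first draft of a learning task
    at a given level of support. The draft is raw material only."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You draft learning tasks for instructional designers."},
            {"role": "user",
             "content": f"Write a '{support}' version of this real-life task: "
                        f"{real_life_task}"},
        ],
    )
    return response.choices[0].message.content

for level in SUPPORT_LEVELS:
    print(draft_learning_task("produce a 3-minute event recap video", level))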

Many more applications of AI can be envisioned. But, despite some recent noteworthy breakthroughs and people's high expectations, we see current large language models hallucinating and generating text that is incorrect, misleading, or nonsensical but that sounds plausible on the surface. These inaccuracies often stem from the models' training data, which includes a mix of accurate and incorrect information from the Internet. Other AI tools, such as those specialized in recognizing and generating speech, images, or audio, struggle with similar problems: They produce impressive outputs but are not always reliable and trustworthy. While we acknowledge that these developments are exciting and can have significant implications for education in general and instructional design in particular, we take a reserved stance toward AI at the time of writing. A great deal of research is still necessary. We encourage readers to explore the benefits of AI but to remain critical and careful when applying them in their instructional designs and implementations.

16.3 A Final Word


In April 2000, the magazine Training expressed growing discontent with instructional systems design in a cover story titled Is ISD R.I.P.? (Gordon & Zemke, 2000). A panel of instructional design experts addressed the question of whether instructional design was dead. It argued that mainstream instructional design was waning because it could not deal with the highly complex skills needed in modern society, was not based on sound learning and performance theory, and used a cookbook approach that forced designers into an unproductive straitjacket. Whether the arguments for writing off instructional design were completely valid or not, more than 20 years later, these are precisely the central issues in task-centered models such as the Ten Steps: a focus on complex learning, a strong basis in learning theory, and a highly flexible design approach (Francom, 2017; Van Merriënboer et al., 2018). We hope that the Ten Steps, as well as other task-centered models for whole-task education, contribute to a further revival of the field of instructional design. Such a revival is badly needed to cope with the educational requirements of a fast-changing and increasingly more complex world.

Glossary Terms

Large Language Models


Appendix 1

Step 1—Design Learning Tasks
Design learning tasks or whole-task problems that require learners to carry out all relevant constituent skills in a coordinated fashion. Learning tasks must show high variability of practice. Start with learning tasks with high support and guidance and slowly decrease the support and guidance to zero (scaffolding). Specify product-oriented learner support (e.g., case studies, completion problems) to exemplify standards and templates for good solutions to problems and process-oriented learner support (e.g., modeling examples, process worksheets) to exemplify effective approaches to problem solving. The learners may carry out learning tasks in a simulated or real task environment; the physical fidelity of the task environment increases as the training program progresses.
Step 2—Design Performance Assessments
Set performance objectives for all constituent skills contained in whole-task performance. A skill hierarchy can represent their interrelationships. Performance objectives always include an action verb, the conditions under which the constituent skill needs to be carried out, tools and objects used, and the standards (i.e., criteria, values, attitudes) for acceptable performance. Classify constituent skills as nonrecurrent, recurrent, or recurrent-to-be-automated. Use identified standards for developing scoring rubrics. Development portfolios allow for assessing all relevant aspects of performance and monitoring improvement on particular aspects (standard-centered assessment) as well as overall performance (task-centered assessment).

Step 3—Sequence Learning Tasks
Organize learning tasks in simple-to-complex task classes to develop a global outline of the training program. Each task class describes a category of more or less equally complex tasks that learners will work on during the training. If even the simplest task class is still too complex to start training, then split the skill hierarchy into skill clusters. These clusters can be seen as parts of the complex cognitive skill and are best sequenced in a backward chaining approach with snowballing. Consider using a cyclical process of performing tasks, assessing task performance, and selecting new tasks for designing individualized learning trajectories. Use on-demand education if learners need to develop task-selection skills.
Step 4—Design Supportive Information
Supportive information presents and exemplifies generally applicable information describing cognitive strategies (Systematic Approach to Problem solving—SAPs) and mental models (conceptual, causal, and structural models). Present it at the beginning of, and parallel to, each task class. Use an inductive-expository strategy for novice learners (first present examples and then the general information) and a deductive-inquisitory strategy for more experienced learners later in the educational program (first present the general information and then ask the learners for examples). Finally, design cognitive feedback inviting learners to compare their cognitive strategies and mental models with those of others. Use resource-based learning if learners need to develop information literacy skills.
Step 5—Analyze Cognitive Strategies
Analyze the cognitive strategies that guide expert task-performance or problem-solving behavior. The analysis results typically take the form of SAPs, which describe the successive phases and subphases in task performance and represent them in a linear sequence or a SAP-chart. Specify heuristics or rules-of-thumb that might be helpful to successfully complete each phase or subphase and specify the standards for successful completion of each phase or subphase. SAPs may pertain to the whole complex cognitive skill or one or more of its nonrecurrent constituent skills.
Step 6—Analyze Mental Models
Analyze the mental models that experts use to reason about their tasks. Mental models are representations of how the world is organized for a particular domain. They help to carry out nonrecurrent constituent skills. The analysis results typically take the form of conceptual models (What is this?), structural models (How is this organized?), and causal models (How does this work?). The different types of models can also be combined when necessary. Make a (graphic) representation of each model by identifying and describing simple schemata (i.e., concepts, plans, and principles) and how these relate to each other.
Step 7—Design Procedural Information
Design procedural information for each learning task in the form of JIT information displays, specifying how to perform the recurrent aspects of this task. Give complete information at first, and then fade it. Teach at the level of the least experienced learner. Do not require memorization; instead, have the procedure available during practice. Provide all steps in the procedure and all facts, concepts, and principles necessary to carry it out. Give demonstrations of a procedure, instances of facts/concepts, et cetera, coinciding with the case study. Give immediate, corrective feedback about what is wrong, why it is wrong, and corrective hints. Use solicited information presentation if learners need to develop deliberate practice skills.
Step 8—Analyze Cognitive Rules
Analyze expert task-performance to identify cognitive rules or procedures that algorithmically describe correct performance of recurrent constituent skills. A cognitive rule describes the exact condition under which a certain action has to be carried out (IF condition THEN action). A procedure is a set of steps and decisions always applied in a prescribed order. Perform a procedural analysis (e.g., information processing analysis) for recurrent skills that show a temporal order of steps. Perform a rule-based analysis for recurrent skills that show no temporal order of steps.
Step 9—Analyze Prerequisite Knowledge
Further analyze the results of the procedural or rule-based analysis from Step 8 and specify for each cognitive rule the knowledge that enables correctly carrying out this rule or, for each procedural step or decision, the knowledge that enables carrying the step out or making the decision. The question is: What must the learner know to be able to apply this rule or to carry out this procedural step correctly? Represent the knowledge as simple facts, concepts, principles, or plans.

Step 10—Design Part-Task Practice
Design part-task practice for recurrent constituent skills that need a very high level of automaticity after the training and for which learning tasks cannot provide enough practice to reach this. For procedures with many steps and/or decisions—or large rule sets—work from parts to the whole. Then provide repetitive practice until mastered. Use divergent examples that are representative of the span of situations where the skill will be used. Practice first for accuracy, next for speed, and finally, for speed and accuracy together under high workload. Practice on a distributed, not massed, training schedule. Intermix with learning tasks. Use independent part-task practice if learners need to develop deliberate practice skills.
Appendix 2

Task Class 1: Learners produce videos for fictional clients under the following conditions.
• The video length is 1–3 minutes
• The clients desire aftermovies or event recaps, summarizing the atmosphere at an event
• Locations are indoors
• There is plenty of time for the recording
• No interaction with other on-camera participants

Supportive Information (inductive strategy): Modeling example
Learners shadow a professional video team while they produce an aftermovie of the yearly local cultural festival. Learners can interview the video team during and after the project.

Supportive Information: Presentation of cognitive strategies
• Global SAP for preproduction, production, and postproduction phases
• SAP for shooting video (e.g., basic strategies for creating compositions and capturing audio)
• SAPs for basic video editing (e.g., selecting footage and editing the video)

Supportive Information: Presentation of mental models
• Conceptual models of basic cinematography, such as composition and lighting
• Structural models of cameras
• Causal models of how camera settings affect the image and how audio (music, effects) affects mood
Learning Task 1.1
Support: Worked-out example
Guidance: Performance constraints
Learners receive a production plan, intermediate footage, and the final video of an existing aftermovie. They evaluate the quality of each aspect, but their evaluations must be approved before they can continue with the next aspect.
Learning Task 1.2
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and intermediate footage. They must select the footage and edit the video into the final product. A tutor guides learners in studying the given materials and using the postproduction software.
Procedural Information (unsolicited):
• How-to instructions for using postproduction software
• How-to instructions for exporting the video
Learning Task 1.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert created a recap video for an (indoor) automotive show. In groups, students imitate this but for a local exposition.
Procedural Information (unsolicited):
• How-to instructions for operating cameras, microphones, and equipment
• How-to instructions for using postproduction software (fading)
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 1.3.
Learning Task 1.4
Support: Conventional task
Guidance: None
Learners create an individual recap video for an indoor event of their choosing.
Procedural Information (solicited):
• Manuals for operating cameras, microphones, and equipment
• Manuals for using postproduction software
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 1.4.
Task Class 2: Learners produce videos for fictional clients under the following conditions.
• The video length is 3–5 minutes
• The clients desire promotional videos for a product, service, or event
• Locations are indoors
• There is plenty of time for the recording
• Participant dynamics are favorable (e.g., experienced participants, easy to work with)

Supportive Information (inductive strategy): Case study
Learners study three worked-out examples (i.e., case studies) of promotional videos for a backpack with integrated solar panels, a virtual fitness platform, and an urban art festival. In groups, a tutor guides them in comparing and evaluating each example's goals, scripts, camera use, lighting, etc.

Supportive Information: Presentation of cognitive strategies
• SAP for developing a story for promotional videos
• SAPs for interacting with people and collaborating with the crew
• SAPs for shooting video (detailed strategies for creating compositions and capturing audio)

Supportive Information: Inquiry for mental models. Learners are asked to identify examples of:
• Different types of cameras, microphones, and lights (conceptual models)
• Story arcs (structural models)
Learning Task 2.1
Support: Completion task
Guidance: Process worksheet
Learners receive the client briefing, synopsis, and storyboard for a video promoting a new coffee machine. They follow a process worksheet to record footage and create the final video.
Procedural Information (unsolicited):
• How-to instructions for lighting and selecting lenses and microphones

Part-task practice:
• Sketching a storyboard

Learning Task 2.2
Support: Reverse task
Guidance: Tutoring
Learners study a promotional video about a new startup in the field of artificial intelligence. A tutor helps them work backward to explain critical decisions in the production phase and develop a storyboard that fits the video and meets the client's requirements.
Procedural Information (unsolicited):
• How-to instructions for sketching a storyboard
Learning Task 2.3: Imitation task
Support: Conventional task
Guidance: Modeling
Learners study a modeling example of how a teacher/expert creates a short social media advertisement video for a small online clothing store. Learners remake the ad for a small online art store.
Procedural Information (solicited):
• How-to instructions for lighting and selecting lenses and microphones
• Platform with how-to videos for using postproduction software
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.3.
Learning Task 2.4
Support: Conventional task
Guidance: Tutoring
Under guidance from a tutor, learners create a promotional video highlighting the products or services of a local store.
Procedural Information (solicited):
• Platform with how-to videos for using postproduction software
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 2.4.
Task Class 3: Learners produce videos for fictional clients under the following conditions.
• The video length is increased to 5–10 minutes
• The clients desire informational or educational videos
• Locations are indoor or outdoor
• There is plenty of time for the recording
• Participant dynamics are more challenging (e.g., inexperienced/nervous participants)

This task class employs the completion strategy.

Supportive Information: Presentation of cognitive strategies
• SAP for coaching people being filmed
• SAP for developing an informative or educational story
• SAPs for advanced video editing (e.g., animations, visualizing complex ideas)

Supportive Information: Presentation of mental models
• Conceptual models of outdoor equipment, such as filters, reflectors, and deadcats
• Causal models of how people learn from multimedia materials
Learning Task 3.1: Modeling example
Support: Worked-out example
Guidance: Modeling
Learners observe an expert thinking aloud while working outdoors with experienced and inexperienced cyclists to create an informational video about safe cycling.
Procedural Information:
• Demonstrations of using outdoor equipment
• Demonstrations of how to add effects, titles, and graphics
Learning Task 3.2
Support: Completion task
Guidance: Process worksheet
Learners receive a production plan and footage with bad takes and good takes. They must select good takes and edit them into a video informing patients about a medical product. A process worksheet provides guidance.
Procedural Information (unsolicited):
• How-to instructions for adding effects, titles, and graphics
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 3.2.
Part-task practice:
• Operating camera and equipment

Learning Task 3.3
Support: Completion task
Guidance: Tutoring
Learners receive a production plan and footage of an expert (i.e., actor) explaining content but showing nervousness and making mistakes. Learners must reshoot the footage with the actor, coaching them to arrive at the desired result and finish the final video.
Procedural Information (solicited):
• Platform with how-to videos for adding effects, titles, and graphics
Learning Task 3.4
Support: Completion task
Guidance: Performance constraints
Learners receive a synopsis for a training video on a construction site and must write the script, record the footage, and create the final video. Each step requires approval from a teacher before they can continue.
Procedural Information (unsolicited):
• How-to instructions for using outdoor equipment
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 3.4.
Learning Task 3.5
Support: Conventional task
Guidance: None
An expert in home organization and decluttering with no on-camera experience wants an explainer video. Learners carry out all phases to create the final video for the client.
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 3.5.
Task Class 4: Learners produce videos for fictional clients under the following conditions.
• The video length is longer than 10 minutes
• The clients desire documentaries or interview videos
• Locations can be outdoors in bad weather
• There is limited time for the recording
• Participant dynamics are challenging (e.g., interviewing people, working with animals)

Supportive Information: Presentation of cognitive strategies
• SAPs for conducting background research
• SAPs for visual storytelling with documentaries and interviewing subjects on camera
• SAP for working with animals

Supportive Information: Presentation of mental models
• Conceptual models of different types of documentaries
• Causal models of problems caused by weather conditions (e.g., heavy wind, rain, bright sun, etc.)
Learning Task 4.1
Support: Worked-out example
Guidance: Tutoring
Learners study the production plans and completed videos of three documentaries. A tutor facilitates a group discussion about recording outdoors, storytelling, interviewing techniques, etc.
Procedural Information:
• Demonstrations of how to select microphones and place them for interviews
Learning Task 4.2
Support: Nonspecific goal task
Guidance: Tutoring
Learners receive a script for a documentary about a historic outdoor location. They visit the site and simulate various challenging situations, such as unexpected weather conditions, breaking equipment, etc. Learners must develop approaches for dealing with such challenges.
Procedural Information (solicited):
• How-to instructions for using outdoor equipment
Learning Task 4.3
Support: Conventional task
Guidance: Process worksheet
Learners create a 15-minute documentary about a farmer, requiring them to interview, work with animals, and record outdoors. They receive a process worksheet for guidance.
Procedural Information (solicited):
• Job aids for selecting microphones and placing them for interviews
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 4.3.
Learning Task 4.4
Support: Conventional task
Guidance: None
Learners create a 30-minute documentary on a topic of their choosing.
Supportive Information: Cognitive feedback
Learners receive feedback on their approach to Learning Task 4.4.
Glossary

21st Century Skills (Chapter 14): Domain-general skills people need to live, work, and realize their potential in the contemporary world. They include learning skills, literacy skills, thinking skills, and social skills.
4C/ID Model (Chapter 2): Abbreviation for the four-component instructional design model, where the training blueprint is built from a backbone of learning tasks, to which supportive information, procedural information, and part-task practice are connected. 4C/ID is the forerunner of the Ten Steps.
Adaptive Learning (Chapter 2): In the Ten Steps, adaptive learning refers to the dynamic selection of learning tasks by a teacher or intelligent agent in such a way that their difficulty, level of support and guidance, and available real-world features are optimized to the needs of an individual learner. Adaptive learning contrasts with on-demand education.
ADDIE Model (Chapter 3): A generic Instructional Systems Design (ISD) approach made up of the steps Analyze, Design, Develop, Implement, and Evaluate. The Ten Steps is an Instructional Design (ID) model with a clear focus on Analysis and Design.
Atomistic Design (Chapter 2): In contrast to a holistic approach, a design approach that analyzes a complex learning domain into small pieces that often correspond with specific learning objectives and then teaches the domain piece-by-piece without paying attention to the relationships between pieces. This hinders complex learning and competence development.
Attention Focusing (Chapter 13): A technique for the teaching of complex procedures where the learner's attention is focused on those procedural steps or rules that are difficult or dangerous to perform.
Authentic Task (Chapter 4): A task as it exists in real life. In the Ten Steps, learning tasks are designed on the basis of authentic or real-life tasks. However, because they often contain support and guidance and need not be performed in the real task environment, learning tasks need not be identical to authentic tasks.
Backward Chaining (Chapter 6): An approach to part-task sequencing, where the training starts with constituent skills that are performed last and works toward constituent skills that are performed first during regular task performance (counter-to-performance order).
Blended Learning (Chapter 2): Any combination of face-to-face learning and online learning. When frequent self-quizzes are used in the online part, it can be equally effective as face-to-face learning. See also double-blended learning.
Case Method (Chapter 4): An educational method where learners in small groups study cases or case studies.
Case Study (Chapter 4): A description of a given begin state, a desired goal state, and a chosen solution. A case study requires learners to actively participate in an actual or hypothetical problem situation in the real world and may take different forms, such as a description of a particular event or situation, an artificially designed object, a design simulation, or a process simulation. A case study may be used either as a learning task that must be studied or as an illustration of a domain model as part of the supportive information.
Causal Model (Chapter 7): A specific type of domain model describing the principles and their interrelationships important in a particular task domain. A causal model results from the analysis of mental models and is important for interpreting events, causal reasoning, giving explanations, and making predictions.
Cognitive Feedback (Chapter 7): A type of feedback that allows learners to reflect on the quality of found solutions or the quality of the problem-solving process; typically used to provide feedback on the quality of performance of nonrecurrent aspects of a complex skill.
Cognitive Load Theory 2 A theory stating that limited human
working-memory capacity has far-reaching
implications for teaching and learning.
Well-designed training systems prevent
cognitive overload, decrease cognitive
load that is not relevant to learning,
and optimize cognitive load relevant to
learning.
Cognitive Rule 11 A mental representation of a consistent
relationship between particular
conditions and a (mental) action to be
taken under these conditions. In the Ten
Steps, cognitive rules are analyzed as
IF-THEN rules or combinations of IF-
THEN rules in a procedure.
Cognitive Strategy 8 A mental representation of how to
approach problems in a particular task
domain. In the Ten Steps, a cognitive
strategy is analyzed as a Systematic
Approach to Problem solving (SAP),
containing a description of phases in
problem solving and rules-of-thumb that
may help to complete each of the phases.
Cognitive Task Analysis (CTA) 11 A family of methods and tools for gaining access to the mental processes that
organize and give meaning to observable
behavior. In the Ten Steps, steps 2–3, 5–6,
and 8–9 make up an integrated system of
CTA.
Cognitive Tool 4 A device that helps learners carry out
cognitive learning activities and critical
thinking. Cognitive tools are learner
controlled and actively engage learners in
the creation of knowledge that reflects
their comprehension and conception of
information.
Compartmentalization 2 The tendency in traditional education to
teach knowledge, skills, and attitudes
separately. This approach hinders
complex learning and competence
development.
Competence Map 5 An often-visual representation of a
combination of complex cognitive and
higher-order skills, highly integrated
knowledge structures, interpersonal and
social skills, and attitudes and values.
It resembles a skill hierarchy but often
contains more than only enabling and
temporal relationships.
Completion Strategy 4 Sequencing learning tasks from case studies
or worked examples that students
must study, via completion tasks with
incomplete solutions that must be
finished, to conventional problems
that must be solved. The completion
strategy is an example of fading support
as learners acquire more expertise (i.e.,
scaffolding) and has been found to have
positive effects on inductive learning and
transfer.
Completion Task 4 A learning task describing a given begin
state, a desired goal state, and a partial
solution. The partial solution must be
completed by the learner.
Complexity 2 The number of elements inherent to
performing a learning task, along with
the degree of interaction between
those elements. Note that a task with
a particular complexity can be difficult
for a novice learner but easy for a more
experienced learner.
Component Fluency Hypothesis 13 A hypothesis reflecting the idea that training routine aspects or recurrent
components of a complex task to a
very high level of automaticity frees
up cognitive resources, and thus,
has a positive effect on whole-task
performance.
Concept 12 A mental representation of a class of objects, events, or other entities by their characteristic features and/or mental images. In the Ten Steps, single
concepts are analyzed as part of the
prerequisite knowledge.
Concept Map 9 A visual representation of a conceptual
model in which the relationships between
concepts are not labeled.
Conceptual Model 7 A specific type of domain model describing
the concepts and their interrelationships
that are important for solving problems
in a particular task domain. A conceptual
model results from the analysis of mental
models and is important for classifying or
describing objects, events, and activities.
Constituent Skill 5 Subskills or component skills of a complex
cognitive skill that may best be seen as
aspects of the whole skill. The constituent
skills that make up a whole complex
cognitive skill are identified through a
process of skill decomposition.
Contextual Interference 4 A type of variability in which contextual factors inhibit quick and smooth mastery
of a skill (also called ‘interleaving’). The
Ten Steps suggest using high contextual
interference across learning tasks but
low contextual interference for part-task
practice.
Contingent Tutoring 10 A form of unsolicited information
presentation where a teacher or tutor
closely monitors a learner who is working
on a learning task and gives specific and
just-in-time directions on how to solve
the problem or perform the task (i.e., the
‘assistant looking over your shoulder’).
Control 14 A subprocess referring to how learners respond to the environment or adapt their behavior based on their thoughts (also called 'regulation'). Monitoring and control are two complementary subprocesses in the learning cycle.
Conventional Task 4 A learning task describing a given begin
state and a desired goal state. The learner
must independently generate a solution.
Corrective Feedback 10 A type of feedback that gives learners
immediate information on the quality of
performance of recurrent aspects of a
complex skill. Corrective feedback often
takes the form of hints.
Cued Retrospective Reporting 8 A form of retrospective reporting on a problem-solving process, cued by a record of this process in the form of a video recording, audio recording, eye-movement recording, or the like. See
retrospective reporting.
Deductive-Expository Presentation Strategy 7 An approach to the presentation of supportive information that works from giving general information to giving
examples that illustrate this information.
In the Ten Steps, this strategy is only
used if the available time is limited,
learners have relevant prior knowledge,
and a deep level of understanding is not
strictly necessary.
Deductive-Inquisitory Presentation Strategy 7 An approach to the presentation of supportive information that works from giving general information to asking the
learners to come up with examples that
illustrate this general information. In the
Ten Steps, this strategy is used by default
for more experienced learners.
Deliberate Practice 14 Practice activities aimed at improving
specific—recurrent—aspects of
performance through repetition and
successive refinement. In the Ten Steps,
deliberate practice skills can be trained
in the context of independent part-
task practice and solicited information
presentation.
Demonstration 10 An example illustrating the performance of
a procedure or the application of a set
of rules. Demonstrations may be used
to exemplify the rules or procedures
presented by a just-in-time information
display.
Dependent Part-Task Practice 2 Part-task practice for a selected, to-be-automated, recurrent task aspect that
is explicitly provided by a teacher or
another intelligent agent after this aspect
has been introduced in the context of
whole and meaningful learning tasks.
Dependent part-task practice contrasts
with independent part-task practice.
Desirable Difficulty 4 A learning task or study strategy that
requires an extra but desirable amount
of effort, thereby improving long-term
performance. Such difficulties create challenges and slow the rate of apparent learning but often optimize long-term retention and transfer.
Development Portfolio 5 An assessment instrument used to
gather assessment results over time.
At each moment in time, the portfolio
gives information on the learner’s
overall level of performance (i.e., task-
centered assessment) and the quality of
performance on particular aspects of the
task (i.e., standard-centered assessment).
Divergence of Practice Items 13 The principle that a set of practice items must be representative of all variants of
the procedure or set of rules practiced by
the learners. The same principle applies
to a set of demonstrations or instances.
Domain Model 7 A description of a learning domain in terms
of applicable facts, concepts, principles,
and plans. A domain model is the result of
the analysis of mental models. Examples
of domain models are conceptual models,
causal models, and structural models.
Double-Blended Learning 2 An educational program combining (a) face-to-face learning and online learning and
(b) learning at the workplace and learning
in a simulated-task environment. This
type of program nicely fits the Ten Steps.
Double-Classified Constituent Skill 5 The classification of a critical skill as both recurrent and nonrecurrent. Training design (e.g., intermixed training) should then help the learner automate the routine while remaining able to recognize when the routine does not work.
Elaboration 2 A category of learning processes by which
learners connect new information
elements to each other and to knowledge
already available in long-term memory.
Elaboration is a form of schema
construction that is especially important
for the learning of supportive information
using, for example, multimedia,
hypermedia, and social media.
Emphasis Manipulation 6 An approach to the sequencing of
learning tasks in which different sets
of constituent skills are emphasized in
different task classes. In the first task
class, only a limited set of constituent
skills is emphasized and, in later task
classes, increasingly more constituent
skills are emphasized.
Empirical Analysis 8 An analysis of skills and knowledge used by
target learners. It describes how tasks are
actually performed rather than how they
should be performed (rational analysis).
Entrustable Professional Activity (EPA) 15 Tasks to be entrusted to unsupervised execution by a trainee once he or she has attained sufficient specific competence.
In the Ten Steps, EPAs can be granted
once a learner successfully performs the
unsupported/unguided learning tasks in
one particular task class.
Epistemic Game 7 A knowledge-generating activity that asks
learners to structure or restructure
information, providing them with new
ways of looking at supportive information.
Expertise Reversal Effect 4 An effect that shows that instructional methods that are highly effective
with novice learners can lose their
effectiveness and even have negative
effects when used with more experienced
learners and vice versa.
Expository Methods 7 Instructional methods explicitly presenting
meaningful relationships in supportive
information to the learner.
Extraneous Cognitive Load 2 Cognitive load imposed by cognitive processes not directly relevant to learning
(e.g., searching for relevant information,
weak-method problem solving, integrating
different sources of information). Well-
designed training should decrease
extraneous cognitive load.
Fading 10 The principle indicating that the
presentation of information and the
provision of help become increasingly
superfluous as the learners gain more
expertise, and thus, should gradually
diminish.
Fault Tree 9 A specific type of functional model, helping
the learner perform troubleshooting tasks
because it identifies all of the potential
causes of system failure.
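
As a data structure, a fault tree is simply a tree whose internal nodes are AND/OR gates over potential causes. The following minimal sketch (Python; the failure domain and all names are invented for illustration, not taken from the Ten Steps) shows the idea:

# A node is either a leaf cause (str) or a (gate, [children]) pair.
fault_tree = ("OR", [
    "battery empty",
    ("AND", ["power cable loose", "charger defective"]),
])

def failed(node, observed):
    """Return True if the observed basic causes explain the failure."""
    if isinstance(node, str):
        return node in observed
    gate, children = node
    results = [failed(child, observed) for child in children]
    return all(results) if gate == "AND" else any(results)

print(failed(fault_tree, {"battery empty"}))  # True: one OR branch suffices

A learner troubleshooting with such a model works down from the failure at the root toward the basic causes at the leaves.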
Feature List 12 A list of all ‘facts’ that are true for the
instances of a particular concept. For
example, the feature list for the concept
‘bed’ might read: (a) you can lie on it,
(b) it has a flat surface, and (c) it has a
mattress. Concrete concepts may also be
described by their physical models.
Fidelity 4 A measure of the degree of correspondence
of a given quality of a simulated task
environment with the real world. Types
of fidelity are psychological fidelity (e.g.,
skills to be performed by the learner),
functional fidelity (e.g., behavior and
interface of the simulation), and physical
fidelity (e.g., look, smell, feel of the
simulation).
Flipped Classroom 2 A blended approach where the presentation
of supportive information that normally
occurs in the classroom takes place
outside of the classroom (often online,
but not necessarily) and the work on
learning tasks that traditionally is done as
homework happens in the classroom.
Formative Assessment 5 In the Ten Steps, a type of assessment
that assesses the quality of a learner’s
performance on learning tasks in order to
improve her or his learning process.
Forward Chaining 6 An approach to part-task sequencing where
the training starts with constituent skills
that are performed first during regular
task-performance and works toward
constituent skills that are performed last
during regular task-performance (i.e., a
natural-process order).
Fractionation 13 An approach to part-task sequencing in
which the procedure is broken down into
different functional parts.
Fragmentation 2 The tendency in traditional education to
analyze a complex learning domain in
small pieces that often correspond with
specific learning objectives and then teach
the domain piece-by-piece without paying
attention to the relationships between
pieces. This hinders complex learning and
competence development.
Functional Fidelity 4 The degree to which a simulated task
environment behaves in a way similar to
the real task environment in reaction to
the tasks executed by the learner.
Generative Learning Activity 7 Generative learning involves actively making sense of to-be-learned information by
mentally reorganizing and integrating
it with one’s prior knowledge, thereby
enabling learners to apply what they have
learned to new situations.
Germane Cognitive Load 2 Cognitive load imposed by processes directly relevant for learning (i.e., schema
construction and automation). Well-
designed instruction should optimize
germane cognitive load within the limits
of the total available working-memory
capacity.
Guidance 4 A form of process-oriented support that
helps learners systematically approach
problems because they are guided
through the problem-solving phases and
prompted to use relevant rules-of-thumb.
Guided Discovery Learning 7 An inductive approach to information presentation that works from examples
to general information and where learners
are guided to discover the meaningful
relationships in the general information.
In the Ten Steps, this strategy is only
used if there is ample instructional time,
learners have well-developed discovery
skills, and a deep level of understanding is
required.
Holistic Design 2 In contrast to an atomistic approach, a
design approach that does not analyze a
complex domain into unrelated pieces but
that simplifies complex tasks in such a
way that learners can be confronted with
whole, meaningful tasks right from the
start of the educational program. The Ten
Steps is an example of a holistic design
approach.
IF-THEN Rule 3, 11 A rule stating which actions to take under
particular conditions. IF-THEN rules are
identified in the rule-based analysis of
recurrent constituent skills.
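
To make the notion concrete, an IF-THEN rule can be represented as a condition-action pair. A minimal sketch in Python (the camera-related rules and all names are invented for illustration; the Ten Steps prescribes no particular notation):

from dataclasses import dataclass
from typing import Callable

@dataclass
class IfThenRule:
    condition: Callable[[dict], bool]  # the IF part
    action: str                        # the THEN part

rules = [
    IfThenRule(lambda s: s["light"] == "low", "open the aperture"),
    IfThenRule(lambda s: s["subject_moving"], "increase the shutter speed"),
]

state = {"light": "low", "subject_moving": False}
for rule in rules:
    if rule.condition(state):        # IF the conditions hold...
        print("THEN:", rule.action)  # ...the associated action is taken.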
Imitation Task 4 A learning task describing a case study or
worked example as well as a given state
and a goal state for a similar problem for which the learner must generate a solution.
Independent Part-Task Practice 2 In the Ten Steps, a form of deliberate practice where the learner may decide
which to-be-automated recurrent aspects
of the learning tasks they will additionally
practice and when these will be practiced.
Independent part-task practice contrasts
with dependent part-task practice.
Inductive Learning 2 A category of learning processes, including generalization and discrimination, by which learners
mindfully abstract from their concrete
experiences. Inductive learning is a form
of schema construction that is especially
important for learning from learning tasks
in real or simulated task environments.
Inductive-Expository Presentation Strategy 7 An approach to the presentation of supportive information that works from giving examples to giving the
general information illustrated in these
examples. In the Ten Steps, this is the
default strategy for presenting supportive
information to novice learners.
Inductive-Inquisitory Presentation Strategy 7 An approach to the presentation of supportive information that works from giving examples to asking the learners
to come up with the general information
illustrated in these examples. This form
of pure discovery learning is discouraged
by the Ten Steps and should be replaced
by guided discovery learning.
Information Literacy Skills 14 Domain-general skills enabling a learner to search, scan, process, and organize
supportive information from various
learning resources to fulfill information
needs resulting from the work on learning
tasks.
Information-Processing Analysis 11 A task-analytical technique for analyzing recurrent constituent skills that is mainly
used when the actions taken and/or
decisions made show a temporal order
but are largely covert and unobservable.
Inquisitory Method 7 An instructional method asking learners
to produce or construct meaningful
relationships from what they already
know. The inquisitory method fits a
guided discovery strategy for information
presentation.
Instance 10 A concrete example of a concept, principle,
or plan. Instances may be used to
exemplify the general information given in
just-in-time information displays.
Instructional Systems Design (ISD) 3 An approach to the design of instructional systems in which phases are distinguished such as analysis, design, development, implementation, and evaluation (e.g., ADDIE). The Ten Steps focuses on the analysis of the complex skill and the design of an educational blueprint and is, thus, best used in combination with a broader ISD model.
Intermixed Training 13 A training program in which work on
learning tasks is interspersed with one
or more subprograms for part-task
practice. Intermixed training is suitable
for training double-classified skills
where learning tasks are used to create
occasional impasses confronting learners
with situations where developed routines
do not work, necessitating a switch from
automatic processing to problem solving.
Interprofessional Learning 4 Occurs when learning tasks require learners from different professional fields to
perform team-based, professional tasks.
Intrinsic Cognitive Load 2 Cognitive load that is a direct function of performing the task; in particular, of
the number of elements that must be
simultaneously processed in working
memory (i.e., element interactivity).
Intuitive Cognitive Strategies 8 The cognitive strategies that learners possess prior to training. Intuitive
strategies easily interfere with learning
supportive information, just as typical
errors easily interfere with learning
procedural information.
Intuitive Mental Models 9 The mental models that learners possess prior to training. Intuitive mental models
easily interfere with learning supportive
information, just as misconceptions
easily interfere with learning procedural
information.
Iteration 3 In instructional design, the phenomenon
that the outcomes of particular design
activities later in the design process
provide input to activities earlier in
the design process. Rapid prototyping
is an approach to plan such iterations
beforehand.
Judgment of Learning (JoL) 6 An assessment that a person makes about how well they have learned particular
information; that is, a prediction about
how likely they will be to remember a
target item when later given a cue to
remember.
Just-in-Time (JIT) Information Display 10 A unit of procedural information meant to present one procedure or one rule for reaching a meaningful goal or subgoal.
Just-in-time information displays are best
presented precisely when learners need
them.
Knowledge Progression 6 An approach to sequencing learning tasks in which task classes are based on
increasingly more elaborated knowledge
models. Task classes might be based on
increasingly more elaborated cognitive
strategies or increasingly more elaborated
mental models (i.e., mental-model
progression).
Layers of Necessity 3 The phenomenon that not all activities
in a design process might be necessary
because circumstances greatly differ
between projects. In the Ten Steps,
the conditions under which a particular
activity might be skipped are indicated as
part of each step.
Learner Control 6 A situation where it is the learner who
controls the instruction. In the Ten
Steps, it is possible to have learners
select their own learning tasks
(on-demand education), their own
supportive information (resource-
based learning), their own procedural
information (solicited-information
presentation), and their own part-task
practice (independent part-task practice).
Learning Task 2 The first blueprint component and the
backbone of an educational program.
Each learning task is designed on the
basis of a real-life task and promotes
inductive learning through meaningful
whole-task experiences. Learning tasks
are performed in a real or simulated task
environment.
Malrule 10 An incorrect cognitive rule leading to persistent errors.
Mash-up 3 A combination or mixing of different elements, especially content from different sources. Most commonly used in music, where it is a creative work made by blending two or more pre-recorded songs.
Matching 13 A technique for teaching complex
procedures where correct
demonstrations of rules or procedures
are compared and contrasted with their
incorrect counterparts.
Means-Ends Analysis 4 A weak problem-solving method where,
given a current state and a goal state, the
learner searches for an action reducing
the difference between the two. The
action is performed on the current state
to produce a new state, and the process
is recursively applied to this new state
and the goal state until the goal state
has been reached. Means-ends analysis is
typical of novices, yields a high cognitive
load, and does not contribute to learning.
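
The recursive difference-reduction loop described above can be sketched in a few lines of Python (a toy number-line problem; all names are invented for illustration):

def means_ends(current, goal, steps=()):
    """Recursively choose an action that reduces the difference
    between the current state and the goal state."""
    if current == goal:
        return list(steps)
    action = 1 if goal > current else -1  # local difference reduction
    return means_ends(current + action, goal, steps + (action,))

print(means_ends(3, 7))  # [1, 1, 1, 1]: each step reduces the difference

Note that this local search uses no knowledge of the domain, which is precisely why it burdens working memory without producing reusable schemas.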
Mental Model 9 A rich mental representation of how a task
domain is organized. In the Ten Steps, a
mental model is analyzed in conceptual
models (What is this?), structural models
(How is this built?), or causal models
(How does this work?)
Metadata 6 Metadata (Greek meta ‘after’ and Latin
data ‘information’) are data that describe
other data. In the Ten Steps, important
metadata that enable the selection
of learning tasks pertain to their
(a) difficulty, (b) support and guidance,
and (c) real-life dimensions on which
tasks differ from each other.
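
By way of illustration, a learning-task record carrying these three kinds of metadata might look as follows (a Python sketch under assumed field names and scales; the Ten Steps prescribes the metadata, not a data format):

# A learning task tagged with the three kinds of metadata named above.
task = {
    "id": "edit-interview-footage",  # invented task name
    "difficulty": 3,                 # assumed 1-5 scale
    "support": "completion",         # e.g., worked example / completion / conventional
    "real_life_dimensions": {"genre": "interview", "cameras": 2},
}

def select_next(tasks, learner_level):
    """Pick the least difficult task at or above the learner's level."""
    candidates = [t for t in tasks if t["difficulty"] >= learner_level]
    return min(candidates, key=lambda t: t["difficulty"], default=None)

print(select_next([task], learner_level=2)["id"])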
Microworlds 7 Simulations of conceptual domains that
offer a highly interactive approach to the
presentation of cases because learners
can change the settings of particular
variables and study the effects of those
changes on other variables. They help
learners construct mental models of a
learning domain.
Miller’s Pyramid 15 A common distinction between four levels
in a program of assessment: (1) knows,
(2) knows how, (3) shows how, and
(4) does. The Ten Steps focuses on
formative assessment on the shows-how
and does levels, although summative
assessment on all levels is possible when
required/desired.
Minimal Manual 10 A manual presenting minimal, task-
oriented information on how to perform
procedural tasks. In the Ten Steps,
the minimal manual fits the solicited
presentation of procedural information.
Misconception 12 A learner's intuitive, though often incomplete
and/or faulty, understanding of concepts,
principles, and plans. Misconceptions
(and typical errors) easily interfere with
learning procedural information, just as
intuitive cognitive strategies and mental
models easily interfere with learning
supportive information.
Modality Principle 10 Replacing information presentation in one modality, such as a written explanatory text coupled with another source of visual information such as a diagram (i.e.,
unimodal; using only a visual modality) with
information in two modalities such as a
spoken explanatory text and a visual source
of information (i.e., multimodal; using visual
and auditory modalities) has a positive
effect on learning and transfer. Contrast
this with the redundancy principle.
Model Tracing 13 An approach to contingent tutoring where
the learner’s behavior is traced back to
identified IF-THEN rules. If the tracing process fails, a deviation from the model trace must have occurred, and feedback is provided to the learner.
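
A minimal sketch of such a tracing loop (Python; the rule trace is invented for illustration, and real model-tracing tutors are far richer):

model_trace = ["select_clip", "trim_clip", "export_clip"]  # assumed rule trace

def trace(learner_steps):
    """Compare each learner step with the model trace; the first
    deviation makes tracing fail and triggers feedback."""
    for done, expected in zip(learner_steps, model_trace):
        if done != expected:
            return f"Feedback: expected '{expected}', you did '{done}'."
    return "All steps match the model trace."

print(trace(["select_clip", "export_clip"]))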
Modeling Example 4 A worked example or case study, together
with a demonstration of the problem-
solving process, leading to the presented
solution. A modeling example, for
instance, may show an expert working
on a problem and explaining why they
are doing what they are doing in order to
reach a solution. A modeling example may
be used as a learning task that must be
studied or as an illustration of a SAP as
part of supportive information.
Monitoring 14 A subprocess referring to how learners form (metacognitive) thoughts about their own learning. Monitoring and control are two complementary subprocesses in the learning cycle.
Multimedia Principle 7 This principle indicates that words and pictures together are more conducive to learning than words or pictures alone.
This is the case because multimedia
presentations make use of both the visual
and auditory channel of working memory.
Multiple Representations 13 A technique for the teaching of complex procedures, where multiple
representation formats such as texts
and visuals (e.g., flowcharts) are used to
present difficult procedures or rules.
Multiple Viewpoints 7 The presentation of supportive information
in such a way that the learner is
stimulated to take different viewpoints
or perspectives on the same information,
aiding elaboration and transfer.
Nonrecurrent Constituent Skill 5 An aspect of complex task performance for which the desired exit behavior
varies from problem situation to problem
situation (i.e., it involves problem solving,
reasoning, or decision making). By default,
the Ten Steps categorize constituent skills
as nonrecurrent.
Non-Specific Goal Task 4 A learning task describing a given begin state and a loosely described goal state.
The learner must generate solutions for
self-defined goals. Also called goal-free
problems.
Objective Structured Clinical Examination (OSCE) 15 Planned clinical encounters where examinees rotate through a series of stations and perform specific clinical
tasks within a specified time period.
Performance is typically assessed with
checklists. In the Ten Steps, this type of
assessment of nonrecurrent constituent
skills is discouraged.
On-Demand Education 2 In the Ten Steps, an educational approach
that refers to a type of self-directed
learning where the learner is responsible
for selecting future learning tasks.
On-demand education contrasts with
adaptive learning.
Open Educational Resources (OER) 3 Any type of educational materials that are in the public domain or introduced with
an open license so that anyone can legally
and freely copy, use, adapt, and re-share
them for teaching, learning, and assessing.
Overlearning 13 The learning of to-be-automated recurrent
aspects of performance up to a very high
level of automation involving part-task
practice with an enormous amount of
repetition.
Part-Task Practice 2 One of the four blueprint components
in which additional practice items are
provided to train a selected routine
aspect of a complex skill (i.e., to-be-
automated, recurrent constituent skill) up
to a very high level of automation through
a learning process called strengthening.
Part-Task Sequencing 6 An approach to sequencing in which the
training works from parts of the task
toward the whole task. The Ten Steps do
not recommend part-task sequencing for
learning tasks, unless it is impossible to
find a version of the whole task that is
easy enough to start the training with.
Part-Whole Sequencing 6 An approach to sequencing in which a sequence of easy-to-difficult parts
is developed first (i.e., part-task
sequencing), after which whole-
task sequencing is applied to further
simplify the parts. The Ten Steps do not
recommend part-whole sequencing for
learning tasks.
Pebble-in-the-Pond Approach 3 A practical and content-oriented approach to instructional design that starts by
specifying what the learners will do; that
is, the design of learning tasks (i.e., the
pebble). This one pebble starts all of the
other activities rolling. The term was
introduced by David Merrill.
Peer Assessment 6 Assessment of a learner’s performance
by peers or ‘near’ peers. Typically, the
assessments are based on given standards
and scoring rubrics.
Performance Assessment 5 Assessment based on more-or-less authentic tasks such as activities,
exercises, or problems that require
students to show what they can do.
Performance Constraint 4 An instructional measure that makes particular actions that are
not relevant for desired performance
unavailable to learners. Thus, unnecessary
actions are blocked. The use of
performance constraints is also called a
training wheels approach.
Performance Objective 5 An expression of a desired result of a learning experience. In the Ten Steps,
each constituent skill has its own
performance objective containing
an action verb, a description of the
conditions under which the desired
performance might occur, a description of
tools and objects used, and a description
of standards (i.e., criteria, values,
attitudes) for acceptable performance.
Physical Fidelity 4 The degree to which real-world operational
equipment is reproduced in a simulated
task environment (e.g., looks like, smells
like, and feels like). According to the
Ten Steps, physical fidelity might be
low in early task classes but should
increase over later task classes as learner
expertise develops.
Physical Model 12 Drawings, pictures, photographs,
miniatures, or other representations of—
often concrete—concepts for which it is
important that learners acquire a mental
image. The identification of physical
models is often important for the tools
and objects that have been specified as
part of the performance objectives.
Plan 12 A mental representation in which the location-in-time and/or location-in-space relationships between concepts are dominant. Plans that organize concepts in
time are called scripts; plans that organize
concepts in space are called templates. In
the Ten Steps, plans are analyzed as part
of prerequisite knowledge.
Planned Information Provision 2 In the Ten Steps, this is when a teacher or other intelligent agent decides which
supportive information must be studied
by the learners and when they must
study it. Planned information provision
contrasts with resource-based learning.
Practice Item 13 An item that asks the learner to perform a
selected, recurrent aspect of a complex
skill or a part thereof. Practice items help
learners develop routines and are the
building blocks for part-task practice.
Prerequisite Knowledge 12 Mental representations that are prerequisite to correct application
of cognitive rules. In the Ten Steps,
prerequisite knowledge is analyzed into
concepts, principles, and plans.
Primary Medium 4 In a multimedia learning environment,
the medium used to drive the learning
process. In the Ten Steps, the primary
medium is always a real or simulated task
environment in which the learning tasks
can be performed.
Primary Training Blueprint 14 If both domain-specific and domain-general skills are trained, the term primary training blueprint refers to the blueprint for the domain-specific skill. It can be intertwined with the secondary training blueprint for the domain-general skill(s).
Principle 12 Mental representations in which cause-
effect and natural-process relationships
between concepts are dominant. In the
Ten Steps, principles are analyzed as part
of prerequisite knowledge.
Problem-Based Learning (PBL) 4 An inductive approach to learning where the learning tasks have the form of
‘problems.’ Students discuss a problem
in a small group, search and consult
resources to solve it, and finally, come
up with a general explanation for the
particular phenomenon described in the
problem. Students are guided by a tutor.
Procedural Information 2 One of the four blueprint components. This information is relevant for learning
the recurrent/routine aspects of learning
tasks through a learning process called
rule formation.
Procedure 11 A step-by-step description of recurrent
aspects of task performance where
steps relate to actions and decisions.
Procedures are identified in an
information processing analysis.
Process Worksheet 4 A device to guide learners through a
systematic task-completion process.
A process worksheet typically provides
a description of subsequent problem-
solving phases as well as the rules-of-
thumb that may help to complete each
phase successfully.
Process-Oriented Support 8 Support that helps learners carry out a learning task that could not be performed
without that help. Process-oriented
support provides additional information
on the problem-solving process in terms
of phases to go through and rules-of-
thumb that may help to complete each of
the phases.
Product-Oriented Support 9 Support that helps learners carry out a learning task that could not be performed
without that help. Product-oriented
support provides additional information
on the given begin state, the goal state,
and possible solutions.
Progress Testing 15 Periodically administering a comprehensive test that samples knowledge across all subjects, reflecting the final attainment level of the whole curriculum. It nicely fits the
Ten Steps, as it assesses the whole,
multidisciplinary body of knowledge
learners need in order to carry out real-
life tasks.
Project-Based Learning 4 An inductive approach to learning where
the learning tasks have the form of
‘projects.’ Students work on the project
together, often taking particular roles,
and produce an advice or product
that answers a (research or practical)
question. Students are guided by a
teacher.
Protocol Portfolio Scoring (PPS) 6 An approach to using development portfolios that is fully consistent with
the Ten Steps and where (a) the applied
standards are constant throughout the
whole educational program, (b) a mix
of assessment methods and assessors
is used, and (c) a distinction is made
between task-centered and standard-
centered assessment.
Psychological Fidelity 4 The degree to which training tasks
reproduce actual behaviors or behavioral
processes required in real-life tasks.
According to the Ten Steps, psychological
fidelity of learning tasks should be as
high as possible from the start of the
educational program (i.e., learning tasks
should be based on real-life tasks).
Rapid Prototyping 3 An approach for planning iterations in
the design process. In the Ten Steps,
rapid prototyping can be realized by
developing one or more learning tasks
that fit one particular task class (i.e.,
the ‘prototypes’) and testing them with
real users before developing additional
learning tasks and other task classes.
Rational Analysis 8 An analysis of skills and knowledge used
by expert task-performers. It describes
how tasks should be performed, rather
than how they are actually performed
(empirical analysis).
Recurrent Constituent Skills 5 An aspect of complex task performance for which the desired exit behavior is highly
similar from problem situation to problem
situation (i.e., a routine). A special
category is formed by to-be-automated
recurrent constituent skills, which may
require additional part-task practice.
Recursivity 15 In the Ten Steps, recursivity is a requirement for teaching and assessing domain-general skills: The design of an
educational program for teaching domain-
general skills is only possible on the basis
of an educational program for teaching
domain-specific skills that requires the
performance of domain-general skills.
Redundancy Principle 7 Replacing multiple sources of information
that are self-contained (i.e., they can
be understood on their own) with one
source of information. This has a positive
effect on elaborative learning and transfer.
Resource-Based Learning (RBL) 2 In the Ten Steps, a type of self-directed learning where the learner is responsible
for deciding which supportive information
to study and when to study it. RBL
contrasts with planned information
provision and enables the teaching of
information literacy skills.
Retrospective Reporting 8 In retrospective reporting, participants are instructed to report the thoughts they
had while they were working on a task
immediately after completing it.
Reverse Task 4 A learning task describing a goal state and
a solution. The learner must indicate
the given begin states for which the
presented solution is acceptable.
Rule Formation 2 A category of learning processes by
which learners embed new information
in cognitive rules that directly steer
behavior. Rule formation is a form of
schema automation especially important
for learning procedural information.
Rule-Based Analysis 11 A task-analytical technique for analyzing
recurrent constituent skills in which the
actions and/or decisions do not show a
temporal order.
Rule-of-thumb 8 A heuristic prescription that can help to
perform the nonrecurrent aspects of a
task but that does not necessarily do so.
It contrasts with an algorithmic how-to
instruction.
Scaffolding 4 Problem-solving support integrated
with practice on learning tasks. The
scaffolding fades as learners gain more
experience. Particular problem formats,
problem sequences, process worksheets,
constraints on performance, and cognitive
tools may be used to scaffold learning.
Schema Automation 2 A category of learning processes
responsible for automating cognitive
schemata which, then, contain cognitive
rules that directly steer behavior
without the need for conscious control.
Subprocesses are rule formation and
strengthening.
Schema Construction 2 A category of learning processes
responsible for constructing cognitive
schemata that might then be interpreted
by controlled processes to generate
behavior in new, unfamiliar situations.
Subprocesses are inductive learning and elaboration.
Scoring Rubric 5 A scale for rating complex performance,
constructed on the basis of the standards
(i.e., criteria, values, attitudes) for
acceptable performance for all different
aspects of the task (i.e., constituent skills).
For each standard, there may be a scale
of values on which to rate the degree to
which the standard has been met.
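
A scoring rubric could be represented, for instance, as standards mapped to rating scales. A minimal Python sketch (the standards and scale values are invented for illustration):

SCALE = ["poor", "acceptable", "good", "excellent"]  # assumed value scale

# One scale per standard (i.e., per aspect of the whole task).
rubric = {"framing": SCALE, "audio levels": SCALE, "story flow": SCALE}

def rate(ratings):
    """Check that every standard is rated with a value from its scale."""
    for standard, value in ratings.items():
        assert value in rubric[standard], f"invalid rating for {standard!r}"
    return ratings

print(rate({"framing": "good", "audio levels": "acceptable", "story flow": "good"}))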
Script Concordance Test 15 A test with written but authentic clinical situations in which examinees have to
interpret data to make decisions. They
measure reasoning and decision making in
ambiguous and uncertain situations.
Secondary Training Blueprint 14 If both domain-specific and domain-general skills are trained, the term secondary
training blueprint refers to the blueprint
for the domain-general skill. It can be
intertwined with the primary training
blueprint for the domain-specific skill.
Second-Order Scaffolding 6 Gradually decreasing support and guidance for self-directed learning skills or other
domain-general skills. In the Ten Steps,
second-order scaffolding helps learners
learn to select learning tasks (task-
selection skills), find relevant supportive
information (information-literacy skills),
and consult procedural information/
identify part-task practice (deliberate-
practice skills).
Segmentation 13 An approach to part-task sequencing in
which the procedure is broken down into distinct temporal or spatial parts.
Segmentation Principle 7 Dividing transient information (e.g., video, animation) into meaningful segments so
that learners can better perceive the
structure underlying the process or
procedure shown and also have more
time to process the information.
Self-Assessment 6 Assessment of performance by the learner
themself. Typically, the assessments are based
on given standards and scoring rubrics.
Self-Directed Learning (SDL) 6 A process in which students take the initiative to diagnose their learning needs,
formulate learning goals, identify resources
for learning, select and implement learning
strategies, and evaluate learning outcomes.
In the Ten Steps, SDL skills are developed
with on-demand education, resource-based
learning, solicited information presentation,
and independent part-task practice.
Self-Explanation Principle 7 The learner's tendency to connect new information elements to each other and
to existing prior knowledge. Prompting
learners to self-explain new information
by asking them, for instance, to identify
underlying principles has a positive effect
on elaborative learning and transfer.
Self-Pacing Principle 7 Giving learners control over the tempo of
instruction, which often has the form of
transient information (e.g., animation,
video). Self-pacing has a positive effect on
elaborative learning and transfer.
Self-Regulated Learning (SRL) 14 The metacognitive process learners use to monitor and control their own learning.
In the Ten Steps, monitoring and control
will be different for each of the four
components and underlying learning
processes.
Semantic Network 9 A visual representation of a conceptual
model in which the semantic relationships
between concepts are labeled.
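
As a data structure, a semantic network is a labeled graph; the only difference with a concept map is that the edges carry relation labels. A minimal Python sketch (the example concepts and relations are invented for illustration):

# Labeled edges as (concept, relation, concept) triples.
semantic_network = [
    ("camera", "is-a", "recording device"),
    ("lens", "part-of", "camera"),
    ("aperture", "controls", "exposure"),
]

def relations_of(concept):
    """All labeled relationships a concept takes part in."""
    return [t for t in semantic_network if concept in (t[0], t[2])]

print(relations_of("camera"))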
Sequencing 6 According to the Ten Steps, the preferred
type of sequencing is the ordering
of learning tasks in task classes and
sequencing those task classes from simple
to complex (whole-task sequencing).
Other approaches to sequencing are only
used if it is impossible to find learning
tasks simple enough to start the training
with.
Shared Control 6 An educational system where the learner
and the system (i.e., teacher or other
intelligent agent) share control over
the instruction. In the Ten Steps, it is
the preferred mode of control because
it allows for second-order scaffolding
that helps learners develop self-directed
learning skills.
Signaling Principle 10 Focusing learners’ attention on the critical
aspects of learning tasks or presented
information. Signaling reduces visual
search and has a positive effect on rule
formation and transfer.
Simplification 13 An approach to part-task sequencing in
which a procedure is broken down into
parts that represent increasingly more
complex versions of the procedure.
Simplifying Conditions 6 An approach to sequencing learning tasks
where conditions that simplify the
performance of the complex task are
used to define task classes. All conditions
that simplify performance are applied to
the first task class, and they are relaxed
for later task classes.
Situational Judgment Test 15 A test describing a set of real-life scenarios and a number of possible reactions for
each scenario from which the examinee
must select one or more appropriate
ones. They can, for instance, be used to
measure professional behavior.
Skill Cluster 6 A meaningful and relatively large group of
constituent skills that may be seen as a
‘part’ of the whole, complex cognitive
skill. Skill clusters are only used to
sequence learning tasks if it is impossible
to find a whole task simple enough to
start the training with.
Skill Decomposition 5 The analytical process to describe all
constituent skills that make up a complex
skill in a skill hierarchy.
Skill Hierarchy 5 A hierarchical description of all constituent
skills that make up a complex skill or
professional competency. A vertical
relation indicates a ‘prerequisite’
relationship, and a horizontal relation
indicates a ‘temporal’ relationship
between constituent skills.
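
A skill hierarchy maps naturally onto a tree. The following minimal Python sketch (the video-production skills and field names are invented for illustration) encodes vertical (prerequisite) relations as children and horizontal (temporal) relations as the order of those children:

# A constituent skill with its prerequisites as children, in temporal order.
skill_hierarchy = {
    "skill": "produce a video",  # terminal objective at the top
    "enables": [
        {"skill": "plan the shoot", "enables": []},
        {"skill": "record footage", "enables": [
            {"skill": "operate the camera", "enables": []},
        ]},
        {"skill": "edit the footage", "enables": []},
    ],
}

def constituent_skills(node):
    """Flatten the hierarchy into the full list of constituent skills."""
    return [node["skill"]] + [s for c in node["enables"] for s in constituent_skills(c)]

print(constituent_skills(skill_hierarchy))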
Snowballing 6 An approach to part-task sequencing where
increasingly more parts are trained
together as the training progresses. Thus,
if there are three parts—A, B, and C—an
example of snowballing is first training A,
then AB, and finally, ABC.
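
The A, AB, ABC example generalizes directly; a one-function Python sketch of generating a snowballing sequence for any list of parts:

def snowball(parts):
    """Each successive training unit adds one more part."""
    return [parts[:i + 1] for i in range(len(parts))]

print(snowball(["A", "B", "C"]))  # [['A'], ['A', 'B'], ['A', 'B', 'C']]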
Solicited Information Presentation 2 In the Ten Steps, a type of self-directed learning where the learner is responsible for consulting procedural information in, for example, manuals, checklists, online systems, etc. Solicited information
presentation contrasts with unsolicited
information presentation.
Spatial Split-Attention Principle 10 Replacing multiple sources of information (e.g., frequently visual representations
and accompanying prose text) with a
single, integrated source of information
(e.g., text in the visual representation).
Eliminating spatial split-attention has
a positive effect on rule formation and
transfer.
Split-Attention Effect 10 The phenomenon that learning is hampered
when learners must integrate information
sources split either in time (temporal
split-attention) or space (spatial split-
attention) to fully understand something.
Standard-Centered Assessment 5 Assessment that focuses on one particular standard: It tells how a learner is doing
and developing over tasks on one
particular aspect of performance. It
reflects the learner’s mastery of distinct
aspects of performance and yields
particularly important information for the
identification of points for improvement.
Contrasts with task-centered assessment.
Standards 5 Parts of performance objectives that
include criteria, values, and attitudes for
acceptable performance. Standards are
the basis for developing scoring rubrics
and, thus, performance assessment and
assessment instruments.
Strengthening 2 A category of learning processes
responsible for the fact that a cognitive
rule grows stronger (i.e., accumulates
strength) each time it is applied
successfully. Strengthening is a form of
advanced schema automation that is
especially important for (over)learning
on the basis of part-task practice with,
for instance, drill-and-practice computer-
based training.
Structural Model 7 A specific type of domain model describing
the plans and their interrelationships
that are important in a particular task
domain. A structural model results from
the analysis of mental models and is
important for designing and evaluating
artifacts.
Subgoaling 13 A technique for the teaching of complex
procedures where the learner is asked
to specify the goal or subgoal that is reached by a particular procedure or rule.
Summative Assessment 15 In the Ten Steps, the assessment of a learner's performance on unsupported/
unguided learning tasks, which may also
be seen as test tasks, in order to make
formal decisions on passing/failing (e.g.,
continue to next task class or not) and
certification (e.g., successful completion
of the program).
Support 4 Measures that help learners perform
a learning task that could otherwise
not be performed without that help.
A distinction can be made between
product-oriented support and process-
oriented support.
Supportive Information 2 One of the four blueprint components. This information is relevant for learning
the nonrecurrent (i.e., problem-solving,
reasoning, and decision-making) aspects
of learning tasks through elaboration and
understanding.
System Control 6 An educational system where the system
(i.e., teacher or other intelligent agent)
controls instruction. A drawback of
system control is that learners have
few opportunities to develop their self-
directed learning skills. System control
contrasts with learner control, but the
two can be combined in a system of
shared control.
System Dynamics 3 The phenomenon in complex (instructional)
systems that the outcomes of one
component of the system directly or
indirectly have an impact on all other
components of the system. According
to a systems view, instructional design
procedures should take system dynamics
into account by being not only systematic
but also systemic.
Systematic Approach to Problem solving (SAP) 7 A description of a systematic way to solve a problem in terms of subsequent problem-solving phases and rules-of-thumb that
may help successful completion of each
phase. A SAP is the result of analyzing
cognitive strategies. If it includes
decisions, it is also called a SAP chart.
Task Class 2 A class of equivalent learning tasks that are
at the same level of complexity and can
be performed with the same supportive
information. Task classes are also called
‘case types’ in older versions of the
4C/ID-model.
Task Selection 6 In the Ten Steps, the process by which subsequent learning tasks are selected such that they best meet the needs of an individual learner. In
a system of adaptive learning, subsequent
tasks are selected by an intelligent agent
(e.g., teacher, e-learning application); in a
system of on-demand education, they are
selected by the learner.
Task-Centered Assessment 5 Task-centered assessment takes all standards into account: It reflects how a learner is doing overall and how overall performance is developing over tasks. It thus reflects the learner's mastery of the whole complex skill and is very appropriate for making progress decisions. Contrasts with standard-centered assessment.
Temporal Split-Attention Principle 10 Presenting multiple sources of information (e.g., mutually referring pictures and text) at the same time, instead of one by one.
Eliminating temporal split-attention has
a positive effect on rule formation and
transfer.
Terminal Objective 5 The performance objective that is at the
top of the skill hierarchy and that is a
specification of the overall learning goal.
Think Aloud 8 A method for cognitive task analysis where
experts are invited to think aloud when
performing real-life tasks. It helps to
specify cognitive strategies in Systematic
Approaches to Problem solving (SAPs).
To-Be-Automated Recurrent Constituent Skill 5 An aspect of complex task performance for which the desired exit behavior is highly similar from problem situation to problem
situation and that needs to be developed
to a very high level of automaticity. For
these constituent skills, part-task practice
is included in the training program.
Training Wheels Approach 13 An approach to instruction that blocks undesirable actions of the learner. Practice items are sequenced in such a way that learners' performance is first constrained, after which the constraints are slowly loosened until none remain.
Transfer of Learning 2 The ability to perform an acquired complex
skill in new, unfamiliar situations.
A distinction can be made between
near transfer, where the transfer tasks
closely resemble the trained tasks, and
far transfer, where the transfer tasks are
very different from the trained tasks. The
terms retention or self-transfer are used
for situations where transfer tasks are
identical to the trained tasks.
Transfer Paradox 2 The tendency in traditional education to
use instructional methods that are highly
efficient for achieving specific learning
objectives (e.g., blocked practice) but that
are not efficient for reaching transfer of
learning. This hinders complex learning
and competence development.
Typical Error 11 The tendency of learners to make particular
mistakes when they have to apply new
rules or perform new procedural steps.
Typical errors and misconceptions
easily interfere with learning procedural
information, just as intuitive cognitive
strategies and mental models easily
interfere with learning supportive
information.
Unsolicited Information Presentation 2 An approach to the presentation of procedural information where just-in-time information displays are explicitly
presented to the learner precisely when
they are needed. Contrasts with solicited
information presentation.
Variability of Practice 4 Organizing learning tasks in such a way
that they differ from each other on
dimensions that also differ in the real
world (e.g., the situation or context, the
way of presenting the task, the saliency
of defining characteristics). Variability has
positive effects on inductive learning and
transfer.
Whole-Part Sequencing 6 An approach to sequencing in which first a sequence of simple-to-complex whole tasks is developed (i.e., whole-task sequencing), after which part-task sequencing is applied to the whole tasks. The Ten Steps prefer whole-part sequencing over part-whole sequencing.
Whole-Task Sequencing 6 An approach to sequencing in which the training immediately starts with learning tasks based on the simplest version of real-life tasks. The Ten Steps strongly recommends whole-task sequencing for learning tasks.
Worked (Out) Example 4 A learning task describing a given begin state, a desired goal state, and a chosen
solution; also called a case study if it
reflects a real-life problem situation.
A process-oriented worked example also
pays attention to the problem-solving
processes necessary to reach the goal and
is called a modeling example.
Zigzag Design 3 A design approach in which iterations,
skipping of activities, and switches
between activities are common. The Ten
Steps allow for zigzag design.
References

Achtenhagen, F. (2001). Criteria for the development of complex teaching-learning


environments. Instructional Science, 29, 361–380. https://doi.org/10.1023/
A:1011956117397
Akkaya, A., & Akpinar, Y. (2022). Experiential serious-game design for development
of knowledge of object-oriented programming and computational thinking skills.
Computer Science Education, 32(4), 476–501. https://doi.org/10.1080/08993
408.2022.2044673
Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking
and help design in interactive learning environments. Review of Educational
Research, 73(3), 277–320. https://doi.org/10.3102/00346543073003277
Alred, G. J., Brusaw, C. T., & Oliu, W. E. (2012). Handbook of technical writing
(10th ed.). Bedford St. Martins.
Anderson, J. R. (2007). How can the human mind occur in the physical universe?
Oxford University Press.
Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Lawrence
Erlbaum Associates.
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning,
teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives.
Longman.
Argelagós, E., Garcia, C., Privado, J., & Wopereis, I. (2022). Fostering informa-
tion problem solving skills through online task-centred instruction in higher
education. Computers and Education, 180, 104433. https://doi.org/10.1016/
j.compedu.2022.104433
Ausubel, D. P. (1960). The use of advance organizers in the learning and retention
of meaningful verbal material. Journal of Educational Psychology, 51(5), 267–272.
https://doi.org/10.1037/h0046669
Ausubel, D. P. (1968). Educational psychology: A cognitive view. Holt, Rinehart and
Winston.
Ayres, P. L. (1993). Why goal-free problems can facilitate learning. Contemporary
Educational Psychology, 18(3), 376–381. https://doi.org/10.1006/ceps.1993.1027
Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & van der Vleuten, C. P. M.
(2006). The wheel of competency assessment: Presenting quality criteria for com-
petency assessment programs. Studies in Educational Evaluation, 32(2), 153–170.
https://doi.org/10.1016/j.stueduc.2006.04.006
References 397

Bagley, E., & Shaffer, D. W. (2009). When people get in the way: Promoting civic thinking through epistemic gameplay. International Journal of Gaming and Computer-Mediated Simulations, 1(1), 36–52. https://doi.org/10.4018/jgcms.2009010103
Bagley, E., & Shaffer, D. W. (2011). Promoting civic thinking through epistemic game play. In R. Ferdig (Ed.), Discoveries in gaming and computer-mediated simulations: New interdisciplinary applications (pp. 111–127). IGI Global.
Baillifard, A., Gabella, M., Banta Lavenex, P., & Martarelli, C. S. (2023). Implementing learning principles with a personal AI tutor: A case study. https://arxiv.org/abs/2309.13060
Barbazette, J. (2006). Training needs assessment: Methods, tools, and techniques. Pfeiffer.
Barnes, L. B., Christensen, C. R., & Hansen, A. J. (1994). Teaching and the case method: Text, cases, and readings (3rd ed.). Harvard Business Review Press.
Bastiaens, E., van Tilburg, J., & van Merriënboer, J. J. G. (Eds.). (2017). Research-based learning: Case studies from Maastricht University. Springer. https://doi.org/10.1007/978-3-319-50993-8
Battig, W. F. (1966). Facilitation and interference. In E. A. Bilodeau (Ed.), Acquisition of skill (pp. 215–244). Academic Press.
Beckers, J., Dolmans, D. H. J. M., Knapen, M. M. H., & van Merriënboer, J. J. G. (2019). Walking the tightrope with an e-portfolio: Imbalance between support and autonomy hampers self-directed learning. Journal of Vocational Education and Training, 71(2), 260–288. https://doi.org/10.1080/13636820.2018.1481448
Beckers, J., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2016). e-Portfolios enhancing students' self-directed learning: A systematic review of influencing factors. Australasian Journal of Educational Technology, 32(2), 32–46. https://doi.org/10.14742/ajet.2528
Beckers, J., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2019). Perfect: Design and evaluation of an electronic development portfolio aimed at supporting self-directed learning. TechTrends, 63(4), 420–427. https://doi.org/10.1007/s11528-018-0354-x
Beers, P. J., Boshuizen, H. P. A., Kirschner, P. A., & Gijselaers, W. H. (2007). The analysis of negotiation of common ground in CSCL. Learning and Instruction, 17(4), 427–435. https://doi.org/10.1016/j.learninstruc.2007.04.002
Benjamin, A. S., & Tullis, J. (2010). What makes distributed practice effective? Cognitive Psychology, 61(3), 228–247. https://doi.org/10.1016/j.cogpsych.2010.05.004
Birnbaum, M. S., Kornell, N., Bjork, E. L., & Bjork, R. A. (2013). Why interleaving enhances inductive learning: The roles of discrimination and retrieval. Memory & Cognition, 41(3), 392–402. https://doi.org/10.3758/s13421-012-0272-7
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). MIT Press.
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444. https://doi.org/10.1146/annurev-psych-113011-143823
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010). Transfer of training: A meta-analytic review. Journal of Management, 36(4), 1065–1105. https://doi.org/10.1177/0149206309352880
Blumenfeld, P. C., Soloway, E., Marx, R. W., Krajcik, J. S., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3–4), 369–398. https://doi.org/10.1080/00461520.1991.9653139
Bohle Carbonell, K., Stalmeijer, R. E., Könings, K. D., Segers, M., & van Merriënboer, J. J. G. (2014). How experts deal with novel situations: A review of adaptive expertise. Educational Research Review, 12, 14–29. https://doi.org/10.1016/j.edurev.2014.03.001
Boud, D. (1995). Enhancing learning through self assessment. RoutledgeFalmer.
Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 21(3), 487–508. https://doi.org/10.1016/j.chb.2004.10.005
Bray, C. W. (1948). Psychology and military proficiency. Princeton University Press.
Briggs, G. E., & Naylor, J. C. (1962). The relative efficiency of several training methods as a function of transfer task complexity. Journal of Experimental Psychology, 64(5), 505–512. https://doi.org/10.1037/h0042476
Brinkman, W., Tjiam, I. M., Schout, B. M. A., Hendrikx, A. J. M., Witjes, J. A., Scherpbier, A. J. J. A., & van Merriënboer, J. J. G. (2011). Designing simulator-based training for nephrostomy procedure: An integrated approach of cognitive task analysis (CTA) and 4-component instructional design (4C/ID). Journal of Endourology, 25(Supplement 1), 29–29. https://doi.org/10.3109/0142159X.2012.687480
Bruner, J. S. (1960). The process of education. Harvard University Press.
Bullock, A. D., Hassell, A., Markham, W. A., Wall, D. W., & Whitehouse, A. B. (2009). How ratings vary by staff group in multi-source feedback assessment of junior doctors. Medical Education, 43(6), 516–520. https://doi.org/10.1111/j.1365-2923.2009.03333.x
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281. https://doi.org/10.3102/00346543065003245
Camp, G., Paas, F., Rikers, R., & van Merriënboer, J. J. G. (2001). Dynamic problem selection in air traffic control training: A comparison between performance, mental effort and mental efficiency. Computers in Human Behavior, 17(5–6), 575–595. https://doi.org/10.1016/S0747-5632(01)00028-0
Carlson, R. A., Khoo, B. H., & Elliott, R. G. (1990). Component practice and exposure to a problem-solving context. Human Factors: The Journal of the Human Factors and Ergonomics Society, 32(3), 267–286. https://doi.org/10.1177/001872089003200302
Carlson, R. A., Sullivan, M. A., & Schneider, W. (1989). Component fluency in a problem-solving context. Human Factors: The Journal of the Human Factors and Ergonomics Society, 31(5), 489–502. https://doi.org/10.1177/001872088903100501
Carr, J. F., & Harris, D. E. (2001). Succeeding with standards: Linking curriculum, assessment, and action planning. Association for Supervision and Curriculum Development.
Carroll, J. M. (Ed.). (2003). Minimalism beyond the Nurnberg Funnel. MIT Press.
Carroll, J. M., & Carrithers, C. (1984). Blocking learner error states in a training wheels system. Human Factors: The Journal of the Human Factors and Ergonomics Society, 26(4), 377–389. https://doi.org/10.1177/001872088402600402
Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer program. Applied Cognitive Psychology, 10(2), 151–170. https://doi.org/10.1002/(SICI)1099-0720(199604)10:2<151::AID-ACP380>3.0.CO;2-U
Charlin, B., Roy, L., Brailovsky, C., Goulet, F., & van der Vleuten, C. (2000). The script concordance test: A tool to assess the reflective clinician. Teaching and Learning in Medicine, 12(4), 189–195. https://doi.org/10.1207/S15328015TLM1204_5
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4(1), 55–81. https://doi.org/10.1016/0010-0285(73)90004-2
Chiu, J. L., & Chi, M. T. H. (2014). Supporting self-explanation in the classroom. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum (pp. 91–103). American Psychological Association.
Choi, H.-H., van Merriënboer, J. J. G., & Paas, F. (2014). Effects of the physical environment on cognitive load and learning: Towards a new model of cognitive load. Educational Psychology Review, 26(2), 225–244. https://doi.org/10.1007/s10648-014-9262-6
Chu, Y. S., Yang, H. C., Tseng, S. S., & Yang, C. C. (2014). Implementation of a model-tracing-based learning diagnosis system to promote elementary students' learning in mathematics. Educational Technology and Society, 17(2), 347–357.
Claramita, M., & Susilo, A. P. (2014). Improving communication skills in the Southeast Asian health care context. Perspectives on Medical Education, 3(6), 474–479. https://doi.org/10.1007/s40037-014-0121-4
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445–459.
Clark, R. E. (Ed.). (2001). Learning from media: Arguments, analysis, and evidence. Information Age Publishing.
Clark, R. E., Feldon, D. F., van Merriënboer, J. J. G., Yates, K. A., & Early, S. (2008). Cognitive task analysis. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 577–594). Lawrence Erlbaum Associates/Routledge.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the craft of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–493). Lawrence Erlbaum Associates.
Collins, A., & Ferguson, W. (1993). Epistemic forms and epistemic games: Structures and strategies to guide inquiry. Educational Psychologist, 28(1), 25–42. https://doi.org/10.1207/s15326985ep2801_3
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2008). Selecting learning tasks: Effects of adaptation and shared control on learning efficiency and task involvement. Contemporary Educational Psychology, 33(4), 733–756. https://doi.org/10.1016/j.cedpsych.2008.02.003
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2009a). Combining shared control with variability over surface features: Effects on transfer test performance and task involvement. Computers in Human Behavior, 25(2), 290–298. https://doi.org/10.1016/j.chb.2008.12.009
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2009b). Dynamic task selection: Effects of feedback and learner control on efficiency and motivation. Learning and Instruction, 19(6), 455–465. https://doi.org/10.1016/j.learninstruc.2008.07.002
Corbalan, G., Kester, L., & van Merriënboer, J. J. G. (2011). Learner-controlled selection of tasks with different surface and structural features: Effects on transfer and efficiency. Computers in Human Behavior, 27(1), 76–81. https://doi.org/10.1016/j.chb.2010.05.026
Costa, J. M., & Miranda, G. L. (2019). Using Alice software with 4C/ID model: Effects in programming knowledge and logical reasoning. Informatics in Education, 18(1), 1–15. https://doi.org/10.15388/infedu.2019.01
Costa, J. M., Miranda, G. L., & Melo, M. (2021). Four-component instructional design (4C/ID) model: A meta-analysis on use and effect. Learning Environments Research, 25, 445–463. https://doi.org/10.1007/s10984-021-09373-y
Crossman, E. R. F. W. (1959). A theory of the acquisition of speed-skill. Ergonomics, 2(2), 153–166. https://doi.org/10.1080/00140135908930419
Custers, E. J. F. M. (2015). Thirty years of illness scripts: Theoretical origins and practical applications. Medical Teacher, 37(5), 457–462. https://doi.org/10.3109/0142159X.2014.956052
Daniel, M., Stojan, J., Wolf, M., Taqui, B., Glasgow, T., Forster, S., & Cassese, T. (2018). Applying four-component instructional design to develop a case presentation curriculum. Perspectives on Medical Education, 7(4), 276–280. https://doi.org/10.1007/s40037-018-0443-8
Davis, D. A., Mazmanian, P. E., Fordis, M., van Harrison, R., Thorpe, K. E., & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. Journal of the American Medical Association, 296, 1094–1102. https://doi.org/10.1001/jama.296.9.1094
De Bruin, A. B. H., & van Merriënboer, J. J. G. (Eds.). (2017). Bridging cognitive load and self-regulated learning research [Special issue]. Learning and Instruction, 51, 1–98.
De Croock, M. B. M., Paas, F., Schlanbusch, H., & van Merriënboer, J. J. G. (2002). ADAPTit: Instructional design tools for training design and evaluation. Educational Technology Research and Development, 50(4), 47–58. https://doi.org/10.1007/BF02504984
De Croock, M. B. M., & van Merriënboer, J. J. G. (2007). Paradoxical effects of information presentation formats and contextual interference on transfer of a complex cognitive skill. Computers in Human Behavior, 23(4), 1740–1761. https://doi.org/10.1016/j.chb.2005.10.003
De Groot, A. D. (1966). Perception and memory versus thought. In B. Kleinmuntz (Ed.), Problem solving: Research, method, and theory. Wiley & Sons.
De Jong, T., Linn, M. C., & Zacharia, Z. C. (2013). Physical and virtual laboratories in science and engineering education. Science, 340, 305–308. https://doi.org/10.1126/science.1230579
De Jong, T., Sotiriou, S., & Gillet, D. (2014). Innovations in STEM education: The Go-Lab federation of online labs. Smart Learning Environments, 1(3), 1–16. https://doi.org/10.1186/s40561-014-0003-6
De Smet, M. J. R., Broekkamp, H., Brand-Gruwel, S., & Kirschner, P. A. (2011). Effects of electronic outlining on students' argumentative writing performance. Journal of Computer Assisted Learning, 27(6), 557–574. https://doi.org/10.1111/j.1365-2729.2011.00418.x
Dick, W., Carey, L., & Carey, J. O. (2014). The systematic design of instruction (8th ed.). Pearson.
Dolmans, D. H. J. M., Wolfhagen, I. H. A. P., & van Merriënboer, J. J. G. (2013). Twelve tips for implementing whole-task curricula: How to make it work. Medical Teacher, 35(10), 801–805. https://doi.org/10.3109/0142159X.2013.799640
Dory, V., Gagnon, R., Vanpee, D., & Charlin, B. (2012). How to construct and implement script concordance tests: Insights from a systematic review. Medical Education, 46(6), 552–563. https://doi.org/10.1111/j.1365-2923.2011.04211.x
Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest, 5(3), 69–106. https://doi.org/10.1111/j.1529-1006.2004.00018.x
Edwards, R., & Fenwick, T. (2016). Digital analytics in professional work and learning. Studies in Continuing Education, 38, 213–227. https://doi.org/10.1080/0158037X.2015.1074894
Ericsson, K. A. (2015). Acquisition and maintenance of medical expertise: A perspective from the expert-performance approach with deliberate practice. Academic Medicine, 90(11), 1471–1486. https://doi.org/10.1097/ACM.0000000000000939
Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annual Review of Psychology, 47(1), 273–305. https://doi.org/10.1146/annurev.psych.47.1.273
Ertmer, P. A., & Russell, J. D. (1995). Using case studies to enhance instructional design education. Educational Technology, 35(4), 23–31.
Eva, K. W., & Regehr, G. (2007). Knowing when to look it up: A new conception of self-assessment ability. Academic Medicine, 82(Suppl), S81–S84. https://doi.org/10.1097/ACM.0b013e31813e6755
Faber, T. J. E., Dankbaar, M. E. W., & van Merriënboer, J. J. G. (2021). Four-component instructional design applied to a game for emergency medicine. In A. L. Brooks, S. Brahnam, B. Kapralos, A. Nakajima, J. Tyerman, & L. C. Jain (Eds.), Recent advances in technologies for inclusive well-being. Intelligent systems reference library (Vol. 196, pp. 65–82). Springer. https://doi.org/10.1007/978-3-030-59608-8_5
Fassier, T., Rapp, A., Rethans, J.-J., Nendaz, M., & Bochatay, N. (2021). Training residents in advance care planning: A task-based needs assessment using the 4-component instructional design. Journal of Graduate Medical Education, 13(4), 534–547. https://doi.org/10.4300/JGME-D-20-01263
Fastré, G. M. J., van der Klink, M. R., Amsing-Smit, P., & van Merriënboer, J. J. G. (2014). Assessment criteria for competency-based education: A study in nursing education. Instructional Science, 42(6), 971–994. https://doi.org/10.1007/s11251-014-9326-5
Fastré, G. M. J., van der Klink, M. R., Sluijsmans, D., & van Merriënboer, J. J. G. (2013). Towards an integrated model for developing sustainable assessment skills. Assessment & Evaluation in Higher Education, 38(5), 611–630. https://doi.org/10.1080/02602938.2012.674484
Fastré, G. M. J., van der Klink, M. R., & van Merriënboer, J. J. G. (2010). The effects of performance-based assessment criteria on student performance and self-assessment skills. Advances in Health Sciences Education, 15(4), 517–532. https://doi.org/10.1007/s10459-009-9215-x
Feinauer, S., Voskort, S., Groh, I., & Petzoldt, T. (2023). First encounters with the automated vehicle: Development and evaluation of a tutorial concept to support users of partial and conditional driving automation. Transportation Research Part F: Traffic Psychology and Behaviour, 97, 1–16. https://doi.org/10.1016/j.trf.2023.06.002
Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology Review, 28(4), 717–741. https://doi.org/10.1007/s10648-015-9348-9
Fisk, A. D., & Gallini, J. K. (1989). Training consistent components of tasks: Developing an instructional system based on automatic/controlled processing principles. Human Factors: The Journal of the Human Factors and Ergonomics Society, 31(4), 453–463. https://doi.org/10.1177/001872088903100408
Francom, G. M. (2017). Principles for task-centered instruction. In C. M. Reigeluth, B. J. Beatty, & R. D. Myers (Eds.), Instructional design theories and models: The learner-centered paradigm of education (Vol. 4, pp. 65–91). Routledge.
Francom, G. M., & Gardner, J. (2014). What is task-centered learning? TechTrends, 58(5), 27–35. https://doi.org/10.1007/s11528-014-0784-z
Fraser, K., Huffman, J., Ma, I., Sobczak, M., McIlwrick, J., Wright, B., & McLaughlin, K. (2014). The emotional and cognitive impact of unexpected simulated patient death. Chest, 145(5), 958–963. https://doi.org/10.1378/chest.13-0987
Frederiksen, N. (1984). Implications of cognitive theory for instruction in problem solving. Review of Educational Research, 54(3), 363–407. https://doi.org/10.3102/00346543054003363
Frèrejean, J., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2022). Research on instructional design in the health professions: From taxonomies of learning to whole-task models. In J. Cleland & S. J. Durning (Eds.), Researching medical education (2nd ed., pp. 291–302). Wiley. https://doi.org/10.1002/9781119839446.ch26
Frèrejean, J., van Geel, M., Keuning, T., Dolmans, D., van Merriënboer, J. J. G., & Visscher, A. (2021). Ten steps to 4C/ID: Training differentiation skills in a professional development program for teachers. Instructional Science, 395–418. https://doi.org/10.1007/s11251-021-09540-x
Frèrejean, J., van Merriënboer, J. J. G., Condron, C., Strauch, U., & Eppich, W. (2023). Critical design choices in healthcare simulation education: A 4C/ID perspective on design that leads to transfer. Advances in Simulation, 8(5), 1–11. https://doi.org/10.1186/s41077-023-00242-7
Frèrejean, J., van Merriënboer, J. J. G., Kirschner, P. A., Roex, A., Aertgeerts, B., & Marcellis, M. (2019). Designing instruction for complex learning: 4C/ID in higher education. European Journal of Education, 54, 513–524. https://doi.org/10.1111/ejed.12363
Frèrejean, J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2016). Completion strategy or emphasis manipulation? Task support for teaching information problem solving. Computers in Human Behavior, 62, 90–104. https://doi.org/10.1016/j.chb.2016.03.048
Frèrejean, J., Velthorst, G. J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2019). Embedded instruction to learn information problem solving: Effects of a whole-task approach. Computers in Human Behavior, 90, 117–130. https://doi.org/10.1016/j.chb.2018.08.043
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10/gc3fzf
Gagné, R. M. (1968). Learning hierarchies. Educational Psychologist, 6(1), 1–9. https://doi.org/10.1080/00461526809528968
Gagné, R. M., & Merrill, M. D. (1990). Integrative goals for instructional design. Educational Technology Research and Development, 38(1), 23–30. https://doi.org/10.1007/BF02298245
Garon-Carrier, G., Boivin, M., Guay, F., Kovas, Y., Dionne, G., Lemelin, J. P., Séguin, J., Vitaro, F., & Tremblay, R. (2016). Intrinsic motivation and achievement in mathematics in elementary school: A longitudinal investigation of their association. Child Development, 87(1), 165–175. https://doi.org/10.1111/cdev.12458
Geary, D. C. (2008). An evolutionarily informed education science. Educational Psychologist, 43(4), 179–195. https://doi.org/10.1080/00461520802392133
Gerjets, P., & Kirschner, P. A. (2009). Learning from multimedia and hypermedia. In N. Balacheff, S. Ludvigsen, A. J. M. de Jong, A. Lazonder, & S. Barnes (Eds.), Technology-enhanced learning: Principles and products (pp. 251–272). Springer.
Gerjets, P., Walter, C., Rosenstiel, W., Bogdan, M., & Zander, T. O. (2014). Cognitive state monitoring and the design of adaptive instruction in digital environments: Lessons learned from cognitive workload assessment using a passive brain-computer interface approach. Frontiers in Neuroscience, 8, 1–21. https://doi.org/10.3389/fnins.2014.00385
Gessler, M. (2009). Situated learning and cognitive apprenticeship. In R. Maclean & D. Wilson (Eds.), International handbook of education for the changing world of work (pp. 1611–1625). Springer.
Ginns, P. (2005). Meta-analysis of the modality effect. Learning and Instruction, 15(4), 313–331. https://doi.org/10.1016/j.learninstruc.2005.07.001
Ginns, P. (2006). Integrating information: A meta-analysis of the spatial contiguity and temporal contiguity effects. Learning and Instruction, 16(6), 511–525. https://doi.org/10.1016/j.learninstruc.2006.10.001
Göksu, I., Özcan, K. V., Çakir, R., & Yuksel, G. (2017). Content analysis of research trends in instructional design models: 1999–2014. Journal of Learning Design, 10(2), 85–109. http://dx.doi.org/10.5204/jld.v10i2.288
Gopher, D., Weil, M., & Siegel, D. (1989). Practice under changing priorities: An approach to the training of complex skills. Acta Psychologica, 71(1–3), 147–177. https://doi.org/10.1016/0001-6918(89)90007-3
Gorbunova, A., van Merriënboer, J. J. G., & Costley, J. (2023). Are inductive teaching methods compatible with cognitive load theory? Educational Psychology Review, 35(4), 111 (1–26). https://doi.org/10.1007/s10648-023-09828-z
Gordon, J., & Zemke, R. (2000). The attack on ISD. Training, 37(4), 42–53.
Govaerts, M. J. B., van der Vleuten, C. P. M., Schuwirth, L. W. T., & Muijtjens, A. M. M. (2005). The use of observational diaries in in-training evaluation: Student perceptions. Advances in Health Sciences Education, 10(3), 171–188. https://doi.org/10.1007/s10459-005-0398-5
Gropper, G. L. (1973). A technology for developing instructional materials. American Institutes for Research.
Gropper, G. L. (1983). A behavioral approach to instructional prescription. In C. M. Reigeluth (Ed.), Instructional design theories and models (Vol. 1, pp. 101–161). Lawrence Erlbaum Associates.
Guasch, T., Espasa, A., Alvarez, I. M., & Kirschner, P. A. (2013). Effects of feedback on collaborative writing in an online learning environment: Type of feedback and the feedback-giver. Distance Education, 34(3), 324–338. https://doi.org/10.1080/01587919.2013.835772
Gulikers, J. T. M., Bastiaens, T. J., & Martens, R. L. (2005). The surplus value of an authentic learning environment. Computers in Human Behavior, 21(3), 509–521. https://doi.org/10.1016/j.chb.2004.10.028
Güney, Z. (2019a). A sample design in programming with four-component instructional design (4C/ID) model. Malaysian Online Journal of Educational Technology, 7(4), 1–14. https://doi.org/10.17220/mojet.2019.04.001
Güney, Z. (2019b). Four-component instructional design (4C/ID) model approach for teaching programming skills. International Journal of Progressive Education, 15(4). https://doi.org/10.29329/ijpe.2019.203.11
Haji, F. A., Khan, R., Regehr, G., Ng, G., de Ribaupierre, S., & Dubrowski, A. (2015). Operationalising elaboration theory for simulation instruction design: A Delphi study. Medical Education, 49(6), 576–588. https://doi.org/10.1111/medu.12726
Halff, H. M. (1993). Supporting scenario- and simulation-based instruction: Issues from the maintenance domain. In J. M. Spector, M. C. Polson, & D. J. Muraida (Eds.), Automating instructional design: Concepts and issues (pp. 231–248). Educational Technology Publications.
Hall, K. G., & Magill, R. A. (1995). Variability of practice and contextual interference in motor skill learning. Journal of Motor Behavior, 27(4), 299–309. https://doi.org/10.1080/00222895.1995.9941719
Hambleton, R. K., Jaeger, R. M., Plake, B. S., & Mills, C. (2000). Setting performance standards on complex educational assessments. Applied Psychological Measurement, 24(4), 355–366. https://doi.org/10.1177/01466210022031804
Hammick, M., Freeth, D., Koppel, I., Reeves, S., & Barr, H. (2007). A best evidence systematic review of interprofessional education: BEME Guide no. 9. Medical Teacher, 29(8), 735–751. https://doi.org/10.1080/01421590701682576
Harden, R. M., Stevenson, M., Downie, W. W., & Wilson, G. M. (1975). Assessment of clinical competence using objective structured examination. BMJ, 1, 447–451. https://doi.org/10.1136/bmj.1.5955.447
Hartley, J. (1994). Designing instructional text (3rd ed.). Kogan Page.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Hays, R. T., & Singer, M. J. (1989). Simulation fidelity in training system design: Bridging the gap between reality and training. Springer.
Helsdingen, A. S., van Gog, T., & van Merriënboer, J. J. G. (2011a). The effects of practice schedule on learning a complex judgment task. Learning and Instruction, 21(1), 126–136. https://doi.org/10.1016/j.learninstruc.2009.12.001
Helsdingen, A., van Gog, T., & van Merriënboer, J. J. G. (2011b). The effects of practice schedule and critical thinking prompts on learning and transfer of a complex judgment task. Journal of Educational Psychology, 103(2), 383–398. https://doi.org/10.1037/a0022370
Hennekam, S. (2015). Career success of older workers: The influence of social skills and continuous learning ability. Journal of Management Development, 34(9), 1113–1133. https://doi.org/10.1108/JMD-05-2014-0047
Herrington, J., & Parker, J. (2013). Emerging technologies as cognitive tools for authentic learning. British Journal of Educational Technology, 44(4), 607–615. https://doi.org/10.1111/bjet.12048
Hill, J. R., & Hannafin, M. J. (2001). Teaching and learning in digital environments: The resurgence of resource-based learning. Educational Technology Research and Development, 49(3), 37–52. https://doi.org/10.1007/BF02504914
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (Eds.). (1989). Induction: Processes of inference, learning, and discovery. MIT Press.
Holsbrink-Engels, G. A. (1997). The effects of the use of a conversational model and opportunities for reflection in computer-based role-playing. Computers in Human Behavior, 13(3), 409–436. https://doi.org/10.1016/S0747-5632(97)00017-4
Holtslander, L. F., Racine, L., Furniss, S., Burles, M., & Turner, H. (2012). Developing and piloting an online graduate nursing course focused on experiential learning of qualitative research methods. Journal of Nursing Education, 51(6), 345–348. https://doi.org/10.3928/01484834-20120427-03
Hoogerheide, V., van Wermeskerken, M., Loyens, S. M. M., & van Gog, T. (2016). Learning from video modeling examples: Content kept equal, adults are more effective models than peers. Learning and Instruction, 44, 22–30. https://doi.org/10.1016/j.learninstruc.2016.02.004
Hoogveld, A. W. M., Paas, F., & Jochems, W. M. G. (2005). Training higher education teachers for instructional design of competency-based education: Product-oriented versus process-oriented worked examples. Teaching and Teacher Education, 21(3), 287–297. https://doi.org/10.1016/j.tate.2005.01.002
Hopkins, S., & O'Donovan, R. (2021). Using complex learning tasks to build procedural fluency and financial literacy for young people with intellectual disability. Mathematics Education Research Journal, 33, 163–181. https://doi.org/10.1007/s13394-019-00279-w
Hummel, H. G. K., Slootmaker, A., & Storm, J. (2021). Mini-games for entrepreneurship in construction: Instructional design and effects of the TYCON game. Interactive Learning Environments. https://doi.org/10.1080/10494820.2021.1995759
Hung, W. E., Dolmans, D. H. J. M., & van Merriënboer, J. J. G. (2019). A review to identify key perspectives in PBL meta-analyses and reviews: Trends, gaps and future research directions. Advances in Health Sciences Education, 24(5), 943–957. https://doi.org/10.1007/s10459-019-09945-x
Husnin, H. (2017). Design and development of learning material with the ten steps to complex learning: A multiple case study. PhD thesis, University of Warwick, UK. http://webcat.warwick.ac.uk/record=b3228128~S15
Huwendiek, S., De Leng, B. A., Zary, N., Fischer, M. R., Ruiz, J. G., & Ellaway, R. (2009). Towards a typology of virtual patients. Medical Teacher, 31(8), 743–748. https://doi.org/10.1080/01421590903124708
Janesarvatan, F., & van Rosmalen, P. (2023). Instructional design of virtual patients in dental education through a 4C/ID lens: A narrative review. Journal of Computers in Education. https://doi.org/10.1007/s40692-023-00268-w
Janssen-Noordman, A. M. B., van Merriënboer, J. J. G., van der Vleuten, C. P. M., & Scherpbier, A. J. J. A. (2006). Design of integrated practice for learning professional competences. Medical Teacher, 28(5), 447–452. https://doi.org/10.1080/01421590600825276
Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2012). Conveying clinical reasoning based on visual observation via eye-movement modelling examples. Instructional Science, 40(5), 813–827. https://doi.org/10.1007/s11251-012-9218-5
Jelley, R. B., Goffin, R. D., Powell, D. M., & Heneman, R. L. (2012). Incentives and alternative rating approaches: Roads to greater accuracy in job performance assessment? Journal of Personnel Psychology, 11(4), 159–168. https://doi.org/10.1027/1866-5888/a000068
Jonassen, D. H. (1992). Cognitive flexibility theory and its implications for designing CBI. In S. Dijkstra, H. P. M. Krammer, & J. J. G. van Merriënboer (Eds.), Instructional models in computer-based learning environments (NATO ASI Series F) (Vol. 104, pp. 385–403). Springer.
Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving learning outcomes. Educational Technology Research and Development, 45(1), 65–94. https://doi.org/10.1007/BF02299613
Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional design theories and models: A new paradigm of instructional theory (Vol. 2, pp. 215–239). Lawrence Erlbaum Associates.
Jonassen, D. H. (2000). Computers as mindtools for schools: Engaging critical thinking (2nd ed.). Prentice Hall.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Routledge.
Jüttner, M., & Neuhaus, B. J. (2012). Development of items for a pedagogical content knowledge test based on empirical analysis of pupils' errors. International Journal of Science Education, 34(7), 1125–1143. https://doi.org/10.1080/09500693.2011.606511
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kali, Y., McKenney, S., & Sagy, O. (2015). Teachers as designers of technology enhanced learning. Instructional Science, 43(2), 173–179. https://doi.org/10.1007/s11251-014-9343-4
Kalyuga, S. (2009). Knowledge elaboration: A cognitive load perspective. Learning and Instruction, 19(5), 402–410. https://doi.org/10.1016/j.learninstruc.2009.02.003
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31. https://doi.org/10.1207/S15326985EP3801_4
Kalyuga, S., Rikers, R., & Paas, F. (2012). Educational implications of expertise reversal effects in learning and performance of complex cognitive and sensorimotor skills. Educational Psychology Review, 24(2), 313–337. https://doi.org/10.1007/s10648-012-9195-x
Kester, L., & Kirschner, P. A. (2012). Cognitive tasks and learning. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 619–622). Springer.
Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2004). Information presentation and troubleshooting in electrical circuits. International Journal of Science Education, 26(2), 239–256. https://doi.org/10.1080/69032000072809
Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2005). The management of cognitive load during complex cognitive skill acquisition by means of computer-simulated problem solving. British Journal of Educational Psychology, 75(1), 71–85. https://doi.org/10.1348/000709904X19254
Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2006). Just-in-time information presentation: Improving learning a troubleshooting skill. Contemporary Educational Psychology, 31(2), 167–185. https://doi.org/10.1016/j.cedpsych.2005.04.002
Kester, L., Kirschner, P. A., van Merriënboer, J. J. G., & Baumer, A. (2001). Just-in-time information presentation and the acquisition of complex cognitive skills. Computers in Human Behavior, 17(4), 373–391. https://doi.org/10.1016/S0747-5632(01)00011-5
Kicken, W., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2008). Scaffolding advice on task selection: A safe path toward self-directed learning in on-demand education. Journal of Vocational Education & Training, 60(3), 223–239. https://doi.org/10.1080/13636820802305561
Kicken, W., Brand-Gruwel, S., van Merriënboer, J. J. G., & Slot, W. (2009a). The effects of portfolio-based advice on the development of self-directed learning skills in secondary vocational education. Educational Technology Research and Development, 57(4), 439–460. https://doi.org/10.1007/s11423-009-9111-3
Kicken, W., Brand-Gruwel, S., van Merriënboer, J. J. G., & Slot, W. (2009b). Design and evaluation of a development portfolio: How to improve students' self-directed learning skills. Instructional Science, 37(5), 453–473. https://doi.org/10.1007/s11251-008-9058-5
Kirschner, F., Paas, F., & Kirschner, P. A. (2009). Individual and group-based learning from complex cognitive tasks: Effects on retention and transfer efficiency. Computers in Human Behavior, 25(2), 306–314. https://doi.org/10.1016/j.chb.2008.12.008
Kirschner, P. A. (1992). Epistemology, practical work and academic skills in science education. Science and Education, 1(3), 273–299. https://doi.org/10.1007/BF00430277
Kirschner, P. A. (2009). Epistemology or pedagogy, that is the question. In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 144–157). Routledge.
Kirschner, P. A. (2015). Facebook as learning platform: Argumentation superhighway or dead-end street? Computers in Human Behavior, 53, 621–625. https://doi.org/10.1016/j.chb.2015.03.011
Kirschner, P. A., Ayres, P., & Chandler, P. (2011). Contemporary cognitive load theory research: The good, the bad and the ugly. Computers in Human Behavior, 27(1), 99–105. https://doi.org/10.1016/j.chb.2010.06.025
Kirschner, P. A., & Davis, N. (2003). Pedagogic benchmarks for information and communications technology in teacher education. Technology, Pedagogy and Education, 12(1), 125–147. https://doi.org/10.1080/14759390300200149
Kirschner, P. A., Hendrick, C., & Heal, J. (2022). How teaching happens: Seminal works in teaching and teacher effectiveness and what they mean in practice. Routledge.
Kirschner, P. A., & Kirschner, F. (2012). Mental effort. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 2182–2184). Springer.
Kirschner, P. A., Martens, R. L., & Strijbos, J. W. (2004). CSCL in higher education? A framework for designing multiple collaborative environments. In J. W. Strijbos, P. A. Kirschner, & R. L. Martens (Eds.), What we know about CSCL, and implementing it in higher education (pp. 3–30). Kluwer Academic Publishers.
Kirschner, P. A., & Selinger, M. (2003). The state of affairs of teacher education with respect to information and communications technology. Technology, Pedagogy and Education, 12(1), 5–17. https://doi.org/10.1080/14759390300200143
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183. https://doi.org/10.1080/00461520.2013.804395
Kirschner, P., & Wopereis, I. G. J. H. (2003). Mindtools for teacher communities: A European perspective. Technology, Pedagogy and Education, 12(1), 105–124. https://doi.org/10.1080/14759390300200148
Kogan, J. R., Holmboe, E. S., & Hauer, K. E. (2009). Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. JAMA, 302(12), 1316. https://doi.org/10.1001/jama.2009.1365
Kok, E. M., de Bruin, A. B. H., Leppink, J., van Merriënboer, J. J. G., & Robben, S. G. F. (2015). Case comparisons: An efficient way of learning radiology. Academic Radiology, 22(10), 1226–1235. https://doi.org/10.1016/j.acra.2015.04.012
Kok, E. M., de Bruin, A. B. H., Robben, S. G. F., & van Merriënboer, J. J. G. (2013). Learning radiological appearances of diseases: Does comparison help? Learning and Instruction, 23, 90–97. https://doi.org/10.1016/j.learninstruc.2012.07.004
Kok, E. M., & Jarodzka, H. (2017). Before your very eyes: The value and limitations of eye tracking in medical education. Medical Education, 51(1), 114–122. https://doi.org/10.1111/medu.13066
Kolcu, M. İ. B., Öztürkçü, Ö. S. K., & Kaki, G. D. (2020). Evaluation of a distance education course using the 4C-ID model for continuing endodontics education. Journal of Dental Education, 84, 62–71. https://doi.org/10.21815/JDE.019.138
Könings, K. D., Seidel, T., & van Merriënboer, J. J. G. (2014). Participatory design of learning environments: Integrating perspectives of students, teachers, and designers. Instructional Science, 42(1), 1–9. https://doi.org/10.1007/s11251-013-9305-2
Könings, K. D., van Zundert, M., & van Merriënboer, J. J. G. (2019). Scaffolding peer-assessment skills: Risk of interference with learning domain-specific skills? Learning and Instruction, 60, 85–94. https://doi.org/10.1016/j.learninstruc.2018.11.007
Koriat, A. (1997). Monitoring one's own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126(4), 349–370. https://doi.org/10.1037/0096-3445.126.4.349
Kostons, D., van Gog, T., & Paas, F. (2012). Training self-assessment and task-selection skills: A cognitive approach to improving self-regulated learning. Learning and Instruction, 22(2), 121–132. https://doi.org/10.1016/j.learninstruc.2011.08.004
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
Kukharuk, A., Goda, Y., & Suzuki, K. (2023). Designing an online PD program with 4C/ID from scratch. International Journal of Designs for Learning, 14(2), 72–86. https://doi.org/10.14434/ijdl.v14i2.34676
Lazonder, A. W., & van der Meij, H. (1995). Error-information in tutorial documentation: Supporting users' errors to facilitate initial skill learning. International Journal of Human-Computer Studies, 42(2), 185–206. https://doi.org/10.1006/ijhc.1995.1009
Lehmann, T., Hähnlein, I., & Ifenthaler, D. (2014). Cognitive, metacognitive and motivational perspectives on preflection in self-regulated online learning. Computers in Human Behavior, 32, 313–323. https://doi.org/10.1016/j.chb.2013.07.051
León, S. P., Panadero, E., & García-Martínez, I. (2023). How accurate are our students? A meta-analytic systematic review on self-assessment scoring accuracy. Educational Psychology Review, 35. https://doi.org/10.1007/s10648-023-09819-0
Lievens, F., & Soete, B. (2015). Situational judgment test. In J. D. Wright (Ed.), International encyclopedia of the social & behavioral sciences (2nd ed., Vol. 22, pp. 13–19). Elsevier.
Lim, J., Reiser, R. A., & Olina, Z. (2009). The effects of part-task and whole-task instructional approaches on acquisition and transfer of a complex cognitive skill. Educational Technology Research and Development, 57(1), 61–77. https://doi.org/10.1007/s11423-007-9085-y
Limbu, B., Fominykh, M., Klemke, R., & Specht, M. (2019). A conceptual framework for supporting expertise development with augmented reality and wearable sensors. In I. Buchem, R. Klamma, & F. Wild (Eds.), Perspectives on wearable enhanced learning (WELL). Springer. https://doi.org/10.1007/978-3-319-64301-4_10
Linden, M. A., Whyatt, C., Craig, C., & Kerr, C. (2013). Efficacy of a powered wheelchair simulator for school aged children: A randomized controlled trial. Rehabilitation Psychology, 58(4), 405–411. https://doi.org/10.1037/a0034088
Littlejohn, A., & Buckingham Shum, S. (Eds.). (2003). Reusing online resources: A sustainable approach to elearning [Special issue]. Journal of Interactive Media in Education, 2003(1). https://doi.org/10.5334/2003-1-reuse-01
Long, Y., Aman, Z., & Aleven, V. (2015). Motivational design in an intelligent tutoring system that helps students make good task selection decisions. In C. Conati, N. Heffernan, A. Mitrovic, & M. F. Verdejo (Eds.), Artificial intelligence in education (pp. 226–236). Springer.
Louis, M. R., & Sutton, R. I. (1991). Switching cognitive gears: From habits of mind to active thinking. Human Relations, 44(1), 55–76. https://doi.org/10.1177/001872679104400104
Lowrey, W., & Kim, K. S. (2009). Online news media and advanced learning: A test of cognitive flexibility theory. Journal of Broadcasting & Electronic Media, 53(4), 547–566. https://doi.org/10.1080/08838150903323388
Loyens, S., Kirschner, P. A., & Paas, F. (2011). Problem-based learning. In S. Graham, A. Bus, S. Major, & L. Swanson (Eds.), APA educational psychology handbook: Application to learning and teaching (Vol. 3, pp. 403–425). American Psychological Association.
Lukosch, H., Bussel, R., & Meijer, S. (2013). Hybrid instructional design for serious gaming. Journal of Communication and Computer, 10, 1–8. https://doi.org/10.17265/1548-7709/2013.01001
Maddens, L., Depaepe, F., Raes, A., & Elen, J. (2020). The instructional design of a 4C/ID-inspired learning environment for upper secondary school students' research skills. International Journal of Designs for Learning, 11(3), 126–147. https://doi.org/10.14434/ijdl.v11i3.29012
Mager, R. F. (1997). Preparing instructional objectives: A critical tool in the development of effective instruction (3rd ed.). The Center for Effective Performance.
Maggio, L. A., Ten Cate, O., Irby, D. M., & O'Brien, B. C. (2015). Designing evidence-based medicine training to optimize the transfer of skills from the classroom to clinical practice: Applying the four component instructional design model. Academic Medicine, 90(11), 1457–1461. https://doi.org/10.1097/ACM.0000000000000769
Maran, N. J., & Glavin, R. J. (2003). Low- to high-fidelity simulation – a continuum of medical education? Medical Education, 37(S1), 22–28. https://doi.org/10.1046/j.1365-2923.37.s1.9.x
Marcellis, M., Barendsen, E., & van Merriënboer, J. J. G. (2018). Designing a blended course in Android app development using 4C/ID. In Proceedings of the 18th Koli International Conference on Computing Education Research (article nr. 19). New York: ACM. https://doi.org/10.1145/3279720.3279739
Marei, H. F., Donkers, J., Al-Eraky, M. M., & van Merriënboer, J. J. G. (2017). The effectiveness of sequencing virtual patients with lectures in a deductive or inductive learning approach. Medical Teacher, 39, 1268–1274. https://doi.org/10.1080/0142159X.2017.1372563
Mavilidi, M. F., & Zhong, L. (2019). Exploring the development and research focus of cognitive load theory, as described by its founders: Interviewing John Sweller, Fred Paas, and Jeroen van Merriënboer. Educational Psychology Review, 31, 499–508. https://doi.org/10.1007/s10648-019-09463-7
Mayer, R. E. (Ed.). (2014). The Cambridge handbook of multimedia learning (2nd rev. ed.). Cambridge University Press.
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more material results in less understanding. Journal of Educational Psychology, 93(1), 187–198. https://doi.org/10.1037/0022-0663.93.1.187
McDaniel, M. A., & Schlager, M. S. (1990). Discovery learning and transfer of problem-solving skill. Cognition and Instruction, 7(2), 129–159. https://doi.org/10.1207/s1532690xci0702_3
McGaghie, W. C., Issenberg, S. B., Petrusa, E. R., & Scalese, R. J. (2010). A critical review of simulation-based medical education research: 2003–2009. Medical Education, 44(1), 50–63. https://doi.org/10.1111/j.1365-2923.2009.03547.x
McGraw, R., Newbigging, J., Blackmore, E., Stacey, M., Mercer, C., Lam, W., Braund, H., & Gilic, F. (2023). Using cognitive load theory to develop an emergency airway management curriculum: The Queen's University Mastery Airway Course (QUMAC). Canadian Journal of Emergency Medicine, 25, 378–381. https://doi.org/10.1007/s43678-023-00495-1
Meguerdichian, M. J., Bajaj, K., & Walker, K. (2021). Fundamental underpinnings of simulation education: Describing a four-component instructional design approach to healthcare simulation fellowships. Advances in Simulation, 6, 18. https://doi.org/10.1186/s41077-021-00171-3
Melo, M. (2018). The 4C/ID-model in physics education: Instructional design of a digital learning environment to teach electrical circuits. International Journal of Instruction, 11(1), 103–122. https://doi.org/10.12973/iji.2018.1118a
Melo, M., & Miranda, G. L. (2015). Learning electrical circuits: The effects of the 4C-ID instructional approach in the acquisition and transfer of knowledge. Journal of Information Technology Education: Research, 14, 313–337. https://doi.org/10.28945/2281
Merrill, M. D. (2002). A pebble-in-the-pond model for instructional design. Performance Improvement, 41(7), 41–46. https://doi.org/10.1002/pfi.4140410709
Merrill, M. D. (2020). First principles of instruction (Revised). Association for Educational Communications and Technology.
Merrill, P. (1987). Job and task analysis. In R. M. Gagné (Ed.), Instructional technology: Foundations (pp. 141–173). Lawrence Erlbaum Associates.
Mettes, C. T. C. W., Pilot, A., & Roossink, H. J. (1981). Linking factual and procedural knowledge in solving science problems: A case study in a thermodynamics course. Instructional Science, 10(4), 333–361. https://doi.org/10.1007/BF00162732
Meutstege, K., van Geel, M., & Visscher, A. (2023). Evidence-based design of a teacher professional development program for differentiated instruction: A whole-task approach. Education Sciences, 13(985), 1–24. https://doi.org/10.3390/educsci13100985
Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-199009000-00045
Mills, R., Tomas, L., & Lewthwaite, B. (2016). Learning in earth and space science: A review of conceptual change instructional approaches. International Journal of Science Education, 38(5), 767–790. https://doi.org/10.1080/09500693.2016.1154227
Miranda, G., Rafael, M., Melo, M., Pardal, C., De Almeida, J., & Pontes, T. (2020). 4C-ID model and cognitive approaches to instructional design and technology: Emerging research and opportunities. IGI Global. https://doi.org/10.4018/978-1-7998-4096-1
Moulton, C., Regehr, G., Lingard, L., Merritt, C., & MacRae, H. (2010). Slowing down to stay out of trouble in the operating room: Remaining attentive in automaticity. Academic Medicine, 85(10), 1571–1577. https://doi.org/10.1097/ACM.0b013e3181f073dd
Moust, J. H. C., van Berkel, H. J. M., & Schmidt, H. G. (2005). Signs of erosion: Reflections on three decades of problem-based learning at Maastricht University. Higher Education, 50(4), 665–683. https://doi.org/10.1007/s10734-004-6371-z
Mulder, Y. G., Lazonder, A. W., & de Jong, T. (2011). Comparing two types of model progression in an inquiry learning environment with modelling facilities. Learning and Instruction, 21(5), 614–624. https://doi.org/10.1016/j.learninstruc.2011.01.003
Mulders, M. (2022). Vocational training in virtual reality: A case study using the 4C/ID model. Multimodal Technologies and Interaction, 6, 49. https://doi.org/10.3390/mti6070049
Musharyanti, L., Haryanti, F., & Claramita, M. (2021). Improving nursing students' medication safety knowledge and skills on using the 4C/ID learning model. Journal of Multidisciplinary Healthcare, 14, 287–295. https://doi.org/10.2147/JMDH.S293917
Nadolski, R. J., Kirschner, P. A., & van Merriënboer, J. J. G. (2006). Process support in learning tasks for acquiring complex cognitive skills in the domain of law. Learning and Instruction, 16(3), 266–278. https://doi.org/10.1016/j.learninstruc.2006.03.004
Nadolski, R. J., Kirschner, P. A., van Merriënboer, J. J. G., & Hummel, H. G. K. (2001). A model for optimizing step size of learning tasks in competency-based multimedia practicals. Educational Technology Research and Development, 49(3), 87–101. https://doi.org/10.1007/BF02504917
Nadolski, R. J., Kirschner, P. A., van Merriënboer, J. J. G., & Wöretshofer, J. (2005). Development of an instrument for measuring the complexity of learning tasks. Educational Research and Evaluation, 11(1), 1–27. https://doi.org/10.1080/13803610500110125
Naylor, J. C., & Briggs, G. E. (1963). Effects of task complexity and task organization on the relative efficiency of part and whole training methods. Journal of Experimental Psychology, 65(3), 217–224. https://doi.org/10.1037/h0041060
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), Psychology of learning and motivation (Vol. 26, pp. 125–173). Elsevier. https://doi.org/10.1016/S0079-7421(08)60053-5
Newell, A., & Simon, H. A. (1972). Human problem solving. Prentice-Hall.
Nixon, E. K., & Lee, D. (2001). Rapid prototyping in the instructional design process. Performance Improvement Quarterly, 14(3), 95–116. https://doi.org/10.1111/j.1937-8327.2001.tb00220.x
Nkambou, R., Bourdeau, J., & Mizoguchi, R. (Eds.). (2010). Advances in intelligent tutoring systems. Springer.
Norman, G. R., & Schmidt, H. G. (2000). Effectiveness of problem-based learning curricula: Theory, practice and paper darts. Medical Education, 34(9), 721–728. https://doi.org/10.1046/j.1365-2923.2000.00749.x
Norman, G. R., van der Vleuten, C. P. M., & van der Graaf, E. (1991). Pitfalls in the pursuit of objectivity: Issues of validity, efficiency and acceptability. Medical Education, 25(2), 119–126. https://doi.org/10.1111/j.1365-2923.1991.tb00037.x
Noroozi, O., Kirschner, P. A., Biemans, H. J. A., & Mulder, M. (2017). Promoting argumentation competence: Extending from first- to second-order scaffolding through adaptive fading. Educational Psychology Review, 30(1), 153–176. https://doi.org/10.1007/s10648-017-9400-z
Nückles, M., Hübner, S., Dümer, S., & Renkl, A. (2010). Expertise reversal effects in writing-to-learn. Instructional Science, 38(3), 237–258. https://doi.org/10.1007/s11251-009-9106-9
O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education, 25, 85–95. https://doi.org/10.1016/j.iheduc.2015.02.002
Paas, F., Tuovinen, J. E., van Merriënboer, J. J. G., & Aubteen Darabi, A. (2005). A motivational perspective on the relation between mental effort and performance: Optimizing learner involvement in instruction. Educational Technology Research and Development, 53(3), 25–34. https://doi.org/10.1007/BF02504795
Paas, F., van Gog, T., & Sweller, J. (2010). Cognitive load theory: New conceptualizations, specifications, and integrated research perspectives. Educational Psychology Review, 22(2), 115–121. https://doi.org/10.1007/s10648-010-9133-8
Paas, F., & van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86(1), 122–133. https://doi.org/10.1037/0022-0663.86.1.122
Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory: Methods to manage working memory load in the learning of complex tasks. Current Directions in Psychological Science, 29, 394–398. https://doi.org/10.1177/0963721420922183
Paik, E. S., & Schraw, G. (2013). Learning with animation and illusions of understanding. Journal of Educational Psychology, 105(2), 278–289. https://doi.org/10.1037/a0030281
Paivio, A. (1971). Imagery and verbal processes. Holt, Rinehart, and Winston.
Paivio, A. (1986). Mental representations. Oxford University Press.
Palmeri, T. J. (1999). Theories of automaticity and the power law of practice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(2), 543–551. https://doi.org/10.1037/0278-7393.25.2.543
Peng, J., Wang, M. H., Sampson, D., & van Merriënboer, J. J. G. (2019). Using a visualisation-based and progressive learning environment as a cognitive tool for learning computer programming. Australasian Journal of Educational Technology, 35(2), 52–68. https://doi.org/10.14742/ajet.4676
Petrusa, E. R. (2002). Clinical performance assessment. In G. R. Norman, C. P. M.
van der Vleuten, & D. I. Newble (Eds.), International handbook for research in
medical education (pp. 673–709). Kluwer Academic Publishers.
Pontes, T., Miranda, G., & Celani, G. (2018). Algorithm-aided design with Python:
Analysis of technological competence of subjects. Education Sciences, 8(4), 200.
https://doi.org/10.3390/educsci8040200
Popova, A., Kirschner, P. A., & Joiner, R. (2014). Enhancing learning from lectures
with epistemic primer podcasts activity—A pilot study. International Journal of
Learning Technology, 9(4), 323. https://doi.org/10.1504/IJLT.2014.067735
Postma, T. C., & White, J. G. (2015). Developing clinical reasoning in the class-
room—Analysis of the 4C/ID-model. European Journal of Dental Education,
19(2), 74–80. https://doi.org/10.1111/eje.12105
Postma, T. C., & White, J. G. (2016). Developing integrated clinical reasoning
competencies in dental students using scaffolded case-based learning—Empirical
evidence. European Journal of Dental Education, 20(3), 180–188. https://doi.
org/10.1111/eje.12159
Prins, F. J., Sluijsmans, D. M. A., Kirschner, P. A., & Strijbos, J. (2005). Formative peer
assessment in a CSCL environment: A case study. Assessment & Evaluation in Higher
Education, 30(4), 417–444. https://doi.org/10.1080/02602930500099219
Raaijmakers, S. F., Baars, M., Schaap, L., Paas, F., van Merriënboer, J. J. G., &
van Gog, T. (2017). Training self-regulated learning skills with video modeling
examples: Do task-selection skills transfer? Instructional Science, 46(2), 273–290.
https://doi.org/10.1007/s11251-017-9434-0
Reber, A. S. (1996). Implicit learning and tacit knowledge: An essay on the cognitive
unconscious. Oxford University Press.
Reeves, T. C. (2006). How do you know they are learning? The importance of
alignment in higher education. International Journal of Learning Technology, 2,
294–309. https://doi.org/10.1504/IJLT.2006.011336
Reigeluth, C. M. (1987). Lesson blueprints based on the elaboration theory of
instruction. In C. M. Reigeluth (Ed.), Instructional theories in action: Lessons illus-
trating selected theories and models (pp. 245–288). Lawrence Erlbaum Associates.
Reigeluth, C. M. (1992). Elaborating the elaboration theory. Educational Technology
Research and Development, 40(3), 80–86. https://doi.org/10.1007/BF02296844
Reigeluth, C. M. (2007). Order, first step to mastery: An introduction to sequenc-
ing in instructional design. In F. E. Ritter, J. Nerb, E. Lehtinen, & T. O’Shea
(Eds.), In order to learn: How the sequence of topics influences learning (pp. 19–40).
Oxford University Press.
Reigeluth, C. M., Beatty, B. J., & Myers, R. D. (Eds.). (2017). Instructional design
theories and models: The learner-centered paradigm of education (Vol. 4). Routledge.
Reigeluth, C. M., Watson, W. R., & Watson, S. L. (2012). Personalized integrated
educational systems: Technology for the information-age paradigm of education
in higher education. In S. P. Ferris (Ed.), Teaching, learning and the Net genera-
tion: Concepts and tools for reaching digital learners (pp. 41–60). IGI Global.
Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structur-
ing and problematizing student work. Journal of the Learning Sciences, 13(3),
273–304. https://doi.org/10.1207/s15327809jls1303_2
Renkl, A. (2002). Worked-out examples: Instructional explanations support learn-
ing by self-explanations. Learning and Instruction, 12(5), 529–556. https://doi.
org/10.1016/S0959-4752(01)00030-5
Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example
study to problem solving in cognitive skill acquisition: A cognitive load per-
spective. Educational Psychologist, 38(1), 15–22. https://doi.org/10.1207/
S15326985EP3801_3
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of plan-
ning. Policy Sciences, 4(2), 155–169. https://doi.org/10.1007/BF01405730
Rohrer, D., & Taylor, K. (2006). The effects of overlearning and distributed practise
on the retention of mathematics knowledge. Applied Cognitive Psychology, 20(9),
1209–1224. https://doi.org/10.1002/acp.1266
Rosenberg-Kima, R. B., Merrill, M. D., Baylor, A. L., & Johnson, T. E. (2022).
Explicit instruction in the context of whole task: The effectiveness of the task-
centered instructional strategy in computer science education. Educational Tech-
nology Research and Development, 70, 1627–1655. https://doi.org/10.1007/
s11423-022-10143-7
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of
intrinsic motivation, social development, and well-being. American Psychologist,
55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
Ryder, J. M., & Redding, R. E. (1993). Integrating cognitive task analysis into
instructional systems development. Educational Technology Research and Develop-
ment, 41(2), 75–96. https://doi.org/10.1007/BF02297312
Salden, R. J. C. M., Paas, F., van der Pal, J., & van Merriënboer, J. J. G. (2006a).
Dynamic task selection in flight management system training. The International
Journal of Aviation Psychology, 16(2), 157–174. https://doi.org/10.1207/
s15327108ijap1602_3
Salden, R. J. C. M., Paas, F., & van Merriënboer, J. J. G. (2006b). A comparison of
approaches to learning task selection in the training of complex cognitive skills.
Computers in Human Behavior, 22(3), 321–333. https://doi.org/10.1016/
j.chb.2004.06.003
Salden, R. J. C. M., Paas, F., & van Merriënboer, J. J. G. (2006c). Personalised
adaptive task selection in air traffic control: Effects on training efficiency and
transfer. Learning and Instruction, 16(4), 350–362. https://doi.org/10.1016/
j.learninstruc.2006.07.007
Salomon, G. (1998). Novel constructivist learning environments and novel tech-
nologies: Some issues to be concerned with. Learning and Instruction, 8(Suppl.
1), 3–12. https://doi.org/10.1016/S0959-4752(98)00007-3
Sarfo, F. K., & Elen, J. (2007). Developing technical expertise in secondary techni-
cal schools: The effect of 4C/ID learning environments. Learning Environments
Research, 10(3), 207–221. https://doi.org/10.1007/s10984-007-9031-2
Sarfo, F. K., & Elen, J. (2008). The moderating effect of instructional conceptions
on the effect of powerful learning environments. Instructional Science, 36(2),
137–153. https://doi.org/10.1007/s11251-007-9023-8
Schank, R. C. (2010). The pragmatics of learning by doing. Pragmatics and Society,
1(1), 157–172. https://doi.org/10.1075/ps.1.1.10sch
Schellekens, A., Paas, F., Verbraeck, A., & van Merriënboer, J. J. G. (2010a). Design-
ing a flexible approach for higher professional education by means of simulation
modelling. Journal of the Operational Research Society, 61(2), 202–210. https://
doi.org/10.1057/jors.2008.133
Schellekens, A., Paas, F., Verbraeck, A., & van Merriënboer, J. J. G. (2010b). Flex-
ible programmes in higher professional education: Expert validation of a flexible
educational model. Innovations in Education and Teaching International, 47(3),
283–294. https://doi.org/10.1080/14703297.2010.498179
Schneider, J., Börner, D., van Rosmalen, P., & Specht, M. (2016). Enhancing public speak-
ing skills: An evaluation of the presentation trainer in the wild. In K. Verbert, M. Shar-
ples, & T. Klobučar (Eds.), Adaptive and adaptable learning (pp. 263–276). Springer.
Schneider, W. (1985). Training high-performance skills: Fallacies and guidelines.
Human Factors: The Journal of the Human Factors and Ergonomics Society, 27(3),
285–300. https://doi.org/10.1177/001872088502700305
Schneider, W., & Detweiler, M. (1988). The role of practice in dual-task perfor-
mance: Toward workload modeling in a connectionist/control architecture.
Human Factors: The Journal of the Human Factors and Ergonomics Society, 30(5),
539–566. https://doi.org/10.1177/001872088803000502
Schubert, S., Ortwein, H., Dumitsch, A., Schwantes, U., Wilhelm, O., & Kiessling,
C. (2008). A situational judgement test of professional behaviour: Development
and validation. Medical Teacher, 30(5), 528–533. https://doi.org/10.1080/
01421590801952994
Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2004). Different written assessment
methods: What can be said about their strengths and weaknesses? Medical Educa-
tion, 38(9), 974–979. https://doi.org/10.1111/j.1365-2929.2004.01916.x
Shumway, J. M., & Harden, R. M. (2003). AMEE Guide No. 25: The assessment
of learning outcomes for the competent and reflective physician. Medical Teacher,
25(6), 569–584. https://doi.org/10.1080/0142159032000151907
Si, J., & Kim, D. (2011). How do instructional sequencing methods affect cog-
nitive load, learning transfer, and learning time? Educational Research, 2(8),
1362–1372.
Sloep, P., & Berlanga, A. (2011). Learning networks, networked learning [Redes
de aprendizaje, aprendizaje en red]. Comunicar, 19(37), 55–64. https://doi.
org/10.3916/C37-2011-02-05
Sluijsmans, D. M. A., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002a). Peer
assessment training in teacher education: Effects on performance and perceptions.
Assessment & Evaluation in Higher Education, 27(5), 443–454. https://doi.
org/10.1080/0260293022000009311
Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Bastiaens, T.
J. (2002b). The training of peer assessment skills to promote the development of
reflection skills in teacher education. Studies in Educational Evaluation, 29(1),
23–42. https://doi.org/10.1016/S0191-491X(03)90003-4
Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Martens, R.
L. (2004). Training teachers in peer-assessment skills: Effects on performance and
perceptions. Innovations in Education and Teaching International, 41(1), 59–78.
https://doi.org/10.1080/1470329032000172720
Sluijsmans, D. M. A., & Moerkerke, G. (1999). Creating a learning environment by
using self-, peer- and co-assessment. Learning Environments Research, 1, 293–319.
https://doi.org/10.1023/A:1009932704458
Sluijsmans, D. M. A., Straetmans, G. J. J. M., & van Merriënboer, J. J. G. (2008).
Integrating authentic assessment with competence-based learning in vocational
education: The protocol portfolio scoring. Journal of Vocational Education &
Training, 60(2), 159–172. https://doi.org/10.1080/13636820802042438
Soderstrom, N. C., & Bjork, R. A. (2015). Learning versus performance: An inte-
grative review. Perspectives on Psychological Science, 10(2), 176–199. https://doi.
org/10.1177/1745691615569000
Spanjers, I. A. E., Könings, K. D., Leppink, J., Verstegen, D. M. L., de Jong, N., Cza-
banowska, K., & van Merriënboer, J. J. G. (2015). The promised land of blended
learning: Quizzes as a moderator. Educational Research Review, 15, 59–74. https://
doi.org/10.1016/j.edurev.2015.05.001
Spanjers, I. A. E., van Gog, T., & van Merriënboer, J. J. G. (2010). A theoretical
analysis of how segmentation of dynamic visualizations optimizes students’ learn-
ing. Educational Psychology Review, 22(4), 411–423. https://doi.org/10.1007/
s10648-010-9135-6
Spector, J. M., & Anderson, T. M. (Eds.). (2000). Holistic and integrated perspec-
tives on learning, technology, and instruction: Understanding complexity. Lawrence
Erlbaum Associates.
Stefaniak, J., & Xu, M. (2020). An examination of the systemic reach of instruc-
tional design models: A systematic review. TechTrends, 64(5), 710–719. https://
doi.org/10.1007/s11528-020-00539-8
Steinberg, M. S., Brown, D. E., & Clement, J. (1990). Genius is not immune to
persistent misconceptions: Conceptual difficulties impeding Isaac Newton and
contemporary physics students. International Journal of Science Education, 12(3),
265–273. https://doi.org/10.1080/0950069900120305
Stoof, A., Martens, R. L., & van Merriënboer, J. J. G. (2006). Effects of web-based
support for the construction of competence maps. Instructional Science, 34(3),
189–211. https://doi.org/10.1007/s11251-006-0003-1
Stoof, A., Martens, R. L., & van Merriënboer, J. J. G. (2007). Web-based sup-
port for constructing competence maps: Design and formative evaluation. Edu-
cational Technology Research and Development, 55(4), 347–368. https://doi.
org/10.1007/s11423-006-9014-5
Susilo, A. P., van Merriënboer, J., van Dalen, J., Claramita, M., & Scherpbier, A.
(2013). From lecture to learning tasks: Use of the 4C/ID model in a com-
munication skills course in a continuing professional education context. The
Journal of Continuing Education in Nursing, 44(6), 278–284. https://doi.
org/10.3928/00220124-20130501-78
Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer.
Sweller, J., Kirschner, P. A., & Clark, R. E. (2007). Why minimal guidance dur-
ing instruction does not work: A reply to commentaries. Educational Psychologist,
42(2), 115–121. https://doi.org/10.1080/00461520701263426
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (2019). Cognitive architecture and
instructional design: 20 years later. Educational Psychology Review, 31(2), 261–
292. https://doi.org/10.1007/s10648-019-09465-5
Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive archi-
tecture and instructional design. Educational Psychology Review, 10(3), 251–296.
https://doi.org/10.1023/A:1022193728205
Taatgen, N. A., & Lee, F. J. (2003). Production compilation: A simple mechanism to
model complex skill acquisition. Human Factors: The Journal of the Human Factors and
Ergonomics Society, 45(1), 61–76. https://doi.org/10.1518/hfes.45.1.61.27224
Taminiau, E. M. C., Kester, L., Corbalan, G., Alessi, S. M., Moxnes, E., Gijselaers,
W. H., Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Why advice on task
selection may hamper learning in on-demand education. Computers in Human
Behavior, 29(1), 145–154. https://doi.org/10.1016/j.chb.2012.07.028
Taminiau, E. M. C., Kester, L., Corbalan, G., Spector, J. M., Kirschner, P. A., & van
Merriënboer, J. J. G. (2015). Designing on-demand education for simultaneous
development of domain-specific and self-directed learning skills. Journal of Com-
puter Assisted Learning, 31(5), 405–421. https://doi.org/10.1111/jcal.12076
Tawfik, A. A., Graesser, A., Gatewood, J., & Gishbauger, J. (2020). Role of ques-
tions in inquiry-based instruction: Towards a design taxonomy for question-asking
and implications for design. Educational Technology Research and Development,
68, 653–678. https://doi.org/10.1007/s11423-020-09738-9
Ten Cate, O. (2013). Nuts and bolts of entrustable professional activities. Jour-
nal of Graduate Medical Education, 5(1), 157–158. https://doi.org/10.4300/
JGME-D-12-00380.1
Tennyson, R. D., & Cocchiarella, M. J. (1986). An empirically based instructional
design theory for teaching concepts. Review of Educational Research, 56(1),
40–71. https://doi.org/10.3102/00346543056001040
Thijssen, J. G. L., & Walter, E. M. (2006). Identifying obsolescence and related factors
among elderly employees (Conference paper). www.ufhrd.co.uk/wordpress
Thornton, G. C., & Kedharnath, U. (2013). Work sample tests. In K. F. Geisinger,
B. A. Bracken, J. F. Carlson, J.-I. C. Hansen, N. R. Kuncel, S. P. Reise, & M. C.
Rodriguez (Eds.), APA handbook of testing and assessment in psychology: Test the-
ory and testing and assessment in industrial and organizational psychology (Vol. 1,
pp. 533–550). American Psychological Association. https://doi.org/10.1037/
14047-029
Tjiam, I. M., Schout, B. M. A., Hendrikx, A. J. M., Scherpbier, A. J. J. M., Witjes, J.
A., & van Merriënboer, J. J. G. (2012). Designing simulator-based training: An ap-
proach integrating cognitive task analysis and four-component instructional design.
Medical Teacher, 34(10), e698–e707. https://doi.org/10.3109/0142159X.2012.
687480
Topping, K. (1998). Peer assessment between students in colleges and universi-
ties. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/
00346543068003249
Torre, D. M., Schuwirth, L. W. T., & van der Vleuten, C. P. M. (2020). Theoretical
considerations on programmatic assessment. Medical Teacher, 42(2), 213–220.
https://doi.org/10.1080/0142159X.2019.1672863
Tracey, M. W., & Boling, E. (2013). Preparing instructional designers and educa-
tional technologists: Traditional and emerging perspectives. In M. Spector, D.
Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational com-
munications and technology (4th ed., pp. 653–660). Springer.
Tricot, A., & Sweller, J. (2014). Domain-specific knowledge and why teaching
generic skills does not work. Educational Psychology Review, 26(2), 265–283.
https://doi.org/10.1007/s10648-013-9243-1
Turns, J., Atman, C. J., & Adams, R. (2000). Concept maps for engineering education:
A cognitively motivated tool supporting varied assessment functions. IEEE Trans-
actions on Education, 43(2), 164–173. https://doi.org/10.1109/13.848069
Van Boxtel, C., van der Linden, J., & Kanselaar, G. (2000). Collaborative learning
tasks and the elaboration of conceptual knowledge. Learning and Instruction,
10(4), 311–330. https://doi.org/10.1016/S0959-4752(00)00002-5
Van Bussel, R., Lukosch, H., & Meijer, S. A. (2014). Effects of a game-facilitated
curriculum on technical knowledge and skill development. In S. A. Meijer & R.
Smeds (Eds.), Frontiers in gaming simulation (pp. 93–101). Springer.
Van den Boom, G., Paas, F., & van Merriënboer, J. J. G. (2007). Effects of elic-
ited reflections combined with tutor or peer feedback on self-regulated learning
and learning outcomes. Learning and Instruction, 17(5), 532–548. https://doi.
org/10.1016/j.learninstruc.2007.09.003
Van der Klink, M., Gielen, E., & Nauta, C. (2001). Supervisory support as a major
condition to enhance transfer. International Journal of Training and Develop-
ment, 5(1), 52–63. https://doi.org/10.1111/1468-2419.00121
Van der Meij, H. (2003). Minimalism revisited. Document Design, 4(3), 212–233.
https://doi.org/10.1075/dd.4.3.03mei
Van der Meij, H., & Lazonder, A. W. (1993). Assessment of the minimalist approach
to computer user documentation. Interacting with Computers, 5(4), 355–370.
https://doi.org/10.1016/0953-5438(93)90001-A
Van der Vleuten, C. P. M., Verwijnen, G. M., & Wijnen, W. H. F. W. (1996). Fifteen years
of experience with progress testing in a problem-based learning curriculum. Medi-
cal Teacher, 18(2), 103–109. https://doi.org/10.3109/01421599609034142
Van Geel, M., Keuning, T., Frèrejean, J., Dolmans, D., van Merriënboer, J., & Viss-
cher, A. J. (2019). Capturing the complexity of differentiated instruction. School
Effectiveness and School Improvement, 30(1), 51–67. https://doi.org/10.1080/
09243453.2018.1539013
Van Gog, T., Ericsson, K. A., Rikers, R. M. J. P., & Paas, F. (2005). Instructional
design for advanced learners: Establishing connections between the theoreti-
cal frameworks of cognitive load and deliberate practice. Educational Technology
Research and Development, 53(3), 73–81. https://doi.org/10.1007/BF02504799
Van Gog, T., Jarodzka, H., Scheiter, K., Gerjets, P., & Paas, F. (2009). Attention
guidance during example study via the model’s eye movements. Computers in
Human Behavior, 25(3), 785–791. https://doi.org/10.1016/j.chb.2009.02.007
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2004). Process-oriented worked
examples: Improving transfer performance through enhanced understanding.
Instructional Science, 32(1/2), 83–98. https://doi.org/10.1023/B:TRUC.
0000021810.70784.b0
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2005). Uncovering expertise-
related differences in troubleshooting performance: Combining eye movement
and concurrent verbal protocol data. Applied Cognitive Psychology, 19(2),
205–221. https://doi.org/10.1002/acp.1112
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2006). Effects of process-
oriented worked examples on troubleshooting transfer performance. Learn-
ing and Instruction, 16(2), 154–164. https://doi.org/10.1016/j.learninstruc.
2006.02.003
Van Gog, T., Paas, F., & van Merriënboer, J. J. G. (2008). Effects of studying
sequences of process-oriented and product-oriented worked examples on trouble-
shooting transfer efficiency. Learning and Instruction, 18(3), 211–222. https://
doi.org/10.1016/j.learninstruc.2007.03.003
Van Gog, T., Paas, F., van Merriënboer, J. J. G., & Witte, P. (2005). Uncovering the
problem-solving process: Cued retrospective reporting versus concurrent and ret-
rospective reporting. Journal of Experimental Psychology: Applied, 11(4), 237–244.
https://doi.org/10.1037/1076-898X.11.4.237
Van Loon, M. H., de Bruin, A. B. H., van Gog, T., & van Merriënboer, J. J. G.
(2013). The effect of delayed-JOLs and sentence generation on children’s moni-
toring accuracy and regulation of idiom study. Metacognition and Learning, 8(2),
173–191. https://doi.org/10.1007/s11409-013-9100-0
Van Luijk, S. J., van der Vleuten, C. P. M., & Schelven, R. M. (1990). Observer and
student opinions about performance-based tests. In W. Bender, R. J. Hiemstra, A.
J. Scherpbier, & R. P. Zwierstra (Eds.), Teaching and assessing clinical competence
(pp. 199–203). Boekwerk Publications.
Van Meeuwen, L. W., Brand-Gruwel, S., Kirschner, P. A., de Bock, J., & van Mer-
riënboer, J. J. G. (2018). Fostering self-regulation in training complex cognitive
tasks. Educational Technology Research and Development, 66(1), 53–73. https://
doi.org/10.1007/s11423-017-9539-9
Van Merriënboer, J. J. G. (1990). Strategies for programming instruction in high
school: Program completion vs. program generation. Journal of Educational Com-
puting Research, 6(3), 265–285. https://doi.org/10.2190/4NK5-17L7-TWQV-
1EHL
Van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component
instructional design model for technical training. Educational Technology
Publications.
Van Merriënboer, J. J. G. (2000). The end of software training? Journal of
Computer Assisted Learning, 16(4), 366–375. https://doi.org/10.1046/
j.1365-2729.2000.00149.x
Van Merriënboer, J. J. G. (2007). Alternate models of instructional design: Holistic
design approaches and complex learning. In R. A. Reiser & J. V. Dempsey (Eds.),
Trends and issues in instructional design and technology (2nd ed., pp. 72–81). Pear-
son/Merrill Prentice Hall.
Van Merriënboer, J. J. G. (2013). Perspectives on problem solving and instruction.
Computers & Education, 64, 153–160. https://doi.org/10.1016/j.compedu.
2012.11.025
Van Merriënboer, J. J. G. (2016). How people learn. In N. Rushby & D. W. Surry
(Eds.), The Wiley handbook of learning technology (pp. 15–34). Wiley Blackwell.
Van Merriënboer, J. J. G. (2017). Instructional design. In J. A. Dent, R. M. Harden, &
D. Hunt (Eds.), A practical guide for medical teachers (5th ed., pp. 162–169).
Elsevier.
Van Merriënboer, J. J. G., & Boot, E. (2005). A holistic pedagogical view of learn-
ing objects: Future directions for reuse. In J. M. Spector, C. Ohrazda, A. van
Schaack, & D. A. Wiley (Eds.), Innovations in instructional technology: Essays in
honor of M. David Merrill (pp. 43–64). Lawrence Erlbaum Associates.
Van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints
for complex learning: The 4C/ID-model. Educational Technology Research and
Development, 50(2), 39–61. https://doi.org/10.1007/BF02504993
Van Merriënboer, J. J. G., & de Bruin, A. B. H. (2014). Research paradigms and
perspectives on learning. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop
(Eds.), Handbook of research on educational communications and technology (4th
ed., pp. 21–30). Springer.
Van Merriënboer, J. J. G., & de Croock, M. B. M. (1992). Strategies for com-
puter-based programming instruction: Program completion vs. program genera-
tion. Journal of Educational Computing Research, 8(3), 365–394. https://doi.
org/10.2190/MJDX-9PP4-KFMT-09PM
Van Merriënboer, J. J. G., & de Croock, M. B. M. (2002). Performance-based ISD:
10 steps to complex learning. Performance Improvement, 41(7), 35–40. https://
doi.org/10.1002/pfi.4140410708
Van Merriënboer, J. J. G., de Croock, M. B. M., & Jelsma, O. (1997). The transfer
paradox: Effects of contextual interference on retention and transfer performance
of a complex cognitive skill. Perceptual and Motor Skills, 84(3), 784–786. https://
doi.org/10.2466/pms.1997.84.3.784
Van Merriënboer, J. J. G., & Dolmans, D. H. J. M. (2015). Research on instructional
design in the health sciences: From taxonomies of learning to whole-task models.
In J. Cleland & S. J. Durning (Eds.), Researching medical education (pp. 193–
206). Wiley Blackwell.
Van Merriënboer, J. J. G., Gros, B., & Niegemann, H. (2018). Instructional design
in Europe: Trends and issues. In R. A. Reiser & J. V. Dempsey (Eds.), Trends
and issues in instructional design and technology (4th ed., pp. 192–198). Pearson
Education.
Van Merriënboer, J. J. G., Jelsma, O., & Paas, F. G. W. C. (1992). Training for
reflective expertise: A four-component instructional design model for complex
cognitive skills. Educational Technology Research and Development, 40(2), 23–43.
https://doi.org/10.1007/BF02297047
Van Merriënboer, J. J. G., & Kester, L. (2008). Whole-task models in education. In
J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.),
Handbook of research on educational communications and technology (3rd ed.,
pp. 441–456). Lawrence Erlbaum Associates/Routledge.
Van Merriënboer, J. J. G., & Kester, L. (2014). The four-component instructional
design model: Multimedia principles in environments for complex learning. In R.
E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd rev. ed.,
pp. 104–148). Cambridge University Press.
Van Merriënboer, J. J. G., Kester, L., & Paas, F. (2006). Teaching complex rather than
simple tasks: Balancing intrinsic and germane load to enhance transfer of learn-
ing. Applied Cognitive Psychology, 20(3), 343–352. https://doi.org/10.1002/
acp.1250
Van Merriënboer, J. J. G., & Kirschner, P. A. (2018). 4C/ID in the context of
instructional design and the learning sciences. In F. Fischer, C. E. Hmelo-Silver,
S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning
sciences (pp. 169–179). Routledge. https://doi.org/10.4324/9781315617572-17
Van Merriënboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off
a learner’s mind: Instructional design for complex learning. Educational Psycholo-
gist, 38(1), 5–13. https://doi.org/10.1207/S15326985EP3801_2
Van Merriënboer, J. J. G., Kirschner, P. A., Paas, F., Sloep, P. B., & Caniëls, M. C. J.
(2009). Towards an integrated approach for research on lifelong learning. Educa-
tional Technology, 49(3), 3–14.
Van Merriënboer, J. J. G., & Krammer, H. P. M. (1987). Instructional strategies and
tactics for the design of introductory computer programming courses in high school.
Instructional Science, 16(3), 251–285. https://doi.org/10.1007/BF00120253
Van Merriënboer, J. J. G., & Luursema, J. J. (1996). Implementing instructional
models in computer-based learning environments: A case study in problem selec-
tion. In T. T. Liao (Ed.), Advanced educational technology: Research issues and
future potential (pp. 184–206). Springer.
Van Merriënboer, J. J. G., & Martens, R. (Eds.). (2002). Computer-based tools for
instructional design [Special issue]. Educational Technology Research and Develop-
ment, 50(4).
Van Merriënboer, J. J. G., McKenney, S., Cullinan, D., & Heuer, J. (2017). Aligning
pedagogy with physical learning spaces. European Journal of Education, 52(3),
253–267. https://doi.org/10.1111/ejed.12225
Van Merriënboer, J. J. G., Seel, N. M., & Kirschner, P. A. (2002). Mental models as
a new foundation for instructional design. Educational Technology, 17(2), 60–66.
Van Merriënboer, J. J. G., & Sluijsmans, D. M. A. (2009). Toward a synthesis of cog-
nitive load theory, four-component instructional design, and self-directed learn-
ing. Educational Psychology Review, 21(1), 55–66. https://doi.org/10.1007/
s10648-008-9092-5
Van Merriënboer, J. J. G., & Sweller, J. (2005). Cognitive load theory and com-
plex learning: Recent developments and future directions. Educational Psychology
Review, 17(2), 147–177. https://doi.org/10.1007/s10648-005-3951-0
Van Merriënboer, J. J. G., & Sweller, J. (2010). Cognitive load theory in health pro-
fessional education: Design principles and strategies. Medical Education, 44(1),
85–93. https://doi.org/10.1111/j.1365-2923.2009.03498.x
Van Merriënboer, J. J. G., & van der Vleuten, C. P. M. (2012). Technology-based
assessment in the integrated curriculum. In M. C. Mayrath, J. Clarke-Midura, D.
H. Robinson, & G. Schraw (Eds.), Technology-based assessments for 21st century
skills (pp. 345–370). Information Age Publishing.
Van Zundert, M., Sluijsmans, D., & van Merriënboer, J. J. G. (2010). Effective
peer assessment processes: Research findings and future directions. Learning and
Instruction, 20(4), 270–279. https://doi.org/10.1016/j.learninstruc.2009.08.004
Vandewaetere, M., Manhaeve, D., Aertgeerts, B., Clarebout, G., van Merriënboer,
J. J. G., & Roex, A. (2015). 4C/ID in medical education: How to design an edu-
cational program based on whole-task learning: AMEE Guide No. 93. Medical
Teacher, 37(1), 4–20. https://doi.org/10.3109/0142159X.2014.928407
Vanfleteren, R., Elen, J., & Charlier, N. (2022). Blueprints of an online learning envi-
ronment for teaching complex psychomotor skills in first aid. International Journal
of Designs for Learning, 13(1), 79–95. https://doi.org/10.14434/ijdl.v13i1.32697
Vosniadou, S., & Brewer, W. F. (1992). Mental models of the earth: A study of con-
ceptual change in childhood. Cognitive Psychology, 24(4), 535–585. https://doi.
org/10.1016/0010-0285(92)90018-W
Vosniadou, S., & Ortony, A. (1989). Similarity and analogical reasoning. Cam-
bridge University Press.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological pro-
cesses. Harvard University Press.
Wade, C. H., Wilkens, C., Sonnert, G., & Sadler, P. (2023). Presenting a new model
to support the secondary-tertiary transition to college calculus: The secondary
precalculus and calculus four component instructional design (SPC 4C/ID)
model. Journal of Mathematics Education at Teachers College, 14(1), 1–9. https://
doi.org/10.52214/jmetc.v14i1.10483
Wasson, B., & Kirschner, P. A. (2020). Learning design: European approaches. Tech-
Trends, 64, 815–827. https://doi.org/10.1007/s11528-020-00498-0
Wedman, J., & Tessmer, M. (1991). Adapting instructional design to project
circumstance: The layers of necessity model. Educational Technology, 31(7), 48–52.
Westera, W., Sloep, P. B., & Gerrissen, J. F. (2000). The design of the virtual company:
Synergism of learning and working in a networked environment. Innovations in
Education and Training International, 37(1), 23–33. https://doi.org/10.1080/
135580000362052
Wetzels, S. A. J., Kester, L., & van Merriënboer, J. J. G. (2011). Adapting prior
knowledge activation: Mobilisation, perspective taking, and learners’ prior knowl-
edge. Computers in Human Behavior, 27(1), 16–21. https://doi.org/10.1016/
j.chb.2010.05.004
White, B. Y., & Frederiksen, J. R. (1990). Causal model progressions as a founda-
tion for intelligent learning environments. Artificial Intelligence, 42(1), 99–157.
https://doi.org/10.1016/0004-3702(90)90095-H
Wickens, C. D., Hutchins, S., Carolan, T., & Cumming, J. (2013). Effectiveness
of part-task training and increasing-difficulty training strategies: A meta-analysis
approach. Human Factors: The Journal of the Human Factors and Ergonomics Soci-
ety, 55(2), 461–470. https://doi.org/10.1177/0018720812451994
Wightman, D. C., & Lintern, G. (1985). Part-task training for tracking and manual
control. Human Factors: The Journal of the Human Factors and Ergonomics Soci-
ety, 27(3), 267–283. https://doi.org/10.1177/001872088502700304
Wiley, D., Bliss, T. J., & McEwen, M. (2014). Open educational resources: A review
of the literature. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.),
Handbook of research on educational communications and technology (4th ed.,
pp. 781–790). Springer.
Wiley, D., Hilton, J. L., III, Ellington, S., & Hall, T. (2012). A preliminary examina-
tion of the cost savings and learning impacts of using open textbooks in middle
and high school science classes. The International Review of Research in Open and
Distributed Learning, 13(3), 262. https://doi.org/10.19173/irrodl.v13i3.1153
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evalua-
tion, 37(1), 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001
Wittrock, M. C. (1989). Generative processes of comprehension. Educational
Psychologist, 24(4), 345–376. https://doi.org/10.1207/s15326985ep2404_2
Wolterinck, C., Poortman, C., Schildkamp, K., & Visscher, A. (2022). Assessment
for learning: Developing the required teacher competencies. European Journal of
Teacher Education. https://doi.org/10.1080/02619768.2022.2124912
Wood, D. F. (2003). Problem based learning. BMJ, 326, 328–330. https://doi.
org/10.1136/bmj.326.7384.328
Wood, H., & Wood, D. (1999). Help seeking, learning and contingent tutoring.
Computers & Education, 33(2–3), 153–169. https://doi.org/10.1016/S0360-
1315(99)00030-5
Woolley, N. N., & Jarvis, Y. (2007). Situated cognition and cognitive apprentice-
ship: A model for teaching and learning clinical skills in a technologically rich and
authentic learning environment. Nurse Education Today, 27(1), 73–79. https://
doi.org/10.1016/j.nedt.2006.02.010
Wopereis, I., Frèrejean, J., & Brand-Gruwel, S. (2015). Information problem solv-
ing instruction in higher education: A case study on instructional design. In S.
Kurbanoğlu, J. Boustany, S. Špiranec, E. Grassian, D. Mizrachi, & L. Roy (Eds.),
Information literacy: Moving toward sustainability. ECIL 2015. Communications
in computer and information science (Vol. 552, pp. 293–302). Springer Interna-
tional Publishing. https://doi.org/10.1007/978-3-319-28197-1_30
Wopereis, I. G. J. H., & van Merriënboer, J. J. G. (2011). Evaluating text-based
information on the World Wide Web. Learning and Instruction, 21(2), 232–237.
https://doi.org/10.1016/j.learninstruc.2010.02.003
Wrigley, W., van der Vleuten, C. P. M., Freeman, A., & Muijtjens, A. (2012). A sys-
temic framework for the progress test: Strengths, constraints and issues: AMEE
Guide No. 71. Medical Teacher, 34(9), 683–697. https://doi.org/10.3109/
0142159X.2012.704437
Yan, H., Xiao, Y., & Wang, Q. (2012). Innovation in the educational technology
course for pre-service student teachers in East China Normal University. Australa-
sian Journal of Educational Technology, 28(6). https://doi.org/10.14742/ajet.813
Yates, K. A., & Feldon, D. F. (2011). Advancing the practice of cognitive task analy-
sis: A call for taxonomic research. Theoretical Issues in Ergonomics Science, 12(6),
472–495. https://doi.org/10.1080/1463922X.2010.505269
Young, J. Q., Thakker, K., John, M., Friedman, K., Sugarman, R., van Merriënboer, J. J.
G., Sewell, J. L., & O’Sullivan, P. S. (2021). Exploring the relationship between emo-
tion and cognitive load types during patient handovers. Advances in Health Sciences
Education, 26(5), 1463–1489. https://doi.org/10.1007/s10459-021-10053-y
Yuan, B., Wang, M. H., van Merriënboer, J. J. G., Tao, X., Kushniruk, A., & Peng,
J. (2019). Investigating the role of cognitive feedback in practice-oriented learn-
ing for clinical diagnostics. Vocations and Learning, 13, 159–177. https://doi.
org/10.1007/s12186-019-09234-z
Zhou, D., Gomez, R., Wright, N., Rittenbruch, M., & Davis, J. (2022). A design-
led conceptual framework for developing school integrated STEM programs: The
Australian context. International Journal of Technology and Design Education, 32,
383–411. https://doi.org/10.1007/s10798-020-09619-5
Zwart, D. P., Goei, S. L., van Luit, J. E. H., & Noroozi, O. (2022). Nursing stu-
dents’ satisfaction with the instructional design of a computer-based virtual learn-
ing environment for mathematical medication learning. Interactive Learning
Environments. https://doi.org/10.1080/10494820.2022.2071946
Author Index
Achtenhagen, F. 8 Baumer, A. 343
Adams, R. 329 Baylor, A. L. 7
Aertgeerts, B. 306, 344 Beatty, B. J. 340
Akkaya, A. 70 Beckers, J. 120, 154
Akpinar, Y. 70 Beers, P. J. 182
Al-Eraky, M. M. 67 Benjamin, A. S. 291
Alessi, S. M. 155 Berlanga, A. 349
Aleven, V. 35, 232 Biemans, H. J. A. 36, 348
Alred, G. J. 226 Birnbaum, M. S. 7, 75, 282
Alvarez, I. M. 176 Bjork, E. L. 7, 75, 282
Aman, Z. 35 Bjork, R. A. 7, 73, 74, 75, 166, 242,
Amsing-Smit, P. 114 282, 291, 302
Anderson, J. R. 23, 222, 224, 249 Blackmore, E. 67
Anderson, L. W. 104 Bliss, T. J. 51
Anderson, T. M. 7 Blume, B. D. 4
Argelagós, E. 306 Blumenfeld, P. C. 79, 340
Atkinson, R. K. 22, 93 Bochatay, N. 56
Atman, C. J. 329 Bogdan, M. 351
Aubteen Darabi, A. 349 Bohle Carbonell, K. 4, 112
Ausubel, D. P. 165, 183 Boivin, M. 350
Ayres, P. L. 27, 30, 91, 80 Boling, E. 5
Boot, E. 51
Baars, M. 153 Bourdeau, J. 35
Baartman, L. K. J. 116, 147 Börner, D. 67
Bagley, E. 182 Boshuizen, H. P. A. 182
Baillifard, A. 180, 232, 352 Boud, D. 148
Bajaj, K. 63 Brailovsky, C. 328
Baldwin, T. T. 4 Brand-Gruwel, S. 10, 120, 121, 130,
Balslev, T. 86 148, 149, 152, 175, 199, 304, 306,
Banta Lavenex, P. 180, 232, 352 308, 312, 336
Barbazette, J. 56 Braund, H. 67
Barendsen, E. 342 Bray, C. W. 291
Barnes, L. B. 80, 340 Brewer, W. F. 213
Barr, H. 62, 79 Briggs, G. E. 6
Bastiaens, E. 10 Brinkman, W. 250
Bastiaens, T. J. 66, 116, 147, 149 Broekkamp, H. 199
Battig, W. F. 73, 74 Brown, D. E. 216
Brown, J. S. 2, 86 Davis, J. 11
Bruner, J. S. 183 Davis, N. 89
Brusaw, C. T. 226 De Almeida, J. 9
Buckingham Shum, S. 51 de Bock, J. 304
Bullock, A. D. 148 de Bruin, A. B. H. 75, 302, 304, 340
Burles, M. 70 Deci, E. L. 348
Bussel, R. 71 de Croock, M. B. M. 8, 9, 22, 49, 75,
Butler, D. L. 176 82, 93, 112, 202, 351
De Groot, A. D. 209
Cakir, R. 340 de Jong, N. 41
Camp, G. 31 de Jong, T. 39, 143, 181
Caniëls, M. C. J. 148 de Leng, B. A. 67
Carey, J. O. 256, 262 Depaepe, F. 11
Carey, L. 256, 262 de Ribaupierre, S. 128
Carlson, R. A. 26, 293, 294 De Smet, M. J. R. 199
Carolan, T. 139 Detweiler, M. 113
Carr, J. F. 322 Dick, W. 256, 262
Carrithers, C. 280 Dionne, G. 350
Carroll, J. M. 226, 236, 237, 280 Dolmans, D. 11, 128, 343
Cassese, T. 80 Dolmans, D. H. J. M. 4, 6, 39, 66,
Celani, G. 148 120, 154, 342
Chandler, P. 27, 91, 227 Donkers, J. 67
Charlier, N. 277 Dory, V. 328
Charlin, B. 328 Downie, W. W. 332
Chase, W. G. 209 Dubrowski, A. 128
Chi, M. T. H. 172 Dümer, S. 93
Chiu, J. L. 172 Dumitsch, A. 328
Choi, H.-H. 350 Dunlosky, J. 302
Christensen, C. R. 80, 340 Dunning, D. 148, 302
Chu, Y. S. 289
Claramita, M. 277, 316 Early, S. 55, 56, 250
Clarebout, G. 344 Edwards, R. 347
Clark, R. E. 2, 22, 37, 42, 55, 56, 63, Eika, B. 86
75, 82, 202, 250 Elen, J. 11, 277, 343
Clement, J. 216 Ellaway, R. 67
Cocchiarella, M. J. 264 Ellington, S. 51
Collins, A. 2, 86, 177, 182 Elliott, R. G. 293, 294
Condron, C. 65 Eppich, W. 65
Corbalan, G. 31, 33, 71, 152, 155 Ericsson, K. A. 100, 191, 234, 291,
Costa, J. M. 2, 11 312, 314
Costley, J. 175 Ertmer, P. A. 80
Craig, C. 11 Espasa, A. 176
Crossman, E. R. F. W. 284, 285 Eva, K. W. 148
Cullinan, D. 343
Cumming, J. 139 Faber, T. J. E. 70, 71
Custers, E. J. F. M. 165 Fassier, T. 56
Czabanowska, K. 41 Fastré, G. M. J. 114
Feinauer, S. 221
Daniel, M. 80 Feldon, D. F. 55, 56, 250
Dankbaar, M. E. W. 70, 71 Fenwick, T. 347
Davis, D. A. 148 Ferguson, W. 177, 182
Fiorella, L. 166, 167 Graesser, A. 172
Fischer, F. 232 Groh, I. 221
Fischer, M. R. 67 Gropper, G. L. 142, 278, 279
Fisk, A. D. 109 Gros, B. 354
Fominykh, M. 40 Guasch, T. 176
Ford, J. K. 4 Guay, F. 350
Fordis, M. 148 Gulikers, J. T. M. 66
Forster, S. 80 Güney, Z. 11
Francom, G. M. 340, 341, 354 Guzdial, M. 79, 340
Fraser, K. 350
Frederiksen, J. R. 214 Hähnlein, I. 154
Frederiksen, N. 61 Haji, F. A. 128
Freeman, A. 330 Halff, H. M. 80, 287
Freeth, D. 62, 79 Hall, K. G. 73, 74
Frèrejean, J. 6, 10, 11, 65, 128, 130, Hall, T. 51
306, 312, 343 Hambleton, R. K. 114
Frey, C. B. 3 Hammick, M. 62, 79
Friedman, K. 350 Hannafin, M. J. 175
Furniss, S. 70 Hannum, W. H. 250
Hansen, A. J. 80, 340
Gabella, M. 180, 232, 352 Harden, R. M. 328, 332
Gagné, R. M. 7, 97 Harris, D. E. 322
Gagnon, R. 328 Hartley, J. 166
Gallini, J. K. 109 Haryanti, F. 277
Garcia, C. 306 Hassell, A. 148
Garcia-Martinez, I. 148 Hattie, J. 238
Gardner, J. 340 Hauer, K. E. 117
Garon-Carrier, G. 350 Hays, R. T. 65
Gatewood, J. 172 Heal, J. 73, 74, 238, 242
Geary, D. C. 222 Heath, C. 148
Gerjets, P. 39, 86, 179, 351 Heiser, J. 66
Gerrissen, J. F. 67 Helsdingen, A. S. 8, 75
Gessler, M. 127 Hendrick, C. 73, 74, 238, 242
Gielen, E. 154 Hendrikx, A. J. M. 250
Gijselaers, W. H. 155, 182 Heneman, R. L. 147
Gilic, F. 67 Hennekam, S. 3
Gillet, D. 181 Herrington, J. 89
Ginns, P. 227, 241 Heuer, J. 343
Gishbauger, J. 172 Hill, J. R. 175
Glasgow, T. 80 Hilton, J. L. 51
Glavin, R. J. 66 Holland, J. H. 74
Goda, Y. 11 Holmboe, E. S. 117
Goei, S. L. 350 Holmqvist, K. 86
Goffin, R. D. 147 Holsbrink-Engels, G. A. 169
Göksu, I. 340 Holtslander, L. F. 70
Gomez, R. 11 Holyoak, K. J. 74
Gopher, D. 131 Hoogerheide, V. 86
Gorbunova, A. 175 Hoogveld, A. W. M. 343
Gordon, J. 353 Hopkins, S. 257
Goulet, F. 328 Huang, J. L. 4
Govaerts, M. J. B. 116 Hübner, S. 93
Huffman, J. 350 Kogan, J. R. 117
Hummel, H. G. K. 70, 88 Kok, E. M. 75, 86
Hung, W. E. 39, 66 Kolcu, M. İ. B. 11
Husnin, H. 56 Könings, K. D. 41, 112, 149, 343
Hutchins, S. 139 Koppel, I. 62, 79
Huwendiek, S. 67 Koriat, A. 302
Kornell, N. 7, 75, 282, 302
Ifenthaler, D. 154 Kostons, D. 152, 335
Irby, D. M. 66 Kovas, Y. 350
Issenberg, S. B. 66 Krajcik, J. S. 79, 340
Krammer, H. P. M. 142
Jaeger, R. M. 114 Krathwohl, D. R. 104
Janesarvatan, F. 67 Kruger, J. 302
Janssen-Noordman, A. M. B. 6 Kukharuk, A. 11
Jarodzka, H. 86 Kushniruk, A. 176
Jarvis, Y. 2
Jelley, R. B. 147 Lam, W. 67
Jelsma, O. 8, 9 Lazonder, A. W. 143, 236, 238
Jochems, W. M. G. 343 Lebiere, C. 23
John, M. 350 Lee, D. 50
Johnson, T. E. 7 Lee, F. J. 222, 224
Joiner, R. 176 Lehmann, A. C. 312
Jonassen, D. H. 2, 61, 89, 180, 250 Lehmann, T. 154
Jüttner, M. 238 Lemelin, J. P. 350
León, S. P. 148
Kahneman, D. 23 Leppink, J. 41, 75
Kaki, G. D. 11 Lewthwaite, B. 213
Kali, Y. 342 Lievens, F. 147
Kalyuga, S. 30, 91, 163 Lim, J. 7
Kanselaar, G. 163 Limbu, B. 40
Kedharnath, U. 147 Linden, M. A. 11
Kerr, C. 11 Lingard, L. 112, 294
Kester, L. 3, 5, 19, 20, 31, 33, 71, 92, Linn, M. C. 39
152, 155, 158, 178, 220, 227, 240, Lintern, G. 283
341, 343 Littlejohn, A. 51
Keuning, T. 11, 128, 343 Long, Y. 35
Khan, R. 128 Lonn, S. 66
Khoo, B. H. 293, 294 Louis, M. R. 112, 294
Kicken, W. 120, 121, 152 Lowrey, W. 180
Kiessling, C. 328 Loyens, S. 39, 66, 76, 182, 306
Kim, D. 143 Loyens, S. M. M. 86
Kim, K. S. 180 Lukosch, H. 71, 345
Kirschner, F. 27, 172 Luursema, J. J. 93, 230
Kirschner, P. A. 2, 3, 9, 10, 22, 24, 27,
36, 39, 63, 66, 73, 74, 75, 76, 79, Ma, I. 350
82, 88, 89, 90, 92, 116, 130, 147, MacRae, H. 112, 294
148, 155, 172, 176, 179, 182, 183, Maddens, L. 11
199, 201, 202, 220, 227, 238, 242, Mager, R. F. 103
304, 306, 312, 340, 342, 343, 348 Maggio, L. A. 66
Klemke, R. 40 Magill, R. A. 73, 74
Knapen, M. M. H. 120 Manhaeve, D. 344
Maran, N. J. 66 Nendaz, M. 56
Marcellis, M. 306, 342 Neuhaus, B. J. 238
Marei, H. F. 67 Newbigging, J. 67
Markham, W. A. 148 Newell, A. 76
Martarelli, C. S. 180, 232, 352 Newman, S. E. 2, 86
Martens, R. 10, 351 Ng, G. 128
Martens, R. L. 66, 79, 100, 149 Niegemann, H. 354
Marx, R. W. 79, 340 Nisbett, R. E. 74
Mavilidi, M. F. 30 Nixon, E. K. 50
Mayer, R. E. 66, 166, 167, 178 Nkambou, R. 35
Mazmanian, P. E. 148 Norman, G. R. 326, 340
McDaniel, M. A. 172 Noroozi, O. 36, 348, 350
McEwen, M. 51 Nückles, M. 93
McGaghie, W. C. 66 Nyström, M. 86
McGraw, R. 67
McIlwrick, J. 350 O’Brien, B. C. 66
McKenney, S. 342, 343 O’Donovan, R. 257
McLaughlin, K. 350 O’Flaherty, J. 41, 342
Meguerdichian, M. J. 63 Olina, Z. 7
Meijer, S. 71 Oliu, W. E. 226
Meijer, S. A. 345 Ortony, A. 87
Melo, M. 2, 9, 10 Ortwein, H. 328
Mercer, C. 67 Osborne, M. A. 3
Merrill, M. D. 2, 7, 46, 52, 53, 340 O’Sullivan, P. S. 350
Merrill, P. 198 Ozcan, K. V. 340
Merritt, C. 112, 294 Ozturkcu, O. S. K. 11
Mettes, C. T. C. W. 194
Meutstege, K. 11 Paas, F. 20, 27, 30, 31, 39, 66, 76, 82,
Miller, G. E. 322 86, 87, 91, 100, 100–101, 148,
Mills, C. 114 152, 153, 154, 172, 182, 191, 306,
Mills, R. 213 314, 335, 343, 347, 349, 350, 351
Miranda, G. 9, 148 Paas, F. G. W. C. 9, 82
Miranda, G. L. 2, 10, 11 Paik, E. S. 302
Mizoguchi, R. 35 Paivio, A. 178, 271
Moerkerke, G. 148 Palincsar, A. 79, 340
Moulton, C. 112, 294 Palmeri, T. J. 285
Moust, J. H. C. 340 Panadero, E. 148
Moxnes, E. 155 Pardal, C. 9
Muijtjens, A. 330 Parker, J. 89
Muijtjens, A. M. M. 116 Peng, J. 176, 340
Mulder, M. 36, 348 Perrier, L. 148
Mulder, Y. G. 143 Petrusa, E. R. 66, 334
Mulders, M. 71 Petzoldt, T. 221
Musharyanti, L. 277 Phillips, C. 41, 342
Myers, R. D. 340 Pilot, A. 194
Plake, B. S. 114
Nadolski, R. J. 88, 90 Pontes, T. 9, 148
Narens, L. 301 Poortman, C. 343
Nauta, C. 154 Popova, A. 176
Naylor, J. C. 6 Postma, T. C. 9
Nelson, T. O. 301 Powell, D. M. 147
Prins, F. J. 148 Schlager, M. S. 172
Privado, J. 306 Schlanbusch, H. 351
Schmidt, H. G. 340
Raaijmakers, S. F. 153 Schneider, J. 67
Racine, L. 70 Schneider, W. 26, 113, 291, 294
Raes, A. 11 Schout, B. M. A. 250
Rafael, M. 9 Schraw, G. 302
Rapp, A. 56 Schubert, S. 328
Reber, A. S. 74 Schuwirth, L. W. T. 116, 322, 333
Redding, R. E. 56 Schwantes, U. 328
Reeves, S. 62, 79 Schworm, S. 232
Reeves, T. C. 322 Seel, N. M. 9, 202
Regehr, G. 112, 128, 148, 294 Segers, M. 4, 112
Reigeluth, C. M. 8, 127, 128, 147, Seguin, J. 350
212, 340 Seidel, T. 343
Reiser, B. J. 22, 91 Selinger, M. 343
Reiser, R. A. 7 Sewell, J. L. 350
Renkl, A. 22, 93, 172 Shaffer, D. W. 182
Rethans, J.-J. 56 Shumway, J. M. 328
Rikers, R. M. J. P. 31, 91, 100, 191, 314 Si, J. 143
Rittel, H. W. J. 61, 316 Siegel, D. 131
Rittenbruch, M. 11 Simon, H. A. 76, 209
Robben, S. G. F. 75 Singer, M. J. 65
Roex, A. 306, 344 Sloep, P. 349
Rohrer, D. 291 Sloep, P. B. 67, 148
Roossink, H. J. 194 Slootmaker, A. 70
Rosenberg-Kima, R. B. 7 Slot, W. 120, 121
Rosenstiel, W. 351 Sluijsmans, D. M. A. 37, 114, 117,
Rosmalen, P. 67 148, 149
Roy, L. 328 Sobczak, M. 350
Ruiz, J. G. 67 Soderstrom, N. C. 74
Russell, J. D. 80 Soete, B. 147
Ryan, R. M. 348 Soloway, E. 79, 340
Ryder, J. M. 56 Sonnert, G. 10–11
Sotiriou, S. 181
Sadler, P. 10–11 Spanjers, I. A. E. 41, 179
Sagy, O. 342 Specht, M. 40, 67
Salden, R. J. C. M. 31 Spector, J. M. 7, 155
Salomon, G. 180 Stacey, M. 67
Sampson, D. 340 Stahl, E. 232
Sarfo, F. K. 343 Stalmeijer, R. E. 4, 112
Scalese, R. J. 66 Stefaniak, J. 49
Schaap, L. 153 Steinberg, M. S. 216
Schank, R. C. 2 Stevenson, M. 332
Scheiter, K. 86 Stojan, J. 80
Schellekens, A. 30, 347 Stoof, A. 100
Schelven, R. M. 334 Storm, J. 70
Scherpbier, A. 316 Straetmans, G. J. J. M. 117, 149
Scherpbier, A. J. J. A. 6, 250 Strauch, U. 65
Scherpbier, A. J. J. M. 250 Strijbos, J. 148
Schildkamp, K. 343 Strijbos, J. W. 79
Sugarman, R. 350 Van Geel, M. 11, 128, 343
Sullivan, M. A. 26 Van Gog, T. 8, 27, 75, 86, 87, 100,
Suls, J. M. 148 100–101, 152, 153, 179, 191, 302,
Susilo, A. P. 316 314, 335, 343
Sutton, R. I. 112, 294 van Harrison, R. 148
Suzuki, K. 11 Van Loon, M. H. 302
Sweller, J. 2, 22, 27, 30, 63, 75, 82, 91, Van Luijk, S. J. 334
227, 300, 347 van Luit, J. E. H. 350
Van Meeuwen, L. W. 304
Taatgen, N. A. 222, 224 van Merriënboer, J. J. G. 2, 4, 5, 6, 8,
Taminiau, E. M. C. 155 9, 10, 11, 19, 20, 22, 23, 24, 27,
Tao, X. 176 30, 31, 33, 36, 37, 39, 41, 49, 51,
Taqui, B. 80 55, 56, 61, 65, 66, 67, 70, 71, 75,
Tawfik, A. A. 172 82, 86, 87, 88, 90, 92, 93, 100,
Taylor, K. 291 100–101, 111, 112, 114, 117, 120,
Ten Cate, O. 66, 325, 326 121, 128, 142, 148, 149, 152, 153,
Tennyson, R. D. 264 154, 155, 158, 175, 176, 178, 179,
Tessmer, M. 51, 250 191, 202, 220, 227, 230, 234, 240,
Thagard, P. R. 74 250, 252, 293, 302, 303, 304, 306,
Thakker, K. 350 316, 340, 341, 342, 343, 344, 347,
Thijssen, J. G. L. 3 348, 349, 350, 351, 354
Thornton, G. C. 147 Vanpee, D. 328
Thorpe, K. E. 148 van Rosmalen, P. 67
Timperley, H. 238 van Strien, J. L. H. 10, 130, 306, 312
Tjiam, I. M. 250 van Tilburg, J. 10
Tomas, L. 213 van Wermeskerken, M. 86
Topping, K. 148 Van Zundert, M. 148, 149
Torre, D. M. 322 Velthorst, G. J. 10, 306
Tracey, M. W. 5 Verbraeck, A. 30, 347
Tremblay, R. 350 Vermetten, Y. 175, 308, 336
Tricot, A. 300, 347 Verstegen, D. M. L. 41
Tseng, S. S. 289 Verwijnen, G. M. 329
Tullis, J. 291 Visscher, A. 11, 343
Tuovinen, J. E. 349 Visscher, A. J. 128, 343
Turner, H. 70 Vitaro, F. 350
Turns, J. 329 Voskort, S. 221
Vosniadou, S. 87, 213
Van Berkel, H. J. M. 340 Vygotsky, L. S. 91, 127
Van Boxtel, C. 163
Van Bussel, R. 345 Wade, C. H. 10–11
van Dalen, J. 316 Walker, K. 63
Van den Boom, G. 154 Wall, D. W. 148
van der Graaf, E. 326 Wallace, R. 232
van der Klink, M. R. 114, 154 Walter, C. 351
van der Linden, J. 163 Walter, E. M. 3
Van der Meij, H. 235, 236, 238 Wang, M. H. 176, 340
van der Pal, J. 31 Wang, Q. 343
van der Vleuten, C. P. M. 6, 116, 120, 147, Wasson, B. 340
322, 326, 328, 329, 330, 333, 334 Watson, S. L. 147
Vandewaetere, M. 344 Watson, W. R. 147
Vanfleteren, R. 277 Webber, M. M. 61, 316
Wedman, J. 51 Woolley, N. N. 2
Weil, M. 131 Wopereis, I. 175, 306, 308, 312, 336
Westera, W. 67 Wopereis, I. G. J. H. 89, 175, 343
Wetzels, S. A. J. 158 Wöretshofer, J. 90
White, B. Y. 214 Wright, B. 350
White, J. G. 9 Wright, N. 11
Whitehouse, A. B. 148 Wrigley, W. 330
Whyatt, C. 11
Wickens, C. D. 139 Xiao, Y. 343
Wightman, D. C. 283 Xu, M. 49
Wijnen, W. H. F. W. 329
Wiley, D. 51 Yan, H. 343
Wilhelm, O. 328 Yang, C. C. 289
Wiliam, D. 237 Yang, H. C. 289
Wilkens, C. 10–11 Yates, K. A. 55, 56, 250
Wilson, G. M. 332 Young, J. Q. 350
Winne, P. H. 176 Yuan, B. 176
Witjes, J. A. 250 Yuksel, G. 340
Witte, P. 86, 100–101, 191
Wittrock, M. C. 166 Zacharia, Z. C. 39
Wolf, M. 80 Zander, T. O. 351
Wolfhagen, I. H. A. P. 4, 342 Zary, N. 67
Wolterinck, C. 343 Zemke, R. 353
Wood, D. 232 Zhong, L. 30
Wood, D. F. 306 Zhou, D. 11
Wood, H. 232 Zwart, D. P. 350
Subject Index
Note: Page numbers in italics indicate a figure and page numbers in bold indicate
a table on the corresponding page. Page numbers followed by “b” refer to boxes.
3D-models 267 and/or-graphs 211
4C/ID model (four-component artificial intelligence (AI) 177, 232,
instructional design) 2; components 351–353
of 8, 9; task-centered learning 340; assessment formats: on-the-job
and Ten Steps 8, 9; whole-task performance assessments 147;
practice 114 simulation-based performance tests
21st-century skills: defining 315; 147; situational judgment tests 147;
frameworks for 316; guidelines for work sample tests 147
317; importance of 347; types of assessment instruments 95, 148, 156,
316 317, 335, 336
assessment programs: for acquired
acquired knowledge 322 knowledge 322; for cognitive
action-oriented writing 226, 235 strategies 328; context of 333–334;
action verbs 104, 105 entrustable professional activities
active learning and exploration 236 (EPAs) 326–327, 327; for
ACT-R theory 222b mental models 329; and Miller’s
Adaptive Control of Thought (Anderson) pyramid 323–325; nonrecurrent
222b part-tasks 332–334; objective
adaptive learning 35, 175 structured clinical examinations
ADDIE model 55, 192, 366 (OSCE) 332–334; and procedural
Aloys (assistant looking over your information 331–332; progress
shoulder) 35, 232, 240, 343 testing 329–330; and self-assessment
analogical relationships 207 326; summative assessment 322,
analysis, design, development, 325–330, 334–337; for supportive
implementation and summative information 329–330; variability of
evaluation see ADDIE model 334; written 328
analysis of cognitive rules see cognitive assessors: bias reduction in 148; peer
rule analysis assessment 148; selection of 148;
analysis of cognitive strategies see self-assessment 147–149
cognitive strategy analysis atomistic design 5, 8
analysis of prerequisite knowledge see attention focusing 286
prerequisite knowledge analysis attitudes 106–107
attributes 204 cause-effect relationships 208, 217
augmented reality 40, 40, 67, 69, 227 ChatGPT 67, 68, 352
authentic tasks 2, 66, 323 closed questions 329
automaticity 26, 55 closure 224
autonomy 350 CLT see cognitive load theory (CLT)
coaching meetings 153–154, 304, 335
backward chaining 140, 139, 141–142, cognitive apprenticeship learning 2, 86,
282 340
behavioral task analysis 250 Cognitive Apprenticeship Learning
bidirectional relationships 256 (Gessler) 127
big data 347 cognitive feedback: defning 175–176;
blended learning: defning 41; and and diagnosis of intuitive strategies
educational multimedia 344–345; 177; epistemic 176; positioning
fipped classrooms 41, 342 184; and promotion of refection
Bloom’s Revised Taxonomy 104, 104 176–177; and supportive
blueprint components: avoiding information 175–176
fragmentation 20–23; and cognitive cognitive fexibility theory 180
load theory 27b–30b; and design cognitive load theory (CLT):
steps 10; and individualized assumptions of 28b; defning 27b;
instruction 30–37; learner extraneous cognitive load 28b;
control 34; learning tasks 9, 12, germane processing 28b; intrinsic
19–23, 34–35; media for 37, cognitive load 27b–28b; and
38, 39–42; part-task practice 9, learning tasks 29b; limitations of
17, 26–27, 36; and prevention 29b–30b; and part-task practice 29b;
of compartmentalization 17–19; and procedural information 29b; and
procedural information 9, 17, supportive information 29b; and task
24–25, 35; schematic 17; supportive complexity 27b; and task processing
information 9, 16, 24–25, 35; 27b–28b
system control 34; transfer paradox Cognitive Load Theory (Sweller) 27b
23–24 cognitive rule analysis: algorithmic
brainstorming 182, 183 248; and behavioral task analysis
breadth-frst approach 196, 199 250; and cognitive task analysis
building blocks 264 250–255; and design decisions
built-in task support 77–82, 134, 152, 256–258; guidelines for 258–259;
152, 308 hierarchical approach 269; IF-THEN
butterfy defect 180 rules 248–250, 255–256; necessity
of 247–248; and part-task practice
CASCO (Completion Assignment 49, 248; and prerequisite knowledge
COnstructor) 230 48, 256–257; and procedural
case method 77, 80, 81, 86, 340 information 257
case studies: and domain models cognitive rules: in just-in-time
169–170; multiple viewpoints in information displays 221; and
216; and supportive information prerequisite information 48; and
168, 168, 215 recurrent skills 109–110
causal models: and case studies 170; cognitive schemas: and constituent
defning 166; identifying 210–211; skills 23; construction of 37, 38, 39;
predicting future states 166; and experienced learners 84; generalized
process simulations 181; simple-to- 23; and part-task practice 40; and
complex 214; state of afairs 166; procedural information 40–41; and
and task domains 212 supportive information 39
Subject Index 435

cognitive strategies: assessment of 328; and mental models 159; progression of 133–134; and supportive information 159
cognitive strategy analysis: cause-effect relationships 217; goals of 191; guidelines for 199–200; intuitive 196–197; location-in-time relationships 217; vs. mental models analysis 216–217; necessity of 189–190; of phases 191–194; rules of thumb 192; and SAPs 48, 54, 83, 185, 186–198
cognitive task analysis: information processing 252–254; rule-based 250–252; specifying 254–255
cognitive tools 89–90
collaborative learning 182
combination analysis 256, 262
compartmentalization: defining 5; and instructional design 5–6; prevention of 17–19
competence 348–349
competence maps 100, 113
competency: in complex learning 2; and domain-specific skills 348; professional 55–56, 182, 306, 318
completion strategy 22, 93
completion tasks 82, 90
complex learning: aim of 17–19; blueprint components for 16–17, 17; components of 9; constituent skills for 19, 18, 21; defining 2; designing instruction for 46–47, 47; development of 3; education and training 4; see also Ten Steps
component fluency hypothesis 293
compressing simulated time 276, 290–291, 294
computer-based design tools 351
computer-based media: augmented reality 40, 40; drill-and-practice 40; educational software 40; and learning tasks 41; and procedural information 40; and supportive information 39
computer-based simulations 67, 69, 70, 70, 71
computer-based training (CBT) 293
concept maps 206, 206
concepts: defining 204, 264; feature lists 266; interrelation of 191; partonomy 205; and prerequisite knowledge 263–264; and principles 265–266; taxonomy 205–206; and templates 209
conceptual change 213
conceptual knowledge: concepts, plans and principles 263; domain models 263; facts 263; identifying physical models 267–268; levels of 262–263; specifying 262–267
conceptual models: and case studies 169; concept maps 206; concepts in 204–205; defining 163; hierarchically ordered 206; identifying 204–205; kind-of relationships 205, 205, 207; part-of relationships 205–206; presenting 165; relationships in 205–208
conceptual nodes 266, 267
conflict resolution 252
constituent skills: as aspects of a complex skill 20–21, 98; to-be-automated recurrent 110–111; classifying 109–114; and complex learning 21; double classified 112; hierarchy of 19, 18; nonrecurrent 24, 26, 108, 109; not to be taught 112–113; recurrent 24, 26, 109–110; rule-based 23; schema-based 23–24; sequential 19; simultaneous 19; and skill decomposition 96–97; task classes of 21–22; and task domain 19
constructive alignment 322
constructivist learning environments 2
content specificity problem 334
contextual interference 73b, 75, 282
contingent tutoring 232, 286–288
control 301
controlled processes 61, 161b
conventional practice items 278
conventional tasks: support for 77–80, 82, 86–87; types of 79
coordinate concepts 205, 205
corrective feedback 220, 236–237, 288, 289
CRAFT 345, 346
criteria 106
cued retrospective reporting 100, 191
cues 302
curriculum: assessment in 332; game-facilitated 345, 346; integration of 4; lecture-based 322; OSCE in 332–334; problem-based 66; progress testing in 329–330; spiral 183; training situations in 138; whole-task 322, 334
data gathering 100–102
deductive-expository strategies 171, 173–174, 230
deductive-inquisitory strategies 171
deductive presentation strategies 170–171, 173–175, 183
deductive reasoning 170–171
deliberate practice: defining 312–313; intermixed training in 314; and part-task practice 291, 313–314; and procedural information 313–314; skills for 304–305
demonstrations 220, 229–232, 233
dependent part-task practice 36
depth-first approach 196
designing learning tasks see learning task design
design teams: learners on 343; and performance assessment 48; teacher 342–343; and team tasks 62
design theory 4
desirable difficulty 75, 166, 242
development portfolios 120–122, 335
discovery methods 172
discrimination 72b
divergent practice items 281–282
domain experts 100
domain-general skills: 21st-century skills 315–317, 347; deliberate practice 312–315; guidelines for 317–318; information literacy skills 305–309, 312; intertwining with domain-specific skills 300, 309, 312, 315, 344, 347–348; requirements of 348; self-directed learning (SDL) 301, 304–305; self-regulated learning (SRL) 301–304; summative assessment of 334–337; task selection as 300; teaching of 306
domain models: and case studies 169; causal 166, 170, 181, 210–212; combining types of 212; conceptual 163–165, 169, 204–208; defining 163; and design decisions 214; and experiential relationships 167–168; instructional methods for 163–164, 164, 186; mental models in 159; specifying 202–204; structural 165, 169–170, 208–210, 212; and supportive information 48
domain-specific learning skills 307–309, 312, 315, 335, 348
double-blended learning 42, 42
double-classified constituent skills 112
drill-and-practice computer-based training (CBT) 40, 293
dual coding 271
Dunning-Kruger effect 302
dynamic task selection: control of 33–37; cycle of 32; and individualized instruction 30; learner control 34; performance assessment in 32–33; support and guidance in 32–33; system control 34; task classes 31; variability in 32
educational software 40
education and training: demand for 4; instructional design for 4; value of 3
elaboration: and schema construction 25; strategies for 162b; and meaningful learning 161b; and structural understanding 161b; and tacit knowledge 162b
elaboration theory 340
Elaboration Theory (Reigeluth) 127
e-learning 344–345
electronic development portfolios 120–121, 121, 154
emotions 350
emphasis manipulation method 128, 130–132
empirical analysis 196, 202, 213, 255, 262, 269, 286, 289
entrustable professional activities (EPAs) 326–327, 327
episodic memory 172
epistemic feedback 176
epistemic games 182
error recovery 236
errors: analysis of 255–256; feedback on 238; in just-in-time information displays 276; learning from 237–238; and procedural information design 258
evaluation phase 306
experienced learners: expertise reversal effect 91; support and guidance for 92
experiential relationships 167–168, 207
expertise reversal effect 91
expert task performer 191, 194, 202
exploded views 267–268
expository methods 166
extraneous cognitive load 28b
eye-movement modeling examples 86
face-to-face learning 41
facts 263, 266
fading 91, 92, 221, 242
fading support 279–280
fault trees 211, 212
feature lists 266–268, 271
feedback by discovery 216
feedback gathering 183
fidelity: functional 65–67; high 66, 67; low 70; physical 65–66, 65, 70; psychological 65–66
Fingerspitzengefühl 162b
first-order models 215
first-order scaffolding 152
first principles of instruction 2
fixed procedures 46
flipped classrooms 41, 342
flowcharts 253, 253, 254
formative assessment 114
forward chaining 140, 141, 282
four-component instructional design (4C/ID) see 4C/ID model
fractionation 282, 283, 297
fragmentation: avoiding 8, 16, 20–23; defining 6; and instructional design 6–7; and learning 17
fully guided discovery learning 174
functional fidelity 65–67
functional models 166, 211
game-facilitated curriculum 345, 346
general information positioning 183–184
generalization 72b
general practitioners (GPs) 344–345
generative learning strategies 166–167, 167
geographical maps 217
germane processing 28b
goal directedness 236
goal-free problems 80
group discussions 182
guidance 22–23, 77, 86–87, 89–90, 134
guided discovery methods 172, 174, 216
heterarchical models 206
heterarchical organizations 100
heuristics 77, 159, 190–191
higher-order skills 252
high fidelity 60, 66–67, 71
hints 237–238, 245, 257, 289
holistic design approach: defining 5; and integration of learning 6; and modeling 8; and transfer of learning 4; and transfer paradox 8
horizontal relationships 99–100
how-to instruction 220–221
hypermedia 179–180, 344
IF-THEN rules: errors and malrules in 255, 255, 256; goals of 249; and part-task practice 55; and prerequisite knowledge 262–263; and recurrent skills 249–250; in rule-based analysis 250–252; specifying 248–249
ill-structured problems 60–61
imitation tasks 86
immediate corrective feedback 288
implicit knowledge 162b
implicit learning 73b–74b
independent part-task practice 36, 291–293
individualized instruction: control of 35–37; dynamic task selection in 31–36; learner control 34, 34; performance assessment in 32–33; system control 34, 34; training blueprints as a framework in 30
individualized learning trajectories 145–147, 146
inductive-expository strategy 171, 216
inductive-inquisitory strategies 171
inductive learning 19; discrimination 72b; facilitation of 71; generalization 72b; implicit learning 73b–74b; mindful abstraction 73b
inductive presentation strategies 171–172, 183
inductive reasoning 171
information literacy skills: and general information 184; guidance in 307; intermixed training in 309, 310–311, 312; and SAPs 308–310; in self-directed learning 290–292; skill hierarchy 336; task support in 307–308; teaching of 305–306; variability in 307
information-processing analysis: defining 250; and flowcharts 253–254; procedures in 252; and temporal order 252
inquisitory methods 172
instances 220, 229–230
instructional analysis 256, 262
instructional design: activities for 46–47, 47, 48–49; atomistic 5, 8; backbone of learning tasks 53; challenges of 4; compartmentalization in 5–7; component knowledge 54–55; computer-based 351; and domain models 214–215; fragmentation in 6–7; holistic approach to 4–6; ISD context 55–56; pebble-in-the-pond approach 52–53, 54–55; performance objectives in 102–103, 113–114; reuse of instructional materials 51; revival of 353–354; task-centered learning 340; and teachers 342–343; tools 351; toppling approach of 341, 341, 342
instructional designers 112–113, 142, 213, 342, 352
instructional systems design (ISD) 46, 55–56, 353
integrative goals 103
interleaving 7, 73b
intermixed training: in deliberate practice 315; in domain-general skills 309–310, 311, 315; in domain-specific skills 309, 312, 315; in part-task practice 286, 294, 295; in whole-task practice 286
interprofessional education 62
intrinsic cognitive load 27b–28b
intrinsic motivation 348–350
intuitive cognitive strategies: analysis of 190, 196–197, 199; diagnosis of 177
intuitive mental models 213–216
intuitive strategies: analysis of 196–197; depth-first approach 196; diagnosis of 177; identifying 199; refining task classes 199; and supportive information 199
ISD see instructional systems design (ISD)
iteration 49–50, 52
job aids 38, 40, 54, 220–221, 235, 239, 245, 248, 270
Judgement of Learning (JoL) 148, 301–302
just-in-time (JIT) information displays: action-oriented writing in 226; and closure 224; and cognitive rules 221; and deductive-expository strategies 230; defining 220–221; and demonstrations 229–232; error recovery information in 258; examples of 225, 265; exemplifying 229; formulating 226; goal-oriented vs. system-oriented titles 236; and instances 229; integrated/nonintegrated 227, 228; and learning tasks 242; and prerequisite information 225; and prerequisite knowledge 221; solicited 234–235; solicited information presentation 231; split attention prevention in 226, 227, 230, 235; structure of 224–226; system-initiated help 232; and typical errors 238; unsolicited information presentation 231–234, 239
kind-of relationships 205, 207
Kirchhoff’s law 215
knowledge: and complex learning 2–3, 17; conceptual 5; and constituent skills 19, 21; construction of 8; different use of the same 23; implicit 162b; integrated 4, 6, 19, 20; and nonrecurrent skills 176; same use of the same 23; tacit 162b
knowledge progression methods 128, 133–134, 197–198
laws 166
layers of necessity 51
leading questions 172
learner control 34, 36–37
learning: cognitive apprenticeship 2; complex 3–4; integration of 6; and motivation 348–351; see also transfer of learning
learning aids 25, 40, 241
learning analytics 347
learning by doing 340
learning goals 97–99, 104–105
learning management systems 351
learning networks 349
learning objectives 7, 322
learning processes 38
learning task design: inductive learning 72b–74b; necessity of 59–60; primary medium for 63; problem-solving guidance in 82–87, 88, 89–91; real-life tasks 60–63, 66; simulated task environments 63–67, 70, 70; support and guidance in 75–76, 77–79, 80–81, 92; and teachers 342–343; variability of practice 71, 73b, 74b, 75; for whole-task practice 47; zigzagging in 10
learning tasks 9; adaptive learning 34–35; and cognitive load theory 28b; completion tasks 82; control of 34; conventional 77–79, 80; defining 9, 16; design of performance assessments 48, 53; imitation tasks 86; and inductive learning 19; media for 37, 38, 39; monitoring progress 116–117, 302; with a nonspecific goal 80; on-demand education 35–36; real-life 62–63, 350; representative 71; reverse tasks 80; scaffolding 22; sequencing of 48, 53, 125–126; and standards 116, 118, 117; structural features of 74; summative assessment of 325–326; support and guidance 22–23; and supportive information 168; surface features of 74; task classes 21; variability of practice 20, 23
lenses (selection) 101, 119
librarians: cognitive strategies 23; constituent skills for 23–24; training programs for 19; variability of practice 20
lifelike learning 345
location-in-space relationships 208
location-in-time relationships 207, 217
low fidelity 65, 66, 71
Luursema, J. J. 230
malrules 239, 255–256, 258
mash-ups 51
massed practice 291
matching 286
meaningful learning 161b
means-ends analysis 91
mechatronics 345, 346
media: blended 41; for blueprint components 37, 38, 39–40; computer-based 39–40; help systems 240; hypermedia 179–180; multimedia 178–183; and physical fidelity 39; for procedural information 239; selection of 42; smartphones and tablets 40, 241
media nodes 179
medicine and medical education: and assessment 326; development of 4; diagnostic skills for 130, 177, 227; and emotions 350; history of 1–2; media for 39; and multidisciplinary tasks 61; and part-task practice 26–27, 36; and procedural information 234; and simulated task environments 64, 64, 65, 67, 70; and variability of practice 74b, 75
memorization 233–234
mental image 267
mental model progression 214, 215, 216
mental models: assessment of 329; and cognitive feedback 177
mental models analysis: and case studies 215–216; causal models 210–211; cause-effect relationships 217; and cognitive feedback 215; vs. cognitive strategy analysis 216–217; combining models in 212; conceptual models 204–208; document study in 203; and domain models 159, 202–214, 217; and feedback by discovery 216; guided discovery methods in 216; guidelines for 217–218; inductive-expository strategy in 216; intuitive 213–216; location-in-time relationships 217; naive models 213; necessity of 201–202; and problem solving 48; refining task classes 214; and SAPs 217; structural models 208–210; and supportive information 215–216
metadata 146
micro worlds 180–182
Miller’s pyramid 323–324, 324, 325, 327
mindful abstraction 73b
minimalism 236
minimal manuals 236, 245
misconceptions 213, 269–271
mnemonics 233
modality principle 240–241, 240
modeling: case study method 86–87; cognitive apprenticeship 86; eye movement 86; in holistic design 8; in problem-solving 86–87; thinking aloud 86
modeling examples 86, 168, 168, 169
model tracing 238, 288–289
model-tracing paradigm 289
modular structure 235
monitoring 301–303
monitoring progress 116–119
MOOC platforms 351
motivation 348–350
multidisciplinary tasks 61–62
multimedia: defining 178; educational 344–345; epistemic games 182; hypermedia 179–180; micro worlds 180–181; principles for procedural information design 239–240, 240; principles of 178, 179; redundancy principle 179; segmentation principle 178–179; self-pacing principle 178; social media 182–183
multimedia principle 178, 179
multiple representations 286
multiple viewpoints 216
naive models 177, 213
narrative reports 116
natural-process relationships 208, 210
non-integrated learning objectives 7
nonrecurrent constituent skills 108, 109, 176
nonrecurrent part-tasks: assessment of 332–333; context of 333
nonrecurrent skills: defining 24; and elaboration 25; and supportive information 24–25
nonspecific goal tasks 78, 80
novice learners 91
objective structured clinical examinations (OSCE) 332–333, 333, 334
objects 101, 103, 105–106
Ohm’s law 215
on-demand education 34–35, 151, 175, 350
online learning 41, 349
on-the-job performance assessments 147
Open Educational Resources (OERs) 51
open-ended questions 329
orientation phase 306
OSCE see objective structured clinical examinations (OSCE)
overlearning 290
parsimonious relationships 208
participatory design 343
partitioning 224
part-of relationships 205–206
partonomy 205
part-task practice 9; assessment of 322–323; and automaticity 26, 55; and cognitive load theory 30b; and cognitive rules 49, 256–257; control of 34; defining 10, 17; and deliberate practice 313–314; dependent 36; independent 31–32; media for 38, 40; metacognitive prompts for 303; and procedural information 26, 242; and strengthening 26, 40, 257; summative assessment of 330–331
part-task practice design: and attention focusing 286; and automaticity 47, 55; changing performance criteria for 290; computer-based training (CBT) 293; and contingent tutoring 286–287; and corrective feedback 288; distributing practice over time 291; fading support 279–280, 285; guidelines for 296–297; IF-THEN rules 278; independent 291–293; intermixed training in 286, 294, 295; and matching 286; media for 293; and model tracing 288–289; and multiple representations 286; necessity of 275–277; and overlearning 290–291; part-task sequencing for 282, 283, 285; performance constraints 280; practice items 277–282; and procedural information 285–286; and simulated time compression 290; and strengthening 283b–284b; and subgoaling 286; in the training blueprint 294–296; training wheels interfaces 280–281
part-task sequencing: backward chaining 140–142, 282; combining methods for 142; defining 138; examples of 138; forward chaining 140, 282; for part-task practice 282, 283; skill clusters in 139–140; snowballing 140–142; techniques and guidelines for 141
part-whole sequencing 142–143, 142, 144, 145
path analysis 198
pebble-in-the-pond approach 46, 52–53, 56–57
pedagogical content knowledge 238
peer assessment 148
performance assessment: combining methods for 147; and complex learning 48–49; development portfolios 120–122; and dynamic task selection 32–33; formats 147–148; horizontal 151; in individualized instruction 30; monitoring progress 116–119; on-the-job 147; pebble-in-the-pond approach 53; protocol portfolio scoring 149–151, 150; scoring rubrics 114, 115, 116, 147; self-assessment 148; and standards 114, 116–119, 117; vertical 149; see also performance assessment design
performance assessment design: guidelines for 122; necessity of 95–97; performance objectives in 96, 102–107; skill decomposition in 97–102
performance conditions 105
performance constraints 89, 90, 197, 280
performance criteria 290
performance deficiencies 102
performance objectives: action verbs 104, 122; classifying 107–114; elements of 103, 104; formulating 102–107; guidelines for 122; for performance assessments 95–96; performance conditions 105; standards in 106–107; terminal objective of 106–108; tools and objects 105–106; using 113–114
performance support 40, 91
phases: identifying 192–193; in SAPs 191
physical fidelity 39, 43, 65, 66, 67
physical models 226, 267–268, 268, 269
planned information provision 35, 173
plans 264–266, 269
portfolios: and assessment 325–327, 330, 334–337, 345–347; in coaching meetings 154; development 97, 120–122, 147, 153; electronic development 120–121, 154, 304, 318; examples of 121; protocol scoring 149–151, 150
power law of practice 284b
practice items: conventional 278; divergent 281–282; edit 278; high contextual interference 282; low contextual interference 282; for part-task practice 278; recognize 279; types of 278, 279
prerequisite knowledge: and cognitive rules 256–257; defining 262; in just-in-time information displays 220–221, 226
prerequisite knowledge analysis: as combination analysis 262; and concepts, plans and principles 264–268; and conceptual knowledge 262–269; defining 48; and design decisions 270–271; and facts 266; and feature lists 268; guidelines for 272–273; hierarchical approach 264, 269–270; IF-THEN rules 262–263; as instructional analysis 262; misconceptions 269–271; necessity of 261–262; and procedural information 271; specifying 262–264
prerequisite relationships 97, 207
prescriptive principles 194
presentation skills 67
presentation strategies: deductive 170–171, 173–175; inductive 171–172, 174; for procedural information design 231; selection of 174–175; solicited 231–232; unsolicited 231–232
Presentation Trainer, The 67
presenters 343
primary medium 63
primary training blueprints 306, 309, 315, 317–318, 348
principles 166, 210, 264–266
prior knowledge 158
problem-based learning 39, 79, 90, 182, 306, 340
problem solving: framework of 76, 76, 78; task support 76, 76; see also systematic approaches to problem solving (SAPs)
problem-solving guidance: built-in task support for 84–85, 152–153; coaching meetings 153–154; fading 91; given situation 83; goal situation 83; and heuristics 77; in information literacy learning tasks 308; and intuitive strategies 199; leading to solution 83; modeling examples 86–87; performance constraints 89; phases in 87, 88, 89; process worksheets 87, 89, 197; rules of thumb 87, 88, 89; and scaffolding 91–92; tutor guidance 90
procedural information 9; analysis of prerequisite knowledge 48; assessment of 331–332; and augmented reality 40, 40; and cognitive load theory 27b–28b; control of 34; defining 9, 17; and deliberate practice 312–313; IF-THEN rules 55; media for 38, 39–40; metacognitive prompts for 303–304; and part-task practice 26; and recurrent skills 24–25; solicited information presentation 35; unsolicited information presentation 35
procedural information design: and cognitive rules 48–49; corrective feedback in 236–237; defining 47; and demonstrations 220, 229–230, 232–234; diagnosis of malrules 239; errors and malrules in 258; and fading 242; guidelines for 244–245; instances in 220, 229; and just-in-time (JIT) information displays 220–221, 224–225, 229–236, 241; learner errors in 237–238; and learning tasks 241–242; media for 239–240; necessity of 219–220; and part-task practice 242, 285–286; and prerequisite knowledge 48–49, 271; presentation strategies 231–232; and rule formation 221b–224b, 257; in small units 224–225; split attention effect 227, 230, 235; steps for 54–55; and task classes 242, 243–244; in the training blueprint 241–242
procedures 225–226, 252
process-oriented support 86, 198
process simulation 181, 181
process worksheets 87, 89, 197
produce item 278
producing footage 99, 119
product-oriented support 215
professional tasks 326–327
progression of cognitive strategies 133
progress testing 329–330
project-based learning 79, 340
properties 204
propositional nodes 266–267
propositions 204, 264, 266, 267
protocol portfolio scoring 149–151, 150, 154
psychological fidelity 65–67
quantitative models 215
rapid prototyping 50
rational analysis 196, 213, 255, 269
real-life tasks: defining 60–61; ill-structured problems 61; interprofessional education 62; and learning tasks 62–63, 350; multidisciplinary 61–62; and simulated task environments 63–69; team tasks 62
real task environment 63, 64, 65, 67, 70
reciprocal relationships 216–217
recognize-act cycles 250
recognize-edit-produce sequence 279
recognize practice items 279
recurrent constituent skills 107–109
recurrent skills: automating 110–111, 318, 330; defining 24; and part-task practice 26–27; and procedural information 24–25; and rule-based practice 23–24; and rule formation 25; techniques for 283
recursivity 335–336
redundancy principle 178, 179
reflection 176–177
reflective expertise 112
rehearsal 233
relatedness 349
resource-based learning: and autonomy 350; defining 174; and self-directed learners 175–176, 305
retiary organizations 100
reuse of instructional materials 51
reverse tasks 80, 89
role playing 67
rubrics 114, 115, 116
rule automation 222b
rule-based analysis: defining 250; IF-THEN rules 250–253
rule-based instruction 221
rule-based processes 23–24
rule formation: and prerequisite knowledge 263; and procedural information 25, 39, 220, 221b, 257; production compilation 222b–224b; vs. strengthening 284b; weak methods 222b
rules of thumb: analysis of 195–196; and cognitive feedback 184; and cognitive strategies 159, 216; and complex skills 88; as heuristics 190; identifying 192, 195; and instructional methods 158; intuitive 196; and performance constraints 89; and process worksheets 87, 89; in SAPs 159–160, 186, 190–192; and scaffolding 92; for scoring rubrics 116; for task selection 154; tutor guidance for 90
SAP-charts 159, 160
SAPs see systematic approaches to problem solving (SAPs)
saw-tooth pattern of support 134, 134
scaffolding 152; expertise reversal effect 91; fading 92; first-order 152; second-order 37, 152, 307; in self-directed learning skills 36–37; support and guidance in 22–23, 28b–29b, 91; techniques for 92
schema automation: media for 38; and procedural information 25
schema-based processes 23–24
schema construction: and cognitive feedback 176; media for 38; and supportive information 24–25
schema enhancement 73b
scoring rubrics 114–115, 116, 147
script concordance tests 328
scripts 165
secondary training blueprint 306, 315, 317, 318, 348
second-order scaffolding 37, 151, 152–153, 307
segmentation 283
segmentation principle 178–179, 178
self-assessment 148, 326
self-determination theory (SDT) 348–349, 349
self-directed learners 175, 291
self-directed learning (SDL): and autonomy 350; deliberate practice skills 304–305; information literacy skills 304–305; learning and teaching skills for 304; monitoring and control in 301; relevant skills for 305; in resource-based learning 306
self-directed learning skills: importance of 36; second-order scaffolding 36
self-explanation principle 162b
self-pacing principle 178
self-regulated learning (SRL): cues in 302; learning and teaching skills for 301–305; monitoring and control in 301–303, 303
self-study phase 306
semantic networks 207
sequencing: assessment formats 147–148; assessors 147–149; guidelines for 156; individualized learning trajectories 145–147, 146; necessity of 125–126; part-task 138–142, 283, 283; part-whole 142–143, 142, 144; protocol portfolio scoring 149–151, 150; second-order scaffolding 151–153; task-selection skills 151–153; whole-part 142–143, 142; whole-task 127–133, 138, 141, 145
serious games 70
shared control 152
signaling principle 240–241, 240
simplification 283
simplifying conditions method 128; available time 129; location 129; participants dynamics 129–130; patent examination 130; project goal 128–129; substantive examination 130; video length 128
simulated task environments: computer-based 67, 69, 70, 70; defining 63; fidelity of 65–67, 70, 70, 71; reasons for using 63–64, 64, 65; serious games 70
simulated time compression 290–291
simulation-based games 70
simulation-based performance tests 147
simultaneous relationships 99, 193
situated practice 182
situational judgment tests 147, 328
skill clusters 139–140, 139, 143, 144
skill decomposition: and constituent skills 97–99; skill hierarchy in 96–102; validation cycles in 101
skill hierarchy: data gathering in 100–102; guidelines for 100–101; horizontal relationships 99–100; and learning goals 97; and performance assessment 107–109; relationships in 100; simultaneous relationships 99; temporal relationships 99; transposable relationships 99
smartphones and tablets 40, 240
snowballing 140, 139, 141–142
social media 182–183
social networks 349
solicited information presentation 35, 231–232, 234–236, 350
solution-process guidance 22
spaced practice 291
spatial split-attention principle 240, 241
spiral curriculum 183
split attention effect 226, 227, 230, 235, 241
standard-centered assessment 117, 149, 292
standards: attitudes 106; consistent 115; criteria 106; and performance assessment 114, 116–117, 118; values 106
standards-task matrix 117, 118
step-by-step instruction 221
strengthening: accumulating 283b; and learning processes 38; media for 37; and part-task practice 26, 29b–30b, 40, 283b–284b; and power law of practice 284b; and practice items 277; and procedural information 257–258; and rule formation 221b, 284b
structural features 75
structural models: and case studies 169–170; defining 165; examples of 210; identifying 208–210; and task domains 212
structural understanding 161b
subgoaling 286
subordinate concepts 205, 205
summative assessment: at the does level 325–326; of domain-general skills 334–338; of part-tasks 322, 330–331; and professional tasks 326–327; on the shows-how level 330; of supportive information 327–328; and test tasks 325; and unsupported/unguided tasks 325
superordinate concepts 205, 205
support 22–23, 75–76, 91
supportive information 9; analysis of cognitive strategies 48; and case studies 168; and cognitive load theory 27b–28b; control of 34; defining 9, 16; in flipped classrooms 41; media for 38, 39; metacognitive prompts for 302–303; and modeling examples 168; and nonrecurrent skills 24; planned information provision 35; progress testing 329–330; summative assessment of 330–331
supportive information design: and case studies 169; and cognitive feedback 175–177, 184; and cognitive strategies 48, 159; collaborative learning 182; defining 47; and domain models 163–165, 186, 215; and elaboration 161b–162b; general information positioning 183–184; guidelines for 186–190; and intuitive mental models 216; and intuitive strategies 199; media for 178–183; and mental models 48, 159; and modeling examples 169; necessity of 157–158; presentation strategies for 170–172, 173, 174, 183–184; resource-based learning in 174; and SAPs 159–163, 167–168, 186, 197–198; for self-directed learners 175; steps for 50; in the training blueprint 183–185, 185–186
surface features 74
systematic approaches to problem solving (SAPs): analysis of 195; chart 159; and coaching meetings 154; and cognitive strategies 54, 82, 189–197; creating storyboard 159; and design decisions 197–198; examples of 160, 193; and experiential relationships 167–168; heuristic 159, 190, 254; and information literacy learning 307–308, 309; instructional methods for 160, 161b, 186; knowledge progression in 197; and modeling examples 169; nonlinear 193; performance constraints 197; phases in 192–193; problem solving see problem-solving guidance; and process worksheets 89; and reflection 176; purpose of video 160; and rules of thumb 159–160, 194–195; success in 226; and supportive information 48, 159–160, 198–199
system control 34
system dynamics: defining 49; iteration 49–50; layers of necessity 51; zigzag design 51–52
system-initiated help 234, 287
tacit knowledge 162b
task-centered assessment 117, 120, 151
task-centered learning 340
task classes: and complex learning 17, 134–135; defining 21; and dynamic task selection 31–33; refining 197–198, 214; sequencing of 126–131, 131, 133; simple-to-complex 128–130, 131; skill clusters 145; specification of 134; support and guidance in 134, 135–137
task databases 146
task domain: cognitive strategies for 23; and constituent skills 16; mental models in 159
task environments: fidelity of 65–67, 70; real 63, 64, 65, 66, 69; simulated 63–65, 69
task selection 32
task-selection skills: built-in task support for 151; in coaching meetings 154; shared control of 152; teaching 304
task support: built-in 77–79, 80, 134, 152, 152, 308; case study method 80, 81; completion tasks 82; conventional tasks 77–79; defining 24; imitation tasks 86; in information literacy learning tasks 307; with nonspecific goal 80; and problem solving 75–76, 76, 78; reverse tasks 80; saw-tooth pattern of 134, 134; second-order scaffolding 307; worked-out tasks 79, 80
tasks with a nonspecific goal 80, 90
taxonomy 205, 205, 206
teachers: as Aloys 232, 240, 286; analyzing misconceptions 270; and assessment 147–149, 304; as coaches 343; and coaching meetings 304; as designers 342–343; and guidance 308; as instructors 343; as presenters 343; and procedural information 29b–30b; roles of 342–343; and social media 178; and supportive information 178; as tutors 343
team tasks 62
templates 209, 264, 340
temporal relationships 99
temporal split-attention principle 240, 240
Ten Steps: applications of 10; assessment in 322–324; blended learning in 344–345; blueprint components of 9; cognitive load theory (CLT) in 30b; and complex learning 9; defining 8; design activities 46–49; and emotions 350; game-facilitated curriculum in 345; individualized instruction in 30–34; and instructional systems design (ISD) 55–56; learner control 33, 34; mass customization in 345–347; and motivation 348–351; progression of 10; as a research-based model 344, 344; system control 33, 34; system dynamics of 49–52; as task-centered learning 340; task-first approach of 341
terminal objectives 106–108
theories 166, 211
thinking-aloud 86, 191
to-be-automated recurrent constituent skills 110–111, 323, 330
tools 101, 102, 105–106
training blueprints see blueprint components
training wheels interfaces 280–281
transfer of learning: holistic approach to 4, 8; and non-integrated learning objectives 7; and random practice 7–8
transfer paradox 7–8, 23–24, 302
transfer tasks 302
transposable relationships 99, 194
tricks-of-the-trade 162b
tutor guidance 90–91
tutoring 90
tutors 343
typical errors 237
unidirectional relationships 256
unsolicited information presentation 35, 231
unsolicited information presentation in advance 231
Urban Science 182
values 105
variability 307, 334
variability of practice: and contextual interference 73b; and learning tasks 20, 71, 74; and mindful abstraction 73b; and schema-based processes 23; and structural features 74; support and guidance in 134; and surface features 74
verbal encoding 271
vicarious experience 182
video editing footage 119
video: edit practice items 278; postproduction 19, 83, 87, 90; preproduction 19, 83, 87, 90; video production 19, 24–26, 80, 83, 87–88, 90, 109, 129; plan 99, 109, 119; video production compilation 222b–224b; video shooting video 99, 119
virtual companies 67
virtual patients 67
virtual reality (VR) 65, 181
visual encoding 271
well-structured problems 61
whole-part sequencing 142–143, 142, 144, 145, 145
whole-task practice: assessment of 322; in educational programs 322; importance of 40; intermixed training in 291; performance assessment of 114
whole-task sequencing 142; combining methods for 134, 142; emphasis manipulation method 130–133; knowledge progression methods 128, 133–134; and learner support 134, 135–138; simplifying conditions method 128–130; of task classes 127–128, 145–146
worked-out tasks 79, 80
working memory: in cognitive architecture 27b–28b; in cognitive load theory (CLT) 27b–28b; and dual-mode presentation techniques 241; and ease of recall 302; and learning from feedback 237–238; and multimedia presentations 178; and prerequisite knowledge 266; and procedural information 231–232; and redundancy 179; and rule formation 220; and specificity 254
work sample tests 147
written assessments 328
zero-order models 214
zigzag design 51–52
zone of proximal development 91, 127
zoom lens metaphor (elaboration theory) 127, 127
