Data Analytics and Adaptive Learning
“The group of
experts assembled in this book share important ideas and trends related
to learning analytics and adaptive learning that will surely influence all of
our digital learning environments in the future.”
—Charles R. Graham, Professor,
Department of Instructional Psychology and
Technology, Brigham Young University
“Moskal, Dziuban, and Picciano challenge the reader to keep the student
at the center and imagine how data analytics and adaptive learning can
be mutually reinforcing in closing the gap between students from different
demographics.”
—Susan Rundell Singer, President of St. Olaf
College and Professor of Biology
“We are currently living in a digital age where higher education institu-
tions have an abundance of accessible data. This book contains a series of
chapters that provide insight and strategies for using data analytics and
adaptive learning to support student success and satisfaction.”
—Norman Vaughan, Professor of Education,
Mount Royal University, Calgary, Alberta,
Canada
“An important book that comes at a critical moment in higher education.
We are swimming in an ocean of data and this book from some of the
country’s top researchers and practitioners will help us make sense of it
and put it in the service of student success.”
—Thomas Cavanagh, PhD, Vice Provost
for Digital Learning, University of Central
Florida
“Data analytics and adaptive learning comprise two of the most relevant educational challenges. This book provides excellent research
approaches and analysis to answer practical questions related to digital
education involving teachers and learners.”
—Josep M Duart and Elsa Rodriguez,
Editor-in-Chief & Editorial Manager of
the International Journal of Educational
Technology in Higher Education, the
Universitat Oberta de Catalunya
“Data, analytics, and machine learning are impacting all jobs and indus-
tries. For education, the opportunities are immense, but so are the chal-
lenges. This book provides an essential view into the possibilities and
pitfalls. If you want to use data to impact learners positively, this book is
a must-read.”
—Colm Howlin, PhD, Chief Data Scientist,
ReliaQuest
“This book shines a spotlight on the potential for data analytics, adaptive
learning, and big data to transform higher education. The volume lights
the way for those brave enough to embrace a new paradigm of teaching
and learning that enacts a more equitable and person-centered experience
for all learners.”
—Paige McDonald, Associate Professor and
Vice Chair, Department of Clinical Research
and Leadership, The George Washington University School of Medicine and Health Sciences
“This book brings together top scholars making the connection between
data analytics and adaptive learning, all while keeping pedagogical theory
on the central stage. It’s a powerhouse driven in equal parts by excellence
and innovation providing vision for educators on the quest for learner suc-
cess across the spectrum.”
—Kimberly Arnold, Director of Learning
Analytics Center of Excellence, University of
Wisconsin-Madison
Data Analytics and Adaptive Learning offers new insights into the use
of emerging data analysis and adaptive techniques in multiple learning
settings. In recent years, both analytics and adaptive learning have helped
educators become more responsive to learners in virtual, blended, and
personalized environments. This set of rich, illuminating, international
studies spans quantitative, qualitative, and mixed-methods research in
higher education, K–12, and adult/continuing education contexts. By
exploring the issues of definition and pedagogical practice that permeate
teaching and learning and concluding with recommendations for the
future research and practice necessary to support educators at all levels,
this book will prepare researchers, developers, and graduate students
of instructional technology to produce evidence for the benefits and
challenges of data-driven learning.
SECTION I
Introduction
SECTION II
Analytics
3 System-wide momentum
Tristan Denley
SECTION III
Adaptive Learning
SECTION IV
Organizational Transformation
SECTION V
Closing
Index
ABOUT THE EDITORS
In 2000, Chuck was named UCF’s first ever Pegasus Professor for extraor-
dinary research, teaching, and service, and in 2005 received the honor of
Professor Emeritus. In 2005, he received the Sloan Consortium (now the
Online Learning Consortium) award for Most Outstanding Achievement
in Online Learning by an Individual. In 2007, he was appointed to the
National Information and Communication Technology (ICT) Literacy
Policy Council. In 2010, he was made an inaugural Online Learning
Consortium Fellow. In 2011, UCF established the Chuck D. Dziuban
Award for Excellence in Online Teaching in recognition of his impact
on the field. UCF made him the inaugural collective excellence awardee
in 2018. Chuck has co-authored, co-edited, and contributed to numer-
ous books and chapters on blended and online learning and is a regular
invited speaker at national and international conferences and universities.
Currently he spends most of his time as the university representative to the
Rosen Foundation working on the problems of educational and economic
inequality in the United States.
ACKNOWLEDGEMENTS
We are grateful to so many who helped make Data Analytics and Adaptive
Learning: Research Perspectives a reality. First, our colleagues at the
Online Learning Consortium, both past and present, for providing an
environment that fosters inquiry as presented in our work. Second, Dan
Schwartz of Taylor & Francis and his staff provided invaluable design,
editorial, and production support for this book. Third, we are most grate-
ful to Annette Reiner and Adyson Cohen, Graduate Research Assistants
at the Research Initiative for Teaching Effectiveness of the University of
Central Florida, for the untold hours of editorial work on this book. There
is no real way we can properly thank them for their efforts on our behalf.
Lastly and most importantly, however, to the authors of the outstanding
chapters found herein: this is not our book, it is a celebration of their out-
standing research in data analytics and adaptive learning.
Heartfelt thanks to you all.
Patsy, Chuck, and Tony
CONTRIBUTORS
than 30 countries to support their work of ensuring all learners are thriv-
ing every day. Rhonda’s research examines how teachers develop inclusive
teaching practices and differentiated instruction expertise throughout their
career using new technologies.
Alfred Essa is CEO of Drury Lane Media, LLC, Founder of the AI-Learn
Open Source Project, and author of Practical AI for Business Leaders,
Product Managers, and Entrepreneurs. He has served as CIO of MIT’s
Sloan School of Management, Director of Analytics and Innovation at
D2L, and Vice President of R&D at McGraw-Hill Education. His current
research interests are in computational models of learning and applying AI
Large Language Models to a variety of educational challenges, including
closing the achievement gap.
Phil Ice is currently Senior Product Manager for Data Analytics and
Intelligent Experiences at Anthology. In this capacity, he works with various
parts of the organization to create predictive and prescriptive experiences
to improve learning outcomes, engagement, and institutional effective-
ness. Prior to joining Anthology, Phil worked as faculty at West Virginia
University (WVU) and University of North Carolina, Charlotte (UNCC),
Vice President of R&D at APUS, Chief Learning Officer for Mirum, and
Chief Solutions Officer at Analytikus. Regardless of the position, he has
remained passionate about using analytics and data-driven decision-mak-
ing to improve Higher Education.
Kari Goin Kono is a senior user experience designer with over 10 years of
experience in online learning and designing digital environments within
higher education. She has an extensive research agenda geared toward
supporting faculty with inclusive teaching practices including digital
Andrea Leonard spent nearly a decade designing and teaching both online
and hybrid chemistry courses at the University of Louisiana at Lafayette
before joining the Office of Distance Learning as an Instructional Designer
in 2019. Andrea’s education and certification include a BSc in chemis-
try from UL Lafayette and an MSc in chemistry with a concentration in
organic chemistry from Louisiana State University. She is also a certified
and very active Quality Matters Master Reviewer. Her research interests
include the discovery and application of new adaptive and interactive
teaching technologies.
Morgan C. Wang received his PhD from Iowa State University in 1991.
He is the founding Director of the Data Science Program and Professor
of Statistics and Data Science at the University of Central Florida, USA.
He has published one book (Integrating Results through Meta-Analytic
Review, 1999), and over 100 articles in refereed journals and conference
proceedings on topics including big data analytics, meta-analysis, com-
puter security, business analytics, healthcare analytics, and automatic
intelligent analytics. He is an elected member of the International Statistical Association, a data science advisor for the American Statistical Association, and a member of the American Statistical Association and the International Chinese Statistical Association.
Introduction
1
DATA ANALYTICS AND
ADAPTIVE LEARNING
Increasing the odds

Philip Ice and Charles Dziuban
Data analytics and adaptive learning are two critical innovations gaining a
foothold in higher education that we must continue to support, interrogate,
and understand if we are to bend higher education’s iron triangle and realize
the sector’s longstanding promise of catalyzing equitable success. The oppor-
tunity and peril of these pioneering 21st century technologies (made availa-
ble by recent advances in the learning sciences, high performance computing,
cloud data storage and analytics, AI, and machine learning) requires our
full engagement and study. Without the foundational and advanced insights
compiled in this volume and continued careful measurement, transparency
around assumptions and hypotheses testing, and open debate, we will fail to
devise and implement both ethical and more equitable enhancements to the
student experience – improvements that are both urgent and vital for real-
izing higher education’s promise on delivering greater opportunity and value
to current and future generations of students.
—Rahim S. Rajan, Deputy Director,
Postsecondary Success, Bill & Melinda Gates
Foundation
DOI: 10.4324/9781003244271-2
1. baseline status
2. scores on individual learning modules
3. course activity completion rate
4. average scores across modules
5. course engagement time
6. interaction with course material
7. interaction with other students
8. revision activities
9. practice engagement
10. growth from the baseline
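Taken together, these ten elements amount to a per-learner record that an adaptive platform updates as a course unfolds. A minimal sketch of such a record follows; the field names, types, and derived properties are illustrative assumptions, not the schema of any particular adaptive system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LearnerRecord:
    """Illustrative container for the ten adaptive-learning data elements above."""
    baseline_status: float                                    # 1. baseline status
    module_scores: Dict[str, float] = field(default_factory=dict)   # 2. scores on individual learning modules
    completion_rate: float = 0.0                              # 3. course activity completion rate (0-1)
    engagement_minutes: float = 0.0                           # 5. course engagement time
    material_interactions: int = 0                            # 6. interaction with course material
    peer_interactions: int = 0                                # 7. interaction with other students
    revision_events: List[str] = field(default_factory=list)  # 8. revision activities
    practice_attempts: int = 0                                # 9. practice engagement

    @property
    def average_module_score(self) -> float:                  # 4. average scores across modules
        return sum(self.module_scores.values()) / max(len(self.module_scores), 1)

    @property
    def growth_from_baseline(self) -> float:                  # 10. growth from the baseline
        return self.average_module_score - self.baseline_status
```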
Big Data
All data analytics professionals, adaptive learning specialists, and big data
scientists should pay attention to and respect these criticisms and cau-
tions. The responsibility is considerable and is best summarized by a caution Tufte (2001) offered when speaking to visual presentations; paraphrased, it becomes:
It ain’t what you don’t know that gets you into trouble. It’s what you
know for sure that just ain’t so.
As data scientists and the practitioners with whom they collaborate begin
utilizing increasingly sophisticated machine learning techniques, keeping
this warning in the front of one’s mind is critical.
By 2013, computational power had reached a point where deep learning, a branch of machine learning based on artificial neural networks, could be developed by connecting multiple layers of processing to extract increas-
ingly higher-level entities from data. In other words, it is doing what
humans hopefully do every day—learn by example. A classic example of
how this works is that if you show the deep learning algorithm enough
pictures of a dog it will be able to pick the pictures of dogs out of a col-
lection of animal pictures. Over time, deep learning’s capabilities have
increased and certain techniques have been able to move, to a degree,
from the specific to the general, using semi-supervised deep learning; giv-
ing the algorithm some examples and then letting it discover similar fea-
tures on its own (Pearson, 2016).
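The semi-supervised idea can be illustrated at a small scale without a deep network. The sketch below uses scikit-learn's self-training wrapper around a simple classifier: only a handful of labels are provided, and the model pseudo-labels the unlabeled examples it is confident about. It is a minimal stand-in for the principle described here, not the deep learning systems Pearson (2016) discusses.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic two-class data standing in for "pictures of dogs vs. other animals"
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hide 90% of the labels; scikit-learn marks unlabeled points with -1
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) < 0.9
y_partial = y.copy()
y_partial[unlabeled] = -1

# Self-training: fit on the labeled points, then iteratively adopt confident
# predictions on the unlabeled points as new training labels
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X, y_partial)

print("Accuracy on the originally unlabeled points:",
      round(model.score(X[unlabeled], y[unlabeled]), 3))
```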
While this technology is extremely impressive and is used by virtually
every tech and marketing company, it is also problematic in that we do
not fully understand how it works. As it is modeled after the way that
biological neural networks operate, it is informative to consider how a
person thinks. They may see an apple, classify it as such based on prior
knowledge, apply both subjective and objective analysis to it, consult with
their internal systems to see if they are hungry, then make the decision of
whether to eat the apple. This scenario may take only a few seconds, but
there are billions of neurons involved in the process, and even if we were able to scan the person's brain while those interactions were occurring, we could not make sense of the data. Nor in many cases would the
person be able to tell you what factors they considered, in what order,
to arrive at the decision; mirroring the situation that data scientists find
themselves in with deep learning.
Compounding the problem, researchers at Columbia University recently
created a semi-supervised deep learning experiment in which the algorithm
detected variables that it could not describe to the researchers. To determine
if complex motion could be analyzed by artificial intelligence to surface new
variables, the researchers showed the algorithm examples of well-under-
stood phenomena, such as double pendulums, from which additional vari-
ables, not considered by humans, were detected. Then the team asked it to
examine fire, lava lamps, and air dancers. In the case of flame patterns, 25
variables were returned. However, when queried about their nature, the
program was incapable of describing them, even though the predictions of
flame action and patterns were largely accurate (Chen et al., 2022).
Even now, some of the companies engaged in machine learning applica-
tions for higher education have publicly stated that they do not fully under-
stand the models that are created (Ice & Layne, 2020). While disconcerting,
at least the variables that are being fed to machine learning algorithms can
be understood and analyzed using more conventional statistical techniques
to provide some degree of interpretability. However, imagine a scenario in
which a deep learning model was trained on data that are traditionally used
to predict phenomena such as retention, then was allowed to ingest all the
data that exist on campus, and then went on to produce highly accurate
predictions related to which students were at risk, but was unable to pro-
vide researchers with an explanation of what it was measuring. While this
scenario may, at first, appear theoretical, the speed with which deep learn-
ing is beginning to understand complex systems is quite stunning.
What if the deep learning model above was trained on data that had
inherent, though unintentional, biases? The algorithm does not have a
sense of bias or an understanding of ethics, it simply applies what it has
learned to new data, even if that means transference of bias. While deep
learning is certainly the current darling of the analytics world, the collec-
tion, analysis, and reporting processes historically have relied on models,
categorization, and labeling based on human input. While humans still
have some touchpoints with the model on which deep learning is trained,
it is important to remember that the complete history of human research is distilled and embedded within those examples. This process has the potential to result in oversimplified models and in incomplete, or even harmful, practices that perpetuate stereotypes and are culturally repressive. As such, it is imperative that practition-
ers are thoughtful about the processes and tools they are creating and
that institutions exercise appropriate oversight to ensure equity (2022
EDUCAUSE Horizon Report Data and Analytics Edition, 2022).
problem within the context of the three institutional forces that are
responsible for creating, maintaining, and governing the integration of
big data, analytics, and adaptive learning. However, as societal demands
on the Academy continue to increase, it is imperative that these forces
be reconciled. It is our hope that this book is a step in that direction.
That said, perhaps this preface is best concluded with another quote from
Master Yoda.
References
2022 EDUCAUSE horizon report: Data and analytics edition. (2022, July 18).
EDUCAUSE. https://library.educause.edu/resources/2022/7/2022-educause-horizon-report-data-and-analytics-edition
Aljohani, N. R., Aslam, M. A., Khadidos, A. O., & Hassan, S.-U. (2022). A
methodological framework to predict future market needs for sustainable
skills management using AI and big data technologies. Applied Sciences,
12(14), 6898. https://doi.org/10.3390/app12146898
Almond-Dannenbring, T., Easter, M., Feng, L., Guarcello, M., Ham, M.,
Machajewski, S., Maness, H., Miller, A., Mooney, S., Moore, A., & Kendall,
E. (2022, May 25). A framework for student success analytics. EDUCAUSE.
https://library.educause.edu/resources/2022/5/a-framework-for-student-success-analytics
Aspegren, E. (2021, March 29). These colleges survived World Wars, the Spanish flu and more: They couldn’t withstand COVID-19 pandemic. USA TODAY. https://eu.usatoday.com/story/news/education/2021/01/28/covid-19-colleges-concordia-new-york-education/4302980001/
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim
Code. Polity.
Boghosian, B. M. (2019). The inescapable casino. Scientific American, 321(5),
70–77.
Carnegie, M. (2022, February 11). After the great resignation, tech firms are
getting desperate. WIRED. https://www.wired.com/story/great-resignation-perks-tech/
Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64(8),
723–733.
Chai, W., Ehrens, T., & Kiwak, K. (2020, September 25). What is CRM
(customer relationship management)? SearchCustomerExperience. https://www.techtarget.com/searchcustomerexperience/definition/CRM-customer-relationship-management
Chen, B., Huang, K., Raghupathi, S., Chandratreya, I., Du, Q., & Lipson, H. (2022, July 25). Automated discovery of fundamental variables hidden in experimental data. Nature Computational Science, 2(7), 433–442. https://doi.org/10.1038/s43588-022-00281-6
Clark, R. M., Kaw, A. K., & Braga Gomes, R. (2021). Adaptive learning: Helpful
to the flipped classroom in the online environment of COVID? Computer
Applications in Engineering Education, 30(2), 517–531. https://doi.org/10.1002/cae.22470
Conklin, S. (2021). Using change management as an innovative approach to learning management system. Online Journal of Distance Learning Administration. https://eric.ed.gov/?id=EJ1311024
Cornett, J. W. (1990). Teacher thinking about curriculum and instruction: A case
study of a secondary social studies teacher. Theory and Research in Social
Education, 28(3), 248–273.
Corritore, M., Goldberg, A., & Srivastava, S. B. (2020). The new analytics of
culture. Harvard Business Review. https://hbr.org/2020/01/the-new-analytics-of-culture
Deloitte United States. (2020, May 29). COVID-19 impact on higher education.
https://www2.deloitte.com/us/en/pages/public-sector/articles/covid-19-impact-on-higher-education.html
Digital Analytics Association. (n.d.). https://www.digitalanalyticsassociation.org/
Dziuban, C., Howlin, C., Johnson, C., & Moskal, P. (2017). An adaptive learning
partnership. Educause Review. https://er.educause.edu/articles/2017/12/an-adaptive-learning-partnership
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Griffin, R., & Hamilton,
C. (2020). Adaptive analytics: It’s about time. Current Issues in Emerging
Elearning, 7(1), Article 4.
Dziuban, C., Moskal, P., Parker, L., Campbell, M., Howlin, C., & Johnson,
C. (2018). Adaptive learning: A stabilizing influence across disciplines
and universities. Online Learning, 22(3), 7–39. https://olj.onlinelearningconsortium.org/index.php/olj/article/view/1465
Dziuban, C. D., Hartman, J. L., Moskal, P. D., Brophy-Ellison, J., & Shea, P.
(2007). Student involvement in online learning. Submitted to the Alfred P.
Sloan Foundation.
Dziuban, C. D., Picciano, A. G., Graham, C. R., & Moskal, P. D. (2016).
Conducting research in online and blended learning environments: New
pedagogical frontiers. Routledge.
Essa, A., & Mojarad, S. (2020). Does time matter in learning? A computer
simulation of Carroll’s model of learning. In Adaptive instructional systems
(pp. 458–474). https://doi.org/10.1007/978-3-030-50788-6_34
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police,
and punish the poor. St. Martin’s Press.
Floridi, L. (2016). The 4th revolution: How the infosphere is reshaping human
reality. Oxford University Press.
Forrester, J. W. (1991). System dynamics and the lessons of 35 years. In K. B. D.
Greene (Ed.), Systems-based approach to policymaking. Kluwer Academic.
http://static.clexchange.org/ftp/documents/system-dynamics/SD1991-04SDandLessonsof35Years.pdf
Friedman, S., & Laurison, D. (2019). The class ceiling: Why it pays to be
privileged. Policy Press.
education/our-insights/using-machine-learning-to-improve-student-success-in-higher-education
Wang, M. C., Dziuban, C. D., Cook, I. J., & Moskal, P. D. (2009). Dr. Fox rocks:
Using data-mining techniques to examine student ratings of instruction. In M.
C. Shelley II, L. D.Yore, & B. Hand (Eds.), Quality research in literacy and
science education: International perspectives and gold standards. Dordrecht,
the Netherlands: Springer.
Wilkerson, I. (2011). The warmth of other suns: The epic story of America’s
Great Migration. Vintage.
Williamson, K., & Kizilcec, R. (2022). A review of learning analytics dashboard
research in higher education: Implications for justice, equity, diversity, and
inclusion. LAK22: 12th International Learning Analytics and Knowledge
Conference. https://doi.org/10.1145/3506860.3506900
Wood, C. (2021, May 5). Physicists get close to taming the chaos of the ‘three-
body problem’. LiveScience. https://www.livescience.com/three-body-problem-statistical-solution.html
SECTION II
Analytics
2
WHAT WE WANT VERSUS
WHAT WE HAVE
Transforming teacher performance analytics
to personalize professional development

Rhonda Bondie and Chris Dede
DOI: 10.4324/9781003244271-4
areas. Our PD was embedded into existing courses held by each program
during the 2020–2021 school year.
Like many teacher educators (e.g., Dieker et al., 2017; Cohen et al.,
2020), we sought to test the potential of PD through MRS. However, our
goals were to use the technology affordances of MRS—such as the capac-
ity to erase or speed up time and instantly change locations—to imple-
ment differentiated and personalized professional learning. Following the
simulations, we explored how data analytics can be used to answer what
worked for whom under what conditions and what teachers should do
next in their professional learning.
FIGURE 2.1 Weighted Mean of Teacher Feedback Type in Mixed Reality Simulation by Treatment (Coaching versus No Coaching)
FIGURE 2.2 Weighted Mean of Teacher Feedback Type in Mixed Reality Simulation by Teaching Experience (Preparation, Early Career, and In-service)
FIGURE 2.3 Teacher Feedback Types across Three Trials by Condition (Coaching versus Self-reflection)
Purpose
Researchers and course instructors could use the outcomes from the
checklists of observed behaviors during the simulations to tailor future
[Figure: Teacher feedback frequency means by feedback type (Prompt Thinking, Correct, Value Student, Clarify Student Response, Low Information) for Preparation, Early Career, and In-Service teachers across simulations S0–S2]
Perils
As demonstrated in the weighted mean analytics, researchers often meas-
ure teaching behaviors without considering the alignment with the teach-
er’s identity and unique strengths, the values of the teacher’s practice, and
the particular teaching and learning context. In addition, studies rarely
assess the impact on student learning or use rubrics that generalize teach-
ing strategies, as if the teaching behavior is the end goal rather than P–12
student growth. One way to tackle these challenges is to report teacher
performance reflecting the distribution of feedback given to individual
avatar students. Figure 2.5 illustrates this approach for the entire sample, displaying the percentage of each type of feedback individual avatars received across three simulations.
The researcher value system weighted prompting student thinking with
the highest weight, yet teachers used that specific type of feedback the
least. For example, although Ethan received the most feedback, he received
primarily low information and clarifying feedback. This may be a result
of the simulation script. We had three standardized challenges, where at a
planned time in the simulation (e.g., 15 seconds, 1 minute, and 2 minutes)
a specific avatar student interrupts the teacher with a question, “Why am
I in this group?” or a statement, “I don’t understand why my answer is
wrong.” Ethan is the first student to interrupt the teacher; therefore, he
may receive more feedback due to that interruption. These data analyt-
ics begin to provide a dashboard of student learning. For example, we
see that Savannah received more valuing feedback than any other student.
Researchers could investigate this to determine if being on the far left drew
visual attention as teachers read from left to right, or if Savannah as a White
girl reflected the teacher sample and therefore teachers valued her think-
ing more than the other avatars, or perhaps Savannah’s initial response
planned by researchers in the MRS script provided more opportunity for
teachers to use valuing feedback. It is difficult to determine the factors that
may have led to differences in the feedback that students received during
the MRS. We need greater context in the teacher performance dashboard.
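The two reporting approaches discussed here, a single weighted score versus the distribution of feedback each avatar received, can be computed from the same raw counts. The sketch below does so with hypothetical counts and hypothetical weights; Ethan and Savannah are avatar names from the chapter, while the third avatar, the counts, and the weight values are invented for illustration rather than taken from the study.

```python
import pandas as pd

# Hypothetical feedback counts: rows are avatar students, columns are feedback types
counts = pd.DataFrame(
    {"prompt_thinking": [1, 4, 2], "correct": [3, 2, 2],
     "value_student": [2, 1, 6], "clarify": [6, 2, 3], "low_information": [8, 3, 2]},
    index=["Ethan", "Ava", "Savannah"])

# Hypothetical researcher value system: prompting student thinking weighted highest
weights = pd.Series({"prompt_thinking": 5, "correct": 3, "value_student": 3,
                     "clarify": 2, "low_information": 1})

# Approach 1: a single weighted mean score, pooled across all avatars
weighted_mean = (counts.sum() * weights).sum() / counts.sum().sum()
print("Weighted mean feedback score:", round(weighted_mean, 2))

# Approach 2: the distribution of feedback types each avatar received (row percentages)
distribution = (counts.div(counts.sum(axis=1), axis=0) * 100).round(1)
print(distribution)
```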
FIGURE 2.6 Measuring Change in Teacher Feedback to Individual Students as Time Elapsed During Three MRS Exposures
teachers, who completed the baseline simulation in the summer and then
the PD, Sim 1, and Sim 2 at the start of the school year. The differences
in early career teacher performance in the MRS and their logistics raise
questions for further exploration about the effectiveness of required PD
that takes place during the school day on teacher learning of instructional
practices for beginning teachers (Bondie et al., 2022).
In addition to logistics, other factors may have contributed to the
lower levels of high-information feedback for early career teachers, such
as adjusting to the responsibilities of classroom teaching. Perhaps early
career teachers felt greater dissonance between their current students in
their classroom and the avatar students in the virtual classroom. Teachers
in preparation had limited classroom experience against which to compare the avatar students, and experienced teachers are practiced in adapting to different classes of students; therefore, a future study might look more closely at how early career
teachers feel about the avatar students and the simulated classroom. Post-
simulation surveys suggested that teachers at all experience levels enjoyed
the simulation experience and found the MRS experience to be relevant
and useful. However, early career teachers did not increase their use of
high-information feedback in ways similar to preparation and experienced
teachers. It is important to note that in our study, all early career teach-
ers were teaching in the same school district, so our data are limited and
not generalizable; however, even the limited data help researchers form
new questions regarding how teachers at different points in their career
trajectory may benefit from using MRS as a means for professional learn-
ing. For example, researchers might explore ways to give teachers greater
autonomy, even within a required PD topic and format for learning. In
addition, given greater use of online meetings and learning, options in
logistics for required PD might also be explored.
References
Argyris, C., & Schön, D. A. (1992 [1974]). Theory in practice: Increasing
professional effectiveness. Jossey-Bass.
Bondie, R., Dahnke, C., & Zusho, A. (2019). Does changing “one-size fits all” to
differentiated instruction impact teaching and learning? Review of Research
in Education, 43(1), 336–362. https://doi.org/10.3102/0091732X18821130
Bondie, R., & Dede, C. (2021). Redefining and transforming field experiences
in teacher preparation through personalized mixed reality simulations. What
length L of the graph to those of a random graph with the same number of vertices and the same average degree, with corresponding values C_r and L_r:

σ = (C / C_r) / (L / L_r)

The degree to which σ > 1 measures the graph’s small worldliness.
We constructed the graduate-transcript graph for both the Tennessee
Board of Regents (TBR) institutions and the institutions in the University
System of Georgia (USG). For TBR, σ = 15.15; for USG, σ = 123.57. Figure 3.1, displaying the degree distribution of the TBR system, shows
that it follows an inverse power law, characteristic of small-world net-
works (Denley, 2016). The USG distribution follows a similar pattern.
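The small-world coefficient σ can be computed directly from a course graph. The sketch below shows the calculation with NetworkX on a synthetic stand-in graph; it is not the TBR or USG transcript data, just an illustration of the formula above.

```python
import networkx as nx

# Stand-in for a transcript graph: vertices are courses, edges connect courses
# that co-occur on student transcripts (a synthetic small-world graph here)
G = nx.connected_watts_strogatz_graph(n=300, k=6, p=0.1, seed=42)

# Clustering coefficient C and characteristic path length L of the observed graph
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# The same quantities for a random graph with the same vertices and edge count
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=42)
if not nx.is_connected(R):
    # restrict to the largest component so path length is defined
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
C_r = nx.average_clustering(R)
L_r = nx.average_shortest_path_length(R)

# sigma = (C / C_r) / (L / L_r); values well above 1 indicate small-worldliness
sigma = (C / C_r) / (L / L_r)
print(f"C={C:.3f}  L={L:.3f}  C_r={C_r:.3f}  L_r={L_r:.3f}  sigma={sigma:.2f}")
```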
This structural curricular analysis suggests that deepening student
learning in these “hub” classes will reach across the student body very
quickly and will also influence student success across the breadth of the
curriculum. We have used this theoretical observation to steer several
[FIGURE 3.1 Degree Distribution: count of vertices in the graduate-transcript graph at each degree value]
[Figure labels: English Composition I, American Government, College Algebra, Intro to Psychology]
[Figure: Percentage of students who passed a college-level math class within one academic year, by ACT Math sub-score: 2013 Traditional DevEd (7,214 students); 2015–17 Foundations (8,813 students); 2015–17 Corequisite (9,089 students)]
FIGURE 3.3 USG System-Wide Comparison of English Developmental Education Models
[Figure: Percentage of students who passed a college-level math class within one academic year, by ACT Math sub-score: 2013 Traditional DevEd (3,590 students); 2015–17 Foundations (4,718 students); 2015–17 Corequisite (4,694 students)]
FIGURE 3.4 USG System-Wide Comparison of Mathematics Developmental Education Models (African-American Students)
While the improvement in the results for the overall student population was impressive, so, too, was the corequisite model’s effectiveness in
improving success rates for all student subpopulations and in eliminating
equity gaps. This is illustrated by the results for African-American students
who studied Mathematics using the three models, as displayed in Figure
3.4. Those students who took a corequisite Mathematics course successfully
earned a “C” or better at more than twice the rate compared to their peers
who used the foundations or traditional approach. Once again, there were
significant improvements in success rates for students at every preparation
level with the largest gains for those with the lowest preparation scores.
These results are analogous to the improvements achieved by other
racial groups and student subpopulations, such as low-income students
and adult learners. For each category, we saw a doubling in the rates at
which students successfully completed a college-level Mathematics course.
Moreover, that doubling holds true for students across the preparation
spectrum, whether using ACT, SAT, or high-school GPA as the measure
of preparation.
The analysis for gateway English course success followed a very similar
pattern. We show these results in Figure 3.5. Once again, the students who
were educated using the corequisite model were almost twice as likely to
earn at least a “C” grade in their college-level English class when compared
with their peers who used either of the other two prerequisite approaches.
[Figure: Percentage of students who passed a college-level English class within one academic year, by ACT Writing sub-score]
FIGURE 3.5 USG System-Wide Comparison of English Developmental Education Models
As with Mathematics, the gains in success rates were apparent all across
the preparation spectrum, producing very similar success rates regardless
of incoming high-school preparation or student demographic. The data
for the other measures of preparation were similarly compelling.
In light of these results, all 26 University System of Georgia universities
and colleges moved entirely to the corequisite model of developmental edu-
cation for college Mathematics and English beginning Fall 2018.
While there are a variety of ways to implement the corequisite model,
we chose a scaling approach that followed three design principles:
[Figure: Percentage of students who passed a college-level math class within one academic year, by ACT Math sub-score: 2013 Traditional DevEd (7,214 students); 2015–17 Foundations (8,813 students); 2015–17 Corequisite (9,089 students); Corequisite Full Implementation 2018–20 (18,455 students)]
FIGURE 3.6 USG System-Wide Comparison of Mathematics Developmental Education Models
[Figure: Percentage of students who passed a college-level English class within one academic year, by ACT Writing sub-score]
FIGURE 3.7 USG System-Wide Comparison of English Developmental Education Models
[Figure: Percentage of students who passed a college-level math class within one academic year, by ACT Math sub-score: Black students (9,688); Latinx students (2,747); White students (4,493)]
FIGURE 3.8 USG System-Wide Comparison of Success in Credit-Bearing Mathematics Classes During Full Implementation of Corequisite
[Figure: Percentage of students who passed a college-level English class within one academic year, by ACT Writing sub-score]
FIGURE 3.9 USG System-Wide Comparison of Success in Credit-Bearing English Classes During Full Implementation of Corequisite
Latino and White students for English and Mathematics corequisite stu-
dents during the full implementation phase, since these are the largest
racial groups among the USG student population.
While at each preparation level there is some variation between the suc-
cess rates of students of differing races, the differences are almost never
Purpose
As part of the survey, students were asked to judge the importance of vari-
ous factors that contribute to their motivation to attend college, i.e., did
they have a purpose for being at college? By far the most prominent factor
Survey response (beginning → end)  Pass rate (%)  N  Effect size  Significance (%)

All students
Agree (beginning)  84.1  4089
Agree → Agree  86.1  3281  5.9  99
Agree → Disagree  76.6  808
Disagree (beginning)  82.6  1079
Disagree → Agree  85.6  390  2  95
Disagree → Disagree  80.9  689
Agree (end)  86  3671  6.1  99
Disagree (end)  78.6  1497

Corequisite students
Agree (beginning)  77.8  983
Agree → Agree  79.6  788  2.5  99
Agree → Disagree  70.6  195
Disagree (beginning)  76.4  250
Disagree → Agree  81.1  95  1.4  90
Disagree → Disagree  73.5  155
Agree (end)  79.8  883  2.8  99
Disagree (end)  71.9  350
Survey response (beginning → end)  Pass rate (%)  N  Effect size  Significance (%)

All students
Agree (beginning)  91.4  3568
Agree → Agree  91.9  3110  2.4  99
Agree → Disagree  88.1  458
Disagree (beginning)  91.2  658
Disagree → Agree  91.7  356  0.5
Disagree → Disagree  90.5  302
Agree (end)  91.9  3466  2.3  95
Disagree (end)  89.1  760

Corequisite students
Agree (beginning)  83.2  478
Agree → Agree  84.1  426  1.2
Agree → Disagree  76.4  52
Disagree (beginning)  77.4  66
Disagree → Agree  78.4  39  0.2
Disagree → Disagree  76.0  27
Agree (end)  83.6  465  1.4  90
Disagree (end)  76.3  79

* = significant at 90% level
** = significant at 95% level
*** = significant at 99% level
the semester passed at an almost 10 pp higher rate than those who moved
from agree to disagree at the end. Similarly, those who began in disagree-
ment but moved to agreement by the end of the semester outperformed
their colleagues who remained in disagreement by almost 5 pp. All of
these differences were very strongly statistically significant.
The importance of the choice of mathematics course and its fit with the
student’s major is an important aspect of this. By the end of the semester,
34.6 percent of students who were not in STEM or Business majors, but
chose to study College Algebra, did not agree that the course would be
useful in their future career, compared with only 24.6 percent of their
STEM/Business colleagues (effect size 3.2). Indeed, significantly more
of the non-STEM/Business students who initially agreed changed their
minds by the end of the semester.
The picture is similar for those students who describe their perceptions
of their Freshman Writing. The results are summarized in Table 3.2. Once
again, while perceptions at the beginning of the semester are not significantly
linked with outcomes, a significantly higher proportion of students, who at
the end of the semester agreed that what they learned in the course would
help them in their career, passed the class with a C or better compared to their
colleagues who disagreed. There were much smaller effect sizes with writing
than with Mathematics. Those students who agreed at the end of the semester
were 5 pp more likely to earn an A grade (effect size 2.6).
Interestingly, while these differences remain significant when we con-
trol for preparation and/or consider only White and Latinx students, they
disappear for Black students.
TABLE 3.3 “Your math intelligence is something about you that you can’t change very much”

Mindset (beginning → end)  Pass rate (%)  N  Effect size  Significance (%)

All students
Growth (beginning)  84.8  3069
Growth → Growth  86.6  2225  3.9  99
Growth → Fixed  80.5  844
Fixed (beginning)  81.9  2079
Fixed → Growth  84.1  798  2.1  95
Fixed → Fixed  80.5  1281
Growth (end)  86  3023  5.1  99
Fixed (end)  80.5  2125

Corequisite students
Growth (beginning)  78.3  625
Growth → Growth  80.6  426  2  95
Growth → Fixed  73.3  199
Fixed (beginning)  76.5  611
Fixed → Growth  77.9  208  0.5
Fixed → Fixed  75.9  403
Growth (end)  79.7  634  2  95
Fixed (end)  75  602

Black students
Growth (beginning)  84.7  851
Growth → Growth  86.7  583  2.2  95
Growth → Fixed  80.5  268
Fixed (beginning)  82.3  769
Fixed → Growth  87.2  274  2.75  99
Fixed → Fixed  79.7  495
Growth (end)  86.8  857  3.7  99
Fixed (end)  80  763
TABLE 3.4 “Your English intelligence is something about you that you can’t change very much”

Mindset (beginning → end)  Pass rate (%)  N  Effect size  Significance (%)

All students
Growth (beginning)  91.8  3333
Growth → Growth  92.8  2498  3.4  99
Growth → Fixed  88.7  835
Fixed (beginning)  89.1  938
Fixed → Growth  91  431  1.7  95
Fixed → Fixed  87.6  507
Growth (end)  92.6  2929  4.2  99
Fixed (end)  88.3  1342

Corequisite students
Growth (beginning)  83.9  380
Growth → Growth  85.9  253  1.4  90
Growth → Fixed  79.8  127
Fixed (beginning)  79.1  166
Fixed → Growth  87.5  66  2.3  95
Fixed → Fixed  73.7  100
Growth (end)  86.2  319  2.7  99
Fixed (end)  77.1  227

Black students
Growth (beginning)  88.2  1049
Growth → Growth  89.4  750  1.8  95
Growth → Fixed  85.2  299
Fixed (beginning)  83.2  341
Fixed → Growth  86.4  137  1.3  90
Fixed → Fixed  81.1  204
Growth (end)  89  887  2.8  99
Fixed (end)  83.5  503
outcomes. Much has been written about interventions that enable a stu-
dent to have a growth mindset during the semester and links to gains in
academic performance, especially for students of color (Broda et al., 2018;
Yeager et al., 2019). We see those same gains mirrored here with students
who began the semester with a fixed mindset but ended with a growth
mindset, increasing their pass rate by 4 pp (effect size 2.1) and by 7.5
pp (effect size 2.75) for Black students. What is novel in this study is the
apparent impact on those students who move from a growth mindset at
the beginning of the semester to a fixed mindset at the end. Those students
undergo a decrease of 6.1 pp in their pass rate (effect size 3.9) and are 7.7
pp less likely to earn an A grade. This effect remains apparent when we
further disaggregate by race or by preparation.
References
Albert, R., & Barabási, A.-L. (2002). Statistical mechanics of complex networks.
Reviews of Modern Physics, 74(1), 47–97.
Albert, R., Jeong, H., & Barabási, A.-L. (2000). Error and attack tolerance of
complex networks. Nature, 406(6794), 378–382.
Broda, M., Yun, J., Schneider, B., Yeager, D. S., Walton, G. M., & Diemer, M.
(2018). Reducing inequality in academic success for incoming college students:
A randomized trial of growth mindset and belonging interventions. Journal
of Research on Educational Effectiveness, 11(3), 317–338. https://doi.org/10.1080/19345747.2018.1429037
Complete College America. (2021). Corequisite works: Student success models at
the university system of Georgia. https://files.eric.ed.gov/fulltext/ED617343.pdf
Denley, T. (to appear). The power of momentum. In J. R. Johnsen (Ed.), Recreating multi-campus university systems: Transformations for a future reality. Johns Hopkins University Press.
Denley, T. (2021a). Comparing co-requisite strategy combinations. University System of Georgia Academic Affairs Technical Brief No. 2.
Denley, T. (2021b). Scaling developmental education. University System of Georgia Academic Affairs Technical Brief No. 1.
Introduction
The purpose of this study was to identify academically at-risk students
using big data analytics each semester in order to give instructional per-
sonnel enough lead time to provide support and resources that can help
reduce a student’s risk of failure. Such a system should be consistent with
high precision (i.e., the students identified as at-risk should be accurately
defined as those with a high probability of failure) (Smith et al., 2012;
Wladis et al., 2014). The consequences of course failure for students
include, but are not limited to, (1) prolonging time to graduation; (2)
increasing the student’s financial burden; and (3) increasing the chance
of dropping out entirely (Miguéis et al., 2018; Simanca et al., 2019). In
addition to the impact on students, these consequences also have a sig-
nificant negative and financial impact on the university through students’
increased time to completion and decreased retention rates.
There have been several data analytic attempts to identify at-risk stu-
dents using a variety of methods with the most effective approaches incor-
porating the concept of precision instead of accuracy as a methodological
foundation. However, most published studies use accuracy as the primary
selection criterion. The following example illustrates why model precision
provides more useful results. Assume that one class has 250 students with
a 30% failure rate. The confusion matrices for Model A and Model B are
shown in Table 4.1.
The accuracy rate for Model A is 80%, which is higher than the 78%
accuracy rate for Model B. However, Model B has a 60% precision rate
and identifies 20 more at-risk students than Model A. This cohort is
DOI: 10.4324/9781003244271-6
TABLE 4.2 The Relationship between Class Size and Available Resource Metrics

Class size (students)  Available resources
0–50  10
51–100  12
101–200  25
201+  50
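The precision-versus-accuracy argument above can be checked with a short calculation. The confusion matrices below are illustrative values chosen to be consistent with the rates quoted in the text (250 students, a 30% failure rate, 80% versus 78% accuracy, and 60% precision for Model B); they are stand-ins rather than the actual contents of Table 4.1.

```python
def rates(tp, fp, fn, tn):
    """Accuracy, precision, and the number of at-risk students correctly flagged."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    return accuracy, precision, tp

# 250 students, 75 of whom (30%) actually fail the course
model_a = rates(tp=40, fp=15, fn=35, tn=160)   # flags 55 students as at-risk
model_b = rates(tp=60, fp=40, fn=15, tn=135)   # flags 100 students as at-risk

for name, (acc, prec, caught) in [("Model A", model_a), ("Model B", model_b)]:
    print(f"{name}: accuracy={acc:.0%}, precision={prec:.0%}, at-risk students identified={caught}")

# Model A wins on accuracy (80% vs. 78%), but Model B reaches 20 more of the
# students who actually need support, which is the point of an early-warning system.
```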
Methodology
The data mining process used in this study identified previously unob-
served informational patterns that cannot be observed with more tra-
ditional analysis procedures. For this work, six courses including
three STEM (Statistics, Physics, and Chemistry) and three non-STEM
(Psychology, Political Science, and Western Civilization) from Fall 2017
were selected for model development. The same offerings for the Fall 2018
term were used for validation. In the predictive model-building process, the
authors developed a baseline at the beginning of courses, and then built
five individual progressive models from week 2 to week 6. The predictions
were built for STEM courses and non-STEM courses separately. For each
course, there are a total of six predictive models. The baselines were built
based on student information system (SIS) data including demographics,
academic type and level, grade point average (GPA), and standardized test
scores (ACT, SAT). The progressive models were built using both SIS and
learning management system (LMS) data augmented with course exami-
nation and assignment outcome scores from the LMS gradebook. The
authors applied the six models to make weekly predictions for the courses
in Fall 2018 semester.
Note that the baselines were built with fewer data elements; therefore,
they are theoretically less precise. On the other hand, the weekly pro-
gressive results enhance precision because they utilize additional data ele-
ments from the LMS system. This means that the intervention program
can help early identified at-risk students engage more quickly and increase
their likelihood of success.
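The weekly augmentation described here, baseline SIS variables plus cumulative LMS gradebook scores through week j, can be sketched with a simple merge. The column names and the two toy tables below are illustrative assumptions, not the university's actual SIS or LMS schema.

```python
import pandas as pd

# Hypothetical extracts: one row per student from the SIS, and one row per graded
# LMS item (quiz or assignment) with the week in which it was completed
sis = pd.DataFrame({"student_id": [1, 2], "ucf_gpa": [3.4, 2.1], "act_total": [26, 21]})
lms = pd.DataFrame({
    "student_id": [1, 1, 2, 2],
    "week":       [2, 3, 2, 3],
    "item_type":  ["quiz", "assignment", "quiz", "assignment"],
    "score":      [18.0, 30.0, 6.0, 12.0],
})

def features_through_week(week_j: int) -> pd.DataFrame:
    """Baseline SIS variables plus cumulative quiz/assignment scores through week j."""
    cumulative = (lms[lms["week"] <= week_j]
                  .pivot_table(index="student_id", columns="item_type",
                               values="score", aggfunc="sum")
                  .add_suffix(f"_through_week_{week_j}")
                  .reset_index())
    return sis.merge(cumulative, on="student_id", how="left").fillna(0)

print(features_through_week(3))
```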
Datasets
This study incorporated the two previously mentioned data sources. The
first originated from the university SIS containing both demographic and
academic variables. Most students have either an SAT score or an ACT score
that the authors combined into one composite through a cross imputation
technique before performing analysis. Demographic variables included
gender, ethnicity, and indicators for: academic level, full-/part-time sta-
tus, first-generation college student, transfer student, and birth date. SIS
data were used to build the baseline model to predict students’ likelihood
of failing a course at the first week of each semester. The second dataset
originated from the LMS. These data sets contain students’ activities dur-
ing the semester including quiz scores, engagement time and date for all
quizzes, exam results, and assignment completion metrics. The informa-
tion also includes students’ interaction with the LMS system such as the
duration of login time, the number of times logged in, and the number of
[Figure: Percentage of at-risk and no-risk students in each selected course: Statistics (N=1,363), Physics (N=862), Chemistry (N=364), Psychology (N=1,709), Political Science (N=542), Western Civilization (N=477)]
[Figure: Percentage of at-risk and no-risk students in each selected course: Statistics (N=1,112), Physics (N=883), Chemistry (N=528), Psychology (N=1,590), Political Science (N=962), Western Civilization (N=535)]
Characteristic  N  %  No-risk  At-risk  N  %  No-risk  At-risk
Study population  2,589  100.00  2,142  447  2,728  100.00  2,142  447
Gender
Female  1,448  55.93  1,204  244  1,448  55.93  1,204  244
Male  1,141  44.07  938  203  1,141  44.07  938  203
Race
White  1,235  47.70  1,060  175  1,235  47.70  1,060  175
Black  293  11.32  207  86  293  11.32  207  86
Hispanic  684  26.42  559  125  684  26.42  559  125
Asian  176  6.80  151  25  176  6.80  151  25
Other  201  7.76  165  36  201  7.76  165  36
Academic year
Freshman  609  23.52  542  67  609  23.52  542  67
Sophomore  1,035  39.98  876  159  1,035  39.98  876  159
Junior  627  24.22  484  143  627  24.22  484  143
Senior  318  12.28  240  78  318  12.28  240  78
Full/part time
Full time  2,319  89.57  1,944  375  2,319  89.57  1,944  375
Part time  270  10.43  198  72  270  10.43  198  72
Transfer
Yes  694  26.81  524  170  694  26.81  524  170
No  1,895  73.19  1,618  277  1,895  73.19  1,618  277
First generation
Yes  589  22.75  455  134  589  22.75  455  134
No  2,000  77.25  1,687  313  2,000  77.25  1,687  313
Tables 4.3 and 4.4 display the data for students in courses included in
the study by STEM and non-STEM status. Demographic frequency and
percentages are included in Tables 4.3 and 4.4 for gender, race, academic
year, and full-/part-time, transfer, and first-generation status for courses
in Fall 2017 and Fall 2018, respectively.
Tables 4.5 and 4.6 show descriptive statistics for numerical variables
including age, term course load (semester hours), UCF GPA, high-school
GPA, SAT total, ACT total, assignments and quiz scores in each week, as
well as first exam grade for STEM and non-STEM courses in Fall 2017
and 2018, respectively. The missing values for high school GPA, SAT total,
and ACT total were replaced with their respective mean values. Assignments, quiz
scores, and exam grades were calculated by using grade weight according
TABLE 4.5 Descriptive Statistics for Fall 2017

Variable  STEM: Mean  SD  Min  Max  Non-STEM: Mean  SD  Min  Max
Assignment week 3  7.90  11.10  0.00  35.00  2.60  4.80  0.00  15.00
Quiz week 3  8.20  7.90  0.00  20.00  13.70  18.40  0.00  85.00
Assignment week 4  7.60  10.60  0.00  35.00  2.60  4.80  0.00  15.00
Quiz week 4  8.10  7.80  0.00  20.00  13.70  18.20  0.00  85.00
Assignment week 5  7.40  10.20  0.00  35.00  2.60  4.70  0.00  15.00
Quiz week 5  7.80  7.50  0.00  20.00  14.10  18.40  0.00  85.00
Assignment week 6  7.30  10.10  0.00  35.00  2.40  4.60  0.00  14.90
Quiz week 6  7.70  7.40  0.00  20.00  14.30  18.90  0.00  84.20
First Exam week 6  62.70  22.70  0.00  100.00  63.70  22.40  0.00  100.00
TABLE 4.6 Descriptive Statistics for Fall 2018

Variable  STEM: Mean  SD  Min  Max  Non-STEM: Mean  SD  Min  Max
UCF GPA  3.10  0.70  0.00  4.00  3.10  0.70  0.00  4.00
High school GPA  3.80  0.40  1.80  4.80  3.80  0.40  1.60  4.80
SAT total  1076.00  76.60  660.00  1490.00  1074.00  62.20  510.00  1570.00
ACT total  24.10  2.80  7.00  35.00  24.30  2.90  11.00  36.00
Assignment week 2  12.40  7.40  0.00  24.00  10.40  12.10  0.00  40.00
Quiz week 2  11.60  7.50  0.00  25.00  12.00  19.90  0.00  81.20
Assignment week 3  12.30  7.30  0.00  24.00  9.60  11.50  0.00  40.00
Quiz week 3  11.70  7.40  0.00  25.00  12.60  22.20  0.00  100.00
Assignment week 4  12.30  7.40  0.00  25.00  9.50  11.50  0.00  40.00
Quiz week 4  11.50  7.10  0.00  25.00  12.50  22.10  0.00  100.00
Assignment week 5  12.20  7.40  0.00  25.00  9.80  11.60  0.00  40.00
Quiz week 5  11.40  7.00  0.00  25.00  12.40  22.10  0.00  100.00
Assignment week 6  12.10  7.30  0.00  25.00  9.80  11.70  0.00  40.00
Quiz week 6  11.40  7.00  0.00  25.00  12.30  21.80  0.00  100.00
First Exam week 6  54.80  18.50  0.00  93.60  58.50  28.00  0.00  100.00
[Figure: Modeling process flow. Fall 2017 STEM and non-STEM courses feed sample generation and the baseline (SIS) models; Fall 2018 courses feed the weekly progressive models (weeks 2–6)]
(1) Training data set node: 2017 STEM course training data or non-
STEM training data, including all the variables. Target variable is
at-risk:
STEM course target: At-risk = 1: 447 (17.27%); No-risk = 0: 2142
(82.73%).
Non-STEM course target: At-risk = 1: 269 (9.82%); No-risk = 0: 2460
(90.18%)
The target is unbalanced and the model prediction is a rare event data
analysis. A down-sampling method was used to balance the data for analysis.
(2) Sample node: A down-sampling method was used, stratified so that the number of sampled target non-events (no-risk = 0) equals the number of target events (at-risk = 1). The sample size for STEM courses was 894 (50% at-risk = 1: 447 and 50% no-risk = 0: 447); the sample size for non-STEM courses was 538 (50% at-risk = 1: 269 and 50% no-risk = 0: 269). This sampling and modeling pipeline is sketched in code after equation (2) below.
(3) Data partition node: 70% for training and 30% for validation.
(4) Model regression node: After the data partition node, the diagram
was divided into six branches. Each branch represented a model incor-
porating hierarchical logistic regression. First, the baseline model was
used to find significant control variables. These significant variables
were also used in the weekly progressive models. As mentioned previ-
ously, the baseline model only considered student demographics and
academic background variables—no assignments, quizzes, or exam
variables were considered. This method used logistic regression with
backward variable selection to identify significant variables. The
logistic regression for the baseline model is described in equation (1):
log(p / (1 − p)) = β₀ + Σᵢ₌₁ᵐ βᵢxᵢ    (1)

where p is the probability that a student is at-risk and x₁, …, x_m are the significant control variables retained by backward selection. The weekly progressive model for week j adds the LMS-derived assignment, quiz, and exam variables:

log(p / (1 − p))_week j = β₀ + Σᵢ₌₁ᵐ βᵢxᵢ + β_Assign Assign_week j + β_Quiz Quiz_week j + β_Exam Exam_week 6    (2)
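A minimal sketch of this pipeline, written with pandas and scikit-learn rather than the modeling nodes described above, is shown below: down-sample the no-risk class to match the at-risk class, partition 70/30, fit the baseline logistic regression on SIS variables, and fit a progressive model that adds the week-j LMS variables. Column names are illustrative assumptions, and the backward variable selection step is omitted for brevity.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score

# Illustrative column names; the study drew its variables from the SIS and LMS,
# and categorical fields are assumed to be numerically encoded already.
SIS_VARS = ["ucf_gpa", "academic_year", "transfer", "act_total"]   # baseline controls
LMS_VARS = ["assignment_week_j", "quiz_week_j", "exam_week_6"]     # progressive add-ons

def downsample(df: pd.DataFrame, target: str = "at_risk", seed: int = 0) -> pd.DataFrame:
    """Keep every at-risk student plus an equal-sized random sample of no-risk students."""
    events = df[df[target] == 1]
    non_events = df[df[target] == 0].sample(n=len(events), random_state=seed)
    return pd.concat([events, non_events])

def fit_model(df: pd.DataFrame, features: list, target: str = "at_risk", seed: int = 0):
    """70/30 partition, logistic regression fit, and validation accuracy/precision."""
    balanced = downsample(df, target, seed)
    X_train, X_valid, y_train, y_valid = train_test_split(
        balanced[features], balanced[target], test_size=0.30, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    preds = model.predict(X_valid)
    return model, accuracy_score(y_valid, preds), precision_score(y_valid, preds)

# Baseline model (equation 1), then a progressive model for week j (equation 2):
# courses = pd.read_csv("fall2017_stem.csv")            # hypothetical input file
# baseline = fit_model(courses, SIS_VARS)
# progressive = fit_model(courses, SIS_VARS + LMS_VARS)
```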
The modeling coefficients and their significance levels are shown in Tables
4.7 and 4.8 for STEM and non-STEM courses. Model performances with
receiver operating characteristic (ROC) curves were calculated for each
progressive and baseline model for STEM and non-STEM modeling,
respectively. However, both the baseline and progressive ROC curves cor-
respond so closely at each iteration of the study that Figure 4.3 presents a
combined prototype of all six curves for both the STEM and non-STEM
courses. ROC curves display the overall correct classification of students
at-risk or not yielded by the logistic regression procedures. The educa-
tional value add comes from being able to compare the occurrences of true
and false at-risk student designation. In a perfect classification system,
the curve simply would be a dot in the upper left corner of the plot. That would indicate no classification errors.
However, by plotting the true and false classification rates for multiple cut
points 0.2, 0.4, 0.6, 0.8, etc., the ROC evolves. The objective is for the
curve to rise and approach the upper left-hand corner as closely as pos-
sible. Figure 4.4 indicates that this happened for the baseline and progres-
sive analyses for both STEM and non-STEM courses. A prototype curve that rises steeply and then flattens near the top of the plot indicates that increasing the true at-risk classification rate has minimal impact on the false at-risk designation. A second indicator of ROC curve precision is presented in Figure 4.4: the area under the curve (AUC) index, which increases as the curve approaches the upper left corner. The larger the AUC, the more effective the predictive analysis. Therefore, Figure 4.4 presents AUC for the baseline
and progressive analyses through each phase of the study.
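For readers who want to reproduce this kind of evaluation, the ROC curve and AUC can be computed directly from predicted at-risk probabilities. The sketch below uses scikit-learn on synthetic scores and labels, which stand in for the study's model outputs.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1 = at-risk, 0 = no-risk, with higher scores for at-risk students
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, size=500), 0, 1)

# False at-risk rate (x-axis) and true at-risk rate (y-axis) at many cut points
false_rate, true_rate, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

print(f"AUC = {auc:.3f}")
for cut in (0.2, 0.4, 0.6, 0.8):
    flagged = y_score >= cut
    tpr = (flagged & (y_true == 1)).sum() / (y_true == 1).sum()
    fpr = (flagged & (y_true == 0)).sum() / (y_true == 0).sum()
    print(f"cut point {cut}: true at-risk rate {tpr:.2f}, false at-risk rate {fpr:.2f}")
```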
The major findings of this study are:
(1) In the baseline model, four variables (UCF GPA, academic year, trans-
fer status, and ACT total) are significant. These variables were used as
control variables in the weekly progressive calculations.
[FIGURE 4.4 ROC curves and AUC for baseline and weekly progressive models. (a) STEM courses: AUC baseline 0.867, week 2 0.873, week 3 0.875, week 4 0.881, week 5 0.885, week 6 0.894. (b) Non-STEM courses: AUC baseline 0.923, weeks 2–5 0.923, week 6 0.925]
(2) Among all the variables, UCF GPA is the most important baseline
predictor for the weekly progressive determination. For example,
in the baseline model, coefficients for UCF GPA are –2.84 for the
STEM model and –3.66 for the non-STEM model, which means
that, keeping all other predictors constant, each unit increase in UCF GPA decreases the log odds of being at-risk by 2.84 in the STEM model and 3.66 in the non-STEM model. For both
STEM and non-STEM models, the effect of UCF GPA remains vir-
tually constant from week 2 to week 5 and decreases slightly in
week 6 as exam variables were added. Fundamentally, UCF GPA
can be used as the most important variable in the early weeks of
courses to identify at-risk students.
(3) Other academic background variables such as academic year, trans-
fer, and ACT total show significant effects that can identify at-risk
student status. In the STEM model, for the academic year variable
and keeping all other predictors constant, freshmen have a log odds of being at-risk that is significantly lower than sophomores (the reference group) by 1.30 units, seniors have a log odds significantly higher than sophomores by 0.93 units, and juniors show no significant difference compared to sophomores. But in the non-STEM model, juniors have a log odds significantly higher than sophomores by 0.74 units and there are no
significant differences between freshman and sophomore or seniors
and sophomores. From the coefficients of both models, it may be con-
cluded that the higher the academic year, the higher the probability for
accurately and precisely identifying at-risk students. At-risk students
in the freshman year are the most difficult to identify. Additionally,
ACT total and transfer student status present challenges for at-risk
identification.
(4) In the weekly models, although the coefficients for the assignment
variables were negative (the more assignments completed, the lower the
risk), they are not significant in either STEM or non-STEM courses.
The quiz variable is significant in all the weekly models for STEM, but
for non-STEM courses it is significant only in the week 6 model and not
in earlier weeks. Thus, it may be concluded that quiz scores are more
useful than assignments for identifying at-risk students in the early
weeks for STEM courses and in week 6 for non-STEM courses.
(5) The exam score variable is useful for identifying at-risk students
because it is significant in both STEM and non-STEM modeling.
(6) AUC for the ROC analysis increases from the baseline model to
weekly progressive models in STEM modeling, but remains almost
constant in non-STEM modeling.
(7) In the early prediction phase, significant student demographic and
academic background variables can be used effectively for both
STEM and non-STEM modeling. Quizzes can provide additional
information for STEM from week 2 to week 5. Since the first exam is
administered about the sixth week for many courses, this early identi-
fication of at-risk students can be accomplished prior to when the first
exam is taken.
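As a quick illustration of how to read the UCF GPA coefficients in finding (2), the lines below (in R) convert a log-odds coefficient into an odds ratio and into a change in predicted probability; the 0.30 starting probability is a hypothetical value, not one taken from the chapter.

b_gpa <- -2.84              # STEM coefficient for UCF GPA (change in the log odds of at-risk status per GPA point)
exp(b_gpa)                  # odds ratio of about 0.058: each extra GPA point multiplies the odds of being at-risk by roughly 0.06
p0 <- 0.30                  # hypothetical at-risk probability for some student (illustrative only)
plogis(qlogis(p0) + b_gpa)  # probability after a one-point GPA increase, other predictors held constant (about 0.02)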
TABLE 4.10 Agreement Between Baseline and Week 6 At-Risk Predictions (1 = at-risk, 0 = no-risk)
                              STEM (week 6)              Non-STEM (week 6)
                              1      0      Agree        1      0      Agree
Prediction in baseline   1    213    83     93.43%       265    47     97.37%
                         0    98     2362                34     2741
TABLE 4.11 Precision Using Highest At-Risk Students from Baseline and Weekly Progressive Predictive Data
                          Baseline   Week 2    Week 3    Week 4    Week 5    Week 6
STEM         At-risk      62         62        62        62        63        64
(STA2023)    No-risk      8          8         8         8         7         6
             Precision    88.57%     88.57%    88.57%    88.57%    90%       91.4%
Non-STEM     At-risk      40         41        41        41        41        46
(PSY2012)    No-risk      10         9         9         9         9         4
             Precision    80%        82%       82%       82%       82%       92%
TABLE 4.12 Top At-Risk Students from Baseline Who Remain At-Risk Through Week 6
                         Week 2          Week 3          Week 4          Week 5          Week 6
                         At-risk No-risk At-risk No-risk At-risk No-risk At-risk No-risk At-risk No-risk
STEM (STA2023)           70      0       70      0       69      1       68      2       60      10
Non-STEM (PSY2012)       50      0       50      0       50      0       50      0       50      0
Note: All students shown were designated at-risk (1) by the baseline model.
students in the early weeks. Even at the beginning of the course, the baseline model exhibits excellent performance; however, model prediction can be enhanced by using the weekly progressive results in week 6.
For prediction in non-STEM courses, the accuracy is 92.3% for the baseline model, remains constant at 92.3% over the following three weeks, and improves only slightly in week 5 and week 6, with values of 92.5% and 92.9%. The precision follows the same pattern as the accuracy. Compared to the STEM models, the non-STEM models have higher accuracy but lower precision. For the overall model prediction in
non-STEM courses, the model performance exhibits an excellent ability
to identify at-risk students in the early weeks. Even at the beginning of
the course, the baseline model has acceptable performance. However, the
prediction does not improve in the weekly progressive models but shows a
little improvement at week 6.
Table 4.10 presents the agreement between the students identified as at-risk in the baseline model and those identified in the week 6 progressive models.
The results show that there is excellent agreement with values of 93.43%
in STEM courses and 97.37% in non-STEM courses.
To obtain a high precision model, the authors examined the most at-
risk students identified by the baseline model and weekly progressive
model for each course. The cut-off number of most at-risk students was based on the at-risk rate and the available student sample sizes shown in Table 4.3. We used STEM statistics courses and non-STEM psychology courses as examples. The cut-off size is 70 for the statistics course and 50 for the psychology course. The relative precisions are shown in Table 4.11. The results indicate that precision rises from 88.57% to 91.4% in the statistics course when the top 70 at-risk students are selected, and from 80% to 92% in the psychology course when the top 50 at-risk students are selected.
The same most at-risk students identified in the baseline model were
examined for each course to determine if they remained at-risk in the
weekly progressive models. The result is shown in Table 4.12. It may
be observed that the 70 at-risk students identified in baseline prediction
decrease to 60 in week 6 for the statistics course. At the end of the semes-
ter, 85% of them remain at-risk. For psychology, 100% of the at-risk students identified in the baseline prediction remained at-risk at the end of the semester. Therefore, interventions for these students, such
as tutors or additional instruction, have the potential for a 50% improve-
ment, thereby helping 35 students in statistics and 25 in psychology ulti-
mately pass these courses.
5
PREDICTIVE ANALYTICS, ARTIFICIAL
INTELLIGENCE AND THE IMPACT
OF DELIVERING PERSONALIZED
SUPPORTS TO STUDENTS FROM
UNDERSERVED BACKGROUNDS
Timothy M. Renick
Chapter overview
This chapter explores the promise of analytics, predictive modeling, and
artificial intelligence (AI) in helping postsecondary students, especially those
from underserved backgrounds, graduate from college. The chapter offers a
case study on the nature and impact of these approaches, both inside and out-
side of the classroom, at Georgia State University. Over the past decade and
a half, Georgia State has broadened its admissions criteria, doubled the num-
ber of low-income students it enrolls (to 55%), and significantly increased
its minority-student populations (to 77%). At the same time, it has raised its
graduation rates by 81% and eliminated equity gaps in its graduation rates
based on race, ethnicity, and income level. Georgia State now graduates more
African Americans than any nonprofit college or university in the nation.
A critical factor in Georgia State’s attainment of improved and more equi-
table student outcomes has been its use of analytics and technology to deliver
personalized services to its students. The university has consistently been at
the leading edge of adopting new technologies in the higher-ed sector, in some
cases helping to develop them. These efforts include the scaling of adaptive
learning approaches, the design and adoption of a system of proactive aca-
demic advising grounded in predictive analytics, and the development of an
AI-enhanced chatbot designed to support student success.
This chapter will share research from a series of randomized control
trials and other independent studies that have been conducted regarding
Georgia State’s innovations in these three areas. It will include key lessons
learned in implementing analytics-based solutions to student-success chal-
lenges at a large, public institution.
jobs, most of them off campus, the initial thinking was that the online,
adaptive aspects of the course would be completed by students on their
own computers and on their own time, at home or at work. This would
allow students to complete adaptive modules when most convenient to
their schedules and save the university costs of retrofitting classrooms to
serve the courses. Within a semester, it became clear that the model was
not working. Too many students were not completing the adaptive exer-
cises at all, and, even for those who did, there was no easy way for them
to get timely help when they hit a roadblock. Non-pass rates in the courses
did not budge.
After a team of faculty members from the math department visited
Virginia Tech to learn about its emporium model for math instruction,
Georgia State began to pilot adaptive sections of its math courses that
require students to meet one hour a week with their instructors in tradi-
tional classrooms and two to three hours a week in classrooms retrofit-
ted to allow students to complete the adaptive learning components of the
course while their instructors, teaching assistants, and/or near-peer tutors
are in the room. Student attendance in the computer classrooms, dubbed
the MILE (Math Interactive Learning Environment) labs, was officially
tracked and made part of the students’ final grades, and students were sur-
rounded by resources to provide support and guidance when they faltered.
Almost immediately, Georgia State began to see improvements in course
outcomes.
MILE course sections began to regularly show 20% drops in DFW
rates when compared to their face-to-face counterparts. The results were
particularly promising for minority and Pell-eligible students. DFW rates
for these groups of students declined by 11 percentage points (24%)
(Bailey et al., 2018).
Achieving positive results took trial and error. Lower outcomes were
typical in the first few semesters of implementation, when faculty were
often still experimenting with course formats. Faculty dedicated hundreds
of hours to the iterative process of improvement. According to one math
faculty member: “You need to invest a lot of time considering the learn-
ing objectives and how they map to one another. I worked for 40 hours a
week for six weeks to build a viable course” (Bailey et al., 2018, p. 51). In
response, Georgia State adopted strategies to build and maintain faculty
buy-in for the adaptive courses. First, it regularly collected and shared
outcome data with the faculty—including longitudinal data showing
how students perform in subsequent math and STEM courses. Second, it
allowed faculty members to select the specific adaptive courseware used
in their courses. Many platforms and products were tried, from commer-
cial to open source, with data collected to see which tools produced the
to have a STEM major in the Spring 2020 term as other Georgia State stu-
dents (16%), across both the entire study sample and the treatment group”
(Rossman et al., 2021, p. 16).
While much attention has fallen on the novelty of Georgia State’s intro-
duction of predictive analytics in advising, the bulk of the time and effort
in launching and sustaining the program has been dedicated to struc-
tural and cultural changes to advising at the university. The university
has taken years to develop the administrative and academic supports that
allow the data to have a positive impact on students. Academic advising
was totally redesigned by Georgia State to support the new, data-based
approach. New job descriptions for advisors were approved by HR, new
training processes were established, and a new centralized advising office
representing all academic majors at Georgia State was opened. Roughly
40 additional advisors have been hired. More than this, new support sys-
tems for students had to be developed. When the university was blind
to the fact that hundreds of students were struggling in the first weeks
of certain courses, it was not compelled to act. Once hundreds of alerts
began to be triggered in the first weeks of Introduction to Accounting or
Critical Thinking, the university needed to do more than note that a prob-
lem had emerged. It needed to help. Advisors not only had to let students
know they were at risk of failing these courses but they also had to offer
some form of support. As a result, Georgia State has implemented one of
the largest near-peer tutoring programs in the country. In the current aca-
demic year, the university will offer more than one thousand undergradu-
ate courses that have a near-peer tutor embedded in the course. Advisors
can refer struggling students identified by the GPS system to these “sup-
plemental instructors.” It has also opened a new STEM tutoring lab that
serves all STEM disciplines including math in one location. It delivers
most of the supports virtually as well as face-to-face.
These efforts cost money, but they generate additional revenues as well.
An independent study by the Boston Consulting Group looked specifically
at Georgia State’s GPS Advising initiative with attention to impacts and
costs (Bailey et al., 2019). It found that “the direct annual costs to advise
students total about $3.2 million, or about $100 per student” (Bailey
et al., 2019, p. 16). This is about three times the cost to advise students
before the initiative. Another $2.7 million has been invested by Georgia
State in new academic supports, such as the near-peer tutoring program.
But the return on investment of these programs must be factored in, as
well. Each one-percentage-point increase in success rates at Georgia State
generates more than $3 million in additional gross revenues to the uni-
versity from tuition and fees. Since the launch of GPS Advising, success
rates have increased by 8 percentage points. While the additional costs
of the GPS program are close to $5 million, GPS Advising has helped the
university to gross $24 million in additional revenues. The annual costs
of the technology needed to generate the predictive analytics, by the way,
represent only about $300,000 of the $5 million total.
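One way to restate the return-on-investment arithmetic above, in R, using only the figures reported in the text; the pre-initiative advising cost is inferred from the statement that current advising costs are about three times higher.

advising_now    <- 3.2e6               # direct annual advising costs after GPS Advising
advising_before <- advising_now / 3    # text: current costs are roughly three times the pre-initiative level
supports        <- 2.7e6               # new academic supports such as near-peer tutoring

additional_cost <- (advising_now - advising_before) + supports   # roughly $4.8M, i.e., "close to $5 million"
added_revenue   <- 3e6 * 8             # $3M per percentage point of success-rate gain, times an 8-point gain
added_revenue - additional_cost        # net gain of roughly $19M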
AI-enhanced chatbot
In 2015, Georgia State had a growing problem with “summer melt”—the
percentage of confirmed incoming freshmen who never show up for fall
classes. This is a vexing challenge nationally, especially for students from
underserved, urban backgrounds. Lindsay Page and Ben Castleman esti-
mate that 20–25% of confirmed college freshmen from some urban
school districts never make it to a single college class (Castleman & Page,
2014). They simply melt out of the college-going population in the sum-
mer between finishing high school and starting college.
Ten years ago, Georgia State’s summer melt rate was 10%. By 2015,
with the large increase in low-income and first-generation college students
applying at Georgia State, the number had almost doubled to 19%. Once
again, Georgia State turned to data to understand the growing problem.
Using National Student Clearinghouse data to track the students, the uni-
versity found 278 students scheduled to start at Georgia State in the Fall
semester of 2015 who, one year later, had not attended a single day of
postsecondary work anywhere across the US. These students were 76%
non-White and 71% low income. Equity gaps in higher education are, in
effect, beginning before the first day of college classes.
In preparation for the Fall of 2016, Georgia State conducted an analy-
sis of all the bureaucratic steps it requires students to complete during
the summer before their freshmen year—a time when students are no
longer in contact with high-school counselors but not yet on college cam-
puses—and documented the number of students who had been tripped
up by each step. There were students who never completed the FAFSA,
the federal application for financial aid. Others completed the FAFSA but
did not comply with follow-up “verification” requests from the federal
government for additional documentation. Still others were tripped up
by requests for proof of immunization, transcript requirements, deposits,
and so forth. In each case, the data showed that the obstacles had been
disproportionately harmful to low-income and first-generation students—
students who typically lack parents and siblings who have previously navi-
gated these bureaucratic processes and who can offer a helping hand.
With better understanding of the nature of the problem, Georgia State
partnered with a start-up technology company, Mainstay (then known
as Admit Hub), to deploy one of the first AI-enhanced student-support
that they were, at times, more likely to ask a question of the chatbot than
the course instructor or graduate assistant because they did not have to
feel “embarrassed” or “judged” for not knowing the answer. There was
even an instance where a student confided to the chatbot that the student
was depressed, and Georgia State was able to connect the student with
mental health supports. Ninety-two percent of treatment group students
said they support expanding the use of the tool to other courses (Meyer et
al., 2022), and Georgia State will be launching additional pilots using the
chatbot in high-enrollment courses during the Fall of 2022.
When the Brown researchers were asked why the chatbot intervention
in Introduction to American Government was so impactful, their response
was telling: “The AI chatbot technology enabled the instructional team
to provide targeted, clear information to students about their course per-
formance to date and necessary tasks to complete to ensure success in the
course” (Meyer et al., 2022). In short, it allowed the instructors, even in
an asynchronous online course of 500, to deliver personalized attention
to students.
Conclusion
Georgia State University’s student-success transformation, including an
81% increase in graduation rates and the elimination of equity gaps, has
been grounded in its development of innovative and scaled student sup-
port approaches that leverage analytics, AI, and technology to deliver
personalized supports to students. Emerging evidence from a series of
research studies demonstrates that these approaches can result in signifi-
cantly improved student outcomes and that they disproportionately ben-
efit students from underserved backgrounds.
References
Alamuddin, R., Rossman, D., & Kurzweil, M. (2019). Interim findings report:
MAAPS advising experiment. Ithaka S+R.
Bailey, A., Vaduganathan, V., Henry, T., Laverdiere, R., & Jacobson, M. (2019).
Turning more tassels: How colleges and universities are improving student
and institutional performance with better advising. Boston Consulting
Group.
Bailey, A., Vaduganathan, V., Henry, T., Laverdiere, R., & Pugliese, L. (2018).
Making digital learning work: Success strategies from six leading universities
and community colleges. Boston Consulting Group.
Castleman, B., & Page, L. C. (2014). Summer melt: Supporting low-income
students through the transition to college. Harvard Education Press.
Introduction
The advancement of educational technology has allowed blended multi-
modal language learning content to become richer and more diverse. In
Hong Kong and beyond, institutions are increasingly adopting blended
components in the design of English for Academic Purposes (EAP) courses
(authors; Terauchi et al., 2018). Blended learning allows students to flex-
ibly, punctually, and continuously participate within the learning man-
agement system (LMS) (e.g., Blackboard, Canvas, Moodle) (Rasheed
et al., 2020). In this autonomous learning environment, self-regulation
becomes a critical factor for students to succeed in their studies (see Lynch
& Dembo, 2004). Students who cannot regulate their learning efficiently
may be less engaged with the learning content, leading to dissatisfaction
and unsatisfactory course grades (authors). As higher education institu-
tions are increasingly relying on blended learning to improve the quality
of learning and enhance and accelerate student success, exploring which
engagement variables are more predictive of student outcomes is critical.
Recently, learning analytics (LA) has emerged with the aim of allowing
stakeholders (e.g., administrators, course designers, teachers) to make
informed strategic decisions to create more effective learning environ-
ments and improve the quality of education (Sclater, 2016). By employing
LA, it is possible to understand, optimize, and customize learning, and
thus bridge pedagogy and analytics (Rienties et al., 2018).
This study investigates how students’ self-regulated behaviors are asso-
ciated with their outcomes via a LA approach. Learners in this study were
students in an EAP course at a university in Hong Kong that included a
Literature Review
Generally, LA can be defined as “the measurement, collection, analysis,
and reporting of data about learners and their contexts, for purposes of
understanding and optimizing learning and the environments in which
it occurs” (Siemens & Long, 2011, p. 32). LA provides various benefits
for higher education institutions and stakeholders as it produces summa-
tive, real-time, and predictive models that provide actionable informa-
tion during the learning process (Daniel, 2015). Hathaway (1985, p. 1)
argues that the “main barrier to effective instructional practice is lack of
information.” LA data can also facilitate the detailed analysis and moni-
toring of individual students and allow researchers to construct a model
of successful student behaviors (Nistor & Hernandez-Garcia, 2018).
This includes recognizing a loss of motivation or attention, identifying
potential problems, providing personalized intervention for at-risk stu-
dents, and offering feedback on progress (Avella et al., 2016; Gašević
et al., 2016).
Recently, several articles were published on how LA can be used to gain
insight into students’ learning (Du et al., 2021). One study documented
an early intervention approach used to predict students’ performance
and report outcomes directly to them via email (Arnold & Pistilli, 2012).
This “Course Signal” intervention collected various data points, such
as demographics, grades, academic history, and engagement with learn-
ing activities. The personalized email informed students of their learn-
ing performance to date and included additional information, such as
changes required for at-risk students. LA can make complex data digest-
ible for students and teachers by breaking it down and using visuals to
show meaningful information (e.g., trends, patterns, correlations, urgent
issues) through a dashboard (Sahin & Ifenthaler, 2021). Schwendimann
et al. (2017), in a review of articles published between 2010 and 2015,
found that dashboards often rely on data from a single LMS and utilize
simple pie charts, bar charts, and network graphs. However, in Brown’s
(2020) study, teachers were bewildered by how data were presented in the
dashboard, which they suggested undermined their pedagogical strate-
gies. While LA is essential for improving education as it helps students
and teachers make informed decisions (Scheffel et al., 2014), more atten-
tion should be paid to the delivery of data (see Vieira et al., 2018). It is
al. (2015) found no correlation between online activity indicators and the
degree of commitment or teamwork. Thus, the efficacy of LA in predict-
ing and measuring student outcomes warrants further investigation.
As more students enroll in large-scale EAP courses in higher education
institutions around the world, it is becoming increasingly important to pre-
dict their outcomes based on their interaction with course content on LMSs
(author). EAP students face challenges in improving their academic language
proficiency and communicative competence so as to succeed in their academic
studies (Hyland, 2006). Although studies have confirmed the positive ben-
efits of LMS and blended learning for EAP students (Harker & Koutsantoni,
2005), more research is needed to shed light on which measures of engage-
ment can support students’ efforts and help them succeed.
Through careful analysis of data collected by an LMS, LA offers insti-
tutions and EAP teachers opportunities to make adjustments and improve-
ments to the content and delivery of their courses. Although studies in other
disciplines have analyzed LMS data to make sense of learning behavior and
its impact on learning and persistence in the learning process (e.g., Choi et al.,
2018; Tempelaar et al., 2018), LA is still an emerging field in EAP.
With surging interest in LA, especially for predicting student outcomes,
many studies are interested in comparing the effectiveness of various
machine learning algorithms (e.g., Yagci, 2022; Waheed et al., 2020; Xu et
al., 2019). Within higher education, algorithm comparison studies include
sample sizes ranging from around 100 (Burgos et al., 2018) to more than
10,000 (Waheed et al., 2020; Cruz-Jesus et al., 2020). However, large-
scale comparison (with sample size > 10,000) is still very rare. Some com-
monly assessed algorithms include Logistical Regression, Support Vector
Machine (SVM), Artificial Neural Network (ANN), and Classification
Tree. The metric most commonly used to evaluate the effectiveness of
algorithms is the accuracy rate, which is simply the number of correct predictions divided by the total number of cases. Some studies also examined the F1 ratio, the harmonic mean of recall and precision (e.g.,
Yagci, 2022). In the educational context, most studies achieve a predictive
accuracy rate of between 70% and 80% (Yagci, 2022: 70–75%; Waheed
et al., 2020: 80%). The F1 ratio varies from 0.69 to 0.87. Numerous stud-
ies agree that SVM can achieve a high accuracy rate and may perform
better than other algorithms, whether it is compared with ANN (Xu et al., 2019) or with Logistical Regression/Classification Tree (Yagci, 2022). However, SVM does not always outperform other algorithms.
For example, Nieto et al. (2018) found no significant difference between
ANN and SVM in their context. Also, some algorithms, such as ANN and SVM, resemble a black box with low interpretability (Cruz-Jesus et al., 2020). Not many studies have directly compared only Logistical
Regression, ANN, and Classification Tree. This suggests the need for a
large-scale comparison between algorithms in the educational context. In
particular, this chapter aims to address this knowledge gap by answering
the following questions:
1. In what ways can three common machine learning algorithms
(Logistical Regression, ANN, and Classification Tree) be used to pre-
dict students’ grades?
2. Among the three algorithms (Logistical Regression, ANN, and
Classification Tree), which one performs best when employed for adap-
tive learning design?
Methodology
Context of study
The study was conducted with students in a Hong Kong university taking
an English for Academic Purposes course (EAPC). While the university
requires students to complete its EAP requirement with different entry and exit points, EAPC, with an annual intake of around 1,000 students, is a mandatory course for all students. Some students with lower English proficiency may take a proficiency-based course before EAPC, but most take EAPC as their first university English course, followed by an advanced course.
EAPC is a 3-credit, 13-week course with more than fifty sections each academic year. The course is delivered in a blended format: classes meet in person every week, supplemented by a multimodal learning package (MLP). This study used a dataset from
the university learning management system that includes students from 7
years and 14 cohorts. This dataset runs from the first year that this course
included the blended learning package, to right before the Fall semester
in 2019 when the course delivery mode was temporarily changed due to
the social unrest in Hong Kong and the COVID-19 pandemic. The course
mode remains in its altered form at the time of writing.
While the dataset includes several cohorts and course sections, a strict
and well-established quality assurance mechanism made comparison across
cohorts/sections possible. All cohorts and sections completed the same assess-
ment tasks with the same assessment weighting and grading descriptors. All
cohorts and sections used standardized notes and online resources distrib-
uted through the university learning management system. There were only
very minor changes to the assessment guidelines and notes during the study
period, e.g., removal of typographical errors. While teachers have the flexibil-
ity to upload more materials, standardized notes and materials are used as the
event log with all events in the course, including the date and time of
attempts at MLP quizzes and scores for each MLP attempt. To process
the data for analysis, the two datasets were merged: each row of data
represents one student with the respective overall course grade, the day
the student started completing a quiz in MLP, the percentage of quizzes
that the student completed, the number of days between the first and the
last attempt at any MLP quizzes, the average score of any first attempt of
any MLP quizzes and the average score of the last attempt at any MLP
quiz. Retrieving and computing these variables will allow all rows to be
comparable despite variations in the number of quizzes across cohorts.
After merging the two datasets, further data processing and cleansing
began. First, students who did not complete the course (with no final grade)
were removed. Then, all MLP score variables were transformed to a standard
score for easy analysis (i.e., the average of the first attempt, that of the last
attempt, and differences between attempts). The days-related variables were
not transformed for easy understanding (i.e., the start day in using MLP, the
number of days until the first attempt at an MLP quiz, the number of days
between first and last attempt, etc.). The choice of predictors was made for
convenience, as the goal of the analysis is to identify predictors that are read-
ily available on the learning management system for further adaptive learn-
ing design. The overall grades (which range from 0 to 4.5) were transformed
into a binary variable, Good or At-Risk Performance, with 3.0 being the
cut-off point. The cut-off point was decided because (1) the notation for 3.0
was “Good” according to the university grading scheme, and (2) the subject
leader considers this cut-off point to be pedagogically meaningful.
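A minimal sketch of the per-student feature table described above, written in R with dplyr. The input column names (student_id, quiz_id, timestamp, quiz_score, final_grade) and the helper values course_start and n_quizzes are placeholders, not the actual fields of the university's LMS export.

library(dplyr)

# events: one row per MLP quiz attempt; grades: one row per student with the final course grade (0-4.5)
per_quiz <- events %>%
  arrange(student_id, quiz_id, timestamp) %>%
  group_by(student_id, quiz_id) %>%
  summarise(first_score = first(quiz_score),
            last_score  = last(quiz_score),
            first_day   = min(as.Date(timestamp)),
            last_day    = max(as.Date(timestamp)),
            .groups = "drop")

features <- per_quiz %>%
  group_by(student_id) %>%
  summarise(start_day     = as.numeric(min(first_day) - course_start),  # day the student started using the MLP
            duration_days = as.numeric(max(last_day) - min(first_day)), # days between first and last attempt
            pct_completed = n() / n_quizzes,                            # share of quizzes attempted
            avg_first     = mean(first_score),
            avg_last      = mean(last_score)) %>%
  inner_join(grades, by = "student_id") %>%
  filter(!is.na(final_grade)) %>%                                       # drop students without a final grade
  mutate(across(c(avg_first, avg_last), ~ as.numeric(scale(.x))),       # standardize the score variables
         improvement = avg_last - avg_first,
         outcome     = factor(ifelse(final_grade >= 3.0, "Good", "At-Risk")))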
Participants
After the data cleansing and processing procedures described above, there
were a total of 17,968 participants in the dataset. While the original data-
set did not include students’ demographic information, students taking this
course came from various disciplines including Applied Science, Engineering,
Design, Tourism and Hospitality, Nursing, and Rehabilitation Science.
Data analysis
This study tested three types of machine learning techniques to predict
students’ grades. To maximize comparability across the results provided
by the three predictive algorithms, the dataset was first randomly divided
into a training (n = 12,000) and testing (n = 5968) dataset, and all three
algorithms were trained with the same dataset. The three machine learning
techniques were tested with the same target variable, the binary variable
Results
This study aims to examine whether the three common machine learning
algorithms, Classification Tree, Logistical Regression, and ANN, can be
used to predict students’ grades for adaptive learning purposes. Whether
students can attain a “Good” grade or not was used as an indicator of suc-
cess, and seven self-regulated learning indicators were used as predictors
for each predictive algorithm.
Performance of models
All three models were successfully established, with Table 6.1 showing
their overall accuracy. All three algorithms achieved satisfactory accuracy, with overall accuracy of 80% or above. While there is no common threshold for overall accuracy, 80% means that most students are correctly classified as Good (expected to obtain a grade of 3.0 or above) or At-Risk (likely to achieve below 3.0), which seems operationally acceptable. An overall accuracy rate of 80% is also
comparable to Yagci (2022) but only the accuracy rate for Classification
Tree was as high as that from Waheed et al. (2020) with an accuracy of
85%. While there are differences in performance (e.g., Precision, Recall,
F1) to be discussed later, in general, all three models seem to be effective
in predicting students’ performance from self-regulated behaviors.
While the overall accuracy of all three models seems satisfactory, there
is a need to further evaluate the performance of each model. As a joint
measure of precision and recall rate, the F1 ratio provides information on
whether a model can successfully predict Good grades. The Classification
Tree with an F1 ratio of 0.77 performs better than Logistical Regression
(F1: 0.65) and ANN (F1: 0.68). This is also better than some past stud-
ies, such as Yagci (2022) with 0.69–0.72. Further consideration of these
parameters seems to suggest that the Classification Tree performs slightly
better than the other algorithms.
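Continuing the feature-table sketch from the Methodology section, the comparison reported here can be reproduced in outline with the R packages the study cites (rpart and nnet) plus base R's glm. The split sizes follow the chapter, but the predictor set, hidden-layer size, and 0.5 threshold are illustrative choices, not the study's exact settings.

library(rpart)   # Classification Tree (Therneau et al., 2022)
library(nnet)    # Artificial Neural Network (Ripley & Venables, 2022)

set.seed(2022)
idx   <- sample(nrow(features), 12000)        # 12,000 training cases; the remaining 5,968 for testing
train <- features[idx, ]
test  <- features[-idx, ]

form <- outcome ~ start_day + duration_days + pct_completed + avg_first + avg_last + improvement

fit_lr   <- glm(form, data = train, family = binomial)                       # Logistical Regression
fit_tree <- rpart(form, data = train, method = "class")                      # Classification Tree
fit_ann  <- nnet(form, data = train, size = 5, maxit = 500, trace = FALSE)   # ANN (size is illustrative)

evaluate <- function(pred, truth, positive = "Good") {        # accuracy and F1, with Good as the positive class
  acc  <- mean(pred == truth)
  prec <- sum(pred == positive & truth == positive) / sum(pred == positive)
  rec  <- sum(pred == positive & truth == positive) / sum(truth == positive)
  c(accuracy = acc, F1 = 2 * prec * rec / (prec + rec))
}

evaluate(predict(fit_tree, test, type = "class"), test$outcome)   # Classification Tree
evaluate(predict(fit_ann, test, type = "class"), test$outcome)    # ANN
pred_lr <- ifelse(predict(fit_lr, test, type = "response") > 0.5, "Good", "At-Risk")
evaluate(pred_lr, test$outcome)                                   # Logistical Regression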
Details of predictors
The Classification Tree identifies only the average final score on quizzes and the improvement between attempts as predictors (see Figure 6.1 for the Classification Tree's results). In the first layer, the average final score
was used as the predictor and the cut-off was 0.41. In the second and
third layers, students were classified based on their improvement between
attempts. For example, if the average final score of a student is 0.3 and the
difference in attempt was –0.1, the student would be classified as At-Risk.
The adequacy of using two predictors in adaptive learning design deserves
further discussion.
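Because the fitted tree is the most interpretable of the three models, its splits (the 0.41 cut-off on the average final score and the subsequent splits on improvement) can be inspected directly. A sketch of that inspection step, using the rpart.plot package the study cites (Milborrow, 2022) and the fit_tree object from the previous sketch:

library(rpart.plot)
rpart.plot(fit_tree)    # draws the tree; the first split corresponds to the average final score
rpart.rules(fit_tree)   # prints each leaf as a readable rule, convenient when wiring rules into an LMS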
FIGURE 6.1 Useful Information for Adaptive Design: Classification Tree Developed
TABLE 6.2 Useful Information for Adaptive Design: Strength of Predictors from Logistical Regression
(Each row reports the predictor's mean, its coefficient, the change considered, the resulting probability of a Good grade, the change from the 25.44% baseline in percentage points, and the direction of the effect.)
Start Day: mean 24.56 days; coefficient 0.0078; ten days later → 26.95% (+1.51); later better
nth Day Before Deadline for First Attempt: mean 19.76 days; coefficient 0.0291; ten days later → 20.32% (–5.12); earlier better
Duration: mean 47.53 days; coefficient 0.0079; ten days longer → 26.97% (+1.53); longer better
No. of Quizzes Submitted (10 quizzes assumed): mean 8.11 quizzes; coefficient 0.9841; one more quiz → 27.35% (+1.91); more better
Average Score of First Attempt (in standard score): mean 0.00; coefficient 0.3954; one standard deviation higher → 33.60% (+8.16); higher better
Average Score of Final Attempt (in standard score): mean 0.00; coefficient 1.3051; one standard deviation higher → 55.72% (+30.28); higher better
Improvement between First and Final Attempt (in standard score): mean 0.00; coefficient –2.0061; one standard deviation more improvement → 4.39% (–21.05); no improvement better
Baseline probability: If a student achieves the mean score for all items, the probability of obtaining a Good grade will be 25.44%.
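To see how Table 6.2 turns a coefficient into a change in probability, the lines below (base R) reproduce two of its rows from the 25.44% baseline; this is arithmetic on the reported values, not a re-estimation of the model.

baseline <- 0.2544                      # probability of a Good grade at the mean of all predictors
plogis(qlogis(baseline) + 1.3051)       # one SD higher average final-attempt score: about 0.5572 (55.72%)
plogis(qlogis(baseline) + 10 * 0.0078)  # starting ten days later: about 0.2695 (26.95%)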
Discussion
to remind them, with a link for the quiz. The same can be done based
on the Classification Tree finding predicting reduced success if students
complete a quiz and then wait more than a week to retake it. Students
could be sent reminders after a week if they scored low on a quiz and had
not yet revisited it. Such adaptive intervention is similar to those intro-
duced in other disciplines (see Arnold & Pistilli, 2012; Daniel, 2015;
Picciano, 2012). Messages could also be sent to teachers, enabling them
to follow up with students regarding their online behaviors. Depending
on the way these models are implemented in the LMS, the recommenda-
tion/reminder message may serve as a good source of feedback to help
students and teachers in the teaching and learning process (Siemens &
Long, 2011). Also, while all these indicators are relevant in general to
certain aspects of self-regulated learning (which are beyond the scope of
the chapter on data analytics and adaptive learning), the incorporation
of these students’ success models into the LMS will, to a certain degree,
promote self-regulated learning (Siemens & Long, 2011) and improve the
learning experience of students.
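As one concrete, hypothetical reading of the reminder logic described above, a nightly job could flag students whose last attempt on a quiz was both low-scoring and more than a week old, reusing the per_quiz summary sketched earlier; the 0.41 threshold and the seven-day window are illustrative, not values prescribed by the study.

to_remind <- subset(per_quiz,
                    last_score < 0.41 &            # low score on the most recent attempt (threshold illustrative)
                    Sys.Date() - last_day > 7)     # no retake for more than a week
# to_remind$student_id could then be handed to the LMS messaging service (not shown)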
As a study to model the outcomes of students’ online behaviors for the
purpose of better adaptive learning design, this study confirms existing
findings that these behaviors are important predictors of outcomes. While
past studies examine participatory variables such as forum participation
(Fidalgo-Blanco et al., 2015; Macfadyen & Dawson, 2012), this study
used participatory indicators from online quizzes, such as Start Day, nth
Day Before Deadline for First Attempt, Duration, Number of Quizzes
Submitted, Average Score of First Attempt and at Final Attempt, and
Improvement between Attempts. The results seem to suggest that par-
ticipatory variables for online activities, such as participation in forums
or online quizzes, can be good predictors of students’ success. Also,
this study indicates that changes in online activities (e.g., improvement
between scores) can be an important predictor of success, a conclusion
that aligns with Wolff and Zdrahal’s (2012) findings. These indicators can
help teachers focus on particular aspects of online learning when identify-
ing at-risk and/or outstanding students.
the balance the algorithm can strike between predicting cases correctly and
not having too many positive cases. From this perspective, the Classification
Tree seems to perform better, and in fact, the Classification Tree achieved
the highest predictive accuracy in the current study as well. While most
past studies identified SVM (not tested here) as having better performance, Classification Tree is seldom identified as the best when compared with ANN or Logistical Regression, making the findings of the present study unusual (e.g., Xu et al., 2019). However, the accuracy rates in the comparable study by Xu et al. (2019) were lower, with 62.3% for Classification Tree and 70.95% for ANN. This may also indicate that Classification Tree
outperforms ANN with a large sample size but not with a smaller one.
Another interesting finding was that no interpretation can be made with
ANN except on whether a student is predicted to perform well or not. Like
SVM, ANN works as a “black box” that provides limited information
about its calculations (Cruz-Jesus et al., 2020). In contrast, Classification
Tree and Logistical Regression provide useful information to practitioners
on how and in what ways a student may or may not perform well. This
advantage in interpretability can be important for decision-making in the
educational context (see Nieto et al., 2018 on how decisions can be made
in an academic context with these algorithms). Therefore, we argue that
Classification Tree and Logistical Regression seem to be more appropriate
for an educational context that aims for adaptive design.
Conclusion
This study examined how learners’ online interactions in an English for Academic Purposes course delivered at a Hong Kong university can be
used to predict student success and which algorithm performs better in
this context. We successfully established three predictive models using Logistical
Regression, Artificial Neural Networks, and Classification Tree. While
all these algorithms performed well in terms of predictive accuracy,
Classification Tree had a superior F1 ratio. Both Logistical Regression
and Classification Tree produce useful insights for adaptive learning.
Therefore, we suggest the more extensive use of Classification Tree to pro-
mote adaptive learning.
References
Arnold, K., & Pistilli, M. D. (2012). Course signals: Using learning analytics to
increase student success. In Proceedings of the 2nd international conference
on learning analytics and knowledge (pp. 267–270). ACM.
Atherton, M., Shah, M., Vazquez, J., Griffiths, Z., Jackson, B., & Burgess, C.
(2017). Using learning analytics to assess students engagement and academic
106 Dennis Foung, Lucas Kohnke, and Julia Chen
Gašević, D., Dawson, S., Rogers, T., & Gašević, D. (2016). Learning analytics
should not promote one size fits all: The effects of instructional conditions in
predicting academic success. Internet and Higher Education, 28, 68–84.
Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S.
(2014). Technologies for foreign language learning: A review of technology
types and their effectiveness. Computer Assisted Language Learning, 27(1),
70–105. https://doi.org/10.1080/09588221.2012.700315
Greller, W., & Drachsler, H. (2012). Translating learning into numbers: A generic
framework for learning analytics. Educational Technology and Society, 15(3),
42–57.
Harker, M., & Koutsantoni, D. (2005). Can it be as effective? Distance versus
blended learning in a web-based EAP programme. ReCALL, 17(2), 197–216.
https://doi.org/10.1017/S095834400500042X
Hathaway, W. E. (1985). Hopes and possibilities for educational information
systems. Paper presented at the invitational conference Information Systems
and School improvement: Inventing the future. UCLA Center for the Study of
Evaluation.
Hinkelman, D. (2018). Blending technologies in second language classrooms.
Palgrave Macmillan.
Hsieh, T. C., Wang, T. I., Su, C. Y., & Lee, M. C. (2012). A fuzzy logic-based
personalized learning system for supporting adaptive English learning. Journal
of Educational Technology and Society, 15(1), 273–288.
Hyland, K. (2006). English for academic purposes: An advanced resource book.
Routledge.
Iglesias-Pradas, S., Ruiz-de-Azcárate, C., & Agudo-Peregrina, Á. F. (2015).
Assessing the suitability of student interactions from Moodle data logs as
predictors of cross-curricular competencies. Computers in Human Behavior,
47, 81–89.
Kerr, P. (2016). Adaptive learning. ELT Journal, 70(1), 88–93.
Lynch, R. & Dembo, M. (2004). The relationship between self-regulation and
online learning in blended learning context. International Review of Research
in Open and Distance Learning, 5(2), 1–16.
Larusson, J. A., & White, B. (2014). Learning analytics: From research to
practice. Springer.
Macfadyen, L. P., & Dawson, S. (2012). Numbers are not enough: Why
e-learning analytics failed to inform an institutional strategic plan. Journal of
Educational Technology and Society, 15(3), 149–163.
Milborrow, S. (2022). Package ‘rpart.plot’. https://cran.r-project.org/web/
packages/rpart.plot/rpart.plot.pdf
Nguyen, A., Gardner, L. A., & Sheridan, D. (2018). A framework for applying
learning analytics in serious games for people with intellectual disabilities.
British Journal of Educational Technology, 49(4), 673–689.
Nieto, Y., García-Díaz, V., Montenegro, C., & Crespo, R. G. (2018). Supporting
academic decision making at higher educational institutions using machine
learning-based algorithms. Soft Computing (Berlin, Germany), 23(12), 4145–
4153. https://doi.org/10.1007/s00500-018-3064-6
108 Dennis Foung, Lucas Kohnke, and Julia Chen
Nistor, N., & Hernández-García, Á. (2018). What types of data are used in
learning analytics? An overview of six cases. Computers in Human Behavior,
89(1), 335–338. https://doi.org/10.1016/j.chb.2018.07.038
Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational
data mining in practice: A systematic literature review of empirical evidence.
Journal of Educational Technology and Society, 17, 49–64.
Picciano, A. G. (2012). The evolution of big data and learning analytics in
American higher education. Journal of Asynchronous Learning Networks,
16(3), 9–20.
Rasheed, R. A., Kamsin, A., & Abdullah, N. A. (2020). Challenges in the
online component of blended learning: A systematic review. Computers and
Education, 144. https://doi.org/10.1016/j.compedu.2019.103701
Rienties, B., Lewis, T., McFarlane, R., Nguyen, Q., & Toetenel, L. (2018).
Analytics in online and offline language environments: The role of learning
design to understand student online engagement. Computer Assisted Language
Learning, 31(3), 273–293. https://doi.org/10.1080/09588221.2017.1401548
Ripley, B., & Venables, W. (2022). Package ‘nnet’. https://cran.r-project.org/web/packages/nnet/nnet.pdf
Sahin, M., & Ifenthaler, D. (2021). Visualisations and dashboards for learning
analytics. Springer.
Scheffel, M., Drachsler, H., Stoyanov, S., & Specht, M. (2014). Quality indicators
for learning analytics. Educational Technology and Society, 17(4), 117–132.
Schwendimann, B. A., Rodriques-Triana, M. J., Vozniuk, A., Prieto, L. P.,
Boroujeni, M. S., Holzer, A., Gillet, D., & Dillenbourg, P. (2017). Perceiving
learning at a glance: A systematic literature review of learning dashboard
research. IEEE Transactions on Learning Technologies, 10(1), 30–41. https://doi.org/10.1109/TLT.2016.2599522
Sclater, N. (2016). Developing a code of practice for learning analytics. Journal of
Learning Analytics, 3(1), 16–42. https://doi.org/10.18608/jla.2016.31.3
Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and
education. EDUCAUSE Review, 46(5), 30–40.
Tempelaar, D., Rienties, B., Mittelmeier, J., & Nguyen, Q. (2018). Student
profiling in a dispositional learning analytics application using formative
assessment. Computers in Human Behavior, 78, 408–420. https://doi.org/10.1016/j.chb.2017.08.010
Terauchi, D. T., Rienties, B., Mittelmeier, J., & Nguyen, Q. (2018). Student
profiling in a dispositional learning analytics application using formative
assessment. Computers in Human Behavior, 78, 408–420. https://doi.org/10.1016/j.chb.2017.08.010
The R Core Team. (2022). R: A language and environment for statistical
computing. https://cran.r-project.org/doc/manuals/r-release/fullrefman.pdf
Therneau, T., Atkinson, B., & Ripley, B. (2022). Package ‘rpart’. https://cran.r-project.org/web/packages/rpart/rpart.pdf
Vieira, C., Parsons, P., & Byrd, V. (2018). Visual learning analytics of educational
data: A systematic literature review and research agenda. Computers and
Education, 122(1), 119–135. https://doi.org/10.1016/j.compedu.2018.03.018
Waheed, H., Hassan, S., Aljohani, N. R., Hardman, J., Alelyani, S., & Nawaz, R.
(2020). Predicting academic performance of students from VLE big data using
deep learning models. Computers in Human Behavior, 104, Article 106189.
https://doi.org/10.1016/j.chb.2019.106189
Wise, A. F., Zhao, Y., & Hausknecht, S. N. (2013, April 8–12). Learning analytics
for online discussions: A pedagogical model for intervention with embedded
and extracted analytics. In Proceedings of the third international conference on learning analytics and knowledge (pp. 48–56). Society for Learning Analytics Research.
Wolff, A., & Zdrahal, Z. (2012). Improving retention by identifying and
supporting ‘at-risk’ students. Educause Review Online. http://er.educause.edu/articles/2012/7/improving-retention-by-identifying-and-supporting-atrisk-students
Xu, X., Wang, J., Peng, H., & Wu, R. (2019). Prediction of academic performance
associated with internet usage behaviors using machine learning algorithms.
Computers in Human Behavior, 98, 166–173. https://doi.org/10.1016/j.chb.2019.04.015
Yağcı, M. (2022). Educational data mining: Prediction of students’ academic
performance using machine learning algorithms. Smart Learning
Environments, 9(1), 1–19. https://doi.org/10.1186/s40561-022-00192-z
7
BACK TO BLOOM
Why theory matters in closing
the achievement gap
Alfred Essa
Introduction
Closing the achievement gap is the holy grail in education. Everything has
been tried, but nothing has succeeded at scale and with replicability. In
this chapter, I go back to basics. I go back to Bloom. Benjamin Bloom is
well known in educational research for his learning taxonomy, his state-
ment of the 2 Sigma problem, and his design of mastery learning. But his
most import contribution to educational research is neglected and all but
forgotten.
In this chapter, I present Bloom’s theory of learning and show how it
can be applied to solving the achievement gap. I explicate Bloom’s theory
at two levels. First, I examine its structure. I will argue that the achieve-
ment gap is insoluble without this structure. In Bloom’s words, by struc-
ture we mean a “causal system in which a few variables may be used
to predict, explain, and determine different levels and rates of learning”
(Bloom, 1976, p. 202). The key idea is that a group of variables must be
identified, and their interactions studied together in an ongoing set of
experiments. His theory of learning as structure is empirically and peda-
gogically neutral, allowing for multiple instantiations and approaches. At
the second level, I examine Bloom’s specific theory of mastery learning as
an exemplary instance of the theoretical structure. His theory of learning
as mastery learning generates specific hypotheses subject to empirical con-
firmation or refutation. In Bloom’s words, mastery learning “is a special
case of a more general theory of school learning” (Bloom, 1976, p. 6).
I explicate Bloom’s theory with a formal model, enacted as a computer
simulation. The model, which I call learning kinematics, serves both a
These had to be pretty smart kids, or they would not have been admit-
ted; they had to be ambitious kids, or they would not have wanted
to attend an Ivy; they had to be brave kids, or they would not have
wanted to attend a snobbish, traditionally racist college, which was
what Princeton was trying to cease to be in 1969.
(Glymour, 2010, p. 28)
Glymour concluded that the problem was most likely not the students,
but “me or my course or both.” He then conducted a thought experiment.
Suppose that the students who had failed, including the White student,
were less well prepared than other students in the class. Would that have
mattered? Also, suppose that they were not as good at taking timed tests
under pressure: not because of ability, but because they had not practiced
under those conditions. Would that have made a difference in their per-
formance? In short, did the course design give underprepared students an
opportunity to catch up?
Based on his thought experiment, Glymour conjectured that his course
design did not meet the needs of all his students. Glymour hypothesized
that grit (“sweat equity”) alone cannot make up for lack of preparation.
As a good scientist, Glymour designed his next course offering as an
experiment.
I found a textbook that divided the material into a large number of very
short, very specific chapters, each supplemented with a lot of fairly easy
but not too boring exercises. For each chapter I wrote approximately
fifteen short exams, any of which a prepared student could do in about
fifteen minutes. I set up a room staffed for several hours every weekday
by a graduate student or myself where students could come and take a
test on a chapter. A test was immediately graded; if failed, the student
would take another test on the same material after a two-day wait.
No student took the very same test twice. I replaced formal lectures
with question-and-answer sessions and extemporaneous mini-lectures
on topics students said they found difficult. I met with each student for
a few minutes every other week, chiefly to make sure they were keeping
at it. Grading was entirely by how many chapter tests were passed; no
penalty for failed tests.
(Glymour, 2010, pp. 28–29)
What was the result of Glymour’s new course design? The following
semester approximately the same number of Black students enrolled. This
time, however, half of them earned an A and the others all received a B
grade. Glymour believed he was on to something.
What I had discovered was that a lot of capable students need informa-
tion in small, well-organized doses, that they often do not know what
they do not know until they take a test, that they need a way to recover
from their errors, that they need immediate feedback, that they learn at
different rates, that they need a chance to ask about what they are read-
ing and trying to do, and that, if given the chance, motivated students
can and will successfully substitute sweat for background. This turns
out to be about all that education experts really know about college
teaching, and of course it is knowledge that is systematically ignored.
(Glymour, 2010, p. 29)
Learning kinematics
We began with the question: Why do good students fail? Although we
should be wary of drawing conclusions from a single example, Glymour’s
experience at Princeton suggests that there are major cracks in the tradi-
tional model of education. One such crack is the assumption that under-
prepared students can make up for lost ground by doing more: more effort,
more grit, and more mindset. It’s not that these concepts are not impor-
tant for learning. They are. Glymour’s experiment suggests though that
placing the burden of success exclusively on the shoulders of the student is
misdirected. It is also dangerous. Why do good students fail? Glymour’s
experiment suggests that the primary culprit is flawed course design.
In this section, I present a formal model of learning, calling it “learn-
ing kinematics.” It is enacted in the form of a computer simulation. The
model seeks to confirm Glymour’s hypothesis formally. The advantage of
using a simulation is that it allows us to study the learning environment as
a causal system in which several key variables interact to bring about an
outcome. The model also lays the conceptual groundwork for explicating
Bloom’s theory.
The word “kinematics” comes from physics, where it means the study
of motion. In physics, motion is defined principally in terms of posi-
tion, velocity, acceleration, and time. The starting point for applying
kinematics to learning is the analogy of a foot race. Suppose we invite
a group of individuals to run a marathon. Suppose also that the run-
ners are drawn randomly from the population. We can safely assume
that the runners will have different levels of aptitude and motivation.
By aptitude, I mean a set of permanent attributes favorable to running
success. For example, elite runners have some natural advantages (e.g.,
greater VO2 max) over average runners. By motivation, I will mean a
set of malleable psychological attributes favorable to running success.
Distance running requires, for example, a high level of motivation and
persistence.
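Since the chapter describes learning kinematics as a computer simulation, the idea can be illustrated with a toy version in R: each learner starts from a different initial position and progresses at a different learning velocity, and mastery-style support is modeled as a boost to the lowest velocities. This is a minimal sketch of the framework under invented parameters, not the author's actual model.

set.seed(42)
n <- 10000
initial  <- runif(n, 0, 0.4)                     # initial position (prior preparation), on a 0-1 scale
velocity <- rnorm(n, mean = 0.06, sd = 0.02)     # learning velocity: progress per week

weeks <- 10
final_traditional <- pmin(1, initial + velocity * weeks)    # d = v * t added to the starting point

velocity_mastery <- pmax(velocity, 0.06)         # feedback and correction lift the slowest learners (toy assumption)
final_mastery    <- pmin(1, initial + velocity_mastery * weeks)

mean(final_traditional > 0.9)                    # share reaching a "high" final position (threshold illustrative)
mean(final_mastery > 0.9)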
                        Low Initial Position (%)   Middle Initial Position (%)
Low Final Position               91                          51
Middle Final Position             8                          32
High Final Position               1                          17
Learning velocity
Let’s look more deeply at the connection between initial position and final
position. At a simple level (when we assume velocity is constant), the gov-
erning equation for particle motion is:
d = v × t
to be very effective for some learners and relatively ineffective for other
learners. This aspect of the process of schooling is likely to be replete with
errors which are compounded over time (emphasis mine). Unless there are
ways of identifying and correcting the flaws in both the teaching and the
learning, the system of schooling is likely to produce individual differences
in learning which continue and are exaggerated over time.
(Bloom, 1976, p. 9).
                        Low Initial Position (%)   Middle Initial Position (%)
Low Final Position               64                          27
Middle Final Position            35                          36
High Final Position               1                          37
FIGURE 7.7 Bloom Effect: Final Distributions of Traditional vs Optimized Instruction
Adaptive learning
In this section, we turn to adaptive learning systems. We saw earlier that
the key idea of mastery learning is differentiated instruction. Different
learners need different instructional resources and strategies during their
learning journey. Adaptive learning systems are intelligent systems that
can scaffold the learning process by supporting differentiated real-time
feedback, correction, and enrichment for each learner. They can be used
for self-paced learning or in mastery learning mode as a supplement to
instructor-led classrooms.
In his monograph A Decision Structure for Teaching Machines, Richard
Smallwood (1962) gives one of the earliest and clearest statements of the
workings of “teaching machines” based on adaptive principles.
The first four properties embody core principles of mastery learning. The
fifth property anticipates embedded learning analytics, or the continuous
collection of data to update and improve the quality of learning models.
According to Smallwood, the fundamental “desirable” property of a
teaching machine, however, is that it be able to vary its presentation of
learning material based on the individual characteristics and capacities
of each learner. Smallwood observes that “this adaptability requires that
the device be capable of branching—in fact, one would expect the poten-
tial adaptability of a teaching machine be proportional to its branching
capability” (Smallwood, 1962, p. 11). As Smallwood notes, the idea of
branching is fundamental to teaching machines as adaptive systems. Until
recently, branching has been achieved through a limited set of ad hoc,
finite rules programmed by humans. The application of machine learning
to teaching machines eliminates this restriction entirely.
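As a rough illustration of the branching idea, the fragment below hard-codes the kind of finite rule table Smallwood describes; the frame names and the 0.8 threshold are invented for the example. A machine-learning-driven system would, in effect, replace this fixed table with a policy inferred from learner data.

# A toy illustration of branching: a hand-authored rule table of the kind
# early teaching machines used. Frames and the threshold are invented.
def next_frame(current_frame: str, score: float) -> str:
    """Route the learner to the next frame based on a quiz score."""
    branching_rules = {
        # (current frame, meets threshold?) -> next frame
        ("intro", True): "practice",
        ("intro", False): "remedial_intro",
        ("practice", True): "challenge",
        ("practice", False): "worked_examples",
    }
    return branching_rules.get((current_frame, score >= 0.8), "review")

print(next_frame("intro", 0.9))      # practice
print(next_frame("practice", 0.55))  # worked_examples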
Adaptive learning systems have evolved rapidly since Smallwood’s for-
mulation. At a high level of generality, all adaptive learning systems can
be said to rely on five interacting formal models (Essa, 2016). The domain
model specifies what the learner needs to know. What the learner needs
to know is further decomposed into a set of concepts, items, or learn-
ing objectives. The domain model is also referred to as the knowledge
space. Once we have specified explicitly what the learner needs to know
or master, the learner model represents what the learner currently knows.
The learner model is also called the knowledge state, which is a subset of
the knowledge space. The assessment model is how we infer a learner’s
knowledge state. Because the learner’s knowledge state is inferred through
assessment probes, adaptive learning systems are inherently probabilis-
tic. The transition model determines what the learner is ready to learn
next, given the learner’s current knowledge state. Finally, the pedagogi-
cal model specifies the activities to be performed by the learner to attain
the next knowledge state. The pedagogical model can encompass a wide
range of activities, from watching a video to engaging in a collaborative
exercise with peer learners.
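One way to see how the five models interact is a toy sketch such as the following. It is not any vendor's implementation; the concept names, the prerequisite structure, and the crude mastery update are assumptions chosen only to make the division of labor visible.

# Illustrative sketch (not any vendor's API) of the five interacting models,
# with toy concept names and probabilities.
from dataclasses import dataclass, field

# Domain model: the knowledge space, here a set of learning objectives
# with prerequisite relations (assumed example content).
DOMAIN = {
    "fractions": [],
    "ratios": ["fractions"],
    "proportions": ["ratios"],
}

@dataclass
class LearnerModel:
    """Knowledge state: estimated mastery probability per objective."""
    mastery: dict = field(default_factory=lambda: {c: 0.0 for c in DOMAIN})

def assess(learner: LearnerModel, concept: str, correct: bool) -> None:
    """Assessment model: a crude probabilistic update from one response."""
    p = learner.mastery[concept]
    learner.mastery[concept] = min(1.0, p + 0.3) if correct else max(0.0, p - 0.1)

def ready_to_learn(learner: LearnerModel, threshold: float = 0.7) -> list:
    """Transition model: unmastered concepts whose prerequisites are mastered."""
    return [c for c, prereqs in DOMAIN.items()
            if learner.mastery[c] < threshold
            and all(learner.mastery[p] >= threshold for p in prereqs)]

def recommend_activity(concept: str) -> str:
    """Pedagogical model: choose an activity for reaching the next knowledge state."""
    return f"worked-example video on {concept}"  # could be any activity type

learner = LearnerModel()
for _ in range(3):
    assess(learner, "fractions", correct=True)
print(ready_to_learn(learner))           # ['ratios'] once fractions >= 0.7
print(recommend_activity("ratios"))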
Because a sophisticated adaptive learning system depends upon an
accurate assessment of each learner’s knowledge, such systems can provide
a fine-grained picture of each learner’s knowledge state. Next-generation
adaptive systems can also pinpoint precisely which topics a learner is ready
to learn next, and which topics a learner is likely to struggle with, given
the learner’s history and profile. Even if an instructor chooses not to use
adaptive systems for instruction, their built-in formative assessment capa-
bility can serve as a powerful instrument for diagnostics and remediation.
In summary, well-designed adaptive learning systems allow an instruc-
tor to carry out differentiated instruction and formative assessments.
Conclusion
Bloom’s singular ambition, throughout his long and distinguished career,
was to find methods for closing the achievement gap. As I have tried to
show, his approach was to outline a theoretical structure which tries to
explain variation in outcomes in terms of the learners’ previous charac-
teristics and the quality of instruction. With his theory of mastery learn-
ing, Bloom went one step further by demonstrating a learning design that
incorporates formative assessments and feedback-corrective procedures at
its core. But as Glymour realized, implementing such a system is not scal-
able without some form of machine intelligence.
While adaptive learning systems have considerable promise, two major
obstacles remain in realizing Bloom's vision: the first is technical and the
second is institutional. Bloom envisioned a “causal system in which a few
variables may be used to predict, explain, and determine different lev-
els and rates of learning” (Bloom, 1976). But traditional statistical tech-
niques, as well as new methodologies in machine learning, are entirely
restricted to the realm of correlation. True interventions require an under-
standing of causal mechanisms. Unfortunately, the study of causality has
been hampered by a reliance on Randomized Controlled Trials (RCTs), which are often
impractical, expensive, laborious, and ethically problematic in education.
Fortunately, recent advances in causal modeling have opened approaches
to causality using observational data.
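The contrast between correlation and causal estimation can be illustrated with a small simulation on observational data. In the hypothetical example below, prior knowledge confounds both the use of a tutoring resource and the outcome: a naive comparison of means is badly biased, while adjusting for the confounder (a simple backdoor adjustment via regression) recovers the effect built into the simulation. The variable names and effect sizes are invented.

# A toy illustration of estimating a causal effect from observational data
# by adjusting for a confounder; variable names and effect sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

prior_knowledge = rng.normal(size=n)                 # confounder
# Better-prepared students are more likely to use the tutoring resource.
uses_tutoring = (prior_knowledge + rng.normal(size=n) > 0).astype(float)
# True causal effect of tutoring on the outcome is +0.5 by construction.
outcome = 0.5 * uses_tutoring + 1.0 * prior_knowledge + rng.normal(size=n)

# Naive comparison conflates tutoring with preparation.
naive = outcome[uses_tutoring == 1].mean() - outcome[uses_tutoring == 0].mean()

# Backdoor adjustment: control for the confounder, here via least squares
# regression of the outcome on treatment and confounder.
X = np.column_stack([np.ones(n), uses_tutoring, prior_knowledge])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference:  {naive:.2f}")    # roughly 1.6, badly biased
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the true 0.5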
The second barrier to implementing Bloom’s theoretical ideas is insti-
tutional. Technical advances in statistics, machine learning, and causal
modeling have outstripped the ability of nontechnical educators to apply
the new techniques to advance educational goals.
We believe that much of the issue springs from a lack of genuine col-
laboration between analysts (who know and understand the methods
to apply to the data) and educators (who understand the domain, and
so should have influence over what critical questions are asked by
Learning Analytics researchers).
(Hicks et al., 2022, p. 22)
If Bloom’s ideas are to be tested and put into practice, educators and prac-
titioners will need the ability to “think with causal models.” Fortunately,
the learning research team at University of Technology Sydney have devel-
oped a visual formalism and accompanying set of techniques to think
with causal models. In an important recent paper, the UTS team makes a
convincing case for this approach (Hicks et al., 2022).
It’s been nearly forty years since Bloom published his landmark paper
“The 2 Sigma Problem: The Search for Methods of Group Instruction as
Effective as One-to-One Tutoring.” The paper challenged researchers to
devise methods of instruction which would allow most students to attain
a high degree of learning. Most of the elements are now in place to finally
realize Bloom’s vision of equity in education.
References
Bloom, B. S. (1976). Human characteristics and school learning. McGraw-Hill.
Bloom, B. S. (1984). The 2 Sigma problem: The search for methods of group instruction
as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64(8),
723–733.
Chi, M., & VanLehn, K. (2008). Eliminating the gap between the high and
low students through meta-cognitive strategy instruction. In International
conference on intelligent tutoring systems (pp. 603–613). Springer. https://link.springer.com/chapter/10.1007/978-3-540-69132-7_63
Essa, A. (2016). A possible future for next generation adaptive learning systems.
Smart Learning Environments, 3(1), 16.
Essa, A. (2020). Does time matter in learning: A computer simulation of Carroll’s
model of learning. In International conference on human computer interaction
(pp. 458–474). Springer. https://dl.acm.org/doi/abs/10.1007/978-3-030-50788-6_34
Glymour, C. (2010). Galileo in Pittsburgh. Harvard University Press.
Hicks, B., Kitto, K., Payne, L., & Shum, S. B. (2022). Thinking with causal
models: A visual formalism for collaboratively crafting assumptions. In
LAK22: 12th international learning analytics and knowledge conference (pp.
250–259). https://cic.uts.edu.au/wp-content/uploads/2022/02/LAK22_Hicks_etal_Thinking_with_Causal_Models.pdf
Morrison, H. C. (1926). The practice of teaching in the secondary school (2nd
ed.). University of Chicago Press.
Smallwood, R. (1962). A decision structure for teaching machines. MIT Press.
Washburne, C. W. (1922). Educational measurement as a key to individual
instruction and promotion. Journal of Educational Research, 5, 195–206.
8
THE METAPHORS WE LEARN BY
Toward a philosophy of learning analytics
W. Gardner Campbell
One of the first observations to be made about the various separate disci-
plines of science and technology is that each has built up its own framework
or culture represented by preferred ways of looking at the world, preferred
methodology and preferred terminology, preferred ways of presenting
results, and preferred ways of acting…. The natural philosophers were …
unhampered by the accumulated culture of science. They built their own
frameworks for understanding the world. They were primarily engaged in
“search” as opposed to research. They developed ways of discovering how
to think about problems as opposed to getting answers to questions posed
within an established or traditional framework. Searching appears to be the
traditional function of the philosopher. With the sharp line drawn in mod-
ern times between philosophy and science, searching for the better frame-
work in science seems to be confined to the very few philosophers of science
and technology—the many do research.
—Kennedy, J.L. and Putt, G. H. “Administration
of Research in a Research Corporation.” RAND
Corporation Report No. P-847, April 20, 1956.
The most grotesque model of analytics and assessment I have yet encoun-
tered resides in the title of an “occasional paper” from the National
Institute for Learning Outcomes Assessment: A Simple Model for
Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig (Fulcher et al.,
2014). The “pig” in the title is a metaphor for students, and the weigh–
feed–weigh sequence is a metaphor for analytics, pedagogical intervention, and
assessment. If, as Lakoff and Johnson (2003) argue, our lives are framed
Indeed.
Higher education might have been the Jane Jacobs of the digital age,
fighting to preserve the idea of networked villages offering a scaled ser-
endipity of cultural, intellectual, and emotional linkages. Jacobs’ vision
of networked neighborhoods is analogous to the dreams of those who
imagined and helped to build cyberspace in the 1960s and 1970s, dream-
ers and designers such as J. C. R. Licklider, Doug Engelbart, Ted Nelson,
Bob Taylor, Alan Kay, and Adele Goldberg. Instead, higher education
emulated Jacobs’ implacable opponent Robert Moses, whose worship of
cars and the revenues generated by stacked and brutal “development” of
the environments where people lived and worked led him to envision a
Lower Manhattan Expressway that would have utterly destroyed SoHo and
Greenwich Village, two of the most culturally rich and vibrant areas in
New York City. Higher education’s mission proclaims Jacobs’ values, but
its business model, particularly when it comes to our use of the Internet and
the Web, consistently supports Moses’ more remunerative and destructive
“vision” (For the story of Jacobs’ heroic and finally successful struggle to
block Robert Moses’ destructive plans, see the documentary Citizen Jane:
The Battle for the City, Altimeter Films, 2017).
It is difficult to say what sector within higher education might provide
an effective intervention. Centers for Teaching and Learning very often
constitute themselves as anti-computing, or at least as retreats from the
network, emphasizing mantras like “active learning” and “problem-based
learning” and “Bloom’s Taxonomy” (now in several varieties) and per-
petuating catchy, reductive, and empty jingles like “not the sage on the
stage but the guide at the side,” thereby insulting sages, guides, and learn-
ers all at once. The IT departments are busy with Student Information
Systems that retrieve data from, and in turn replenish, the LMS contain-
ers. Various strategies for “assessment” combine IT, teaching-learning
centers, offices of institutional research, and so-called “student success”
initiatives to demonstrate institutional prowess with specious measures
of what constitutes such “success” rather than the kind of transforma-
tive learning that curriculum scholar Lawrence Stenhouse (1975) argues
will produce unpredictable outcomes, not quantifiable “optimization” (p.
82). Faculty in their departments might yet make a difference, but most
faculty appear to have resigned themselves to the various strategies for
success, scaling, and sustainability generated by their administrators. This
is so perhaps because for now those strategies appear to preserve some
space for faculty autonomy—though the “evergreen” courses faculty help
design, particularly in the fraught area of general education (that arena of
FTE competition), may someday erase faculty autonomy altogether.
Two recent documents, one concerning information literacy and the other
concerning networked scholarship, offer better metaphors to learn by,
better ideas to think with, better ideas about true communities of learn-
ing—and how to analyze, assess, and promote their formation. One is
the ACRL’s Framework for Information Literacy for Higher Education
(2016). The other is Jean Claude Guédon’s Open Access: Toward the
Internet of the Mind (2017). Each insists that the value and life of the net-
worked mind implicit in the idea of a university is a dream that need not
die. The lessons drawn from each document are about levels of thought
and modes of understanding, abstractions generalized into principles and
particularized into actions. Both documents imagine and argue for some-
thing comprehensive and conceptually rich, a mode of understanding like
the one that computer pioneer Doug Engelbart (1962) called “a way of
life in an integrated domain where hunches, cut-and-try, intangibles, and
the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”
below, the ACRL Framework for Information Literacy (2016) can help to
ameliorate. Look up the OED definition of “information,” and this entry
appears at the top of the list:
The most profound possibility of all is feeding the system’s output level
back into each individual. If everyone is aware of the system’s behavior
it could be “autopoietic”, that is, self-creating. Local changes in one
area—say, a death in a community—would affect the whole, and the
whole would affect the community in turn. One gets the possibility
of nonlinear feedback loops generating complex, unpredictable behav-
ior …. Multiple energies come together and form more than the sum
of their parts. The outcome is constantly surprising and new. Many-
to-many networks generate these kinds of events, and if the system is
robust enough, the feedback loop never stops. It regenerates itself like
a living flame.
(Chorost, 2011, pp. 177–178)
on throughout the 2017 essay: What are the purposes of scholarly com-
munication? And with regard to those purposes, what substantial, indeed
defining changes have been wrought by the advent of widespread digi-
tal communication? The answer to the first question is not new, though
Guédon’s analysis furnishes intriguing evidence that the answer has
been lost or obscured within institutional practices, practices that have
led to much of the (often-willed) new ignorance in answering the second
question.
In Guédon’s analysis, scholars are a pronounced instance of the gen-
eral human impulse to study things carefully and rigorously. Scholars
devote themselves to discovering or creating knowledge, a process that
relies on unfettered communication. In higher education, libraries facili-
tate and liberate effective scholarly communication, serving their com-
munities of learning by the thoughtful stewardship and support of that
communication.
Now to the second question: What difference does the advent of wide-
spread digital communication make for scholarly communication? What
possibilities for “unprecedented public good” emerge from the “new tech-
nology” of networked digital communication, and what blocks those pos-
sibilities from being accomplished realities? Guédon’s answer is clear: a
failure of knowledge and imagination, in many cases sustained by defense
of outmoded and damaging business plans, has overtaken the contempo-
rary academy. Contemporary higher education, as practiced across the
wide range of institutions in the US and globally as well, neither beholds
“the bright countenance of truth in the quiet and still air of delightful
studies” nor engages the protean varieties of new knowledge within the
light-speed conveyances and connections of a global telecommunica-
tions network. Instead, the academy has fallen between two stools. For
Guédon, the question of whether the academy can recover either or both
of these possibilities (he implies they are linked) will depend upon recov-
ering a sense of scale—not the scale of credit hours to sell or vendors (of
technologies or of “content”) to handcuff ourselves to, but the scale of
possibility and commitment represented by higher education at its best,
and in particular, its libraries.
In an age in which one hears over and over that higher education is
a “business” (as if all businesses were alike) and that resources must be
allocated not on the basis of judgment and mission but on the basis of
“performance” or “productivity” (the optimization mindset that all too
often drives analytics and assessment), libraries are particularly vulner-
able. Libraries do not typically award degrees or generate billable credit
hours. They stand for what those degrees and credit hours themselves
are supposed to represent, and they exist to empower learners through-
out the environment. The move to “responsibility center management” or
“responsibility-based budgeting,” as Michael Katz noted in the late 1980s,
“ties the allocation of resources to the enrollment of individual schools and
departments and thereby fosters competitive, entrepreneurial activity on
individual campuses” (Katz, 1987, p. 177). Katz observes that such budg-
eting reflects “the precarious situation of universities caught by inflation,
decreased government funding, and a smaller cohort from which to draw
students,” but goes on to argue that “the application of the law of supply
and demand to internal decisions” inevitably determines “educational and
scholarly worth” merely by “market value,” with the accompanying result
that “faculty scholarship” becomes little more than “a commodity” (Katz,
1987, p. 177). It is not hard to see that libraries, reduced to the role of
“servicing” the “production” of scholarship, degrees, credit hours, and all
the other “outputs” in the contemporary university, are at risk, not only in
terms of resources, but also in terms of leadership within the university and
Adjustment, but with effect; content, but also process. Wiener’s definition
specifies information as “content,” but as we have seen, “content” alone
is not enough to explain or explore the nature of information—as indeed
Wiener’s subsequent elaboration acknowledges. The Information Literacy
Framework also acknowledges and embraces a wider and more inclusive
view of information as phenomenon, activity, and commitment. This view
is itself informed by learning science, philosophy, and the wisdom that
comes from practicing what anthropologist and cyberneticist Gregory
Bateson (1972/2000) calls “the business of knowing.”
Bateson’s (1972/2000) intriguing phrase emerges in a “Metalogue,”
his name for conversations with his daughter Mary Catherine that he
published from time to time. These conversations recall Socratic dia-
logues and portray the mysterious ways in which naivete and sophistica-
tion can find an unexpected and sometimes revelatory reciprocity in the
arts of inquiry. Indeed, Bateson’s definition of “metalogue” has suggestive
connections with the nature and purpose of the Information Literacy
Framework.
I believe Siemens takes Ellul’s words to heart. Yet as we have seen, “LA
activities” and the “actionable insights” they produce are never value-
neutral, for those activities not only collect but use data in the service of
an optimization mindset. Instead of a great cloud of witnesses, we have
another kind of cloud, one that swallows up the questions and concerns of
Ellul and all those who aspire to the life of the networked mind. Sadly, by
the time the optimization mindset wreaks its destruction, the activities of
optimization come to seem self-justifying. Tragically, what Bateson terms
“the early questions” seem already to have been answered in the service of
the “business” of education.
In 1997, Doug Engelbart received computer science’s highest honor, the
Turing Award. Named for pioneering information and computer scientist
Alan Turing, this award is often called the “Nobel Prize for computer
science.” During the Q&A following his Turing lecture, Engelbart was
asked, given the 32 years to that time that it had taken for something as
basic as the mouse to win widespread acceptance, how long he thought
it would take for his larger vision of human-computer coevolution to be
understood and acted on. Engelbart replied that he didn’t know—but that
he did know that the goal would not be reached by simply following the
marketplace. (The marketplace, unfortunately, is now itself a primary
metaphor for universities and scholarship.) Speaking to an audience of
distinguished information scientists, Engelbart pointed out that optimiz-
ing for a multidimensional environment—that is, a complex set of inter-
dependent variables—faced a particular danger:
Any of you that knows about trying to find optimum points in a multi-
dimensional environment … you know that your optimizing program
The metaphors we learn by 143
can often get on what they call a “local maximum”. It’s like it finds a
hill and gets to the top of it, but over there ten miles there’s a much big-
ger hill that it didn’t find.
(Engelbart, 1998)
References
Altimeter Films. (2017). Citizen Jane: Battle for the city. Directed by Matt Tyrnauer.
Association of College and Research Libraries (ACRL). (2016). Framework for
information literacy for higher education. ACRL. https://www.ala.org/acrl/
standards/ilframework
Bateson, G. (1972). Steps to an ecology of mind. University of Chicago Press.
Reprinted with a new Foreword by Mary Catherine Bateson, 2000.
Bruner, J. (1979). The act of discovery. In On knowing: Essays for the left hand. Harvard University Press.
Campbell, G. (2015). A taxonomy of student engagement. YouTube. https://www
.youtube.com/watch?v=FaYie0guFmg
Chorost, M. (2011). World wide mind: The coming integration of humanity,
machines, and the internet. Free Press.
Ellul, J. (1964). The technological society. Vintage Books.
Engelbart, D. (1962). “Augmenting human intellect: A conceptual framework.”
SRI Summary Report AFOSR-3223. Prepared for Director of Information
Sciences, Air Force Office of Scientific Research, Washington DC. http://www
.dougengelbart.org/pubs/Augmenting-Human-Intellect.html
Engelbart, D. (1998, November 16). Bootstrapping our collective intelligence.
Turing Award Lecture at the 1998 ACM Conference on Computer-Supported
Cooperative Work. https://youtu.be/ DDBwMkN6IKo
Fulcher, K. H., Good, M. R., Coleman, C. M., & Smith, K. L. (2014). A simple model for learning improvement: Weigh pig, feed pig, weigh pig. Occasional Paper #23. National Institute for Learning Outcomes Assessment, December 2014. https://files.eric.ed.gov/fulltext/ED555526.pdf
Guédon, J.-C. (2017). Open access: Toward the Internet of the mind. http://www.budapestopenaccessinitiative.org/boai15/Untitleddocument.docx
Johnson, S. (1755). A dictionary of the English language.
Katz, M. (1987). Reconstructing American education. Harvard University Press.
Kennedy, J. L., & Putt, G. H. (1956, December). Administration of research in a research corporation. Administrative Science Quarterly, 1(3), 326–339. (Also issued as RAND Corporation Report No. P-847, April 20, 1956.)
Kneese, T. (2021, January 27). How a dead professor is teaching a university
art history class. Slate. https://slate.com/technology/2021/01/dead-professor-teaching-online-class.html
Lakoff, G., & Johnson, M. (2003). Metaphors we live by. University of Chicago
Press.
Siemens, G. (2013). Learning analytics: The emergence of a discipline. American
Behavioral Scientist, 57(10), 1380–1400.
Simpson, J. A. (1924). The Oxford English Dictionary. Clarendon Press.
Stenhouse, L. (1975). An introduction to curriculum research and development.
Heinemann Educational.
Weinstein, J. M., Reich, R., & Sahami, M. (2021). System error: Where big tech
went wrong and how we can reboot. Hodder & Stoughton.
Wiener, N. (1967). The human use of human beings: Cybernetics and society.
Avon Books, Discus edition. (Original work published 1954.)
SECTION III
Adaptive Learning
9
A CROSS-INSTITUTIONAL SURVEY
OF THE INSTRUCTOR USE OF DATA
ANALYTICS IN ADAPTIVE COURSES
James R. Paradiso, Kari Goin Kono, Jeremy Anderson,
Maura Devlin, Baiyun Chen, and James Bennett
Others have argued for the importance of teaching tools, and their situated uses, in enhancing teachers' self-efficacy (Pajares, 2006), which may partially explain why incorporating ALS dashboards into one's pedagogical routine(s) can feel intimidating.
In a recent dissertation, Ngwako (2020) identified a positive correlation, significantly different from zero (p < 0.001), between the Teacher Self-Efficacy (TSE) scale and the Perception of ALS survey. Furthermore, the
study found that the number of courses taught with ALSs significantly
moderated the relationship between TSE and ALS perception, support-
ing one component of Bandura’s model (1977) that “successful mastery
breeds more successful mastery.”
Additional evidence on the growth of instructor self-efficacy can be
found in an experimental study of teacher candidates by Wang et al.
(2004), which focused on vicarious experiences. They found that among
280 preservice teacher candidates assigned to either treatment or control
course sections of an introductory educational technology course, vicari-
ous learning experiences and goal setting (considered independently) had
positive impacts on teachers’ self-efficacy for using technology. When
both vicarious learning experiences and goal setting were present, even
stronger senses of self-efficacy for integrating technology in teaching were
found (cf. Wang et al., 2004, p. 239).
Survey distribution
The survey was distributed to higher educational institutions across
the United States—including Bay Path University, University of Central
Florida, Dallas College, and Northern Arizona University. Bay Path
University is a small private university in Massachusetts that serves over
3,300 students each year. University of Central Florida is a large pub-
lic research institution with a student population of around 70,000.
Dallas College is a large public community college in Texas serving over
70,000 annually. Northern Arizona University is a large public research university.
Participants
Participants were university and college instructors who had used ALS
data analytics to inform their teaching and learning practices. The par-
ticipants came from seven public and private universities in the
United States, with the majority of them from Bay Path University (n =
17), Northern Arizona University (n = 14), University of Central Florida
(n = 14), and Dallas College (n = 5). These participants represent various
disciplines, such as Mathematics and Statistics (n = 8), Business (n = 8),
Foreign Languages (n = 5), Biology (n = 4), History (n = 4), Psychology (n
= 4), and others. Table 9.1 illustrates the ALSs used by our respondents;
more than half of them used either Realizeit (n = 29) or ALEKS (n = 11).
Method
To gain an understanding of how instructors use data analytics in adap-
tive software, we examined instructor confidence (i.e., self-efficacy) about
ALS dashboards. First, we adapted a survey instrument of quantitative and qualitative items.
TABLE 9.1 Adaptive Learning Systems Used by Respondents

Adaptive Learning System        n
Acrobatiq                       1
ALEKS                          11
Cengage MindTap                 4
CogBooks                        1
Hawkes Learning                 1
InQuizitive                     2
Knewton                         2
Learning Management System      1
Lumen Waymaker                  1
Macmillan Achieve               1
Other/Not Reported              1
Pearson MyLab/Mastering         7
Realizeit                      29
Smartbook                       5
Total                          67
Data analysis
Researchers conducted a convergent mixed methods data analysis
process, employing separate quantitative and qualitative analyses and
then comparing the results (Creswell & Creswell, 2018;
Merriam & Tisdell, 2016). Quantitative analysis examined self-efficacy
as an independent variable and its relationship with the dependent vari-
ables of faculty members’ frequency and granularity of analytics use in
ALSs. A separate analysis sought to determine if there was a relation-
ship between self-efficacy or the presence of a CoP and frequency and
granularity of ALS analytics use. Qualitative analysis addressed the three
free-response survey questions using a three-step qualitative coding pro-
cess to identify categories and themes from the data (Saldaña, 2016).
Results from the qualitative analysis were then compared to quantitative
findings about self-efficacy as it relates to frequency and granularity of
dashboard usage; qualitative analysis clarified how instructors used ALS
data dashboards.
Quantitative analysis
Data preparation consisted of identifying and compensating for missing
values in responses and constructing scale scores for the independent and
dependent variables. One respondent had valid responses on nine of ten
items for the self-efficacy scale. In this case, the person mean imputa-
tion approach was taken whereby the mean of available responses for
the individual was used (Heymans & Eekhout, 2019). Seven respondents
were deleted from the analysis of frequency and granularity and nine
were omitted from the analysis of community of practice due to a lack of
responses on all items for those respective scales.
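For readers who want to see the data-preparation step spelled out, the following pandas sketch applies person mean imputation and builds a scale score for a hypothetical ten-item self-efficacy scale; the column names and response values are assumptions, not the study's data.

# Sketch of the scale-score preparation described above, using pandas.
# Column names and the 10-item structure are assumptions for illustration.
import pandas as pd

items = [f"se_{i}" for i in range(1, 11)]                      # ten self-efficacy items
df = pd.DataFrame([
    {f"se_{i}": 4 for i in range(1, 11)},                      # complete respondent
    {**{f"se_{i}": 3 for i in range(1, 10)}, "se_10": None},   # 9 of 10 items valid
])

# Person mean imputation: replace a respondent's missing item with the mean
# of that respondent's own available responses.
person_means = df[items].mean(axis=1, skipna=True)
df[items] = df[items].apply(lambda col: col.fillna(person_means))

# Scale score = mean of the (now complete) items for each respondent.
df["self_efficacy"] = df[items].mean(axis=1)
print(df["self_efficacy"].tolist())                            # [4.0, 3.0]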
Qualitative analysis
The qualitative portion of the analysis used the coding software Atlas.ti, in which researchers conducted line-by-line coding of the data. Open-ended responses from the survey were uploaded and coded with several a priori codes related to the goals of the study, which included developing a fuller understanding of instructor use of ALS dashboard reporting, how instructors adapted teaching practices, and how instructors interacted with learners (Charmaz, 2006; Saldaña, 2016). During initial coding, additional codes were added from the text (Charmaz, 2006; Saldaña, 2016). Responses were double-coded to confirm and validate the coding. The final results were subjected to categorization and thematic analysis, and the resulting narrative was used to complement the quantitative component of our methodological approach in identifying instructor best practices for dashboard use of student data (Merriam & Tisdell, 2016; Saldaña, 2016).
Results
Results for the three research inquiries are shared below, using a combination of quantitative and qualitative data to support our findings.
TABLE 9.2 Correlation Analysis Results for Self-Efficacy and Granularity of Dashboard Use

TABLE 9.3 Descriptive Statistics and Self-Efficacy Ratings for Granularity of Dashboard Items and Responses
dashboards to exhibit higher mean self-efficacy scores than those with less
frequent use. However, only the differences in means across the categories
for the use of dashboards for teaching purposes were significant (H =
15.78, p = 0.003, df = 4, n = 54). Differences between categories for gen-
eral use of dashboards approached significance (H = 7.04, p = 0.071, df =
3, n = 54), while differences in using the dashboards to inform communi-
cation to students were not significant (H = 7.67, p = 0.105, df = 4, n = 54).
The relationships between having a CoP and the other variables
under study represented the second line of analysis. Respondents com-
pleted a single item pertaining to CoP by selecting or not selecting five
optional responses about the types of roles involved in the CoP: course
instructors, instructional designers, administrators, other roles, and
no CoP. The authors treated selection of each response as a score
of “1” and then summed the scores for a range of zero to four (no CoP
also was treated as zero). The scale score served as an ordinal categori-
cal variable. Kruskal-Wallis ANOVA analyses were conducted to compare
the mean scores for self-efficacy, granularity of dashboard use, and
frequency of dashboard use since the CoP variable was ordinal. None
of these ANOVA analyses yielded significant results. To further inves-
tigate, the researchers attempted to determine if a particular type of
member in the CoP yielded different results. They categorized presence
of course instructors and presence of instructional designers into yes/
no groups. Mann-Whitney U tests again demonstrated that there was
not a significant difference in self-efficacy, granularity of dashboard
use, or frequency of dashboard use across respondents who said course
instructors were in the CoP or were not, nor across those who said
instructional designers were in the CoP or were not.
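The sketch below shows how comparisons of this kind can be run with scipy's implementations of the Kruskal-Wallis and Mann-Whitney U tests. The group structure and scores are fabricated stand-ins for the survey data and are included only to make the procedure concrete.

# Hedged sketch of the nonparametric comparisons described above, using scipy;
# the response vectors are fabricated stand-ins, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Self-efficacy scores grouped by an ordinal dashboard-use category (0-4).
groups = [rng.normal(loc=3.0 + 0.2 * k, scale=0.6, size=12) for k in range(5)]

# Kruskal-Wallis H test across the ordinal categories.
h_stat, p_kw = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")

# Mann-Whitney U test for a yes/no CoP-membership split.
cop_yes = rng.normal(loc=3.4, scale=0.6, size=20)
cop_no = rng.normal(loc=3.3, scale=0.6, size=18)
u_stat, p_mwu = stats.mannwhitneyu(cop_yes, cop_no, alternative="two-sided")
print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_mwu:.3f}")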
Quantitative results demonstrated that there were significant and posi-
tive relationships between self-efficacy and granularity of dashboard use
and between self-efficacy and frequency of dashboard use for informing
teaching practices. Nonsignificant but similarly positive relationships
were present between self-efficacy and general frequency of dashboard
use and frequency of use for communicating with students. There was
no significant relationship between presence of a CoP and self-efficacy,
granularity of dashboard use, or frequency of dashboard use.
in the material and remind them to be more so.” Another instructor paid
attention to how much time students were spending within the ALS,
I do not use data analytics effectively. I know they exist and how to
use them, but I am not in the practice of doing so. I recognize that it
would benefit my students to leverage this information to tailor class to
address weaknesses in understanding.
This instructor response might indicate a need for support teams such as
instructional designers to demonstrate the benefits of using analytics in
one’s teaching practice for student learning.
In summary, instructors relied on adaptive analytics from their dash-
boards to evaluate student performance and engagement data, revise their
teaching focus, and adapt to student learning needs. Instructors utilized
student performance data to assess student understanding of concepts,
to edit areas of confusion within course materials, and to adapt to stu-
dent learning needs in the medium term (i.e., daily, weekly, or monthly).
Instructors assessed student engagement data in their dashboards and fol-
lowed up with students to offer reminders, connect about learning, and
provide additional academic assistance.
Nearly half of surveyed instructors used the ALS data dashboards weekly
or daily—with a majority opting to use the dashboards weekly in all cases
except adjusting their teaching approach, which was a close tie with once
or twice a semester.
The qualitative questions and answers provided more detailed examples
of how instructors incorporated data analytics into their teaching. Initial
coding and subsequent categorical analysis illustrated how instructors
adapted to varying student learning needs (within courses that utilized
adaptive learning in the curriculum) by applying an evaluative process
that either promoted the acceleration or remediation of course topics dur-
ing the term or for future iterations of the course. Instructors adjusted
their teaching practices based on student data in the ALS dashboard(s). As one instructor explained, the dashboard data
help[ed] me [to] think about the course in the future to help adjust to
the individual learnings for students. For example, I have students who
have asked for more chapters to be covered so [I] modify the course to
integrate additional chapters.
Discussion
This study sought to unpack the extent to which educator self-efficacy
mediates the frequency and level of granularity with which data from ALS dashboards are used.
Significant finding #1
A positive relationship between instructor self-efficacy and the granular-
ity of dashboard use was a significant finding from this study; however,
particular reasons as to “why” require further exploration.
A majority of surveyed instructors felt confident with respect to pos-
sessing the skills necessary to teach using an ALS, yet not quite as con-
fident in terms of (1) providing students with personalized feedback and
(2) collecting and analyzing data produced by the ALS to enhance their
instructional practices—which are both key affordances of learning ana-
lytics dashboards (Knobbout & van der Stappen, 2018).
The most common categories of data leveraged by the surveyed faculty
were “topic mastery” and “topic completion” per student. This indicates
that faculty prefer viewing performance at the individual student level,
yet their interest trends toward high-level completion and mastery data,
rather than individual question metrics or time on task, which are more
indicative of student engagement with the learning content (cf. Hussain et
al., 2018; Kahu, 2013).
This tendency for faculty to prefer high-level performance data over
more granular student engagement data is not surprising, as main-
stream learning management systems (LMSs) have historically rendered
data about student progress through a centralized gradebook (Oyarzun
et al., 2020). Therefore, faculty are accustomed to deciphering mastery
and completion data. Yet, one of the main benefits of using an ALS is
that large amounts of student engagement data are being collected and
analyzed by the system to inform content recommendations, alternative
learning materials, questions at varying difficulty levels, and customized
feedback (Knobbout & van der Stappen, 2018). By not leveraging these
data, faculty members are less equipped to make predictions as to why students are struggling.
Significant finding #2
This study also found a positive relationship between instructors’ level
of confidence using an ALS and how often they leverage ALS data dash-
boards to improve their instructional practice(s).
Checking ALS dashboards weekly was by far the most common
practice among surveyed instructors, and certainly, monitoring student
progress and mastery at that rate can be helpful in determining student
performance at mid-range. There is, however, something to be said for
having access to data that measure student achievement and engagement
in real-time (cf. Holstein et al., 2018) and considering the impact that
data (across varying timescales) can have on instructional practices and
student learning.
What is reasonable to expect from an instructor who has anywhere
between 50 and 500 students in a single course? Are just-in-time inter-
ventions even sustainable by the most well-intentioned and hard-working
educators? There may come a point when monitoring student learning
data and intervening based on that data become unsustainable and inde-
pendent of self-efficacy, principally because the sheer scale of the interventions required overburdens the instructor's workload (Kornfield et al., 2018).
A promising feature of adaptive learning technology is that it can adjust
its behavior based on the data it receives from the “actors,” which in this
case are students. If an ALS has any sort of artificial intelligence/machine
learning component(s), these algorithms can learn to communicate relevant
information “just-in-time” to enhance student achievement (Teusner et al.,
2018), while taking some of the burden off of the instructors who may not
be able to take action on ALS dashboard analytics in a timely manner.
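A minimal sketch of such an automated, just-in-time trigger appears below. The engagement fields, thresholds, and message text are hypothetical; the point is simply that a system can queue nudges for at-risk learners without requiring the instructor to watch the dashboard continuously.

# A minimal, hypothetical sketch of a just-in-time trigger an ALS might
# automate: flag learners whose recent engagement and mastery both fall
# below assumed thresholds and queue a message for them.
from dataclasses import dataclass

@dataclass
class LearnerSnapshot:
    name: str
    minutes_this_week: float
    mastery_estimate: float   # 0.0 - 1.0

def queue_nudges(snapshots, min_minutes=30.0, min_mastery=0.6):
    """Return automated nudge messages for at-risk learners."""
    return [
        f"Hi {s.name}, a quick check-in: extra practice on this week's topics "
        f"could help before the next checkpoint."
        for s in snapshots
        if s.minutes_this_week < min_minutes and s.mastery_estimate < min_mastery
    ]

learners = [
    LearnerSnapshot("A. Rivera", minutes_this_week=12, mastery_estimate=0.45),
    LearnerSnapshot("B. Chen", minutes_this_week=95, mastery_estimate=0.82),
]
print(queue_nudges(learners))   # nudges only the first learner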
Awareness. This stage is concerned with just data, which can be visualized
as activity streams, tabular overviews, or other visualizations.
Reflection. Data in themselves are not very useful. The reflection stage
focuses on users’ asking questions and assessing how useful and rel-
evant these are.
Sensemaking. This stage is concerned with users’ answering the ques-
tions identified in the reflection process and the creation of new
insights.
Impact. In the end, the goal is to induce new meaning or change behavior
if the user deems it useful to do so.
(Verbert et al., 2013, p. 1501)
Future research
The findings from this study reinforce the role of instructor self-efficacy
in the use of ALS data dashboards, and although a couple of signifi-
cant findings were observed, further investigation might prove helpful
not only for improving the instructor experience when working with
ALS data analytics, but also for optimizing the student learning expe-
rience. For example, replicating the study after implementing training
specifically designed to support instructor self-efficacy and ALS data
dashboard use could reveal a different set of findings (cf. Rienties et al.,
2018).
Scholars might also draw a parallel between a more highly researched
area, such as the impact student-facing dashboards (Santos et al., 2012)
have on academic achievement through self-regulation/learner awareness
(Bodily et al., 2018), and investigate how promoting self-regulated learn-
ing (SRL) in faculty development programming might increase levels of
self-efficacy, which could translate into more deliberate and effective use
of ALS data dashboards and, ultimately, student success (Zheng et al.,
2021; Schipper et al., 2018; Van Eekelen et al., 2005).
References
Amro, F., & Borup, J. (2019). Exploring blended teacher roles and obstacles
to success when using personalized learning software. Journal of Online
Learning Research, 5(3), 229–250.
Arnold, K. E., & Pistilli, M. D. (2012, April). Course signals at Purdue: Using
learning analytics to increase student success. In Proceedings of the 2nd
international conference on learning analytics and knowledge (pp. 267–270).
https://dl.acm.org/doi/10.1145/2330601.2330666
Ashton, P., Webb, R. B., & Doda, N. (1983). A study of teacher’s sense of
efficacy: Final report to the National Institute of Education, executive
summary. Florida University (ERIC Document Reproduction Service No.
ED231 833).
Attaran, M., Stark, J., & Stotler, D. (2018). Opportunities and challenges for big
data analytics in US higher education: A conceptual model for implementation.
Industry and Higher Education, 32(3), 169–182.
Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). SAGE
Publications.
Santos, J. L., Govaerts, S., Verbert, K., & Duval, E. (2012, April). Goal-oriented
visualizations of activity tracking: A case study with engineering students. In
Proceedings of the 2nd international conference on learning analytics and
knowledge (pp. 143–152). https://dl.acm.org/doi/10.1145/2330601.2330639
Schipper, T., Goei, S. L., de Vries, S., & van Veen, K. (2018). Developing
teachers’ self-efficacy and adaptive teaching behaviour through lesson study.
International Journal of Educational Research, 88, 109–120.
Schwendimann, B. A., Rodriguez-Triana, M. J., Vozniuk, A., Prieto, L. P.,
Boroujeni, M. S., Holzer, A., Gillet, D., & Dillenbourg, P. (2016). Perceiving
learning at a glance: A systematic literature review of learning dashboard
research. IEEE Transactions on Learning Technologies, 10(1), 30–41.
Seneviratne, K., Hamid, J. A., Khatibi, A., Azam, F., & Sudasinghe, S. (2019).
Multi-faceted professional development designs for science teachers’ self-
efficacy for inquiry-based teaching: A critical review. Universal Journal of
Educational Research, 7(7), 1595–1611.
Shabaninejad, S., Khosravi, H., Indulska, M., Bakharia, F., & Isaias, P. (2020). Automated insightful drill-down recommendations for learning analytics dashboards. Paper presented at The International Learning Analytics and Knowledge (LAK) Conference. https://www.researchgate.net/publication/338449127_Automated_Insightful_Drill-Down_Recommendations_for_Learning_Analytics_Dashboards
Teusner, R., Hille, T., & Staubitz, T. (2018, June). Effects of automated
interventions in programming assignments: Evidence from a field experiment.
In Proceedings of the fifth annual ACM conference on learning at scale (pp.
1–10). https://dl.acm.org/doi/10.1145/3231644.3231650
Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an
elusive construct. Teaching and Teacher Education, 17(7), 783–805.
Van Eekelen, I. M. V., Boshuizen, H. P. A., & Vermunt, J. D. (2005). Self-
regulation in higher education teacher learning. Higher Education, 50(3),
447–471.
Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J. L. (2013). Learning
analytics dashboard applications. American Behavioral Scientist, 57(10),
1500–1509.
Wang, L., Ertmer, P. A., & Newby, T. J. (2004). Increasing preservice teachers’
self-efficacy beliefs for technology integration. Journal of Research on
Technology in Education, 36(3), 231–250. http://doi.org/10.1080/15391523.2004.10782414
Zheng, J., Huang, L., Li, S., Lajoie, S. P., Chen, Y., & Hmelo-Silver, C. E. (2021).
Self-regulation and emotion matter: A case study of instructor interactions
with a learning analytics dashboard. Computers and Education, 161, 104061.
10
DATA ANALYTICS IN ADAPTIVE
LEARNING FOR EQUITABLE OUTCOMES
Jeremy Anderson and Maura Devlin
The United States’ higher education system is rife with inequities along race
and class “fault” lines, with students belonging to first-generation, lower-
socioeconomic, or racially diverse groups not benefiting from the promises
of higher education as much as their more privileged and affluent White
peers. News outlets commonly report on college admissions processes favor-
ing the privileged (Jack, 2020; Saul & Hartocollis, 2022); higher educa-
tion’s uneven return on investment (Carrns, 2021; Marcus, 2021); and, to
a lesser extent, college degree attainment data by race and socioeconomic
status (Edsall, 2021; Nietzel, 2021). Notable lawsuits arising from perceived
inequities in race-based admissions processes make their way to the Supreme
Court (Brown & Douglas-Gabriel, 2016; Krantz & Fernandes, 2022; Jasick,
2022c). Analyses of persistence, retention, and debt load related to social class
are becoming increasingly prevalent, at least among think tanks and public
interest groups (Brown, 2021; Rothwell, 2015; Libassi, 2018). The Varsity
Blues scandal of 2019 (Medina, Benner & Taylor, 2019) brought to light
practices that the wealthy deploy to ensure their progeny succeed in optimiz-
ing higher education. Public discourse about whether higher education is still
a force for social mobility is common (Stevens, 2019), and college scorecards
and ranking organizations (U.S. News and World Reports, n.d.) have incor-
porated measures of social mobility into their data tools.
Methodology
This post-test only study with nonequivalent groups sought to answer
the research question of whether exposure to adaptive learning activities
resulted in improved learning outcomes for students who had diverse char-
acteristics. The comparison groups under study were students enrolled
in sections of courses utilizing embedded adaptive learning activities,
compared with comparable sections without adaptive learning activities.
Further, each group was disaggregated according to certain characteris-
tics—Pell eligibility, race and ethnicity, and first-generation status. Course
grades on a typical 4.0 scale (with plus or minus grading) were used as
the dependent variable. Increased average grades for Pell-eligible, first-
generation, and Black or African American and Hispanic students would
support the hypotheses and support continued investment in designing
adaptive courses at the institution and in contexts with similarly diverse
student populations.
The following hypotheses were tested using five years of course out-
comes data stored in the SIS at the college:
races were included in the data set but did not have sufficient counts to
ensure the preservation of anonymity so were excluded from analysis.)
Letter Grade     Grade Points
A                4.00
A–               3.67
B+               3.33
B                3.00
B–               2.67
C+               2.33
C                2.00
C–               1.67
D+               1.33
D                1.00
F                0.00
Analysis plan
In preparing data for analyses, the researchers first divided the file into a
series of six files, one per course. The split file function in SPSS allowed
for dividing each course file by the categories of independent variables
(Pell eligibility, first-generation status, race/ethnicity) addressed by each
hypothesis. The final step in setting up the analyses was to use the pres-
ence of adaptive learning as the grouping variable to compare means, sub-
tracting the mean of the subgroup without adaptive learning from the
corresponding mean of the subgroup with adaptive learning from each
course. An example analysis, therefore, was a comparison of average
grades earned by Black or African American students in those sections of
Statistics (MAT120) with adaptive learning versus those without.
The researchers used the Student’s t-test to compare means unless vari-
ances were unequal, in which case Welch’s t-test was substituted (Field,
2018), since this test has been found to be robust even if model assump-
tions are not upheld (Lumley et al., 2002). Cohen's d was used to calculate effect sizes, which were classified as small (0.2), medium (0.5), or
large (0.8) (Cohen, 1988).
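The comparison procedure can be sketched in a few lines of Python. The grade vectors below are fabricated, and the use of Levene's test to choose between Student's and Welch's t-test is an assumption about implementation detail; the chapter specifies only that Welch's test was substituted when variances were unequal.

# Illustrative sketch of the comparison procedure (Student's/Welch's t-test
# plus Cohen's d) on fabricated grade-point vectors, not the study's SIS data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
adaptive = rng.normal(loc=2.9, scale=0.9, size=120).clip(0, 4)
nonadaptive = rng.normal(loc=2.6, scale=0.9, size=110).clip(0, 4)

# Levene's test for equality of variances decides which t-test to report
# (an assumed decision rule for this sketch).
_, p_levene = stats.levene(adaptive, nonadaptive)
equal_var = p_levene >= 0.05
t_stat, p_val = stats.ttest_ind(adaptive, nonadaptive, equal_var=equal_var)

# Cohen's d with a pooled standard deviation.
n1, n2 = len(adaptive), len(nonadaptive)
pooled_sd = np.sqrt(((n1 - 1) * adaptive.var(ddof=1) +
                     (n2 - 1) * nonadaptive.var(ddof=1)) / (n1 + n2 - 2))
d = (adaptive.mean() - nonadaptive.mean()) / pooled_sd

print(f"{'Student' if equal_var else 'Welch'} t = {t_stat:.2f}, "
      f"p = {p_val:.3f}, d = {d:.2f}")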
Results
Math courses
The average course grade for Pell-eligible students increased more than
for non-Pell-eligible students in all three math courses after adaptive learn-
ing was adopted. All improvements in average grade are in terms of a 4.0
scale, such that a difference of 0.33 is the equivalent of one grade incre-
ment (from B to B+, e.g.). In MAT104, the difference was a 0.51-point
increase for Pell-eligible students versus 0.01 point for non-Pell-eligible; in
MAT112 it was 0.46 versus 0.39; and in MAT120 it was 0.35 versus 0.31.
The gains for Pell-eligible students in MAT112 (t(195) = 2.39, p = 0.018)
and MAT120 (t(454) = 2.93, p = 0.004) were significant for those in sec-
tions with adaptive learning, with small-to-medium and small effect sizes
(d = 0.37 in MAT112; d = 0.30 in MAT120). Findings were not signifi-
cant for MAT104, though the effect size was small to medium (d = 0.38).
There is good support for Hypothesis 1 in math courses as a result of these findings.
English courses
Impacts of adaptive learning in English courses were mixed for Pell-
eligible students. Average grades dropped by 0.18 points in ENG114 and
TABLE 10.2 Descriptive Statistics and t-test Results for Adaptive and Non-Adaptive Sections of Math Courses Disaggregated by Demographic Groups
Implications
Implications for practice
The goal of this study was to examine whether the use of adaptive learning
can positively impact course grades for learners of diverse demographic
backgrounds. The authors compared grades for students who differed by
continuing and first-generational status, race, and ethnicity groups, and
income levels (using Pell eligibility as a proxy) for classes utilizing adaptive
learning, compared with prior nonadaptive sections. Findings for math
courses were mixed but mostly positive for Pell-eligible, first-generation,
Black or African American, and Hispanic students and often were signifi-
cant or approaching significance with effect sizes ranging from small to
large. Student outcomes were strongest in Applied College Math (MAT
112) and Statistics (MAT120). Institutions and faculty members looking
to address equity gaps in mathematics at the course level should consider
adopting adaptive learning. Remedial courses at state systems often rep-
resent a disproportionate and stagnating starting place for low-income
and diverse learners (Burke, 2022). Adaptive technology in math courses,
where students can remediate within college-level, credit-bearing courses
that use the technology’s content and algorithms, can allow the faculty
member to intervene in a personalized way. More research is needed on
when and how to implement this technology effectively in these courses,
but this research shows promise.
There was a contrast in the case of English courses where most findings
were mixed or negative with grades decreasing for many groups engaged
with adaptive learning. These findings most often were not significant and
had nearly universally small effect sizes. Still, adaptive learning may not
be as effective at impacting equity in English courses, so institutions and
instructors should be selective in adopting this technology in that context.
The other implications of this study revolve around measuring the con-
cept of equity and using it to inform course design projects. Decision-
makers should define their equity framework and goals ahead of time.
Determining which approach to operationalizing equity—for example,
calculating differences in subgroup versus overall average, setting a crite-
rion threshold to compare subgroups to a reference group (e.g., top achiev-
ing), or creating an index comparing the proportions of the subgroup in the
educational outcome group versus the overall population (Sosa, 2018)—
enables administrators, faculty, and staff to work collaboratively with the
same student success goals in mind. Those involved in selecting educa-
tional technologies should use the defined equitable measure of effective-
ness in student outcomes to inform decision-making. A baseline measure
should be obtained to document achievement gaps across subpopulations.
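As one concrete example of the index-based approach mentioned above, the short function below compares a subgroup's share of a success outcome to its share of the enrolled population; the course numbers are invented for illustration.

# Sketch of one way to operationalize equity: a simple proportionality index
# comparing a subgroup's share of a success outcome to its share of the
# overall population. Numbers are invented for illustration.
def equity_index(subgroup_successes: int, total_successes: int,
                 subgroup_enrolled: int, total_enrolled: int) -> float:
    """Values near 1.0 indicate proportional representation; below 1.0
    indicates the subgroup is under-represented among successful students."""
    share_of_successes = subgroup_successes / total_successes
    share_of_population = subgroup_enrolled / total_enrolled
    return share_of_successes / share_of_population

# Hypothetical course: 400 students, 120 Pell-eligible; 300 earn a C or
# better, 78 of whom are Pell-eligible.
print(round(equity_index(78, 300, 120, 400), 2))   # 0.87 -> an equity gap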
Conclusion
Bay Path University began implementing adaptive learning in 2015 with
the aim of improving student outcomes generally, but for a diverse student
body more specifically. A research team at the institution has tracked the
efficacy of the implementation over the last several years, starting at lower levels of granularity by examining overall population outcomes in adaptive and nonadaptive courses, and progressing to higher levels, including outcomes disaggregated by course and now by demographic group. The
current study sought to understand if adaptive learning benefited lower-
income (i.e., Pell-eligible), first-generation, Black or African American,
and Hispanic students when comparing course grades for these subpopu-
lations before and after ALS implementation in specific courses. Analyses
of data from the student information system (SIS) demonstrated that these
groups of students generally tended to benefit in the core math sequence, but less consistently in the English courses.
References
Anderson, J., Bushey, H., Devlin, M., & Gould, A. (2021). Efficacy of adaptive
learning in blended courses. In A. Picciano (Ed.), Blended learning research
perspectives, volume III. Taylor & Francis. Pp. 147–162.
Anderson, N. (2021, June 15). MacKenzie Scott donates millions to another
surprising list of colleges. The Washington Post. https://www.washingtonpost.com/education/2021/06/15/mackenzie-scott-college-donations-2021/
Bills, D. B. (2019). Ch. 6. The problem of meritocracy: The belief in achievement,
credentials and justice. In R. Becker (Ed.), Research handbook on the
sociology of education. Pp. 88–105. https://china.elgaronline.com/display/edcoll/9781788110419/9781788110419.00013.xml
Brown, D. (2021, April 9). College isn’t the solution for the racial wealth gap. It’s
part of the problem. The Washington Post. https://www.washingtonpost.com/outlook/2021/04/09/student-loans-black-wealth-gap/
Brown, E., & Douglas-Gabriel, D. (2016, June 23). Affirmative action advocates shocked - and thrilled. The Washington Post. https://www.washingtonpost.com/news/grade-point/wp/2016/06/23/affirmative-action-advocates-shocked-and-thrilled-by-supreme-courts-ruling-in-university-of-texas-case/
Burke, L. (2022, May 9). Years after California limited remediation at community colleges, reformers want more fixes. Higher Ed Dive. https://www.highereddive.com/news/years-after-california-limited-remediation-at-community-colleges-reformers/623288/
Cahalan, M. W., Addison, M., Brunt, N., Patel, P. R., & Perna, L. W. (2021). Indicators of higher education equity in the United States: 2021 historical trend report. The Pell Institute for the Study of Opportunity in Higher Education, Council for Opportunity in Education (COE), and Alliance for Higher Education and Democracy of the University of Pennsylvania (PennAHEAD).
Carrns, A. (2021, August 14). Will that college degree pay off? The New York Times. https://www.nytimes.com/2021/08/13/your-money/college-degree-investment-return.html
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge Academic.
Complete College America. (2021, October 13). Complete College America announces $2.5 million initiative focused on digital learning innovation at historically black colleges and universities. https://completecollege.org/resource/complete-college-america-announces-2-5-million-initiative-focused-on-digital-learning-innovation-at-historically-black-colleges-universities/
Devlin, M., Egan, J., & Thompson, E. (2021, December). Data driven course design and the DEI imperative (A. Kezar, Ed.). Change Magazine.
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Parker, L., & Campbell, M. (2018). Adaptive learning: A stabilizing influence across disciplines and universities. Online Learning, 22(3), 7–39. https://doi.org/10.24059/olj.v22i3.1465
Dziuban, C., Moskal, P., Johnson, C., & Evans, D. (2017). Adaptive learning: A tale of two contexts. Current Issues in Emerging eLearning, 4(1), 26–62.
Edsall, T. B. (2021, June 23). Is higher education no longer the 'great equalizer'? The New York Times. https://www.nytimes.com/2021/06/23/opinion/education-poverty-intervention.html
Field, A. (2018). Discovering statistics using IBM SPSS statistics. Sage.
Fortin, J. (2021, October 20). Amherst College ends legacy admissions favoring children of alumni. The New York Times. https://www.nytimes.com/2021/10/20/us/amherst-college-legacy-admissions.html
Fu, Q. K., & Hwang, G. J. (2018). Trends in mobile technology-supported collaborative learning: A systematic review of journal publications from 2007 to 2016. Computers & Education, 119, 129–143. https://doi.org/10.1016/j.compedu.2018.01.004
Jack, A. A. (2020, September 15). A separate and unequal system of college admissions. The New York Times. https://www.nytimes.com/2020/09/15/books/review/selingo-korn-levitz-college-admissions.html
Jasick, S. (2022a, February 7). For students' extra needs. Inside Higher Ed. https://www.insidehighered.com/admissions/article/2022/02/07/colleges-start-aid-programs-students-full-needs
Jasick, S. (2022b, April 18). Williams gets more generous with aid. Inside Higher Ed. https://www.insidehighered.com/admissions/article/2022/04/18/williams-improves-aid-offerings
Jasick, S. (2022c, May 3). Students for fair admissions file Supreme Court brief. Inside Higher Ed. https://www.insidehighered.com/quicktakes/2022/05/03/students-fair-admissions-files-supreme-court-brief
Krantz, L., & Fernandes, D. (2022, January 24). Supreme Court agrees to hear Harvard affirmative action case, could cause 'huge ripple effect' in college admissions. The Boston Globe. https://www.bostonglobe.com/2022/01/24/metro/future-affirmative-action-higher-education-limbo-supreme-court-agrees-hear-harvard-case/
Libassi, C. J. (2018, May 23). The neglected race gap: Racial disparities among college completers. The Center for American Progress. https://www.americanprogress.org/article/neglected-college-race-gap-racial-disparities-among-college-completers/
Lumley, T., Diehr, P., Emerson, S., & Chen, L. (2002). The importance of the normality assumption in large public health data sets. Annual Review of Public Health.
U.S. News & World Report. (n.d.). Top performers on social mobility. https://www.usnews.com/best-colleges/rankings/national-universities/social-mobility
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. http://doi.org/10.1080/00461520.2011.611369
Xie, H., Chu, H.-C., Hwang, G.-J., & Wang, C.-C. (2019). Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Computers & Education, 140, 103599. https://doi.org/10.1016/j.compedu.2019.103599
11
BANKING ON ADAPTIVE QUESTIONS TO
NUDGE STUDENT RESPONSIBILITY FOR
LEARNING IN GENERAL CHEMISTRY
Tara Carpenter, John Fritz, and Thomas Penniston
Introduction
How do we help new college students learn how to learn? If, as the saying goes, "nobody learns from a position of comfort," can first-year students honestly and accurately self-assess what they currently know, understand, or can do? And can they be disciplined enough to put in the time and effort to develop or strengthen what they may lack? Perhaps more importantly, can they be nudged into taking responsibility for doing so, especially if their initial interest or ability alone is insufficient to be successful? If so, what can
faculty do, in the design and delivery of their courses, to help initiate or
accelerate student engagement? Finally, what, if anything, can learning
analytics and adaptive learning do to help both faculty and students in
their teaching and learning roles?
In this case study from the University of Maryland, Baltimore County
(UMBC), we explore these questions through one of the university’s larg-
est courses, CHEM 102 “Principles of Chemistry II,” with a typical Spring
enrollment of 550–600 students (just under 200 in Fall). After 18 years’ expe-
rience in teaching the course as well as guiding, even exhorting students in
how to learn and succeed in it, Tara Carpenter wondered if students simply
did not know how to do so independently. So, in the middle of Spring 2021—
armed with a pedagogical theory of change and innovative, pandemic-driven
digital skills and experience—she designed an incentive model and personal-
ized learning environment in UMBC’s Blackboard LMS to help students not
only pass the exams, but also take responsibility for preparing for them.
Frequently known as "spaced practice," her approach gives students time to study, forget, re-acquire, and reorganize new knowledge or content.
Pedagogical influences
To better understand Carpenter’s methodology of spaced practice and the
role learning analytics and adaptive learning have played in implementing
and refining it, we first need to understand her pedagogical influences.
To do so, we will focus on how we learn to learn and think about our
thinking, often known as “metacognition,” the role that memory (and
forgetting) plays in doing so, and finish with why spaced practice—unlike
cramming—is literally about the time we give our brains to discover, pro-
cess, and organize new knowledge, skills, or abilities.
Metacognition
In 2016, Carpenter joined a UMBC Faculty Development Center (FDC)
book discussion about McGuire and McGuire’s (2015), Teach Students
How to Learn: Strategies You Can Incorporate into Any Course to
Improve Student Metacognition, Study Skills, and Motivation. A year
later, she attended McGuire’s on-campus keynote presentation (2017) at
UMBC’s annual “Provost’s Symposium on Teaching and Learning.” In
both her book and UMBC talk based on it, McGuire argues that faculty
need to intentionally and explicitly introduce students to metacognition,
or thinking about thinking, by showing and telling them about Bloom’s
taxonomy of learning (see Figure 11.1): “Don’t just assume they’ve seen
Bloom’s [taxonomy] or understand it,” said McGuire, who also coau-
thored with her daughter, Stephanie, a student-focused book about meta-
cognition, Teach Yourself How to Learn (2018).
“Reading [McGuire’s] book essentially changed the way I view teach-
ing and learning and changed how I view course design,” says Carpenter,
who had long seen incoming students repeat what they were conditioned
to do in high school: memorize, regurgitate (for an exam), and promptly
forget (after it). “Most of them just aren’t prepared for the rigors of college
because they simply don’t understand the difference between memoriza-
tion and learning.”
Along with her colleague Sarah Bass, who teaches CHEM 101 and leveraged the same large-question-bank approach to online, "open note" exams during the pandemic (Bass et al., 2021a, 2021b; Fritz, 2020), Carpenter became much more intentional about introducing students to metacognition, delivering McGuire's recommended lecture on the topic, and taking time in class to encourage student reflection on their own learning, especially after exams (Carpenter et al., 2020). She even began sending individual, personalized emails highlighting effective strategies to specific students who seemed to be struggling, which is remarkable given the class size.
Methodology
Again, Carpenter recognized the effects of the Ebbinghaus forgetting
curve in many of her incoming students, especially their slavish adherence
to cramming for exams instead of following her advice to adopt better,
time-tested, and proven learning strategies. But despite the evidence and
even her own encouragement, student behaviors and approaches to their
own learning did not change.
“What if they just don’t know how,” she wondered. “What if they’re
just so overwhelmed by learning how to learn, that they can’t pull this off
on their own, partly because of immaturity or time management, etc.?”
With these questions in mind, and based in part on her Fall 2020 use of large question banks to design online, "open-note" exams during the pandemic pivot to remote instruction (Fritz, 2020), Carpenter began constructing a schedule for spaced practice over Spring break of the Spring 21 term and allowed her students to opt into it for the remaining three exams. Students who opted in did not need to complete the regularly scheduled homework, as spaced practice would take its place. As an incentive to opt in, students were offered 10 percentage points of extra credit toward the first exam for which spaced practice was offered, if needed. For the remaining two exams, no extra credit was offered, but the homework assignments continued to be replaced by spaced practice for students who opted in.
Specifically, Carpenter began writing iterations of practice questions for individual units in her course that she wished students would see and could answer. Having written her own exam questions for years, she did not find the transition to writing practice questions herself, instead of using publishers' homework questions, very difficult. But her focus was not on "definition" kinds of questions (per Bloom's lowest, "remembering" level of learning). Instead, she focused on higher-level thinking that required students to apply concepts or solve math-related problems that relied on conceptual understanding. To help with question variety, she used Blackboard's "calculated question" format to change the numeric variables in a question stem or prompt (see https://tinyurl.com/bbcalcquestion). She also randomized the order of the questions students would see in each spaced practice lesson.
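Blackboard handles the mechanics internally, but the idea behind a calculated question can be sketched in a few lines: randomize the numeric variables in a stem, recompute the correct answer, and shuffle the resulting items. The molarity prompt, variable ranges, and helper names below are hypothetical illustrations of the concept, not Blackboard's or Realizeit's implementation.

    import random

    def make_variant(template, solver, var_choices):
        """Fill a question stem with random values and compute its answer."""
        values = {name: random.choice(options) for name, options in var_choices.items()}
        return {"prompt": template.format(**values), "answer": solver(**values)}

    # Hypothetical chemistry stem: molarity = moles / liters.
    template = "What is the molarity of {moles} mol of NaCl dissolved in {liters} L of water?"
    var_choices = {"moles": [0.5, 1.0, 1.5, 2.0], "liters": [0.25, 0.5, 1.0, 2.0]}
    solver = lambda moles, liters: round(moles / liters, 2)

    # Build a small pool of variants and shuffle their order, as in a spaced practice lesson.
    pool = [make_variant(template, solver, var_choices) for _ in range(10)]
    random.shuffle(pool)
    for item in pool[:3]:
        print(item["prompt"], "->", item["answer"], "M")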
However, Carpenter had to solve another key problem: lack of time for
her students to space their practice when she had an exam every three weeks
for the rest of the term. So, she made up her own schedule based loosely on
an “N + 2” sequence, where the first iteration or exposure to unit practice
may occur on day “zero” followed by another iteration on days 2, 5, and 7,
respectively. These might also overlap with another unit’s practice schedule,
such that students were literally practicing every day, albeit not the same
material two days in a row. For an example, see Figure 11.4.
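One rough way to picture that calendar is as overlapping sequences of day offsets, one sequence per unit. The sketch below assumes the offsets described above (days 0, 2, 5, and 7 after a unit's first exposure) and invents the unit names and start days; it only approximates the schedule Carpenter built by hand.

    from collections import defaultdict

    OFFSETS = [0, 2, 5, 7]  # roughly the "N + 2" exposures described above
    unit_start_days = {"Unit 7": 0, "Unit 8": 3, "Unit 9": 6}  # hypothetical start days

    calendar = defaultdict(list)
    for unit, start in unit_start_days.items():
        for offset in OFFSETS:
            calendar[start + offset].append(unit)

    # Overlapping units mean practice nearly every day, but not the same unit two days in a row.
    for day in sorted(calendar):
        print(f"Day {day:2d}: practice {', '.join(calendar[day])}")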
Given the manually intensive nature of her practice question development process, and with a desire to personalize student learning even further, Carpenter agreed to pilot the Realizeit Learning adaptive learning platform in Fall 21 and Spring 22, based largely on the positive experience and recommendation of colleagues at the University of Central Florida (UCF), who use and support it and have also published their experiences (Dziuban et al., 2017, 2018, 2020). This required an initial export of Carpenter's own question pools from Blackboard and import into Realizeit that was not entirely automatic or without cleanup.
FIGURE 11.4 CHEM 102 Students' Spaced Practice Schedule before Learning Checkpoint 5, Spring 21
Also, Carpenter now faced her own steep learning curve to master a new
assessment authoring platform, which she did over repeated practice and
support.
Finally, with Realizeit providing a near-infinite supply of variable questions and answers in Fall 21, Carpenter was able to implement her spaced practice content and schedule across an entire term. She kept the same exam questions, now given over five Learning Checkpoints (LCPs) rather than the six she had used in Spring 21, but, interestingly, she did not make spaced practice optional. It was now required as the homework system for CHEM 102, which was understandable given not only the perceived benefits to all students (based on the Spring 21 experiment) but also her own considerable time and effort to create the spaced practice environment in Realizeit. She simply did not have time to have students using another, publisher-based homework system, which she understandably abandoned for Fall 21 since she could now author her own questions for both practice and exam environments.
Findings
In this section, we summarize findings from Carpenter's experimental implementation of spaced practice during the middle of the Spring 21 term using Blackboard and also describe its full implementation in Fall 21 using Realizeit Learning. We also share results from an interesting experiment in which Carpenter, working with instructors of CHEM 351 "Organic Chemistry," the next course in the sequence, surveyed students who had been enrolled in her Spring 21 version of CHEM 102 to see if and how they carried the lessons learned from their use of spaced practice into CHEM 351 in Fall 21.
Note: By default, to avoid content duplication, UMBC's Bb LMS course shells combine multiple sections of the same course taught by the same instructor. As such, CHEM 102H (the Honors section, n ≈ 25) is always combined with the larger CHEM 102 LMS course shell. However, in the analysis that follows, only the "FA21" findings specifically excluded Honors students, for methodological reasons supporting inferential analysis. By contrast, Honors students were included in the "SP21" and "SP21/102 to FA21/351" progression findings, which are mostly descriptive in nature.
[Figure: distribution of final course grades (A–F), shown as the percentage of students earning each grade.]
To gauge the effect of the Fall 21 redesign, we compared her Fall 21 course outcomes with those from Fall 20. Additionally, while CHEM 102 is offered every Fall and Spring term, the largest enrollment is always in the Spring, since the course is part of the two-semester general chemistry sequence.
In doing so, we see that there was not a statistically significant relation-
ship between the treatment (i.e., course design) and overall DFW rates
between Fall 20 and Fall 21. However, if we disaggregate the final grade
data, we see that there is an overall statistically significant increase in As
(p < 0.01) and decrease in Cs (p < 0.05) and Ds (p < 0.05) in Fall 21 (see
Figure 11.7).
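The chapter does not state which test produced these p-values; one common way to check a term-over-term shift in a single grade's share is a two-proportion z-test, sketched below with placeholder counts rather than the actual CHEM 102 enrollments.

    from statsmodels.stats.proportion import proportions_ztest

    # Placeholder counts: students earning an A, and total enrollments, in each term.
    a_counts = [45, 95]        # Fall 20, Fall 21 (hypothetical)
    enrollments = [190, 200]   # Fall 20, Fall 21 (hypothetical)

    z_stat, p_value = proportions_ztest(count=a_counts, nobs=enrollments)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

    # Repeating the test for the C, D, and DFW shares would mirror the comparisons
    # reported above (more As, fewer Cs and Ds, and no significant DFW change).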
Notably, all of this gain from increasing As appears to have been from
students of color (SOC), who demonstrated a nearly 4x advantage in
attaining this grade (p < 0.001) over their peers who completed the course
prior to the redesign, while White students demonstrated no statistically
significant gain in this area. Figure 11.8 illustrates the breakdown of per-
centage of grades earned by term and White versus SOC.
There does appear to be an upward trend in Fs, although it is not statistically significant when comparing the past two terms. There are no statistically significant, notable grade distribution trends for female or transfer students. Non-STEM students, however, have a statistically significant advantage in earning As over their class peers.
FIGURE 11.6 CHEM 102 "Waterfall" Chart in which Every Row Is a Student, Every Column a Week in the Semester, and Each Cell's Color Density Is Based on Time Spent (e.g., darker = more time). For a larger, color version of this image, see the presentation slides from Fritz et al., 2022, in the references.
There also seem to be some inflection points during students' in-term learning. If we consider how students' learning checkpoints (LCPs) rank compared with their peers' (i.e., as z-scores), we see a statistically significant 34% reduction in DFWs if students improve from the first to the second LCP (p < 0.01), along with a 2.4x increased likelihood of earning an A for all students, and 3.4x for African American students in particular. Considering inflection points later in the term, we see a 39% reduction in students' chances of earning a DFW if there is improvement from LCP 4 to LCP 5 (p < 0.01). Although controlling for race, gender, STEM major, academic status, high school GPA, and math SAT scores in these models accounts for only 10% of the total variance, it does appear that growth within the semester may be contributing to academic gains in course grade distribution.
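A model of the kind described here can be sketched as a logistic regression in which the outcome is a DFW grade and the predictor of interest is whether a student's LCP z-score improved, alongside the covariates named above. The column names and synthetic data are assumptions, and this is not the authors' actual modeling code; it only shows how odds ratios and a pseudo R-squared of the sort reported could be produced.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic stand-in for the SIS/Realizeit extract (hypothetical columns).
    df = pd.DataFrame({
        "dfw": rng.integers(0, 2, n),          # 1 = D, F, or withdrawal
        "lcp_gain": rng.integers(0, 2, n),     # 1 = z-score improved from LCP 1 to LCP 2
        "hs_gpa": rng.normal(3.3, 0.4, n),
        "math_sat": rng.normal(600, 80, n),
        "stem_major": rng.integers(0, 2, n),
        "female": rng.integers(0, 2, n),
        "soc": rng.integers(0, 2, n),          # student-of-color indicator
    })

    model = smf.logit(
        "dfw ~ lcp_gain + hs_gpa + math_sat + stem_major + female + soc", data=df
    ).fit(disp=0)

    odds_ratios = np.exp(model.params)
    print(odds_ratios["lcp_gain"])   # an odds ratio near 0.66 would mirror the ~34% DFW reduction
    print(model.prsquared)           # McFadden pseudo R-squared, akin to the ~10% of variance noted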
Additionally, when we look at student engagement data from Realizeit, including duration (time spent) and completion of spaced practice modules, we gain a compelling insight into one of Carpenter's key goals for the course: students taking responsibility for their own learning. For example,
Figure 11.9 shows final grades earned by those students who did (or did
not) complete daily spaced practice modules in Realizeit (similar to Figure
11.6, every row is a student and every column is a daily spaced practice
module in Realizeit, with the color corresponding to final grade earned).
We also observe that after only 14 days into the Fall 21 term, a model
trained only on actual student usage in Realizeit was 82.6% accurate in
correctly predicting ABC and DFW final grades for all students (83%
precision).
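The chapter does not name the algorithm behind that 14-day prediction. The sketch below shows the general shape of such an early-warning model, assuming hypothetical usage features (minutes in Realizeit and spaced practice modules completed), synthetic labels, and a simple logistic classifier, and it reports the same accuracy and precision metrics.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score

    rng = np.random.default_rng(1)
    n = 600

    # Hypothetical first-two-week features: minutes in Realizeit and modules completed.
    minutes = rng.normal(300, 90, n)
    modules = rng.integers(0, 15, n)
    X = np.column_stack([minutes, modules])

    # Hypothetical label: 1 = final grade of A, B, or C; 0 = D, F, or withdrawal.
    abc = (0.01 * minutes + 0.3 * modules + rng.normal(0, 2, n) > 6).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, abc, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    pred = clf.predict(X_test)

    print("accuracy:", round(accuracy_score(y_test, pred), 3))
    print("precision:", round(precision_score(y_test, pred), 3))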
Overall, the data indicate that Carpenter’s spaced practice implemen-
tation in Realizeit appeared, at the very least, to be doing no harm, and
when evaluating the implementation alongside grade distribution there
seem to be certain advantages, particularly when considering students of
color and non-STEM students.
FIGURE 11.8 Chem 102 Grade Distribution by Term and Race (SOC = Students
of Color)
Carpenter also surveyed her Spring 21 CHEM 102 students before and after their enrollment in the Fall 21 version of CHEM 351 and learned the following (see Table 11.1).
TABLE 11.1 Post-survey Responses of CHEM 102 (SP21) Students in CHEM 351 (FA21)
• the biggest challenge in carrying out Spaced Practice (SP) was formulat-
ing my own types of questions that integrated the many learning objec-
tives (LOs) [for CHEM 351 “Orgo”]
• translating LOs [learning objectives] into challenging questions was very difficult
• creating a practice schedule that followed the class schedule closely was
a bit difficult to do
• any tips that could help us in creating an appropriate SP schedule for a
given section of units before an exam, would be very helpful
• many students in class chats spoke of their problems actually forming
an SP schedule despite really wanting to continue the great studying
technique
Discussion
As we reflect on this UMBC case study, a few key questions emerge from
Carpenter’s implementation and refinement of spaced practice in CHEM
102. First, why do some students strongly resist spaced practice in CHEM
102 at the start of the semester and then (surprisingly) embrace it by the
end? The Scholarship of Teaching and Learning (SoTL) literature and practice include frequent examples of student resistance to active learning in the classroom, but spaced practice is largely a solitary activity that students do (or don't) pursue on their own time, outside of class. Even if
shown the evidence from prior, successful cohorts, could it be that incom-
ing students are learning more, but liking it less? If so, how should faculty
respond, if at all, given student agency and responsibility for learning?
Whose problem is this to solve?
Second, what might help students be more successful in implement-
ing spaced practice in CHEM 351 after successfully doing so in CHEM
102? To be sure, Carpenter rolled up her sleeves and scaffolded an anti-
cramming approach to learning that incoming college students may not
be familiar with or even like. But at some point, they do have to learn how
to learn (in college) and not all faculty can be expected to implement and
sustain Carpenter’s approach to course design. Or could they?
Finally, what is the least amount of spaced practice “time on task”
spent per unit that students need to do to be successful in CHEM 102, and
become proficient in self-regulating their own learning in CHEM 351?
This is where we want to further study the data and patterns of behav-
ior associated with student engagement in Realizeit. While the goal for
spaced practice still must be quality over quantity, Carpenter has found
that initial student resistance is based on a perception (concern) over the
number and amount of time spent on unit module spaced practice ses-
sions. To date, however, Carpenter has relied on the first exam’s results to
quell student protest.
"When they see that, unlike what they are used to, they aren't cramming for exams and are largely ready for them in CHEM 102, the vocal minority quiets down pretty quickly," says Carpenter.
There is, however, a small group of students who continue to spend
inordinate amounts of time doing spaced practice without the results
they (and Carpenter) would like to achieve (see especially Figure 11.9
above, where some D and F students were just as active or more so than
their higher performing peers). This is where Carpenter comes back to
McGuire’s focus on metacognition.
“Some students, for whatever reason, really struggle to think criti-
cally and objectively about their own thinking,” says Carpenter, who
has noticed these students might be using spaced practice only to fur-
ther aid their prior approaches to rote memorization instead of learn-
ing. “They can recognize a problem similar to one they’ve seen before,
and even recall a pattern or process of steps to follow, but don’t recog-
nize new variables, or that I’ve changed the problem from focusing on
boiling point or freezing point, or from an acid to a base, for example.
They will simply try to repeat what they did in spaced practice instead
of doing what’s required by the current problem they are presented on
the exam.”
Next steps
Going forward, there are three key directions we could imagine pursuing
further: (1) fully implementing adaptive learning in Realizeit, (2) devel-
oping a way to make spaced practice more flexible through the use of
contract or specifications grading, and (3) working within the Chemistry
department’s general chemistry curriculum to phase students’ discovery
and maturation with spaced practice across 3–4 courses. We describe
each in more detail below.
FIGURE 11.10 CHEM 102 Midterm Exam Grade by Spaced Practice Completion, Spring 22. The chart plots average exam score (attempt #1) against the number of units (0 to 10) for which 4 out of 4 spaced practice assignments were completed; average first-attempt scores ranged from 26 to 96 across those completion levels.
Carpenter has seen her share of quibbles over a fraction of a point, but has also seen the clear disconnect among students between the assessments they are completing and the goal of learning. "Additionally, not every student wants an A. If I can
clearly outline what a student needs to do to demonstrate a B or C per-
formance, and they do not need to stress about points, everyone’s stress
level can come down a bit.”
Conclusion
Throughout Carpenter's teaching career, the key pedagogical challenge she has strived to overcome is moving new students from being primarily extrinsically motivated (for points) to becoming more intrinsically motivated (to learn how they learn). Doing so at the scale of her typical course enrollments is challenging. And yet, while her spaced practice innovation did not appear to significantly change CHEM 102's Fall 20 to Fall 21 DFW rate, a typical metric used to gauge the effectiveness of student success interventions, the distribution of the course's higher grades did change significantly and dramatically. Not only were there more As and fewer Cs in Fall 21 than in Fall 20, but students of color were 4x more likely to have earned those As, which is remarkable in a high-enrollment gateway STEM course at a public university.
So, what is it about the design of Carpenter’s course, exam, and practice
environments that might be working for more students? We are continuing
to explore this question, which has also been supported by a small grant
from Every Learner Everywhere to promote “Equity in Digital Learning,”
in which Carpenter has participated. Again, one group of students Carpenter is especially interested in helping, regardless of background, is those who actually do put in the effort and time on task in the learning environments she designs but still do not perform well. She fears they'll become discouraged and give up, but firmly believes these students need to focus more on their metacognitive "thinking about their thinking" skills.
“I tell these students to learn—not just memorize—as if they were
going to teach the course,” says Carpenter. “It isn’t just that they need to
practice, prepare, or calculate better. They need a different mindset about
what they’re being asked to do, especially if problem variables or context
change from what they saw in practice.”
Indeed, as McGuire might suggest, improving students’ abilities—and
motivation—to honestly and accurately assess what they currently know,
understand, or can do could be the ultimate expression of improving their
metacognition.
Finally, one benefit of higher education’s pandemic-pivot to digital
learning may be more faculty becoming savvier and more capable of
expressing their pedagogy and course design at scale, regardless of the
course's delivery or mode of instruction. Interestingly, despite the return to campus becoming more feasible, Carpenter and many of her UMBC faculty colleagues who developed large exam question pools are continuing to offer their exams online. And if it is relatively easy for faculty to develop large numbers of exam questions to assess students, could it also become trivial for them to develop questions that help students practice and prepare for those exams, too?
Indeed, UMBC Physics Lecturer Cody Goolsby-Cole, who also received a 2022–2023 UMBC learning analytics mini-grant (Fritz, 2022), leveraged his own LMS question banks to create practice questions that were ungraded (no points contributed to a final grade) but displayed correct and incorrect answers to students. During the first six units of his Spring 22 course, PHYS 122 "Introductory Physics II," he found that students earning a 70% on the practice questions earned an average of 92% on the exam that followed, while students who did not use the practice questions earned an average exam grade of 77%. He plans to build out and assess this practice environment further, including practice exams, to see if and how more students might use and benefit from it.
If we could create learning environments in which faculty can model
and assign both exam and practice questions—with good feedback for
each, and maybe even adapted to students’ specific, demonstrated weak-
nesses—then students could get more exposure to the mindset and skills
needed to also demonstrate and improve their own self-regulated learn-
ing by creating more predictable and effective exam practice. Perhaps we
could also begin to encourage students to not only predict likely exam
questions—based on their experience in the course up to that point—but
also predict the likely, plausible answers, something we have dabbled in
for another course (Braunschweig & Fritz, 2019). In this context, stu-
dents could also be encouraged to “compare notes” with peers, which
might take some of the exam prep burden off of faculty alone via exam
reviews, etc. If so, this could illustrate the highest form of Bloom’s tax-
onomy where students create a learning artifact with knowledge and skills
they’ve applied from adaptive practice and exam learning environments.
All we’d need next is a Holodeck!
References
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M.
K. (2010). How learning works: Seven research-based principles for smart
teaching. John Wiley and Sons.
Bass, S., Carpenter, T., & Fritz, J. (2021a, May 18). Promoting academic integrity in online, open-note exams without surveillance software. ELI Annual Meeting. https://events.educause.edu/eli/annual-meeting/2021/agenda/promoting-academic-integrity-in-online-opennote-exams-without-surveillance-software
Bass, S., Carpenter, T., & Fritz, J. (2021b, October 27). Promoting academic integrity in online "open note" exams without surveillance software [Poster]. Educause Annual Conference. https://events.educause.edu/annual
Fritz, J., Penniston, T., Sharkey, M., & Whitmer, J. (2021). Scaling course design as a learning analytics variable. In Blended learning research perspectives (1st ed., Vol. 3). Routledge. https://doi.org/10.4324/9781003037736-7
Gray, J., & Lindstrøm, C. (2019). Five tips for integrating Khan Academy in your course. The Physics Teacher, 57(6), 406–408. https://doi.org/10.1119/1.5124284
Hodges, L. C. (2015). Teaching undergraduate science: A guide to overcoming obstacles to student learning (1st ed.). Stylus Publishing (Kindle Edition).
McGuire, S. (2017, September 22). Get students to focus on learning instead of grades: Metacognition is the key! 4th Annual Provost's Teaching & Learning Symposium, UMBC. https://fdc.umbc.edu/programs/past-presentations/
McGuire, S., & McGuire, S. (2015). Teach students how to learn: Strategies you can incorporate into any course to improve student metacognition, study skills, and motivation. Stylus Publishing (Kindle Edition). https://styluspub.presswarehouse.com/books/BookDetail.aspx?productID=441430
McGuire, S., & McGuire, S. (2018). Teach yourself how to learn. Stylus Publishing (Kindle Edition). https://styluspub.presswarehouse.com/browse/book/9781620367568/Teach-Yourself-How-to-Learn
Miller, M. D. (2014). Minds online: Teaching effectively with technology. Harvard University Press. https://www.hup.harvard.edu/catalog.php?isbn=9780674660021
Munday, P. (2016). The case for using DUOLINGO as part of the language classroom experience. RIED: Revista Iberoamericana de Educación a Distancia, 19(1), 83–101. https://doi.org/10.5944/ried.19.1.14581
Murre, J. M. J., & Dros, J. (2015). Replication and analysis of Ebbinghaus' forgetting curve. PLOS ONE, 10(7), e0120644. https://doi.org/10.1371/journal.pone.0120644
Nilson, L. B. (2015). Specifications grading: Restoring rigor, motivating students, and saving faculty time. Stylus Publishing. https://styluspub.presswarehouse.com/browse/book/9781620362426/Specifications-Grading
Nilson, L. B., & Zimmerman, B. J. (2013). Creating self-regulated learners: Strategies to strengthen students' self-awareness and learning skills. Stylus Publishing.
Weimer, M. (2002). Learner-centered teaching: Five key changes to practice. John Wiley & Sons.
12
THREE-YEAR EXPERIENCE WITH
ADAPTIVE LEARNING
Faculty and student perspectives
Yanzhu Wu and Andrea Leonard
Introduction
As more students choose online learning, institutions of higher education continuously seek ways to ensure online students' success in their academic endeavors, not just by offering flexible, marketable degrees but also by implementing feasible pedagogical approaches and tools that provide individualized learning. Researchers have indicated that individualized learning is more promising for student performance because instruction is adapted to meet individual students' unique needs (Coffin Murray & Pérez, 2015; Kerr, 2015). In order to optimize individualized instruction in online learning, educators have made a concerted effort to develop an effective solution. As a result, a growing number of institutions in higher education have adopted viable adaptive learning systems to deliver the tailored learning experiences that support student success both at a distance and at scale (Dziuban et al., 2018; Mirata et al., 2020; Kleisch et al., 2017; Redmon et al., 2021).
The concept of adaptive learning is not new in the educational arena; it has been around for many decades. It refers to an instructional method in which students are provided with learning materials and activities that specifically address their personal learning needs. What is new, though, is the set of emerging adaptive technology advances aligned to this concept, which opens up a whole new set of possibilities for transforming one-size-fits-all learning into an individualized learning experience through the use of
cutting-edge artificial intelligence systems, the collection and analysis of
individual learning data, and the provision of appropriate interventions
(Howlin & Lynch, 2014a). As students progress through the course, the system continually adjusts the content and practice they see based on their demonstrated performance.
Methods
Research design
This study employed a mixed methods approach using multiple data sources
in which quantitative data collection was followed by qualitative data col-
lection. The design was chosen to meet the purpose of the study, namely to
determine the views of faculty and students regarding their experience with
adaptive courses in the Realizeit platform. Analyses of the quantitative infor-
mation obtained from the students and faculty were conducted separately,
while qualitative information was analyzed collectively.
Data analysis
The quantitative data were analyzed using Excel, and descriptive statistics (means and percentages) were produced. The analysis of faculty interviews was based on the recorded sessions. The qualitative data were analyzed using an inductive approach (Creswell, 2008). After each interview, the researcher transcribed the recording, verbatim, into a Word document. Each faculty member was assigned a number and was referred to by that number, never by name, in the coding sheets. No identifying information (not even the faculty member's department, course name, or academic rank) appeared anywhere on the typed notes from the interviews.
The initial coding of the interview transcriptions began with color coding used to identify keywords and phrases from each interview. The researchers independently read the interview transcriptions and coded all five interviews. Once initial coding was complete, the researchers compared and grouped codes and generated emerging themes for further analysis. The inter-rater reliability between the two coders was 96%. The emerging themes are further elucidated in the findings and discussion.
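If the 96% figure reported above is a simple percent agreement (the most common such measure for two coders), it reduces to counting matching codes across the segments both researchers coded. The two code lists below are hypothetical, shown only to make the calculation explicit; a chance-corrected statistic such as Cohen's kappa would be a more conservative check.

    # Hypothetical codes assigned by the two researchers to the same interview segments.
    coder_1 = ["efficiency", "monitoring", "feedback", "workload", "feedback", "support"]
    coder_2 = ["efficiency", "monitoring", "feedback", "workload", "remediation", "support"]

    matches = sum(a == b for a, b in zip(coder_1, coder_2))
    agreement = matches / len(coder_1)
    print(f"inter-rater agreement: {agreement:.0%}")  # 83% for this toy example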
Findings
A total of 173 out of 1,049 undergraduate students who had enrolled in an adaptive course during 2018–2021 responded to the student survey, and five out of ten faculty members who had designed or taught at least one adaptive course during the same time period completed the faculty survey and volunteered to participate in the follow-up interview. All student participants had completed at least one online or hybrid course, and the majority of students (84.4%) had taken between three and ten online/hybrid courses prior to their adaptive courses. The faculty participants' teaching experience in higher education ranged from six to nineteen years, and their online teaching experience at UL Lafayette ranged from two to ten years. Table 12.1 contains a summary of participant demographics.
(Likert scale: 1 = Not at all beneficial to 4 = Very beneficial; n = 65)
Item                                                                    M      SD
"What you should do next" block in the Realizeit dashboard              3.29   0.99
Ability to visualize learning progress with the Learning Map            3.69   0.75
Capability to make choices in the lesson map                            3.48   0.81
Capability to remediate and repeat lessons for continuous improvement   4.19   1.04
Ease of access to learning materials                                    3.66   0.69
Immediate feedback on activities and assessments                        3.75   0.64
Assessment question flagging                                            3.36   0.98
Guided solutions for each question                                      3.51   0.95
Question hints                                                          3.15   1.09
TABLE 12.4 Student perceptions of engagement in adaptive courses compared with other modalities (% of respondents; n = 65)
                     Face-to-face   Hybrid   Typical online
More engaging        24.62          12.31    30.77
Equally engaging     55.38          75.39    55.39
Less engaging        20.00          12.30    13.85
Students identified four features of their adaptive courses as more beneficial for meeting their needs: timely feedback on assessments (40%, n = 71), opportunities for repeated practice (35%, n = 60), retention of information (32%, n = 54), and focused remediation (35%, n = 62) that allows students to move quickly through content they already know. Fifty-nine students (80.7%, n = 65) indicated that they are somewhat or highly motivated to continue improving their composite score for each lesson through repeated practice.
The survey results also reveal that most students found adaptive courses equally or more engaging compared with other modalities, including face-to-face (80%), hybrid (87.7%), and typical online courses (86%), whereas around 30% found adaptive courses more engaging than typical online courses (Table 12.4).
Features of Realizeit influencing teaching effectiveness. Thirteen Likert-like scale items were used to examine faculty perceptions of the adaptive features of the Realizeit platform. Items were assessed on a four-point scale (1 = not at all beneficial to 4 = very beneficial).
(Likert scale: 1 = Not at all beneficial to 4 = Very beneficial; n = 5)
Item                                                             M      SD
Increasing teaching efficiency                                   3.80   0.44
Monitoring student progress                                      4.00   0.00
Quantifying how well students have learned each concept/skill    3.60   0.54
Identifying meaningful patterns in student learning behavior     3.40   0.54
Evaluating the effectiveness of course design                    3.80   0.44
Addressing student questions                                     3.40   0.54
Identifying students in need of support                          4.00   0.00
Allowing students to make choices in the Lesson Map              3.60   0.54
Providing students the ability to practice and revise lessons    3.80   0.44
Visualizing learning progress                                    3.80   0.44
Providing students with easy access to learning materials        3.40   0.54
Providing immediate feedback on activities and assessments       3.80   0.44
Engaging students with learning content                          3.60   0.54
TABLE 12.6
Student Performance Data (Course names have been changed to
Course A, B, C, and D)
the question pool, and I found some test questions and answers from my
course had been made available on the Internet.”
Second, three faculty members brought up the issue that the Realizeit
notification system hinders faculty from identifying and addressing stu-
dents’ issues on a timely basis because it does not generate email alerts.
This leads students to believe that they are not getting just-in-time support
from their faculty. As F4 stated, “when students sending a message in
Realizeit, I don't get any notification, I had to actively looking to Realizeit
to check students’ messages.”
Third, four faculty members raised an issue about the use of emojis, a state framework (Howlin & Gubbins, 2018) integrated into the Realizeit platform to capture students' feedback on a regular basis without interrupting the learning process. With this feature, the intention was
to provide students a means of expressing and updating their emotional
states throughout the course, to help faculty to visualize how students feel
about their learning, and to ensure that students receive the appropriate
assistance at the right time. However, faculty have observed that students
chose their emotional states at the beginning of the course and seldom
update them as the course progresses. This limits the intended effective-
ness of the state framework.
Positive aspect of teaching adaptive courses. Four faculty members
believe that Realizeit is best suited for promoting formative learning.
The option to repeat practice provides students with an effective method
for consolidating what they have learned and brings them motivation
for continuous improvement. They found that students typically prac-
tice until they reach the Exemplary band, despite the fact that receiving
the Emerging band permits them to access the next sequential lessons.
Faculty commented that repeating practice provides students a bridge over
the gaps that exist between different achievement levels.
All faculty members agreed that Realizeit's learning analytics, integrated into the user dashboard, provided a clear visual summary of individual students' performance data. This made it easy to monitor engagement and achievement, as well as overall learning progress across different modules and learning objectives. F2 stated, "I always start by going to the Learning Map to check how many students have completed their activities, and I go to the Need-to-Know section to check if there are any students that had any specific problems" (see Figure 12.4). Visual indicators, such as red-colored circles, helped them quickly identify students in need of support or intervention. This is impossible to do in online courses hosted in a typical learning management system. Faculty members also indicated that they were able to identify relevant patterns of learning and anticipate future engagement.
Discussion
In this study, the researchers sought to understand the experiences that
faculty members had throughout the creation and teaching process of
adaptive courses in Realizeit, as well as students’ experiences in taking
these adaptive courses.
The findings of this study indicate that both faculty and students had overall positive experiences using Realizeit. All participants in this research found the platform easy to use and the dashboard intuitive, and they expressed satisfaction with repeated and individualized practice, remediation, real-time feedback, and the visualization of learning progress.
Design
Regarding the adaptive course design, all the courses awarded less than
40% of the course grade to the adaptive learning activities, with the
majority of them valuing adaptive activities at less than 20% of the grade.
Apparently, the use of adaptive activities was intended to increase student
engagement with learning content to better prepare students for summa-
tive assessments. Two faculty members used adaptive activities as part of
the flipped classroom model in their hybrid courses. They had students
finish the lower level of cognitive work through the adaptive learning
approach, and then they used active interactions during class to engage
students in higher cognitive learning with peers.
Faculty members indicated that building an adaptive course was time-
consuming, which was consistent with the findings of earlier research
(Cavanagh et al., 2020; Chen et al., 2017). When it comes to making
Realizeit provide students with a fully individualized learning experience,
faculty members should present a wide range of learning materials and
a large pool of assessment questions with varying degrees of difficulty
(Chen et al., 2017).
To avoid extra costs associated with taking an adaptive course, all of the courses replaced textbook and publisher content with open educational resources (OER) for both the primary course materials and the assessment questions. Some faculty members shared that, because they lacked prior understanding of OER, it was quite a challenge to locate OER, adapt it to align with learning objectives and the needs of their courses, and use it responsibly with proper permission. They also noticed that most students did not have any issues with OER as their primary assigned learning materials, but some students preferred to have a textbook in addition to the OER.
Delivery
Adaptive technology revolutionized the way faculty facilitate student
learning. All faculty members found that having access to a matrix of
learning analytics was beneficial for quickly identifying patterns of stu-
dent engagement, efficiently tracking individual student participation,
class performance, and spotting potential problems. Faculty highlighted
the importance of attributes of engagement data, which included the visu-
alization of student learning progress, time spent on each activity, color-
coded knowledge mastery levels, individual scores, and the list of student
difficulties that needed faculty attention.
When teaching an adaptive course, faculty are certain that adaptive
technology promotes individualized learning by providing students with
the opportunity for continuous improvement on specific learning gaps
by going through the relearning and retesting process. However, when
faculty were asked to share challenges in teaching an adaptive course,
they mentioned two issues. First was a concern regarding the possibility
of academic dishonesty occurring when taking the formative assessments
repeatedly. Paid proctoring services are not feasible considering the volume
of formative assessments in each adaptive course. Furthermore, without
proctoring, it seems to be impossible to prevent academic dishonesty and
correctly measure student performance (Weiner & Morrison, 2009). In
an attempt to reduce the likelihood of academic dishonesty, faculty used
the following practices: the inclusion of variables in assessment questions
to prevent the repeating of identical questions, the use of a large question
pool for each lesson to diversify the questions seen, and the reduction of
the impact of the adaptive learning grade on the overall course grade to
disincentivize cheating. Some researchers also recommended the consid-
eration of timing the adaptive assessments and limiting the number of
attempts (Miller et al., 2019).
The second issue concerned dissatisfaction with two communication tools in Realizeit. Some faculty members reported that the notification system and the emotional expression tool were not functioning as designed. The notification system does not send an email alert when students send messages to faculty in Realizeit, which prevents faculty from responding in a timely manner.
Support
All faculty participants overwhelmingly agreed that the assistance offered
by instructional designers is essential to the success of creating and deliv-
ering adaptive courses. They valued the assistance they received prior to,
during, and after the course design. Before course design begins, instruc-
tional designers meet with faculty to explain the Realizeit approach,
review the three-step process of adaptive course design shown in Figure
12.3, determine the course components that can be best supported with
adaptive learning, and set the design timeline. The active design process is
streamlined with the use of instructional designer-provided templates that
faculty fill in with course information such as learning objectives, course
structure, learning materials, and assessment questions. When the design
is complete and the course is launched, instructional designers continue to
provide support for faculty and students with respect to technical prob-
lems and continuous course improvement.
Some faculty members also provided insight into areas of concern that
administrators should consider addressing, including increasing aware-
ness of the benefits of adaptive learning and providing the necessary sup-
port to faculty members who wish to implement this innovative approach.
Conclusion
Today, the number of online course offerings has grown exponentially, and, as a result of the pandemic, the number of students learning online, fully or partially, has increased dramatically (Lederman, 2021). Given that students have different learning styles and different prior knowledge, the paradigm of online education has been shifting from passive to adaptive learning, and a revolution in adaptive learning is presently underway, powered by an explosion of robust and cost-effective adaptive technology solutions and resources. The findings of this study contributed to the
body of knowledge about student and faculty perspectives and experi-
ences using adaptive technology, especially the Realizeit platform. It also
discussed faculty practices and challenges with the design and delivery of
an adaptive course, as well as the crucial relationship between faculty and
instructional designers in this process. Hopefully, the results of this study
will better assist faculty, instructional designers, and administrators in
understanding the requirements and rewards of adaptive courses.
References
Carstairs, J., & Myors, B. (2009). Internet testing: A natural experiment reveals
test score inflation on a high-stakes, unproctored cognitive test. Computers in
Human Behavior, 25(3), 738–742.
Cavanagh, T., Chen, B., Lahcen, R., & Paradiso, J. (2020). Constructing a
design framework and pedagogical approach for adaptive learning in higher
education: A practitioner’s perspective. International Review of Research in
Open and Distributed Learning, 21(1), 173–197.
Chen, B., Bastedo, K., Kirkley, D., Stull, C., & Tojo, J. (2017). Designing personalized adaptive learning courses at the University of Central Florida. ELI Brief. https://library.educause.edu/resources/2017/8/designing-personalized-adaptive-learning-courses-at-the-university-of-central-florida
Coffin Murray, M., & Pérez, J. (2015). Informing and performing: A study
comparing adaptive learning to traditional learning. Informing Science, 18,
111–125.
Creswell, J. W. (2008). Research design: Qualitative, quantitative, and mixed
methods approaches. Sage.
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Eid, M., & Kmetz, B. (2018).
Adaptive learning: Context and complexity. E-Mentor, 5(77), 13–23.
Howlin, C., & Lynch, D. (2014a, November 25–27). A framework for the delivery
of personalized adaptive content [Conference presentation]. 2014 International
Conference on Web and Open Access to Learning (ICWOAL), Dubai, United
Arab Emirates.
Howlin, C., & Lynch, D. (2014b, October 27–30). Learning and academic
analytics in the Realizeit system [Conference presentation]. E-Learn 2014,
New Orleans, LA, United States.
Howlin, C., & Gubbins, E. (2018). Self-reported student affect. https://lab.realizeitlearning.com/features/2018/09/19/Affective-State/
Kerr, P. (2015). Adaptive learning. ELT Journal, 70(1), 88–93.
Kleisch, E., Sloan, A., & Melvin, E. (2017). Using a faculty training and development model to prepare faculty to facilitate an adaptive learning online classroom designed for adult learners. Journal of Higher Education Theory and Practice, 17(7). https://articlegateway.com/index.php/JHETP/article/view/1470
Lederman, D. (2021, September 16). Detailing last fall’s online enrollment surge.
Inside Higher Education.
Miller, L. A., Asarta, C. J., & Schmidt, J. R. (2019). Completion deadlines,
adaptive learning assignments, and student performance. Journal of Education
for Business, 94(3), 185–194.
Mirata, V., Hirt, F., Bergamin, P., & Westhuizen, C. (2020). Challenges and contexts in establishing adaptive learning in higher education: Findings from a Delphi study. International Journal of Educational Technology in Higher Education, 17(32). https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-020-00209-y
Redmon, M., Wyatt, S., & Stull, C. (2021). Using personalized adaptive learning
to promote industry-specific language skills in support of Spanish internship
students. Global Business Languages, 21, 92–112.
Schultz, M. C., Schultz, J. T., & Gallogly, J. (2007). The management of testing
in distance learning environments. Journal of College Teaching and Learning,
4(9), 19–26.
Sher, A. (2009). Assessing the relationship of student-instructor and student-
student interaction to student learning and satisfaction in web-based online
learning environment. Journal of Interactive Online Learning, 8, 102–120.
Tyton Partners. (2016). Learning to adapt 2.0: The evolution of adaptive learning in higher education. https://d1hzkn4d3dn6lg.cloudfront.net/production/uploads/2016/04/yton-Partners-Learning-to-Adapt-2.0-FINAL.pdf
Weiner, J., & Morrison, J. (2009). Unproctored online testing: Environmental
conditions and validity. Industrial and Organizational Psychology, 2(1),
27–30.
Wu, J., Tennyson, R. D., & Hsia, T. (2010). A study of student satisfaction in
a blended e-learning system environment. Computers and Education, 55(1),
155–164.
13
ANALYZING QUESTION ITEMS
WITH LIMITED DATA
James Bennett, Kitty Kautzer, and Leila Casteel
Introduction
Few who are familiar with robust adaptive learning systems will dismiss
their effectiveness in both providing instruction and assessing student
achievement of specific learning objectives. While a powerful teaching
tool, adaptive learning systems can also require a great deal of effort to
ensure that accurate measurement of learning is taking place. Refinement
of question items can take a number of iterations and depend upon large
sample sizes before the educator is assured of their accuracy. This process
can take a long time when courses run intermittently or with only small
cohorts.
In fact, some courses may never reach a large-enough sample size to
satisfy the recommended requirements of conventional statistics. This can
be especially concerning when dealing with learning content that is criti-
cal for student mastery and where the stakes may be high enough that the
accuracy of adaptive learning assessments must be immediately reliable.
In situations such as these, educators do not have the luxury of waiting until enough students have passed through the course to produce a large enough sample size.
For some time now, we have undertaken to develop tools and processes that allow for quick assessment of adaptive learning question items without large sample sizes or numbers of iterations. The goal is to be able to spot question items for possible refinement using conditional probability. We have understood from the beginning that the best methods require good sample sizes of data, but we also know that our learners' interests are better served if we quickly identify question items that may need refinement and move them forward in the analysis process. In other words, it is better to spend the extra time analyzing a handful of questions that might prove to be accurate than to wait several semesters to narrow the analysis down to fewer questions.
Conditional probability
Conditional probability is defined as a measure of the probability of an
event, given that another event or condition has already occurred (Gut,
2013). The primary focus of conditional probability is to identify a probable outcome based on statistical analysis; in this case, it is the probability that some part of a question item is proving problematic for learners. The approach we have been developing uses this measure to quickly identify question items that may be problematic; whether or not they are is determined by further analysis.
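Stated formally, the quantity of interest is a simple conditional probability. The following is a minimal formalization of the idea; the event labels "refine" and "flagged" are shorthand introduced here for illustration, and the 0.30 flagging threshold is the one adopted later in this chapter.

```latex
% Conditional probability of event A given event B, defined for P(B) > 0:
P(A \mid B) = \frac{P(A \cap B)}{P(B)}

% Applied to question-item review, where "refine" means the item warrants
% refinement and "flagged" means the item produced a Difficulty Index of
% 0.30 or lower in at least one cohort:
P(\text{refine} \mid \text{flagged})
  = \frac{P(\text{refine} \cap \text{flagged})}{P(\text{flagged})}
```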
Background
This study is a logical extension of our earlier work published by Springer
(2021). After we had arrived at a method for question item analysis, our next
challenge was to explore the use of the same method under less-than-ideal
conditions (e.g., limited number of sections, small cohorts, etc.). We think
the details of the original method applied to question item analysis are just
as important to educators working in adaptive learning platforms as the
new findings and that they are closely tied together. In order to share those
previous findings as well as the new ones, what follows is a brief description
of each of the principles we use in our question evaluations. Also, at the end
of this chapter we include a table that presents the entire question analysis
system in the form of a tool—with step-by-step procedures.
Establishing four principles for evaluating adaptive learning question items was the major focus of our work for the Human–
Computer Interaction conference. What follows is an abbreviated descrip-
tion and explanation of each of the four categories as presented in the
earlier work.
Validity. Validity is the primary measurement of a question item’s value
and should always be the first criterion for analyzing any question item. The focus is on whether or not an assessment captures and measures what is intended to be assessed. Unfortunately, this critical concern is often subject to misalignment in many learning assessments. Too often, question items are written in a way that focuses on a learner's ability to recall or associate terms rather than measuring learning and competency levels. An
example of misaligned assessment would be a question that asks learners
to select from a list of answers that contain parts of the human anatomy,
but what needs to be measured is the learner’s understanding of organ
functions.
There are two primary checks to determine validity for an adaptive
learning question item. The first is to make certain that the adaptive learn-
ing content supports the question item. In other words, will the learner be
able to answer the question item correctly based on the information they
have consumed?
The second check is to make certain the question item aligns with a
specific course outcome, learning objective, or required competency and
then evaluate whether the assessment correctly measures a learner’s per-
formance in that area. This can be done by using Bloom's Taxonomy
(Bloom, 1956) to compare the cognitive domain of what is being meas-
ured to what is being assessed in a question item.
An example of a valid assessment that aligns with the proper Bloom’s
cognitive domain would be a question that asks a learner to describe and
explain a concept. In this case the assessment would be measuring under-
standing. If understanding is what the assessment was intended to meas-
ure, then the assessment is likely valid.
Reliability. A question item’s reliability depends on several factors.
Does it perform consistently and do the scores demonstrate the assess-
ment is measuring student learning more accurately than mere chance?
Also, do the conditions and the context of the assessment interfere with
learning measurement?
An example of a test item that would be unreliable by its context would
be a question that gave away its own answer in its wording or gave hints
toward the solution of another question in the same exam.
The method we use to begin to determine a question item’s reliability
is by examining the Difficulty Index of the answers provided by the learn-
ers. The Difficulty Index (sometimes referred to as the P value) is obtained by dividing the number of learners who answered the item correctly by the total number of learners who attempted it.
Methodology
For some time, we have used the Difficulty Index as a first-pass indicator when performing assessment analysis. The Difficulty Index is easy to obtain and is used as an identifier by psychometricians. The problem with relying on the Difficulty Index as a question item review trigger is that small sample sizes can artificially lower the value. How much lower, and how different the value is from the same question item presented to a larger cohort, is determined by a complex series of factors that are difficult to track. What we needed to know was whether we could still depend on low Difficulty Index scores as a trigger for further
question item analysis. We determined that to accomplish this we would
need a good sample of small cohorts answering the same questions. This
would give us a series of comparative Difficulty Index scores for each item,
which in turn could be used to determine a conditional probability that an
item needed refinement.
The methodology for this research used a simple approach that was
applied in actual question item analysis. We wanted to learn whether the tendency toward low Difficulty Index values in small cohorts was still within practical levels to use as a first indicator for question item review.
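As a concrete sketch of this first-pass screen, the code below computes a Difficulty Index (the proportion of correct responses) for each question item and flags items at or below the 0.30 threshold. The function names, item identifiers, and response counts are illustrative assumptions rather than data or tooling from the study.

```python
def difficulty_index(correct: int, attempted: int) -> float:
    """Proportion of learners who answered the item correctly (the P value)."""
    if attempted == 0:
        raise ValueError("No responses recorded for this item.")
    return correct / attempted


def flag_for_review(item_counts: dict[str, tuple[int, int]],
                    threshold: float = 0.30) -> list[str]:
    """Return the ids of items whose Difficulty Index is at or below the threshold."""
    flagged = []
    for item_id, (correct, attempted) in item_counts.items():
        if difficulty_index(correct, attempted) <= threshold:
            flagged.append(item_id)
    return flagged


# Hypothetical counts for one small cohort: (correct answers, total attempts).
cohort_counts = {"Q1": (4, 15), "Q2": (11, 15), "Q3": (3, 14)}
print(flag_for_review(cohort_counts))   # ['Q1', 'Q3']
```

Items flagged this way are not assumed to be faulty; they are simply moved forward into the fuller analysis described by the four principles.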
Sample size
Our sample size consisted of 108 learners in 6 cohorts with an average of
15 students per cohort (most cohorts ranged in size from 7 to 20 with one
outlier having 26 students). These cohorts were given a total of 7 knowl-
edge check quizzes with 25 question items each (150 questions total).
We calculated the Difficulty Index for each question item and compared that number across all cohorts for the same quiz. As an extra measure, we also collected data on the number of question items that needed revision even though at least one cohort produced a Difficulty Index above 0.30 for them. While this information fell outside what we needed for the conditional probability research, we wanted to know whether small cohorts had any tendency to produce artificially high Difficulty Index values, which would cause problem items to be missed if only a single cohort were available for review.
Study data
Table 13.1 presents the data collected and combined from each of the
small cohorts. Each row contains the combined data for a quiz delivered
to all of the cohorts.
TABLE 13.1 Combined data from the small cohorts for each knowledge check quiz

Quiz                 Items with a Difficulty   Items warranting    Items needing revision     Conditional
                     Index of 0.30 or lower    refinement after    with a cohort Difficulty   probability
                     (any cohort)              analysis            Index above 0.30           (%)
Knowledge Check A             6                       2                      1                    33
Knowledge Check B             8                       2                      2                    25
Knowledge Check C             7                       2                      0                    28
Knowledge Check D             7                       3                      1                    42
Knowledge Check E            11                       7                      3                    63
Knowledge Check F             8                       2                      1                    25
Knowledge Check G            13                       2                      2                    15
Average conditional probability = 32
Note: The column on the right displays the conditional probability that any given question
item from the quiz, with a Difficulty Index of 0.30 or lower, may require refinement.
• Total items with any Difficulty Index of 0.30 or lower—the total number of question items that produced a Difficulty Index of 0.30 or lower in a single quiz. Even if only a single cohort had a number that low, the question item was set aside for review.
• Total items in quiz that warranted refinement after analysis—the total number of question items from each quiz that were determined to need refinement after analysis using the four principles to evaluate assessments.
• Number of question items that needed revision yet had at least one cohort with a Difficulty Index above 0.30—a special category of data collected during the research but not needed to make the conditional probability calculations (a computational sketch of that calculation follows this list).
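The right-hand column of Table 13.1 can be reproduced directly from the first two count columns. The sketch below performs that calculation; the function name is ours, and rounding conventions may differ slightly from the whole-number values reported in the table.

```python
def refinement_probability(flagged_items: int, refined_items: int) -> float:
    """Conditional probability (as a percentage) that a flagged item needs refinement."""
    if flagged_items == 0:
        return 0.0
    return 100 * refined_items / flagged_items


# (items flagged at a Difficulty Index of 0.30 or lower, items that warranted
# refinement after analysis), per quiz, taken from Table 13.1.
quizzes = {
    "Knowledge Check A": (6, 2),
    "Knowledge Check B": (8, 2),
    "Knowledge Check C": (7, 2),
    "Knowledge Check D": (7, 3),
    "Knowledge Check E": (11, 7),
    "Knowledge Check F": (8, 2),
    "Knowledge Check G": (13, 2),
}

for name, (flagged, refined) in quizzes.items():
    print(f"{name}: {refinement_probability(flagged, refined):.1f}%")
```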
Results
The results from the study (see Table 13.2) show some trends that can be
useful knowledge when it comes to analyzing question items with small
cohorts in an adaptive learning system. Each of these trends and a few
anomalies are noted in the following bullets.
TABLE 13.2 Combined data with the two outlier quizzes (Knowledge Checks E and G) removed

Quiz                 Items with a Difficulty   Items warranting    Items needing revision     Conditional
                     Index of 0.30 or lower    refinement after    with a cohort Difficulty   probability
                     (any cohort)              analysis            Index above 0.30           (%)
Knowledge Check A             6                       2                      1                    33
Knowledge Check B             8                       2                      2                    25
Knowledge Check C             7                       2                      0                    28
Knowledge Check D             7                       3                      1                    42
Knowledge Check F             8                       2                      1                    25
Average conditional probability = 30.6
Knowledge Checks E and G represented the extremes of all the data sets. Since the purpose of this study was to test the use of the Difficulty Index as a predictive indicator in question item analysis in small cohorts, we did not look for exemptions (e.g., question topics that were more difficult, overall student performance, etc.). This study focuses on the use of simple statistics.
It should be noted that if these two outliers were removed from the study a
tighter grouping of each data point would be present, but the conditional
probability does not change significantly.
Validity

Does the item truly assess the knowledge or skill assigned to it? (e.g.,
Bloom's verb, practical performance, etc.)
    Yes: Item appears valid.
    No: Further review required.

Does the item assess prerequisite or necessary supporting knowledge/skill?
    Yes: Item may be valid as part of the body of knowledge for the subject domain.
    No: Item does not appear valid.

Are learners presented with content required to learn this concept?
    Yes: Further review required. The assessment may simply be beyond
    appropriate expectations for learners at this level.
    No: Ensure content is presented to learner and review assessment again
    with new data.
Instructor/Instructional Designer: If the item appears valid, the instructor may use this
information as augmented knowledge concerning learner performance, and adjust learner
activity accordingly (e.g., review material, etc.). If the item’s validity is suspect, it should be
corrected as a learning object so that it is valid.
Reliability
Standardization
Bias
References
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for teaching, learning, and assessing: A revision of Bloom's taxonomy of educational objectives. Longman.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Longmans, Green.
14
WHEN ADAPTIVITY AND UNIVERSAL DESIGN FOR LEARNING ARE NOT ENOUGH
Catherine A. Manly
DOI: 10.4324/9781003244271-17
This chapter examines student use of multiple content modalities (i.e., text, video, audio, interactive, or mixed content), a key UDL component (Manly, 2022). Within this framing, I illustrate how analyzing
within-course data may help make recommendations to students.
Institutions increasingly use predictive analytics to inform feedback
to students, often through vendor-driven systems involving proprietary
algorithms with unknown characteristics. Prescriptive analytics extends
the predictive analytics approach to include modeling and simulating
alternative possibilities to investigate optimal decisions while accounting
for uncertainty (Frazzetto et al., 2019). Analytics have received increasing attention from researchers, informing the individual-centered approach investigated here (Dawson et al., 2019). While predictive analytics has become
increasingly important within higher education, “analytics for the pur-
poses of improving student learning outcomes … remain sparsely used in
higher education due to a lack of vision, strategy, planning, and capacity”
(Gagliardi, 2018). Institutional mechanisms to use predictive capability
often remain nascent, of limited scope, or in early development, reflect-
ing the analytics field generally (Dawson et al., 2014). The novel analytics
approach illustrated here aims to let students know when tutoring might
be beneficial, augmenting assistance from universally designed course
elements.
The example features an introductory undergraduate English course
where system logs captured students’ learning actions in both adaptive and
traditional learning management systems (LMSs), along with information
about online tutoring received. While this online course includes addi-
tional collaborative components intended to foster a learning community
within the class (Picciano, 2017), such as discussion forums, this study
focuses on the course’s self-paced adaptive content and LMS assignment
grades, as well as optional, individualized, interactive one-on-one tutor-
ing, supplementing the class learning community. While UDL includes
numerous practice guidelines, the focal element here included presenting
options for perception by offering content through multiple modalities. I
posit identifying patterns in students’ use of multiple modalities and tutor-
ing utilization should offer insight into where students struggle to learn
course material, and thus where future students may benefit from seeking
additional support.
As a proof-of-concept, the investigation conducted illustrates the type
of analysis that could offer predictive suggestions based on past data
about modality switches and tutoring. As such, the illustration presents
preliminary results more argumentatively than analytically. Through this
example I argue that institutions should think creatively and expansively
about using the wealth of student online learning data increasingly aggre-
gated in institutional data warehouses to assist struggling students. Using
educational design can benefit such students anyway. Given the widening
range of abilities, needs, and ways of knowing that aspiring students bring
to higher education, designing courses while viewing a broad range of
abilities and experiences as normally expected becomes imperative.
Issues of learning accessibility and variation have found expression
in several related theoretical frameworks, including UDL (CAST, 2014),
Universal Instructional Design (UID; Silver et al., 1998), and Universal
Design for Instruction (UDI; Scott et al., 2003), among others (McGuire,
2014). UDL has been intimately connected to educational technology sup-
port for learning because technology facilitates complying with UDL prin-
ciples (Mangiatordi & Serenelli, 2013). Despite enough interest to generate
a variety of alternatives, such universal design frameworks still struggle
to gain acceptance in academic culture (Archambault, 2016) and remain
understudied in postsecondary education (Rao et al., 2014). Given wide-
spread interest in universal design, including implementation guidelines
and many arguments calling for its adoption (Burgstahler, 2015; CAST,
2014), the lack of research is surprising.
Within UDL research, it remains unusual to study student learning
outcomes. Subjective perceptions have been heavily researched, typically
of faculty (e.g., Ben-Moshe et al., 2005; Lombardi & Murray, 2011) and
occasionally of students (Higbee et al., 2008) or employees (Parker et al.,
2003). In one study, graduate students recognized the benefits of hav-
ing content provided in multiple formats, and to a lesser extent, reported
using them (Fidalgo & Thormann, 2017). Additionally, Webb and Hoover
(2015) conducted one of very few studies of UDL principles expressly
investigating multiple content representations, aiming to improve library
tutorial instruction. However, effects in classroom settings remain
understudied.
Interestingly for this study, faculty UDL training appears to matter for
student perception regarding content presentation. Content presentation
would be covered in typical UDL training and subsequent evaluation, and
some studies break down subtopics, allowing understanding of content
presentation within the larger study’s context. One such study of over
1,000 students surveyed before and after professors received 5 hours of
UDL training indicated improvements in areas such as faculty providing
material in multiple formats (Schelly et al., 2011). A follow-up study examining treatment and control groups and using more detailed questions about UDL
practices also found improvements perceived by students after 5 hours
of faculty UDL training, again including offering materials in multiple
formats (Davies et al., 2013). However, without baseline condition evalu-
ation, pre-existing differences in faculty knowledge and practice may con-
found these results.
Data
Attending a Northeast women-only university, the undergraduates studied
were typically nontraditional age, juggling family and work with school, a
group higher education traditionally underserves. All sections of one basic
English course (called ENG1 here) from the 2018/2019 academic year
were analyzed. This course combined an LMS for discussion, assignments
and grades, with an adaptive system including content and mastery-level
learning assessments.
The six-week accelerated course contained two to nine learning activi-
ties per week designed to take about twenty minutes each. An articu-
lated knowledge map with explicit connections between content activities
made modeling structural connections between them possible. Analysis
included the full two-activity sequence of adaptive learning activities in
week one. The student had a known knowledge state score upon enter-
ing each activity, which was updated after a brief assessment upon
completion.
In an approach consistent with universal design principles around pro-
viding alternatives for perception, the adaptive system encouraged stu-
dents who showed signs of struggling to understand the material to pursue
paths along alternate modalities until a path guided them to successful
content mastery. The system log recorded each path traversed with an
identifier, timestamp, and duration, along with the modality used each
time the student went through an activity’s material. Additionally, any
student could use other modalities by choosing to repeat the activity even
if she was not struggling. Even though adaptive systems can make content
available in multiple modes when such content is designed into them, this
possibility has not been a focal point of prior research investigating pres-
entation mode (Mustafa & Sharif, 2011).
In addition to modality, information from tutor.com was available
within the institutional data warehouse. The LMS linked to tutor.com,
where each student had several hours of free tutoring. The data ware-
house contained the tutoring subject, start date/time, and session dura-
tion. These were combined with LMS (weekly grade outcome), adaptive
Variables
The outcome was the predicted probability of achieving either an A, B, or
C, versus a D or F, on first week course assignments and quizzes.
A binary indicator representing whether the student received tutor-
ing provided the simulated intervention focus. Model training employed
actual data for this variable, whereas the simulation set this variable to
yes (1) or no (0) depending on the analysis scenario, allowing potential
outcome prediction comparisons.
The Bayesian network variables also included use of multiple con-
tent modalities during an activity, activity repetition, and other covari-
ates. Multiple modality use was operationalized as student use of at least a second content modality (e.g., text, video, audio, interactive, or mixed) when learning material within the activity. A binary indicator denoted activity repetition if the student went through part or all of the activity two or more times. Demographic and prior educational independ-
ent conditioning variables included measures of race/ethnicity, age, Pell
grant status (a low-income proxy), and number of credits transferred upon
entry to the institution (a prior educational experience proxy).
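The binary indicators described above can be derived mechanically from the adaptive system's traversal log. The short sketch below shows one plausible way to do so with pandas; the column names and the toy log records are illustrative assumptions rather than the study's actual schema.

```python
import pandas as pd

# Illustrative traversal log: one row per pass through an activity. The study's
# adaptive system recorded an identifier, timestamp, duration, and modality for
# each traversal; only the fields needed for the indicators are mocked up here.
log = pd.DataFrame(
    {
        "student":  ["s1", "s1", "s1", "s2", "s2"],
        "activity": ["act1", "act1", "act2", "act1", "act2"],
        "modality": ["text", "video", "text", "text", "text"],
    }
)

per_activity = log.groupby(["student", "activity"]).agg(
    traversals=("modality", "size"),
    distinct_modalities=("modality", "nunique"),
)

# Binary indicators as described for the Bayesian network variables.
per_activity["repeated"] = (per_activity["traversals"] >= 2).astype(int)
per_activity["multiple_modalities"] = (per_activity["distinct_modalities"] >= 2).astype(int)
print(per_activity)
```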
Methods
Adaptive log data captured sequences of usage, and patterns of content
representation use were investigated visually. Descriptive plots showed
tutoring, modality information, and general activity information, with
the first two weeks shown here. Consistent with a student-focused learn-
ing analytics approach, heatmap plots with rows for each student allowed
visualization of all students’ cases simultaneously. Variables displayed
were the same as in the sequence plots. Heatmaps were clustered by similarity on both rows and columns to show patterns of the combinations of modality use and tutoring. They were created using R's ComplexHeatmap package with the default "complete" agglomeration method of clustering and Euclidean distance as the clustering distance metric (i.e., the straight-line distance between two points in the multidimensional space defined by the analysis variables).
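The clustered heatmaps described here were produced with R's ComplexHeatmap package. For readers working in Python, the sketch below builds a roughly comparable clustered heatmap with seaborn under the same linkage and distance settings; the indicator columns and the synthetic values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Synthetic per-student indicators; in the study these came from the adaptive
# system logs, the LMS, and tutor.com session records.
rng = np.random.default_rng(0)
students = [f"student_{i}" for i in range(30)]
data = pd.DataFrame(
    {
        "tutoring_minutes": rng.integers(0, 120, size=30),
        "multi_modality_ratio": rng.random(30),
        "activity_repetitions": rng.integers(1, 6, size=30),
        "adaptive_time_minutes": rng.integers(10, 200, size=30),
    },
    index=students,
)

# Cluster both rows (students) and columns (indicators) with complete linkage
# and Euclidean distance, mirroring the settings described for ComplexHeatmap.
grid = sns.clustermap(
    data,
    method="complete",
    metric="euclidean",
    standard_scale=1,  # rescale each column to 0-1 so indicators are comparable
    cmap="viridis",
)
grid.savefig("clustered_heatmap.png")
```

Rescaling each column before clustering keeps indicators measured in minutes from dominating the Euclidean distances.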
The analytical proof-of-concept was based on a Bayesian network
using a directed acyclic graph (DAG) to model assumed causal relation-
ships including modality use and tutoring (Pearl, 1995, 2009). The analy-
sis describes how a network of probabilities could be updated on the fly for
individual students during a course. The approach simulates a presumed causal effect of a tutoring intervention on the predicted probability of first-week success.
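The study's analysis rests on a Bayesian network over a directed acyclic graph. As a stripped-down sketch of the underlying idea, the code below compares the predicted probability of a first-week grade of C or better under the two simulated tutoring scenarios by adjusting over a single binary covariate. The column names, the toy records, and the one-covariate adjustment set are illustrative simplifications, not the study's actual model.

```python
import pandas as pd


def adjusted_success_prob(df: pd.DataFrame, tutoring: int) -> float:
    """Estimate P(success | do(tutoring)) by adjusting over the covariate 'pell'.

    Implements the back-door adjustment
        P(Y = 1 | do(T = t)) = sum_z P(Y = 1 | T = t, Z = z) * P(Z = z)
    for one discrete covariate; a Bayesian network generalizes the same
    calculation to a full graph of covariates.
    """
    total = 0.0
    for z, p_z in df["pell"].value_counts(normalize=True).items():
        stratum = df[(df["tutoring"] == tutoring) & (df["pell"] == z)]
        if len(stratum) == 0:
            continue  # no overlap in this stratum; a larger sample is needed
        total += stratum["success"].mean() * p_z
    return total


# Illustrative records: success = 1 for a first-week grade of A, B, or C.
records = pd.DataFrame(
    {
        "tutoring": [1, 1, 0, 0, 0, 1, 0, 0, 1, 0],
        "pell":     [1, 0, 1, 0, 1, 1, 0, 1, 0, 0],
        "success":  [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    }
)

with_tutoring = adjusted_success_prob(records, tutoring=1)
without_tutoring = adjusted_success_prob(records, tutoring=0)
print(f"simulated difference in predicted success: {with_tutoring - without_tutoring:+.2f}")
```

In a full Bayesian network, the same comparison is made by setting the tutoring node to each value in turn and propagating the change through the rest of the graph.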
Limitations
Those interested in applying this approach should be aware of several
considerations. While external validity is potentially limited because only
one course at one institution was studied, the intent was to illustrate the
type of analysis approach utilizing data from multiple campus systems
that could have wide applicability across institutions. To this end, several
important features of the context studied include the integration of data
within the data warehouse and the provision of tutoring via a service that
allowed warehouse inclusion. The amount of data analyzed here suffi-
ciently illustrated the process, but this work should be expanded with
additional data and piloted to explore operationalization and communi-
cating results to students.
Results
Figure 14.1 illustrates patterns of modality use and tutoring aggregated
across all course sections. The x-axis shows activity ordering from the
course’s knowledge map as instantiated in the adaptive system. Headings
distinguish the two first-week activities from the six second-week activi-
ties. Plot rows show the amount of time spent getting tutoring, the ratio of the number of times multiple modalities were used to the number of activity repetitions, the number of activity repetitions for each student, and the amount of time the student worked on the adaptive activity.
Many students used multiple modalities upon repeating the activity
and students frequently repeated activities. The amount of time spent
varied, but density of higher values did not always correspond to either
tutoring, modality use, or repetition. Notably, few students received tutor-
ing, presenting a limitation for this sample (Morgan & Winship, 2015).
However, enough students received tutoring during the second activity
to enable illustration of the technique with appropriate extrapolation. In
actual application, evaluation of sufficient overlap between treatment and
comparison cases should occur with a larger sample. As data continue to
FIGURE 14.1 Patterns of Modality Use and Tutoring, First Two Weeks for Full Sample
FIGURE 14.2 Clustered Heatmap of Adaptive Activity for Full Sample, Split by Week Assignment/Quiz Grade
receiving a D/F. Two students from each group in the test data were
selected to illustrate the intervention simulation.
I highlight several student groups visible from inspection of clustering
patterns in Figures 14.2 and 14.3 labeled by the numbered regions. In
both figures, Region 1 includes students who struggled the most, receiving
FIGURE 14.4 Kernel Density Plots of Tutoring Intervention Differences for Four Students
Discussion
This exploratory investigation of variation in modality use and tutoring
activity described clustered data patterns and illustrated how such infor-
mation might be used in a simulation that projects predictions within an
educational system. While some students used both multiple modalities
and tutoring, not all utilized both supports simultaneously. Most stu-
dents who received tutoring used multiple modalities, but not the most
frequently. The heaviest users of multiple modalities typically did not
spend the most time in the adaptive system, and vice versa. Most tutees
received weekly assignment grades of A, B, or C. Other groups’ outcomes
appear mixed, suggesting that additional tutoring may be beneficial for
some. Existence of noticeably different clusters of modalities and tutor-
ing warrants drilling into these patterns to identify combinations leading
to greater student learning at particular points in the course. Four such
groups stand out, including those who rely most heavily (a) on tutoring,
(b) on repeating activities using different modalities, and (c) on spending
time going through the adaptive system material slowly, as well as (d)
those who spend a moderate amount of time getting tutoring combined
with moderate use of multiple modalities.
Implications
The COVID-19 pandemic catapulted UDL design concerns into a broader
spotlight as faculty rushed to shift their pedagogy to emergency remote
delivery (Basham et al., 2020; Levey et al., 2021). The example analysis
presented offers one avenue to extend that changing practice to further
support students through UDL-informed instructional design decisions
(Burgstahler, 2021; Izzo, 2012). Learning behavior patterns can effec-
tively inform appropriate recommendations regarding student support
and also help instructors identify where to focus additional class time.
Bayesian network-informed approaches utilized in adaptive learning
are often either still experimental or developed behind proprietary sys-
tem paywalls (Kabudi et al., 2021). I argue that institutions could develop
similar capability or work with vendors willing to make the predictive
process transparent enough to enable augmented predictions utilizing
aggregated data across multiple systems. This illustration shows such an
approach might benefit students, particularly when an institution aggre-
gates data across different vendor systems in a data warehouse.
Adaptive learning literature has used learning style information to tai-
lor adaptivity (Khamparia & Pandey, 2020), even though the hypothesis that matching material to learning style preferences improves learning is considered a "neuromyth" (Betts et al., 2019). For example, prior experimental research
on adaptive learning across 5 days with 42 students suggested students
benefit from having the initially presented modality tailored to their learn-
ing style, although use of different modalities provided was not explicitly
studied (Mustafa & Sharif, 2011). For the present study, system admin-
istrators believe it likely that the adaptive system did not have sufficient
time to determine such preferences and change the default modality pre-
sented for many, if any, students studied. How to manually change that
default was not advertised, so the default initial presentation (typically
textual) was likely enabled for almost all students. Future research could
compare results before and after the adaptive system adjusted the initial
modality presented to verify the extent to which benefit comes from using
multiple modalities regardless of order.
Conclusion
This work augments prior analytics research by providing a proof-of-
concept that could be extended to investigate, predict, and present results
connecting UDL elements to student success.
Note
1 While acknowledging differing language choices when discussing people who
have disabilities (Association on Higher Education and Disability, 2021), I use
person-first language as a group signifier.
References
Abrams, H. G., & Jernigan, L. P. (1984). Academic support services and the
success of high-risk college students. American Educational Research Journal,
21(2), 261–274. https://doi.org/10.2307/1162443
Archambault, M. (2016). The diffusion of universal design for instruction in
post-secondary institutions [Ed.D. dissertation, University of Hartford].
https://search.proquest.com/pqdtglobal/docview/1783127057/abstract/363278AEC9FF4642PQ/64
Association on Higher Education and Disability. (2021). AHEAD statement on
language. https://www.ahead.org/professional-resources/accommodations/
statement-on-language
Barnard-Brak, L., Paton, V., & Sulak, T. (2012). The relationship of institutional
distance education goals and students’ requests for accommodations. Journal
of Postsecondary Education and Disability, 25(1), 5–19.
Basham, J. D., Blackorby, J., & Marino, M. T. (2020). Opportunity in crisis:
The role of universal design for learning in educational redesign. Learning
Disabilities: A Contemporary Journal, 18(1), 71–91.
Ben-Moshe, L., Cory, R. C., Feldbaum, M., & Sagendorf, K. (Eds.). (2005).
Building pedagogical curb cuts: Incorporating disability in the university
classroom and curriculum. Syracuse University Graduate School.
Betts, K., Miller, M., Tokuhama-Espinosa, T., Shewokis, P. A., Anderson, A., Borja,
C., Galoyan, T., Delaney, B., Eigenauer, J. D., & Dekker, S. (2019). International
report: Neuromyths and evidence-based practices in higher education. Online
Learning Consortium. https://www.academia.edu/41867738/International_Report_Neuromyths_and_Evidence_Based_Practices_in_Higher_Education
Silver, P., Bourke, A., & Strehorn, K. C. (1998). Universal instructional design
in higher education: An approach for inclusion. Equity and Excellence in
Education, 31(2), 47–51. https://doi.org/10.1080/1066568980310206
Snyder, T. D., de Bray, C., & Dillow, S. A. (2019). Digest of education statistics
2017 (NCES 2018-070). National Center for Education Statistics, Institute
of Education Sciences, U.S. Department of Education. https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2018070
Snyder, T. D., & Dillow, S. A. (2013). Digest of education statistics 2012 (NCES
2014-105). National Center for Education Statistics, Institute of Education
Sciences, U.S. Department of Education. https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2014015
Tinto, V. (2006). Research and practice of student retention: What next? Journal
of College Student Retention: Research, Theory and Practice, 8(1), 1–19.
https://doi.org/10.2190/4YNU-4TMB-22DJ-AN4W
Tobin, T. J., & Behling, K. (2018). Reach everyone, teach everyone: Universal
design for learning in higher education. West Virginia University Press.
Webb, K. K., & Hoover, J. (2015). Universal design for learning (UDL) in the
academic library: A methodology for mapping multiple means of representation
in library tutorials. College and Research Libraries, 76(4), 537–553. https://doi.org/10.5860/crl.76.4.537
Wessel, R. D., Jones, J. A., Markle, L., & Westfall, C. (2009). Retention and
graduation of students with disabilities: Facilitating student success. Journal
of Postsecondary Education and Disability, 21(3), 116–125.
Wladis, C., Hachey, A. C., & Conway, K. M. (2015). The representation of
minority, female, and non-traditional STEM majors in the online environment
at community colleges: A nationally representative study. Community College
Review, 43(1), 89–114. https://doi.org/10.1177/0091552114555904
SECTION IV
Organizational Transformation
15
SPRINT TO 2027
Corporate analytics in the digital age
Mark Jack Smith and Charles Dziuban
DOI: 10.4324/9781003244271-19
“The pace of change has never been this fast, yet it will never be this slow
again”—Justin Trudeau (Trudeau, 2018).
the enterprises’ workforce to reskill at a rapid pace and embrace the digi-
talization of work historically performed onsite. The process is predicted
to be rapid and pervasive. For instance, the World Economic Forum esti-
mates that by 2025, 85 million jobs may be displaced by machines, while
97 million new more technology-driven roles emerge (“The future of job
reports,” 2020). Therefore, approximately 40% of workers will require
additional training even though business leaders expect them to acquire
these upgraded skills on the job.
This is a case study of how one company is using analytics to accommodate the reskilling challenge for its workforce over a five-year horizon and of the strategy it is using to retrain its existing workforce by developing a new corporate paradigm. PGS (formerly Petroleum Geo-Services) is
an integrated marine geophysical company in the energy sector. Founded
in 1991, it acquires images of the marine subsurface through seismic
data collection using streamers towed behind purpose-built vessels. The
streamer spreads towed by these vessels are as long and wide as the island
of Manhattan. The data acquired are processed in onshore data centers
to produce images of the marine subsurface that may offer potential for
energy resource exploration. To accomplish this, PGS relies on a wide
variety of competencies from mechanics and maritime crew offshore to
geophysicists and data scientists onshore. However, the competencies
required of the workforce are evolving and changing rapidly as innovations
appear and PGS transitions into new business areas that include carbon
storage spaces below the subsurface and where its digital products will be
viable in sectors other than the fossil fuel industry. Figure 15.1 depicts the resulting reduction in headcount and vessels in operation between 2014 and 2021.
FIGURE 15.1 Headcount and Vessels in Operation, 2014–2021 (headcount declined approximately 60 percent)
Projecting to 2027
Starting with the premise that by 2027 processes in PGS will be mostly digital, offshore operations will often be managed remotely, and customer interaction will be more collaborative, management set out to understand the long-term aggregate skills required in sales, data acquisition, and data processing. The first step was to examine the technology drivers for recent changes, which are cloud-based data and increased bandwidth
offshore. The latter allows for the transfer of enormous amounts of data
from remote offshore sites into the cloud, allowing it to be immediately
utilized in a cooperative sharing of the imaging process with customers.
The impact of immediate data access for all stakeholders affects every role in the company. Sales, acquisition, and data processing will not only become more collaborative; data analysis speed will also increase, driven by the digitalization of both PGS' and its customers' business processes
(Whitfield, 2019).
Recognizing that collaboration will be pervasive but varied across the
business, PGS undertook a top-down review of skills needed in 2027.
They included being able to work together on innovative technology,
adaptability to change in groups, taking an interactive solutions-oriented
approach, increased business acumen to allow for empowerment to make
decisions lower down in the organization, and most importantly the abil-
ity to communicate the value of data. These skills inform every role in the
company and require a mindset shift at every organizational level. This
fundamental change is seen as being primarily directional at the aggre-
gate and will require individual assessment and learning that is specific to
cohorts across organizational units.
Sales
Selling starts by agreeing to acquire the data or selling existing datasets
from the company’s library. During the last decade, due primarily to cloud
technology, selling seismic data has changed to become increasingly inter-
active. Specifically, customers want data faster and to collaborate on its
processing, creating an increasing effect on the sales process and its com-
plexity. In the future, sales will be more relational, constantly iterating with customers. Sharing datasets is a fundamental change from selling data acquired specifically for the client, or from clients purchasing specific datasets for a specific period.
An example of this change is digitization of entitlements. Entitlements
are datasets for areas of the subsea. Previously, this process could take
weeks and months to determine the overlapping structure, areas allowed
for exploration, and several other factors. Using artificial intelligence,
however, the company developed software that reduces the determination
of data availability from weeks to hours.
The sales force will no longer spend time determining data availability,
but will focus on customer relationships and decreasing data delivery time
in a collaborative dialog, something that demands new skills not required of
current employees. Taking a more cooperative approach to building and
utilizing datasets that cover expansive areas from different companies and
cooperatively sharing the data in the cloud is already well underway and
will continue to grow (“TGS, CGG, and PGS Announce Versal,” 2021).
Data acquisition
Digitalization will impact the acquisition of seismic data offshore, which is a highly technical, high-risk, and expensive undertaking that currently fewer than 30 vessels in the world can perform. Given
the complexity, risks and costs involved, PGS focused on working reliably
at level 4 on the 10-point Sheridan-Verplank automation scale, where the
computer suggests optimal vessel operating speed (Sheridan & Verplank,
1978). By 2027, PGS hopes to reach level 7, where the computer executes
the speed control and reports to the crew. This will constitute a significant
increase in automation made possible with the introduction of specially
designed vessels like PGS’ Titan class.
Automation and increased connectivity to onshore operations will
affect the size and skills of crew onboard seismic vessels. Currently, ves-
sels have a crew of approximately 50, with half working on seismic acqui-
sition. The size and composition of the crew have remained stable relative to the size of the vessel over the last 20 years. However, efficiency gains have been
used to reduce the time to acquire seismic data while not decreasing its
quality. That efficiency is depicted in Figure 15.2.
In 2018, to make crews more cost efficient, a large-scale initiative
started focusing on digitization of onboard systems similar to the offshore
platform industry where functionalities are operated by land-based teams
cooperatively with maintenance and repair crews onboard. This has oper-
ating advantages such as requiring fewer crew members onboard.
FIGURE 15.2 Acquisition Efficiency in Normalized Square Kilometers per Year, 2010–2021 (+3.3% CAGR)
Data processing
Newly acquired or existing seismic data in PGS' library are processed to
client specifications agreed to in the sales agreements. Here too, cloud
technology will have a significant impact on the business configuration
and require reskilling of employees to accommodate faster processing in
collaboration with customers. PGS data were historically processed in
data centers located in Houston, Oslo, and Cairo. Previously these centers
were networked to supercomputers. However, in 2022 these supercom-
puters were decommissioned, and all processing was moved to the cloud.
Although data are currently delivered on physical tapes from offshore to
the processing centers, that tape is increasingly being replaced by more
portable high-capacity storage devices and will at some point be directly
transferred to the cloud from the vessel via satellite. This will replace the current practice of preprocessing some data on the vessel, transferring it to tape, and transporting it to a processing center.
Immediately available data from the cloud for processing by anyone
with access have long-term strategic implications for how geophysicists
work. With faster access and ease of collaboration they will, increasingly,
create subsurface images together with the customer and take on a more
advisory role in cross-disciplinary teams. Digitalization will require that
sales and geophysical processes merge into a more collaborative relation-
ship with the customer.
Technology platform
The most important educational enabler is an integrated learning man-
agement and knowledge acquisition platform (LMS and LXP). These
technologies support learning modalities by leveraging assessment
practices. Here again the advantages of cloud technology come into
play. They provide a single source of learning for employees both on
and offshore. These learning support systems, especially when in the
cloud, enable collaborative learning with short deployment and imple-
mentation timelines utilizing assessment data to determine baseline,
growth, and final outcomes.
Business alignment
Learning strategies in commercial enterprises must follow a business
model where alignment requires an interactive process including extensive
reevaluation of capabilities, feedback, and data analysis. Examples include
assessing the ability of a crew member to perform a procedure. From the
data analysis feedback loops observed in our adaptive learning structure,
the education and retraining become auto-analytic. Measures of employees' performance on skills taught with onboard data determine the quality of the learning and the new or revised instruction needed to keep pace with changes. However, the dynamics of the learning feedback will
require constant realignment consistent with the dynamics of the business
strategy.
Coownership
Like alignment, coownership of learning and skills training between
L&D and the business constitutes a cultural shift for many companies.
Traditionally, employees and subject matter experts defined learning needs
and enlisted L&D to deliver linear knowledge acquisition processes. This will not
be effective in a dynamic environment that takes the agile approach to a
corporate business model.
Skills development coownership has been recognized as critically
important for some time in the corporate environment. Manville (2001)
pointed out that greater line management ownership of programs makes
learning part of the day-to-day business life.
Assessment
In the corporate environment verifying the employee’s ability and skills
requires assessment based on the position’s requirements. These can be
routine for some skills that are related to compliance rules; however,
there appears to be a general aversion among the employee population to tests of theoretical knowledge whose results might reflect poorly on their performance, compensation, or career advancement. The degree to which this is true, however, varies from country to country depending on work ethics and labor laws. PGS is headquartered in Norway; as of 2021, its onshore workforce is 26% Norwegian and includes over 44 nationalities worldwide in
its offices. Offshore the largest nationality group is British (32%) with 31
other nationalities. This multinational workforce is a conscious decision
motivated by cost and business considerations given that vessels work in
up to six different countries in any given year. PGS has had some success accommodating this extreme diversity using adaptive learning. One initiative successfully upskilled mechanics working offshore to improve their hydraulics capabilities; PGS created specific adaptive learning modules that developed a better understanding of hydraulic theory (Smith, 2021). Adopting an adaptive mindset, which aligns with the agile approach, and using blended learning proved effective for assessing knowledge, while branched learning improved the speed and efficiency of knowledge acquisition and lowered costs.
Learning journeys
The diverse geographical spread of PGS’ workforce required that learn-
ing be delivered in multiple modalities utilizing multiple technologies
including video conferencing, asynchronous videos, gamified learning,
etc. Groups also needed to be configured into effective learning communi-
ties depending on individuals’ position and current skill level. This was
a unified learning modalities approach housed on a single platform as
described above. However, L&D staff will need to better understand arti-
ficial intelligence as it structures multiple learning modalities into educa-
tion and training journeys based on data from assessments, performance,
and biographical information.
Learning data
Measuring the impact of learning (how it arranges priorities, builds skills, and strengthens the well-being that supports an individual's learning) informs the other strategic areas. Data, especially in the cloud, allow
effective and fast analysis of large datasets. Learning data will be an indis-
pensable tool to improve and make reskilling effective by measuring the
quality of the learning culture. This will be discussed below as a part of
the third phase of implementation of the strategy.
Conclusion
The PGS sprint to 2027 is a metaphor for analytics in the corporate
environment created by external conditions forcing a reexamination of
a longstanding business model. The process hinged on creating an agile
organization that must accommodate continual environmental churn.
The data that drove the transformation were extant and simple. No pre-
dictive models were developed here. The corporate environment shifted
rapidly and PGS was able to monitor those conditions. However, those
simple data became the basis of a new corporate ecosystem—one based on
the nine essential components described in this chapter. The operational
questions became:
Ransbotham and colleagues (2016) make the point that the marginal and
conditional utilities of analytics in the corporate sector will require peri-
odic updates because their value add diminishes over time. These days
everyone is doing analytics and getting better at it, so the competition is
demanding.
PGS may be a small company, but it is complex. Also given its extreme
diversity it is virtually impossible to predict how an intervention will rip-
ple through it. Often, outcomes will be counter-intuitive and there will be
both positive and negative side effects that will have to be accommodated
(Forrester, 1971). Although the company designed its plan around nine
individual aspects, its culture and the new structure is far more than the
sum of its individual parts. This is the emergent property of complex sys-
tems and the basis of what is described in this case study (Johnson, 2017; Mitchell,
2011; Taleb, 2020). At its core emergence is a function of diversity, inter-
dependence, connectedness, and adaptiveness in proper balance. If one of
them becomes too extreme the system becomes dysfunctional. At some
level PGS has accommodated all four components. However, the extreme
diversity of the workforce creates both known and as yet unknown challenges.
At its core, however, this case study describes how PGS accommodated
the reskilling and development of current employees and the onboard-
ing of new hires. Therefore, a new unit devoted to learning was created and framed around multiple modes of instruction, defining a new corporate learning paradigm that was well understood throughout
the organization. The model that evolved corresponded to Floridi’s four
stages of learning (Floridi, 2016).
References
Adkins, A., & Rigoni, B. (2016, June 30). Millennials want jobs to be development
opportunities. Gallup.com. https://www.gallup.com/workplace/236438/
millennials-jobs-development-opportunities.aspx
Angrave, D., Charlwood, A., Kirkpatrick, I., Lawrence, M., & Stuart, M. (2016).
HR and analytics: Why HR is set to fail the big data challenge. Human
Resources Management Journal, 26(1), 1–11.
Bethke-Langenegger, P., Mahler, P., & Staffelbach, B. (2011). Effectiveness of
talent management strategies. European Journal of International Management,
5(5). https://doi.org/10.1504/ejim.2011.042177
Boushey, H., & Glynn, S. J. (2012, November 16). There are significant business
costs to replacing employees. Center for American Progress. https://www.americanprogress.org/wp-content/uploads/2012/11/CostofTurnover.pdf
Carey, D., & Smith, M. (2016). How companies are using simulations,
competitions, and analytics to hire. Harvard Business Review, 4. https://hbr.org/2016/04/how-companies-are-using-simulations-competitions-and-analytics-to-hire
Clardy, A. (2018). 70-20-10 and the dominance of informal learning: A fact in
search of evidence. Human Resource Development Review, 17(2), 153–178.
https://doi.org/10.1177/1534484318759399
Delen, D., & Ram, S. (2018). Research challenges and opportunities in business
analytics. Journal of Business Analytics, 1(1), 2–12.
Floridi, L. (2016). The 4th revolution: How the infosphere is reshaping human
reality. Oxford University Press.
Forrester, J. W. (1971). Counterintuitive behavior of social systems. Theory and
Decision, 2(2), 109–140. https://doi.org/10.1007/BF00148991
Johnson, N. F. (2017). Simply complexity: A clear guide to complexity theory.
Oneworld.
Korhonen, K. (2011). Adopting agile practices in teams with no direct
programming responsibility – A case study. In D. Caivano, M. Oivo, M.
T. Baldassarre, & G. Visaggio (Eds.), Product-Focused Software Process
Improvement. PROFES 2011. Lecture notes in computer science, 6759.
Springer.
Lin, C. L., & Chiu, S. K. (2017). The impact of shared values, corporate cultural
characteristics, and implementing corporate social responsibility on innovative
behavior. International Journal of Business Research Management (IJBRM),
8(2), 31–50.
Manville, B. (2001). Learning in the new economy. Leader to Leader, 20, 1–64.
https://doi.org/10.1002/(issn)
Mitchell, M. (2011). Complexity: A guided tour. Oxford University Press.
Prollochs, N., & Feuerriegel, S. (2020). Business analytics for strategic
management: Identifying and assessing corporate challenges via topic
modeling. Information & Management, 57(1). https://www.sciencedirect.com/science/article/pii/S0378720617309254
Ransbotham, S., Kiron, D., & Kirk-Prentice, P. (2016). Beyond the hype: The
hard work behind analytics success. MIT Sloan Management Review: Research
report.
16
ACADEMIC DIGITAL TRANSFORMATION
Karen Vignare, Megan Tesene, and Kristen Fox
DOI: 10.4324/9781003244271-20
Introduction
Institutions have been focused on improving student outcomes for dec-
ades (Vignare et al., 2018; Ehrmann, 2021). However, data continue to
show that student success remains inequitable within postsecondary edu-
cation institutions where Black, Latinx, Native American, Pacific Islander
students and those from low-income backgrounds are the most systemati-
cally penalized. Institutions also report that often first-generation adults,
single parents, and those with either physical or neurodiverse challenges
experience significant inequities in student success outcomes. These dif-
ferences in outcomes have not narrowed quickly enough, especially as
these student populations have grown and become the new majority of
students attending college (Fox, Khedkar et al., 2021).
The range of student success measured in graduation rates also varies
widely by type of institution. While the data show variation amongst insti-
tutional types (e.g., public, private, two-year, four-year, MSIs, HBCUs)
there are institutions, independent of type, which report higher rates of
student success regardless of student demographics (Fox, Khedkar et al.,
2021; Freeman et al., 2014; Lorenzo et al., 2006). It is clear that overall,
there are simply far too many students failing who should not. Recent
reporting on college completion indicates that less than half of college stu-
dents graduate on time and more than one million college students drop
out each year, with 75% of those being first-generation college students
(Katrowitz, 2021). Minoritized student populations are disproportion-
ately represented among those who drop out (Katrowitz, 2021). This lack
of success has long been blamed on students, but a better argument is that
institutions are not incorporating learning science nor the use of real-time
student academic data to transform institutions, courses, and the student
experience in order to accelerate student success.
Prior to the pandemic, few institutions were leveraging any digital data
to support ongoing student success (Parnell et al., 2018). Further, the edu-
cational drop-out rate for students was persistent throughout the student
journey—into college, through first-year courses, as part of transfer, and
as returning adults and women and minorities in certain fields— this phe-
nomenon is often referred to as the Leaky Pipeline (Zimpher, 2009). This
analogy is weak at conveying the substantive impact and loss to both stu-
dents and society when students drop out or don’t succeed. As an outcome
of the pandemic, economists at the Organisation for Economic Co-operation
and Development (OECD) have reminded us that the loss of a year’s
worth of education converts to 7–10% less in future earnings (Reimers
et al., 2020). College completion continues to remain critical to ensur-
ing that American workers remain competitive both at home and glob-
ally. When students drop out or fail, existing inequities are exacerbated
between those who are college educated and those who are not (Perez-
Johnson & Holzer, 2021). Those communities that have been historically
marginalized—whether due to racial, gender-based, economic, or other
categorizations of minoritization—will suffer the most (Perez-Johnson &
Holzer, 2021). The magnitude of loss to individuals and society is clear.
Through the intentional and effective use of internal data, institutions can
better monitor student success and set data-informed strategic goals that
aim to achieve equitable student success.
The types of interventions vary and there are a handful of highly suc-
cessful institutions that are improving student success and creating more
equitable outcomes. These institutional success stories include activities
such as the use of sophisticated advising technology coupled with smaller
caseloads that provide professional advisors with meaningful information
and time to identify, meet with, and support their students (Shaw et al.,
2022). With smaller caseloads, high-impact advising practices could more
often be implemented at scale (Shaw et al., 2022). Institutions that adopted
these practices had statistically significant evidence of narrowing the out-
comes gap for Black, Latinx, and Indigenous students (Shaw et al., 2022).
Other technological-based interventions include offering online or hybrid
courses to address student desires for increased flexibility (Bernard, 2014;
Means et al., 2013; James et al., 2016; Ortagus, 2022).
Several institutions have led well-known transformative change at the
presidential level. For example, the presidents of Arizona State University,
Georgia State University, and Southern New Hampshire University all
approached transformation work in very different ways but are similar
in that they all expanded the mission of the universities to become more
equitable. Arizona State University and Georgia State University focused
on embracing the needs of their local minoritized student populations.
They challenged the university staff, faculty, and community to hold the
institution accountable so that local minoritized student populations
were as successful as any student segment. Southern New Hampshire
University instituted a model that benefitted adults through increased
online learning and competency-based education. The growth of Southern
New Hampshire University and the success of its part-time, highly diverse
students continue to inspire other institutions. Each of these institutions
shared a collaborative change approach as well as a willingness to utilize
technology to scale, data to measure and refine, and continuous innova-
tion efforts to improve equitable student success.
In many of the aforementioned institutional transformation examples,
leadership initiated the changes, but we know that in order to meaning-
fully transform institutions so that they are more equitable and better
serve their diverse student communities, efforts must occur collabora-
tively across all levels of an institution. The collaborative change models
described within this chapter provide promising frameworks for those
seeking to establish intentional and effective transformation through-
out their institution. The lessons gleaned from these models highlight
the importance of implementing a collaborative model that integrates
top-down (institutional leadership), bottom-up (faculty-driven), and
middle-out (academic departments and institutional offices) approaches
to build institutional capacity and infrastructure so that progress
remains consistent across leadership or staffing changes (Miller et al.,
2017). Institutional successes in digital transformation are also impor-
tant and provide guidance in understanding how data can be used to
continuously improve processes and practices being implemented at an
institution, especially while learning is actively occurring and real-time
interventions can be leveraged (Krumm et al., 2018).
Ultimately, there is no single recipe for transformation as every institu-
tion has unique needs, resources, capacities, and contexts. As such, the
approach each institution takes should be designed for that institution.
However, when looking at institutions that are achieving transformative
equitable student success, we find common guideposts that are aligned
to the Association of American Universities (AAU) and Equitable Digital
Learning frameworks presented within this chapter. Through collabo-
ration across units, with measurement of progress through continuous
improvement, and applications of evidence-based improvement strategies,
institutions can better ensure that equitable student success is at the center
of their academic change efforts.
Literature review
For several decades, the pursuit of academic transformation for student
success has been a critical focus of multiple associations, government agencies, and philanthropic funders. There is clear evidence that active and engaged teaching is effective and improves student outcomes (Miller et al., 2017; Singer et al., 2012). The AAU represents 65
research and doctoral universities, which are often recognized as contrib-
uting significantly to science, technology, engineering, and mathematics
(STEM) education. Like others, they recognized that STEM education
was not successful for all students and there was very little dissemination
of best practices by faculty members whose students were equitably suc-
cessful (AAU, 2017). Hopefully, with this work and the rigorous expec-
tations of National Science Foundation funding, more empirical studies
related to equitable student success will be conducted for STEM fields.
A meta-analysis of 225 studies found that students are more likely to
fail in courses taught in the traditional lecture style than when taught with
an active learning approach (Freeman et al., 2014). The authors reviewed
642 studies and selected 225 to include in their meta-analysis, with the
hypothesis that active learning versus traditional lecture maximizes learn-
ing and course performance. The courses included entry-level courses
across undergraduate STEM. The effect sizes indicate that on average,
student test scores and concept inventories increased by 0.47 standard
deviations under active learning (n = 158 studies), and that the odds ratio
for failing was 1.95 under traditional lecturing (n = 67 studies). These
results indicate that average examination scores improved by about 6%
in active learning sections, and that students in classes with traditional
lecturing were 1.5 times more likely to fail than were students in classes
with active learning. The results were tested among the different STEM
disciplines and no differences in effectiveness were found. The researchers
found that the impact was greater in classes enrolling 50 students or fewer (Freeman et al., 2014). The significance of these findings brings an obvious question to the forefront: why isn't active learning being used in more STEM courses?
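A back-of-the-envelope calculation (a minimal Python sketch) shows how a reported odds ratio and a standardized effect size translate into the more intuitive figures above. The 22% baseline failure rate and the 13-point exam standard deviation below are assumptions chosen purely for illustration, so the output lands in the same ballpark as, rather than reproduces, the published estimates.

```python
# Illustrative arithmetic only: the baseline failure rate and exam SD are
# assumed values, not figures taken from Freeman et al. (2014).

def odds(p: float) -> float:
    """Convert a probability into odds."""
    return p / (1.0 - p)

def prob_from_odds(o: float) -> float:
    """Convert odds back into a probability."""
    return o / (1.0 + o)

p_fail_active = 0.22  # assumed failure rate under active learning

# Apply the reported odds ratio of 1.95 to infer the implied lecture failure rate.
p_fail_lecture = prob_from_odds(odds(p_fail_active) * 1.95)
relative_risk = p_fail_lecture / p_fail_active
print(f"Implied failure rate under lecturing: {p_fail_lecture:.1%}")
print(f"Relative risk of failing (lecture vs. active): {relative_risk:.2f}")

# Translate the 0.47 SD effect size into exam points under an assumed SD.
exam_sd = 13.0  # assumed standard deviation on a 100-point exam
print(f"0.47 SD is roughly {0.47 * exam_sd:.1f} points on such an exam")
```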
Additional research documents the positive impact that active learning
has for women and other minority populations such as Black students
and first-generation student communities (Eddy & Hogan, 2014; Haak et
al., 2011; Lorenzo et al., 2006; Singer et al., 2012). These earlier studies examined equitable student outcomes in active learning STEM courses through the narrow, but important, lens of course- and institution-level analyses. Each found that active learning was most beneficial to women
and minorities. Singer et al. (2012) laid out a case for discipline-based
into the design of online and blended courses to improve student outcomes
(Stenbom, 2018). Social presence is a combination of student-to-student
interaction and being connected to a social learning community with fac-
ulty. The cognitive presence represents the content and mastery of activi-
ties to ensure learning objectives are met. Teaching presence consists of
“the design, facilitation, and direction of cognitive and social processes
for the purpose of realizing personally meaningful and educationally
worthwhile learning outcomes” (Anderson et al., 2001, p. 5).
Stenbom’s (2018) analysis systematically reviewed the 103 research
papers written between 2008 and 2017 that included the use of the
Community of Inquiry survey, which was designed to operationalize
Garrison et al.’s (2000) CoI framework. The review validated the use of
the CoI survey, which looks for evidence of the framework through the
lens of coordinated social presence, teaching presence, and cognitive pres-
ence (Stenbom, 2018). The CoI framework, like other high-quality teaching frameworks, provides structure and outlines which activities would need to be implemented for a course to qualify as "high-quality." As is the case with well-designed and implemented active learning practices, when high-quality online teaching practices are leveraged, online learning is also highly effective.
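To make the survey-based operationalization concrete, the sketch below scores a CoI-style instrument by averaging Likert items within each presence. The item groupings and responses are hypothetical, and the scheme is a simplification for illustration rather than the validated survey's published item set or scoring rules.

```python
# Hypothetical CoI-style scoring: average the Likert items under each presence.
from statistics import mean

# Invented 1-5 responses from one student, grouped by presence.
responses = {
    "teaching_presence":  [4, 5, 4, 4, 5],
    "social_presence":    [3, 4, 4, 3],
    "cognitive_presence": [4, 4, 5, 4, 4],
}

# A simple subscale score is the mean of the items for each presence.
subscale_scores = {presence: mean(items) for presence, items in responses.items()}

for presence, score in subscale_scores.items():
    print(f"{presence}: {score:.2f} / 5")
```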
Pre-COVID, there was considerable effort to bring digital tools, paired
with active learning teaching strategies, into face-to-face instruction
occurring in high-priority, high-enrollment, gateway and foundational
courses. This trend has persisted and as such, the use of digital tools and
online homework systems such as adaptive courseware has grown con-
siderably (Bryant et al., 2022). While many studies and institutions have
adopted coordinated, collaborative approaches to teach critical under-
graduate courses, it is common practice to allow each instructor to decide
on pedagogy on a class-by-class basis. From Means and Peters (2016) we
also know that attempts to scale effective evidence-based instructional
practices can be problematic and do not always result in the same level
of success. Many instructors use active learning practices, but how active
learning is implemented matters significantly (Peters & Means, in press).
The issue of instructional consistency is rarely covered in the literature,
but the need for a concerted effort to increase instructional consistency is
paramount—particularly for those instructional approaches that include
well-researched, effective pedagogy such as active learning (Peters &
Means, in press).
Underlying all these efforts at academic reform is the use of data, learning science, and improved processes. While many faculty and insti-
tutions regularly review data post-course implementation to redesign
and modify their courses, assessments, and activities, few use the data
[Figure: AAU Framework for Systemic Change to Undergraduate STEM Teaching and Learning (modified from AAU, 2013; 2017), showing pedagogy, scaffolding, and cultural change]
had a different focus and approach, common themes emerged across the
set of participating institutions that helped to inform the AAU frame-
work, as was the intention of those developing the framework so that
it would provide foundational guidance to the field while being flexible
enough to be adopted and implemented by a diverse set of higher educa-
tion institutions (AAU, 2017).
Cornell University: Cornell initially named their project, “Active
Learning Initiative” (ALI), which was aimed broadly at the university’s
undergraduate curriculum. The institution encouraged change through
an internal grant competition where the prime goal was to improve stu-
dent learning, measure impact, and scale impactful practices to additional
courses and departments.
Michigan State University: They focused on bringing together the
College of Education with STEM faculty under the Collaborative Research
in Education, Assessment and Teaching Environments (CREATE) initia-
tive. The focus was to build interdisciplinary collaboration and create a
space for tenure-track faculty to focus on improving teaching through
innovation.
University of North Carolina at Chapel Hill: This university estab-
lished a mentor–apprenticeship model, where faculty teach redesigned
courses in pairs. Each pair includes one experienced faculty member and
one faculty with less experience. Large, introductory, typically lecture-
based courses are redesigned to include high-structure, active learning
(HSAL) practices.
University of Arizona: They implemented three distinct, but intercon-
nected strategies to change the faculty culture so that active learning and
student-centric instructional practices were prioritized. The institution
provided faculty professional development in evidence-based teaching
practices, developed faculty learning communities, and created collabora-
tive learning spaces (CLS) so that faculty could better implement active
learning instructional practices.
The recognition by AAU and its member organizations that improved
teaching and learning is imperative and requires coordinated, collabo-
rative change ushered in a new expectation—that all institutions must
prioritize the dissemination of evidence-based teaching practices so that
all faculty, across all sections, are providing better pedagogy to support
more equitable outcomes. Further, institutions must invest in the develop-
ment and resourcing of critical scaffolded supports that enable this work,
serving both faculty and students so that meaningful cultural change
can take root and flourish. This foundational work has proliferated and
continued beyond 2017, but it simply has not narrowed outcome gaps enough for minoritized, low-income, and first-generation students.
improved student success for those students and others, and incorporated
external partners. These case study institutions sought and established partnerships with organizations with expertise in digital learning and equity, knew where their internal capacity limits resided, and brought in expertise and support for historically under-resourced students. Considering where to partner for capabili-
ties and services versus where to invest in them permanently is a strategy
that can offer cost savings for institutions or access to targeted expertise
in digital learning and equitable course design (Fox, Vignare et al., 2021).
The findings from these case studies (see Table 16.1) support the argu-
ment that an integrated digital learning infrastructure can make the dif-
ference in increasing equitable student success (Fox, Vignare et al., 2021;
Vignare et al., 2018). This framework can be leveraged by any institution.
Given the flexibility and needs of students (and faculty) and the need to plan for learning continuity, it is critical that institutions see a digital infrastructure not only as one that assumes growing online learning as a primary goal, but as one that supports the entire institution regardless of course modality. By building out the digital infrastructure, an institution is more capable of weathering the unexpected. Additionally, digital infrastructure makes measurement of disaggregated student data much more accessible and transparent (Maas et al., 2016; Vignare et al., 2016; Wagner, 2019). When institutions use outdated data to inform interventions for students, it is often too late.
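As a concrete illustration of what disaggregated, timely student data can look like, the minimal sketch below computes pass rates by student group from a hypothetical course-records table; the column names, groups, and values are invented for illustration and are not drawn from any particular institutional system.

```python
# Minimal sketch: disaggregate course outcomes by student group (hypothetical data).
import pandas as pd

records = pd.DataFrame({
    "student_group": ["Pell", "Pell", "non-Pell", "non-Pell", "first-gen", "first-gen"],
    "course":        ["MATH101"] * 6,
    "passed":        [1, 0, 1, 1, 0, 1],
})

# Disaggregated pass rates make equity gaps visible while a term is underway.
pass_rates = records.groupby("student_group")["passed"].mean().rename("pass_rate")
print(pass_rates)
```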
Conclusion
For decades, higher education institutions have attempted to find frame-
works, methodologies, and approaches that effectively describe and guide
the work necessary to improve equitable outcomes in critical courses.
Many institutions show evidence of impact, but the national data con-
tinue to show that outcomes for racially minoritized, low-income, and
first-generation students have not progressed or improved in equitable
ways. The continued lack of progress, compounded by the 1.4 million students who have left higher education since the onset of COVID as well as steady enrollment declines across student populations, is particularly alarming (NSC, 2022). Ultimately, the impact of prior work in these
areas is not sufficient.
Fox, Vignare et al.’s (2021) framework makes a clear case for an inte-
grated approach in order to drive the actions and planning of institutional
leaders to achieve more equitable outcomes. An additional focus must be
on establishing cultural changes informed by real-time data and focused on interrogating prioritized problems such as equity gaps in critical courses.
References
Ambrose, S., Bridges, M., DiPietro, M., Lovett, M., & Norman, M. (2010).
How learning works: Seven research-based principles for smart teaching.
Jossey-Bass.
Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching
presence in a computer conferencing context. Journal of Asynchronous
Learning Networks, 5(2). https://olj.onlinelearningconsortium.org/index.php/olj/article/view/1875
Association of American Universities. (2017). Progress toward achieving system
change: A five-year status report on the AAU undergraduate STEM education
initiative. https://www.aau.edu/sites/default/files/AAU-Files/STEM-Education-Initiative/STEM-Status-Report.pdf
Austin, A. E. (2011). Promoting evidence-based change in undergraduate science
education (pp. 1–25). National Academies’ Board on Science Education.
https://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/dbasse_072578.pdf
Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami,
P. C. (2014). A meta-analysis of blended learning and technology use in
higher education: From the general to the applied. Journal of Computing
in Higher Education, 26(1), 87–122. https://doi.org/10.1007/s12528-013-9077-3
Boyer Commission on Educating Undergraduates in the Research University. (1998). Reinventing undergraduate education: A blueprint for America’s research universities. https://dspace.sunyconnect.suny.edu/bitstream/handle/1951/26012/Reinventing%20Undergraduate%20Education%20%28Boyer%20Report%20I%29.pdf?sequence=1&isAllowed=y
Brooks, D. C., & McCormack, M. (2020, June). Driving digital transformation
in higher education. EDUCAUSE Center for Analysis and Research. https://library.educause.edu/-/media/files/library/2020/6/dx2020.pdf?la=en&hash=28FB8C377B59AFB1855C225BBA8E3CFBB0A271DA
Bryant, G., Fox, K., Yuan, L., Dorn, H., & NeJame, L. (2022, July 11). Time for
class: The state of digital learning and courseware adoption. Tyton Partners.
https://tytonpartners.com/time-for-class-the-state-of-digital-learning-and-courseware-adoption/
Clark, R., & Mayer, R. (2021). E-learning and the science of instruction: Proven
guidelines for consumers and designers of multimedia learning (4th ed.).
Wiley.
Crow, M., & Dabars, W. (2020). The fifth wave: The evolution of higher
education. Johns Hopkins Press.
Eddy, S., & Hogan, K. (2014). Getting under the hood: How and for whom does
increasing course structure work? CBE-Life Sciences Education, 13(3), 453–
468. https://doi.org/10.1187/cbe.14-03-0050
Ehrmann, S. (2021). Pursuing quality, access, and affordability: A field guide to
improving higher education. Stylus Publishing.
Fox, K., Khedkar, N., Bryant, G., NeJame, L., Dorn, H., & Nguyen, A. (2021).
Time for class – 2021: The state of digital learning and courseware adoption.
Tyton Partners; Bay View Analytics; Every Learner Everywhere. https://www.everylearnereverywhere.org/wp-content/uploads/Time-for-Class-2021.pdf
Fox, K., Vignare, K., Yuan, L., Tesene, M., Beltran, K., Schweizer, H., Brokos,
M., & Seaborn, R. (2021). Strategies for implementing digital learning
infrastructure to support equitable outcomes: A case-based guidebook for
institutional leaders. Every Learner Everywhere; Association of Public and
Land-Grant Universities; Tyton Partners. https://www.everylearnereverywhere.org/wp-content/uploads/Strategies-for-Implementing-Digital-Learning-Infrastructure-to-Support-Equitable-Outcomes-ACC-FINAL-1-Updated-Links-by-CF-2.pdf
Freeman, S., Eddy, S., McDonough, M., Smith, M., Okoroafor, N., Jordt, H.,
& Wenderoth, M. (2014). Active learning increases student performance in
science, engineering, and mathematics. Proceedings of the National Academy
of Sciences of the United States of America, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-
based environment: Computer conferencing in higher education. Internet
and Higher Education, 2(2–3), 87–105. https://doi.org/10.1016/S1096-7516(00)00016-6
Gumbel, A. (2020). Won’t lose this dream: How an upstart urban university
rewrote the rules of a broken system. The New Press.
Gunder, A., Vignare, K., Adams, S., McGuire, A., & Rafferty, J. (2021).
Optimizing high-quality digital learning experiences: A playbook for faculty.
Every Learner Everywhere; Online Learning Consortium (OLC); Association
of Public and Land-Grant Universities. https://www.everylearnereverywhere.org/wp-content/uploads/ele_facultyplaybook_2021_v3a_gc-Updated-links-by-CF.pdf
Haak, D., Hille Ris Lambers, J., Pitre, E., & Freeman, S. (2011). Increased
structure and active learning reduce the achievement gap in introductory
biology. Science, 332(6034), 1213–1216. https://doi.org/10.1126/science.1204820
James, S., Swan, K., & Datson, C. (2016). Retention, progression and the taking
of online courses. Online Learning, 20(2), 75–96. https://doi.org/10.24059/OLJ.V20I2.780
Joosten, T., Harness, L., Poulin, R., Davis, V., & Baker, M. (2021). Research
review: Educational technologies and their impact on student success for
racial and ethnic groups of interest. WICHE Cooperative for Educational
Technologies (WCET). https://wcet.wiche.edu/wp-content/uploads/sites/11/2021/07/Research-Review-Educational-Technologies-and-Their-Impact-on-Student-Success-for-Certain-Racial-and-Ethnic-Groups_Acc_v21.pdf
Kantrowitz, M. (2021, November 18). Shocking statistics about college graduation
rates. Forbes. https://www.forbes.com/sites/markkantrowitz/2021/11/18/shocking-statistics-about-college-graduation-rates/?sh=2e5bd8622b69
Krumm, A., Means, B., & Bienkowski, M. (2018). Learning analytics goes to
school: A collaborative approach to improving education. Routledge.
LeBlanc, P. (2021). Students first: Equity, access and opportunity in higher
education. Harvard Education Press.
Lorenzo, M., Crouch, C. H., & Mazur, E. (2006). Reducing the gender gap in the
physics classroom. American Journal of Physics, 74(2), 118–122. https://doi.org/10.1119/1.2162549
Maas, B., Abel, R., Suess, J., & O’Brien, J. (2016, June 8). Next-generation digital
learning environments: Closer than you think. [Conference presentation].
EUNIS 2016: Crossroads where the past meets the future, Thessaloniki, Greece. https://www.eunis.org/eunis2016/wp-content/uploads/sites/8/2016/03/EUNIS2016_paper_4.pdf
Means, B., & Neisler, J. (2020). Suddenly online: A national survey of undergraduates
during the COVID-19 pandemic. Digital Promise. https://elestage.wpengine.com/
wp-content/uploads/Suddenly-Online_DP_FINAL.pdf
Means, B., & Peters, V. (2016, April). The influences of scaling up digital
learning resources [Paper presentation]. 2016 American Educational Research
Association annual meeting. Washington, DC. https://www.sri.com/wp-content/uploads/2021/12/scaling-digital-means-peters-2016.pdf
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online
and blended learning: A meta-analysis of the empirical literature. Teachers
College Record, 115(3), 1–47. https://doi.org/10.1177/016146811311500307
Miller, E., Fairweather, J., Slakey, L., Smith, T. & King, T. (2017). Catalyzing
institution transformation: Insights for the AAU STEM initiative. Change: The Magazine of Higher Learning, 49(5), 36–45. http://dx.doi.org/10.1080/00091383.2017.1366810
National Student Clearinghouse Research Center (NSC). (2022, May 26). Current
term enrollment estimates: Spring 2022. https://nscresearchcenter.org/current-term-enrollment-estimates/
Ortagus, J. (2022, August). Digitally divided: The varying impacts of online
enrollment on degree completion (Working Paper No. 2201). https://ihe.education.ufl.edu/wp-content/uploads/2022/08/IHE-Working-Paper-2201_Digitally-Divided.pdf
Parnell, A., Jones, D., & Brooks, D. C. (2018). Institutions’ use of data and
analytics for student success: Reports from a national landscape analysis.
Closing
17
FUTURE TECHNOLOGICAL
TRENDS AND RESEARCH
Anthony G. Picciano
DOI: 10.4324/9781003244271-22
[Figure: emerging technologies, including artificial intelligence, super cloud, man-machine interfacing, biosensing devices, and robotics]
Beyond the market economy, this also holds true for education, medicine,
law, and other professions.
Enter COVID-19
COVID-19 has added a new dimension to the technological landscape in
all aspects of our society including education. In Spring 2020, when the
coronavirus reached our shores, it was estimated that over 90 percent of
all courses offered in education had an online component. Faculty in all
sectors converted their courses as quickly as they could to remote learn-
ing, mainly because they had no choice. It was a clear emergency with
their own and their students’ health at risk. In the opinion of some, online
technology saved the semester for the education sector during the pan-
demic (Ubell, 2020). It forced many faculty, who had never used online
question why they should pay tuition that is much higher than that at
public institutions.
Third, prior to the pandemic, to be competitive and to attract more
students, education policymakers and administrators at colleges and uni-
versities were providing incentives to many of their faculty to teach online.
While many faculty resisted these calls, their positions and reasons for
resistance have mellowed. Indeed, many have also come to see the benefits
of online teaching. In fact, the genie may be out of the bottle and faculty
in all sectors are beginning to embrace some form of online instruction
for pedagogical reasons as well as health, safety, and the convenience of
time and commuting. Tom Standage (2021), the editor of The Economist,
in an article entitled, “The World Ahead 2022,” identified ten trends to
watch in the future, including Number Four on his list: “There is broad
consensus that the future is ‘hybrid’ and that more people will spend more
days working from home” (Standage, 2021, p. 11). He attributed this to
the effect that the pandemic had on work styles wherein people moved
online to perform their job responsibilities.
country and three (Baidu, Alibaba, and Tencent) are based in China. The
Chinese government is pouring huge amounts of capital into the devel-
opment of its AI capabilities and will very possibly take the lead in this
area in the not-too-distant future. US education will be directed, if not
forced, to respond to this AI challenge. It will be beneficial to all educa-
tion sectors to consider how they might partner with AI centers that exist
in the corporate sector to expand data analytics and adaptive technolo-
gies. Technology companies developing AI applications are proliferating
and are welcoming collaborators for their products and services.
Big data
Raj Chetty and colleagues (2017) at the National Bureau of Economic
Research reported on a project entitled, “Mobility report cards: The role
of colleges in intergenerational mobility.” The purpose of their research
was to study intergenerational income mobility at each college in the
United States using data for all 30 million college students in attendance
from 1999 to 2013. The findings were as follows:
upper-tail (bottom quintile to top 1%) mobility are highest at elite col-
leges, such as Ivy League universities. Fourth, the fraction of students
from low-income families did not change substantially between 2000-
2011 at elite private colleges but fell sharply at colleges with the highest
rates of bottom-to-top-quintile mobility.
(Chetty et al., 2017)
There are several elements of this project that are particularly noteworthy.
First, the subject matter of intergenerational income mobility is a critical area
of education research and the findings added significantly to our understand-
ing of the phenomenon. Second, a study population of this size (30 million subjects) had never been examined before. The study consolidated data from several data-
bases maintained by the US Department of Education, the US Treasury, and
several other nongovernmental organizations. Third, in addition to publish-
ing the findings and results, the authors established a public database that is
searchable for further research by others and for examining data on any indi-
vidual college or university. Fourth, much of the data analysis used to report
the findings relied on basic descriptive statistics such as quintile comparisons.
In sum, the magnitude of this effort has provided a model for “big data”
research for years to come. Not everyone will be doing research with subjects
totaling in the tens of millions; however, this project provides a roadmap for
what is possible as the technology advances in both the quantity and quality
of the data that can be collected and analyzed.
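Chetty et al.'s (2017) headline metric, the mobility rate, is itself simple arithmetic: the share of a college's students drawn from the bottom income quintile multiplied by the share of those students who reach the top quintile as adults. The short sketch below shows the calculation with invented figures for a hypothetical college.

```python
# Quintile-style mobility arithmetic with invented figures for a hypothetical college.
bottom_quintile_share = 0.12   # assumed share of students from bottom-quintile families
success_rate = 0.30            # assumed share of those students reaching the top quintile

mobility_rate = bottom_quintile_share * success_rate
print(f"Access: {bottom_quintile_share:.0%}, success: {success_rate:.0%}, "
      f"mobility rate: {mobility_rate:.1%}")
```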
Most AI applications that exist today are in the lower levels of Internet and
Business AI; however, research and development on the other three is
rapidly increasing. Most AI specialists will agree that we may be decades
away from developing level 5 or artificial general intelligence. Regardless,
AI presently allows researchers to take advantage of learning analytics to
expand to support adaptive and personalized learning applications in real
time. For these applications to be successful, data are collected for each
instructional transaction that occurs in an online learning environment.
Every question asked, every student response, every answer to every ques-
tion on a test or other assessment is recorded and analyzed and stored for
future reference. A complete evaluation of students as individuals as well
as entire classes is becoming common. Alerts and recommendations can
be made as the instructional process proceeds within a lesson, from les-
son to lesson, and throughout a course. Students receive prompts to assist
in their learning and faculty can receive prompts to assist in their teach-
ing. The more data available, the more accurate will be the prompts. By
significantly increasing the speed and amount of data analyzed through
nanotechnology and quantum computing, the accuracy and speed of
adaptive or personalized programs expand and improve. In the future,
faculty will use an “electronic teaching assistant” to determine how
instruction is proceeding for individual students, a course, or an entire
academic program. They will be able to receive suggestions about improv-
ing instructional activities and customizing curricula. Most AI applica-
tions in use today, and for the near future, are narrow in their application
and focus on a specific activity. Oxford scholar and best-selling author of
Sapiens, Yuval Noah Harari, comments that there is no reason to assume
that AI will ever gain consciousness because “intelligence and conscious-
ness are two different things.” Intelligence is the ability to solve problems
while consciousness is the ability to feel things such as pain, joy, love,
and anger. Humans “solve problems by intelligence and feeling things”
The authors concluded their article by raising ethical concerns and chal-
lenges regarding student privacy and the potential misuse of brain data by
teachers, administrators, and parents. As this type of research advances,
these concerns will need to be addressed carefully and fully.
Statistical significance
In considering again the Raj Chetty et al. (2017) study on mobility report cards, I was pleased to see that Chetty and colleagues used relatively simple descriptive statistics to make the most important points of their findings. Quintile comparisons were the basic procedure used for sharing their
results. While they used sophisticated statistical analyses to establish
the accuracy of their databases and to do cross-file investigations, they
shared their data with their readers using clear and simple descriptive
statistics. I had the pleasure of meeting Chetty in 2018 and I asked him
about this. He made the point that it was done purposefully in order to
widen the audience who would understand his work. He further com-
mented that such large sample sizes do not require complex procedures
dependent upon statistical significance. In a chapter of a book that I
wrote with colleague Charles Dziuban in 2022, Blended Learning:
Research Perspectives Volume 3, we made a complementary point and
stated: “This whole conversation [statistical significance] came to a head
in the journal Nature where Valentin Amrhein and over 800 co-author
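The large-sample point can be illustrated with a short simulation; the values below are invented, but they show how a practically trivial difference becomes "statistically significant" once samples reach the hundreds of thousands, which is why plain descriptive summaries such as quintile comparisons often communicate more.

```python
# Illustrative simulation: with very large n, a negligible difference is "significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
group_a = rng.normal(loc=70.0, scale=10.0, size=n)   # mean score 70 (invented)
group_b = rng.normal(loc=70.1, scale=10.0, size=n)   # a trivial 0.1-point difference

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"Mean difference: {group_b.mean() - group_a.mean():.2f} points, p = {p_value:.1e}")
```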
Open resources
Another important element of the Chetty et al. research was their willing-
ness to share results and data through open-source venues. Their reports
and datafiles are open to the public for use and for scrutiny. This is a very
desirable trend that has been evolving in social science research and one that
should be encouraged. In 1996, when a group of researchers were beginning
to sow the seeds of what is now the Online Learning Consortium (formerly
known as the Alfred P. Sloan Consortium), one of the first decisions they
made was to establish a free, peer-reviewed open resource where researchers
would share their work with others. The Journal of Asynchronous Learning
Networks (JALN), now the Online Learning Journal (OLJ), is arguably the
most important repository of peer-reviewed research on online learning
and has been instrumental in moving the development of online and digi-
tal learning forward. Articles clarifying both benefits and drawbacks have
graced its pages and have helped in our understanding of these developments.
We strongly encourage researchers in all disciplines to consider sharing their
work in free, open resources.
Mixed-methods research
In 2002, I published an article entitled, “Beyond student perceptions:
Issues of interaction, presence, and performance in an online course” in
Interdisciplinarity
An earlier section of this chapter described a form of neuroscience research
in the classroom using portable EEG technology. It is likely to be obvious
to the readers of this book that the expertise and experience needed to
conduct this type of inquiry is not usually found in schools of education.
To the contrary, this research requires teams of individuals with back-
grounds in biology, chemistry, computer science, and psychology as well
as education. Neuroscience is a field that does not exist in the domain of
any single academic area and exemplifies the interdisciplinary nature of
the research that is becoming prevalent not just in education but in many
fields. Education researchers are collaborating, and will continue to collaborate, with
She went on to extol the facilities that digital technology and communica-
tions will provide for teaching, learning, and research. She foresaw great
benefits in technology’s ability to reach masses of students around the
globe and to interrogate easily large databases for scaling up and assess-
ment purposes. On the other hand, she made it clear that “residential
education cannot be replicated online” and stressed the importance of
physical interaction and shared experiences.
On the nature of knowledge, she noted that the common organization
of universities by academic departments may disappear because “the most
significant and consequential challenges humanity faces” require investi-
gations and solutions that are flexible and not necessarily discipline spe-
cific. Doctors, chemists, social scientists, and engineers will work together
to solve humankind’s problems.
On defining value, she pointed out that quantitative metrics are now
evolving to assess and demonstrate the importance of meaningful employ-
ment. However, she believes that higher education provides something
very valuable as well: it gives people “a perspective on the meaning and
purpose of their lives," a type of student outcome that cannot be quantified. She concluded that:
So much of what humanity has achieved has been sparked and sus-
tained by the research and teaching that take place every day at colleges
and universities, sites of curiosity and creativity that nurture some of
the finest aspirations of individuals and, in turn, improve their lives—
and their livelihoods. As the landscape continues to change, we must be
careful to protect the ideals at the heart of higher education, ideals that
serve us all well as we work together to improve the world.
(Faust, 2015)
Conclusion
As we move forward in our education research agendas, most of us will
examine issues at a granular level. Student outcomes, student perceptions,
and faculty attitudes will continue to be popular avenues of inquiry. As
new technology evolves, we need to participate in determining whether
it is beneficial for all or some or none. During the past fifty-plus years of
using technology in education, we have learned through our research that
new approaches are sometimes not evenly deployed, not advantageous, or
even detrimental. We should never lose sight of the role of education in
addressing complex issues related to equity, diversity, and democratic val-
ues in the larger society. Education technology has been, is, and will continue to be a critical area for inquiry in this regard.
References
AlQuraishi, M. (2018, December 9). AlphaFold @ CASP13: “What just
happened?” Blog Posting. Retrieved January 3, 2022, from https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/
Amrhein, V., Greenland, S., & McShane, B. (2019). Comment: Retire statistical
significance. Nature, 567(7748), 305–307.
Aoun, J. E. (2017). Robot-proof: Higher education in the age of artificial
intelligence. The MIT Press.
Bevilacqua, D., Davidesco, I., Wan, L., Chaloner, K., Rowland, J., Ding, F.,
Poeppel, D., & Dikker, S. (2019). Brain-to-brain synchrony and learning
outcomes vary by student–teacher dynamics: Evidence from a real-world
classroom electroencephalography study. Journal of Cognitive Neuroscience,
31(3), 401–411. https://doi.org/10.1162/jocn_a_01274
Blanding, M. (2021). Thinking small. Fordham Magazine, (Fall/Winter), 34–37.
Chetty, R., Friedman, J. N., Saez, E., Turner, N., & Yagan, D. (2017).
Mobility report cards: The role of colleges in intergenerational mobility. Stanford University White Paper. Retrieved January 16, 2022, from https://opportunityinsights.org/wp-content/uploads/2018/03/coll_mrc_paper.pdf
Davenport, T. H. (2018, November). From analytics to artificial intelligence.
Journal of Business Analytics, 1(2). Retrieved June 1, 2022, from https://www.tandfonline.com/doi/full/10.1080/2573234X.2018.1543535
Davidesco, I., Matuk, C., Bevilacqua, D., Poeppel, D., & Dikker, S. (2021,
December). Neuroscience research in the classroom: Portable brain technologies
in education research. Educational Researcher, 50(9), 649–656.
Faust, D. (2015). Three forces shaping the university of the future. World
Economic Forum. Retrieved February 1, 2022, from https://agenda.weforum.org/2015/01/three-forces-shaping-the-university-of-the-future/
Harari, Y. N. (2018). 21 lessons for the 21st century. Spiegel & Grau.
Lee, K. F. (2018). AI super-powers: China, Silicon Valley, and the new world
order. Houghton Mifflin Harcourt.
McAfee, A., & Brynjolfsson, E. (2017). Harnessing our digital future: Machine
platform crowd. W.W. Norton & Company.
Metz, C. (2019, February 5). Making new drugs with a dose of artificial
intelligence. The New York Times. Retrieved January 3, 2022, from https://www.nytimes.com/2019/02/05/technology/artificial-intelligence-drug-research-deepmind.html
National Student Clearinghouse Research Center. (2022, May 26). Spring 2022
current term enrollment estimates. Retrieved August 2, 2022, from https://nscresearchcenter.org/current-term-enrollment-estimates/
Perez, D. (2021, December 23). San Antonio Colleges to loan digital textbooks
for free. Government Technology. Retrieved January 7, 2022, from https://www.govtech.com/education/higher-ed/san-antonio-colleges-to-loan-digital-textbooks-for-free
Picciano, A. G. (2019, September). Artificial intelligence and the academy’s loss
of purpose! Online Learning, 23(3). https://olj.onlinelearningconsortium.org/index.php/olj/article/view/2023
Picciano, A. G. (2020, December 4). Blog posting, DeepMind’s artificial
intelligence algorithm solves protein-folding problem in recent CASP
competition! Tony’s thoughts. Retrieved January 3, 2022, from https://apicciano.commons.gc.cuny.edu/2020/12/04/deepminds-artificial-intelligence-algorithm-solves-protein-folding-problem-in-recent-casp-competition/
Picciano, A. G., Dziuban, C., Graham, C., & Moskal, P. (2022). Blended learning
research perspectives: Volume 3. New York: Routledge/Taylor & Francis.
Science Insider. (2020, September 15). IBM promises 1000-qubit quantum
computer—a milestone—by 2023. Science. Retrieved January 5, 2022, from
https://www.science.org/content/article/ibm-promises-1000-qubit-quantum-computer-milestone-2023
Standage, T. (2021, November 1). The world ahead 2022. The Economist. Special
Edition. Retrieved January 15, 2022, from https://www.economist.com/the-world-ahead-2022
Ubell, R. (2020, May 13). How online learning kept higher ed open during the
coronavirus crisis. IEEE Spectrum. Retrieved January 25, 2022, from https://spectrum.ieee.org/tech-talk/at-work/education/how-online-learning-kept-higher-ed-open-during-the-coronavirus-crisis
U.S. PIRG. (2021, February 24). Fixing the broken textbook market (3rd ed.).
Retrieved January 5, 2022, from https://uspirg.org/reports/usp/fixing-broken-textbook-market-third-edition
Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P.,
Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R.K., Rumbaut, R.G.,
Sloane, F., Weisner, T.S., & Wilson, J. (2019, October). Mixed methods for
studies that address broad and enduring issues in education research. Teachers
College Record, 121(10). Retrieved January 14, 2022, from https://www.tcrecord.org/content.asp?contentid=22741