
“Digital learning is the new normal in higher education. The group of experts assembled in this book share important ideas and trends related to learning analytics and adaptive learning that will surely influence all of our digital learning environments in the future.”
—Charles R. Graham, Professor,
Department of Instructional Psychology and
Technology, Brigham Young University

“The concept of personalized and adaptive learning has long been touted but seldom enacted in education at scale. Data Analytics and Adaptive Learning brings together a compelling set of experts that provide novel and research-informed insights into contemporary education spaces.”
—Professor Shane Dawson, Executive Dean
Education Futures, University of South
Australia

“Moskal, Dziuban, and Picciano challenge the reader to keep the student
at the center and imagine how data analytics and adaptive learning can
be mutually reinforcing in closing the gap between students from different
demographics.”
—Susan Rundell Singer, President of St. Olaf
College and Professor of Biology

“We are currently living in a digital age where higher education institu-
tions have an abundance of accessible data. This book contains a series of
chapters that provide insight and strategies for using data analytics and
adaptive learning to support student success and satisfaction.”
—Norman Vaughan, Professor of Education,
Mount Royal University, Calgary, Alberta,
Canada
“An important book that comes at a critical moment in higher education.
We are swimming in an ocean of data and this book from some of the
country’s top researchers and practitioners will help us make sense of it
and put it in the service of student success.”
—Thomas Cavanagh, PhD, Vice Provost
for Digital Learning, University of Central
Florida

“Data Analytics and Adaptive Learning is an excellent addition to the canon of literature in this field. The book offers several valuable perspectives and innovative ways of approaching both new and old problems to improve organizational outcomes.”
—Jeffrey S. Russell, P.E., PhD,
Dist.M.ASCE, NAC, F.NSPE, Vice
Provost for Lifelong Learning, Dean for
Div. of Continuing Studies, University of
Wisconsin-Madison

“Data is used to customize experiences from buying an item to booking travel. What about learning—a uniquely human endeavor? This book contextualizes the complex answers to that question, shedding light on areas with promise: learning analytics, adaptive learning, and the use of big data.”
—Dale Whittaker, Senior Program Officer
in Post-Secondary Success, Bill and Melinda
Gates Foundation

“Data Analytics and Adaptive Learning presents a timely and wide-ranging consideration of the progress of adaptive learning and analytics in levelling the educational playing field, while providing necessary cautions regarding the drawing of too many conclusions in what is still a nascent area.”
—Frank Claffey, Chief Product Officer,
Realizeit Learning
“Data Analytics and Adaptive Learning provides insights and best
practices from leaders in digital learning who outline considerations for
strategies, change management, and effective decision-making related to
data. As higher education expands its work in digital learning and utiliz-
ing data for decisions, this book is a must read!”
—Connie Johnson, Chancellor, Colorado
Technical University

“Data analytics and adaptive learning comprise two of the most rel-
evant educational challenges. This book provides excellent research
approaches and analysis to answer practical questions related to digital
education involving teachers and learners.”
—Josep M Duart and Elsa Rodriguez,
Editor-in-Chief & Editorial Manager of
the International Journal of Educational
Technology in Higher Education, the
Universitat Oberta de Catalunya

“Data, analytics, and machine learning are impacting all jobs and indus-
tries. For education, the opportunities are immense, but so are the chal-
lenges. This book provides an essential view into the possibilities and
pitfalls. If you want to use data to impact learners positively, this book is
a must-read.”
—Colm Howlin, PhD, Chief Data Scientist,
ReliaQuest

“Data Analytics and Adaptive Learning helps the educational community broaden its understanding of these two technology-based opportunities to enhance education, looking at very different complementary contributions. Congratulations to the authors.”
—Alvaro Galvis, Professor at University of
Los Andes, Bogotá, Columbia

“The menus, dashboards, and pathways to effective data analytics and adaptive learning can be found in this massively timely and hugely impactful juggernaut.”
—Curtis J. Bonk, Professor of Instructional
Systems Technology and adjunct in the
School of Informatics, Indiana University
Bloomington
“Adaptive learning and learning analytics—should we use both or choose
one? Do they imply organizational transformation? What works and what
does not? In my opinion, the book is valuable reading for those seeking
the answers to their questions.”
—Maria Zajac, Associate Professor
(emeritus) at Pedagogical University Cracow
and SGH Warsaw School of Economics,
Certified Instructional Designer, Poland

“Data analytics and adaptive learning platforms can direct support as needed to at-risk students, helping to create more equitable outcomes. This volume contains a timely collection of studies that examine the impact of these approaches.”
—John Kane, Director of the Center for
Excellence in Learning and Teaching at
SUNY Oswego

“This book shines a spotlight on the potential for data analytics, adaptive
learning, and big data to transform higher education. The volume lights
the way for those brave enough to embrace a new paradigm of teaching
and learning that enacts a more equitable and person-centered experience
for all learners.”
—Paige McDonald, Associate Professor and
Vice Chair, Department of Clinical Research
and Leadership, The George Washington
School of Medicine and Health Sciences

“Deftly weaving adaptive learning and analytic theory and practice together, the authors offer numerous examples of how these methods can help us address academic barriers to student success. Their work significantly strengthens the fabric of knowledge on how adaptive learning can benefit students (and faculty).”
—Dale P. Johnson, Director of Digital
Innovation, University Design Institute,
Arizona State University
“The authors of this book convince us that the concepts of data analyt-
ics and adaptive learning are tightly integrated, and the book provides
insights on different aspects related to utilization of intelligent technolo-
gies and how to approach the learning cycle at different stages.”
—Eli Hustad, Professor in Information
Systems, The University of Agder

“Student success is a fundamental mission for all educational institutions. This book explores the current opportunities within analytics, adaptive learning, and organizational transformation to generate wide-scale and equitable learning outcomes.”
—John Campbell, Associate Professor,
Higher Education Administration, School of
Education, West Virginia University

“This book brings together top scholars making the connection between
data analytics and adaptive learning, all while keeping pedagogical theory
on the central stage. It’s a powerhouse driven in equal parts by excellence
and innovation providing vision for educators on the quest for learner suc-
cess across the spectrum.”
—Kimberly Arnold, Director of Learning
Analytics Center of Excellence, University of
Wisconsin-Madison

“Once again, a dream team of faculty, researchers, thought leaders, and practitioners come up with this defining, must-read book for every institutional leader and teacher that is invested in the success of every student. This book, based on years of research and practice, gives the ‘how-to’.”
—Manoj Kulkarni, CEO at Realizeit
Learning

“The chapters in this book bring a desperately needed clarity and a depth of understanding to the topic of data and analytics, adaptive learning and learning more generally in higher education. You will leave this book smarter about these topics than you started, and both you and higher education will be the beneficiaries.”

—Glenda Morgan, Analyst, Phil Hill & Associates
DATA ANALYTICS AND
ADAPTIVE LEARNING

Data Analytics and Adaptive Learning offers new insights into the use
of emerging data analysis and adaptive techniques in multiple learning
settings. In recent years, both analytics and adaptive learning have helped
educators become more responsive to learners in virtual, blended, and
personalized environments. This set of rich, illuminating, international
studies spans quantitative, qualitative, and mixed-methods research in
higher education, K–12, and adult/continuing education contexts. By
exploring the issues of definition and pedagogical practice that permeate
teaching and learning and concluding with recommendations for the
future research and practice necessary to support educators at all levels,
this book will prepare researchers, developers, and graduate students
of instructional technology to produce evidence for the benefits and
challenges of data-driven learning.

Patsy D. Moskal is Director of the Digital Learning Impact Evaluation in the Research Initiative for Teaching Effectiveness at the University of Central Florida, USA.

Charles D. Dziuban is Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida, USA.

Anthony G. Picciano is Professor of Education Leadership at Hunter College and Professor in the PhD program in Urban Education at the City University of New York Graduate Center, USA.
DATA ANALYTICS AND
ADAPTIVE LEARNING
Research Perspectives

Edited by Patsy D. Moskal, Charles D. Dziuban, and Anthony G. Picciano
Designed cover image: © Getty Images
First published 2024
by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2024 selection and editorial matter, Patsy D. Moskal, Charles D. Dziuban,
and Anthony G. Picciano; individual chapters, the contributors
The right of Patsy D. Moskal, Charles D. Dziuban, and Anthony G. Picciano
to be identified as the authors of the editorial material, and of the authors for
their individual chapters, has been asserted in accordance with sections 77
and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or
utilised in any form or by any electronic, mechanical, or other means, now
known or hereafter invented, including photocopying and recording, or in
any information storage or retrieval system, without permission in writing
from the publishers.
Trademark notice: Product or corporate names may be trademarks or
registered trademarks, and are used only for identification and explanation
without intent to infringe.
Library of Congress Cataloging-in-Publication Data
Names: Moskal, Patsy D., editor. | Dziuban, Charles, editor. | Picciano,
Anthony G., editor.
Title: Data analytics and adaptive learning : research perspectives /
Edited by Patsy D. Moskal, Charles D. Dziuban, and Anthony G. Picciano.
Description: New York, NY : Routledge, 2024. | Includes bibliographical
references and index.
Identifiers: LCCN 2023020017 (print) | LCCN 2023020018 (ebook) | ISBN
9781032150390 (hardback) | ISBN 9781032154701 (paperback) | ISBN
9781003244271 (ebook)
Subjects: LCSH: Education—Data processing. | Learning—Research. |
Educational technology—Research. | Computer-assisted instruction. |
Blended learning—Research.
Classification: LCC LB1028.43 .D32 2024 (print) | LCC LB1028.43 (ebook) |
DDC 370.72—dc23/eng/20230505
LC record available at https://lccn.loc.gov/2023020017
LC ebook record available at https://lccn.loc.gov/2023020018
ISBN: 978-1-032-15039-0 (hbk)
ISBN: 978-1-032-15470-1 (pbk)
ISBN: 978-1-003-24427-1 (ebk)
DOI: 10.4324/9781003244271
Typeset in Sabon
by Deanta Global Publishing Services, Chennai, India
To colleagues who did all they could during the COVID-19
pandemic to provide instruction and services to students.
CONTENTS

About the Editors xvi
Preface xix
Acknowledgements xx
Contributors xxi

SECTION I
Introduction 1

1 Data analytics and adaptive learning: increasing the odds 3
Philip Ice and Charles Dziuban

SECTION II
Analytics 21

2 What we want versus what we have: Transforming teacher performance analytics to personalize professional development 23
Rhonda Bondie and Chris Dede

3 System-wide momentum 38
Tristan Denley

4 A precise and consistent early warning system for identifying at-risk students 60
Jianbin Zhu, Morgan C. Wang, and Patsy Moskal

5 Predictive analytics, artificial intelligence and the impact of delivering personalized supports to students from underserved backgrounds 78
Timothy M. Renick

6 Predicting student success with self-regulated behaviors: A seven-year data analytics study on a Hong Kong University English Course 92
Dennis Foung, Lucas Kohnke, and Julia Chen

7 Back to Bloom: Why theory matters in closing the achievement gap 110
Alfred Essa

8 The metaphors we learn by: Toward a philosophy of learning analytics 128
W. Gardner Campbell

SECTION III
Adaptive Learning 145

9 A cross-institutional survey of the instructor use of data analytics in adaptive courses 147
James R. Paradiso, Kari Goin Kono, Jeremy Anderson, Maura Devlin, Baiyun Chen, and James Bennett

10 Data analytics in adaptive learning for equitable outcomes 170
Jeremy Anderson and Maura Devlin

11 Banking on adaptive questions to nudge student responsibility for learning in general chemistry 189
Tara Carpenter, John Fritz, and Thomas Penniston

12 Three-year experience with adaptive learning: Faculty and student perspectives 211
Yanzhu Wu and Andrea Leonard

13 Analyzing question items with limited data 230
James Bennett, Kitty Kautzer, and Leila Casteel

14 When adaptivity and universal design for learning are not enough: Bayesian network recommendations for tutoring 242
Catherine A. Manly

SECTION IV
Organizational Transformation 263

15 Sprint to 2027: Corporate analytics in the digital age 265
Mark Jack Smith and Charles Dziuban

16 Academic digital transformation: Focused on data, equity, and learning science 280
Karen Vignare, Megan Tesene, and Kristen Fox

SECTION V
Closing 301

17 Future technological trends and research 303
Anthony G. Picciano

Index 323
ABOUT THE EDITORS

Patsy D. Moskal is Director of the Digital Learning Impact Evaluation in the Research Initiative for Teaching Effectiveness at the University of
Central Florida (UCF), USA. Since 1996, she has served as the liaison for
faculty research involving digital learning technologies and in support of
the scholarship of teaching and learning at UCF. Her research interests
include the use of adaptive learning technologies and learning analytics
toward improving student success. Patsy specializes in statistics, graphics,
program evaluation, and applied data analysis. She has extensive experi-
ence in research methods including survey development, interviewing, and
conducting focus groups and frequently serves as an evaluation consult-
ant to school districts, and industry and government organizations. She
has served as a co-principal investigator on grants from several govern-
ment and industrial agencies including the National Science Foundation,
the Alfred P. Sloan Foundation, and the Gates Foundation-funded Next
Generation Learning Challenges (NGLC). Patsy frequently serves as a
proposal reviewer for conferences and journals, and is a frequent special
editor of the Online Learning journal, in addition to serving as a reviewer
for the National Science Foundation (NSF) and the U.S. Department of
Education (DoE) proposals.
In 2011, she was named an Online Learning Consortium Fellow
“In recognition of her groundbreaking work in the assessment of
the impact and efficacy of online and blended learning.” She has co-
authored numerous articles and chapters on blended, adaptive, and
online learning and is a frequent presenter at conferences and to
other researchers. Patsy is active in both EDUCAUSE and the Online Learning Consortium (OLC). She serves on the EDUCAUSE Analytics & Research Advisory Group and co-leads the EDUCAUSE Evidence of
Impact Community Group. She currently serves on the OLC Board of
Directors as its President.

Charles D. Dziuban is Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida (UCF), USA, where he
has been a faculty member since 1970 teaching research design and sta-
tistics as well as the founding director of the university’s Faculty Center
for Teaching and Learning. He received his PhD from the University
of Wisconsin, USA. Since 1996, he has directed the impact evaluation
of UCF’s distributed learning initiative examining student and faculty
outcomes as well as gauging the impact of online, blended, and adap-
tive courses on students and faculty members at the university. Chuck
has published in numerous journals including Multivariate Behavioral
Research, The Psychological Bulletin, Educational and Psychological
Measurement, the American Education Research Journal, Phi Delta
Kappan, The Internet in Higher Education, the Journal of Asynchronous
Learning Networks (now Online Learning), The EDUCAUSE Review,
e-Mentor, The International Journal of Technology in Higher Education,
Current Issues in Emerging eLearning, The International Journal of
Technology Enhanced Learning, and the Sloan-C View. He has received
funding from several government and industrial agencies including
the Ford Foundation, Centers for Disease Control, National Science
Foundation and the Alfred P. Sloan Foundation.

In 2000, Chuck was named UCF’s first ever Pegasus Professor for extraor-
dinary research, teaching, and service, and in 2005 received the honor of
Professor Emeritus. In 2005, he received the Sloan Consortium (now the
Online Learning Consortium) award for Most Outstanding Achievement
in Online Learning by an Individual. In 2007, he was appointed to the
National Information and Communication Technology (ICT) Literacy
Policy Council. In 2010, he was made an inaugural Online Learning
Consortium Fellow. In 2011, UCF established the Chuck D. Dziuban
Award for Excellence in Online Teaching in recognition of his impact
on the field. UCF made him the inaugural collective excellence awardee
in 2018. Chuck has co-authored, co-edited, and contributed to numer-
ous books and chapters on blended and online learning and is a regular
invited speaker at national and international conferences and universities.
Currently he spends most of his time as the university representative to the
Rosen Foundation working on the problems of educational and economic
inequality in the United States.

Anthony G. Picciano holds multiple faculty appointments at the City University of New York’s Hunter College, USA, Graduate Center, and the
School of Professional Studies. He has also held administrative appoint-
ments at the City University and State University of New York includ-
ing that of Vice President and Deputy to the President at Hunter College.
He assisted in the establishment of the CUNY PhD Program in Urban
Education and served as its Executive Officer for 10 years (2007–2018).
Picciano’s research interests include education leadership, education policy,
online and blended learning, multimedia instructional models, and research
methods. He has authored 20 books including one novel and numerous
articles including Educational Leadership and Planning for Technology
which currently is in its fifth edition (Pearson Education).

He has been involved in major grants from the US Department of Education, the National Science Foundation, IBM, and the Alfred P. Sloan Foundation.
He was a member of a research project funded by the US Department of
Education—Institute for Education Sciences, the purpose of which was
to conduct a meta-analysis on “what works” in postsecondary online
education (2017–2019). In 1998, Picciano co-founded CUNY Online, a
multi-million-dollar initiative funded by the Alfred P. Sloan Foundation
that provides support services to faculty using the Internet for course
development. He was a founding member and continues to serve on the
Board of Directors of the Online Learning Consortium (formerly the Sloan
Consortium). His blog started in 2009 has averaged over 600,000 visitors
per year. Picciano has received wide recognition for his scholarship and
research including being named the 2010 recipient of the Alfred P. Sloan
Consortium’s (now the Online Learning Consortium) National Award for
Outstanding Achievement in Online Education by an Individual.
Visit his website at: anthonypicciano.com
PREFACE

The vast amount of data analytics available through students’ engagement with online instructional programs and coursework such as adaptive
learning provides rich research opportunities on how best to use these
systems to improve students’ success in education. Data Analytics and
Adaptive Learning: Research Perspectives centers itself on recent origi-
nal research in the area conducted by the most-talented scholars in the
field with a focus on data analytics or adaptive learning techniques, spe-
cifically. In addition, several chapters focus on the organizational change
aspect that these topics influence. The past decade has seen advances in
instructional technology in adaptive and personalized instruction, virtual
learning environments, and blended learning, all of which have been aug-
mented by analytics and its companion big data software. Since 2020, the
coronavirus pandemic has resulted in a remarkable investment in myriad
online learning initiatives as education policymakers and administrators
pivoted to virtual teaching to maintain access to their academic programs.
Every indication is that when a new (post-pandemic) normal in education
emerges, it will be more heavily augmented by instructional technologies
including data analytics and adaptive learning. The strength of technol-
ogy is that it constantly changes, grows, and integrates into society and
its institutions. It is an ideal time to collect and view the evidence on the
main topic of this book to determine how it is stimulating advances in our
schools, colleges, and universities.


ACKNOWLEDGEMENTS

We are grateful to so many who helped make Data Analytics and Adaptive
Learning: Research Perspectives a reality. First, our colleagues at the
Online Learning Consortium, both past and present, for providing an
environment that fosters inquiry as presented in our work. Second, Dan
Schwartz of Taylor & Francis and his staff provided invaluable design,
editorial, and production support for this book. Third, we are most grate-
ful to Annette Reiner and Adyson Cohen, Graduate Research Assistants
at the Research Initiative for Teaching Effectiveness of the University of
Central Florida, for the untold hours of editorial work on this book. There
is no real way we can properly thank them for their efforts on our behalf.
Lastly and most importantly, however, to the authors of the outstanding
chapters found herein: this is not our book, it is a celebration of their out-
standing research in data analytics and adaptive learning.
Heartfelt thanks to you all.
Patsy, Chuck, and Tony


CONTRIBUTORS

Jeremy Anderson executes a vision for coupling digital transformation and data-informed decision-making as Vice President of Learning Innovation,
Analytics, and Technology at Bay Path University, USA. He also is build-
ing an incremental credentialing strategy to create alternative pathways to
good-paying jobs. Prior to this role, he advanced business intelligence, data
governance, and original student success research as the inaugural Associate
Vice Chancellor of Strategic Analytics at Dallas College. Jeremy publishes
and presents nationally on analytics and teaching and learning innovation.
He holds an EdD in interdisciplinary leadership (Creighton University), an
MS in educational technology, and a BA in history education.

James Bennett has been promoting adaptive learning in post-secondary education for over a decade. His work has included developing new pro-
cesses for evaluating adaptive learning assessments and the implementation
of “learning nets” that employ adaptive learning systems across multiple
courses and across all courses in an entire program. In addition to his par-
ticipation in large-scale adaptive learning projects, James has authored a
number of publications on adaptive learning topics.

Rhonda Bondie is Associate Professor and Hunter College director of the Learning Lab and Lecturer at the Harvard Graduate School of Education,
USA. Rhonda began pursuing inclusive teaching as an artist-in-residence
and then spent over 20 years in urban public schools as both a special and
general educator. Rhonda’s co-authored book, Differentiated Instruction
Made Practical, translated into Portuguese, is used by teachers in more


xxii Contributors

than 30 countries to support their work of ensuring all learners are thriv-
ing every day. Rhonda’s research examines how teachers develop inclusive
teaching practices and differentiated instruction expertise throughout their
career using new technologies.

Gardner Campbell is Associate Professor of English at Virginia Commonwealth University (VCU), USA, where for nearly three years he
also served as Vice Provost for Learning Innovation and Student Success
and Dean of VCU’s University College. His publication and teaching areas
include Milton and Renaissance studies, film studies, teaching and learn-
ing technologies, and new media studies. Prior to joining VCU, Campbell
worked as a professor and an administrator at Virginia Tech, Baylor
University, and the University of Mary Washington. Campbell has pre-
sented his work in teaching and learning technologies across the United
States and in Canada, Sweden, Italy, and Australia.

Tara Carpenter teaches general chemistry at UMBC. In teaching large, introductory courses, she is interested in using evidence-based pedagogical
approaches that will enhance the learning experience of today’s students.
Incorporating online pedagogies has proven to be critical to her course
design. One of her primary goals is to help her students, who are primarily
freshmen, make the transition from high school to college by placing par-
ticular emphasis on utilizing effective learning strategies. She is intrigued
by the science of learning and investigating student motivation as they take
responsibility for the degree that they will earn.

Leila Casteel is a nationally certified family nurse practitioner serving over 27 years in both clinical practice and academia. She received her BSN,
MSN, and DNP from the University of South Florida in Tampa, USA,
and strives to integrate innovation into all aspects of learning, teaching,
and health care. She currently serves as the Associate Vice President of
Academic Innovation for Herzing University.

Baiyun Chen is Senior Instructional Designer at the Center for Distributed Learning at the University of Central Florida, USA. She leads the
Personalized Adaptive Learning team, facilitates faculty professional
development programs, and teaches graduate courses on Instructional
Systems Design. Her team works in close collaboration with teaching fac-
ulty members to design and develop adaptive learning courses by utiliz-
ing digital courseware to personalize instruction that maximizes student
learning. Her research interests focus on using instructional strategies
in online and blended teaching in the STEM disciplines, professional development for teaching online, and the application of adaptive technologies in education.

Julia Chen is Director of the Educational Development Centre at The Hong Kong Polytechnic University and courtesy Associate Professor at
the Department of English and Communication. Her research interests
include leveraging technology for advancing learning, English Across
the Curriculum, and using learning analytics for quality assurance and
enhancement. Julia has won numerous awards, including her university’s
President’s Award for Excellent Performance, First Prize of the Best Paper
award in Learning Analytics, the Hong Kong University Grants Committee
(UGC) Teaching Award, and the QS Reimagine Education Awards Silver
Prize in the category of Breakthrough Technology Innovation in Education.

Chris Dede is Senior Research Fellow at the Harvard Graduate School of Education, USA, and was for 22 years its Timothy E. Wirth Professor in
Learning Technologies. His fields of scholarship include emerging tech-
nologies, policy, and leadership. Chris is a Co-Principal Investigator and
Associate Director of Research of the NSF-funded National Artificial
Intelligence Institute in Adult Learning and Online Education. His most
recent co-edited books include: Virtual, Augmented, and Mixed Realities
in Education; Learning engineering for online education: Theoretical
contexts and design-based examples; and The 60-Year Curriculum: New
Models for Lifelong Learning in the Digital Economy.

Tristan Denley currently serves as Deputy Commissioner for Academic Affairs and Innovation at the Louisiana Board of Regents. His widely
recognized work that combines education redesign, predictive analytics,
cognitive psychology and behavioral economics to implement college com-
pletion initiatives at a state-wide scale has significantly improved student
success and closed equity gaps in several states. He also developed and
launched the nexus degree, the first new degree structure in the United
States in more than 100 years.

Maura Devlin is Dean of Institutional Effectiveness and Accreditation at Bay Path University, USA, overseeing curricular quality, assessment of
student learning, accreditation, and compliance initiatives. She is Project
Director of a Title III Department of Education grant to develop a Guided
Pathways framework and reframe student support initiatives, undergirded
by technology platforms. She holds a PhD in educational policy and lead-
ership from UMass Amherst and has published on holistic approaches to
assessment, data-driven course design, adaptive learning, and promoting
student engagement in online environments. She is passionate about equitable access to and completion of quality degree programs.

Charles (Chuck) Dziuban is Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida (UCF), USA, and
Coordinator of Educational Programs for the Harris Rosen Foundation.
He specializes in advanced data analysis methods with an emphasis on
complexity and how it impacts decision-making. Chuck has published
in numerous journals including Multivariate Behavioral Research, the
Psychological Bulletin, Educational and Psychological Measurement,
the American Education Research Journal, the International Journal of
Technology in Higher Education, and the Internet in Education. His meth-
ods for determining psychometric adequacy have been featured in both the
SPSS and the SAS packages.

Alfred Essa is CEO of Drury Lane Media, LLC, Founder of the AI-Learn
Open Source Project, and author of Practical AI for Business Leaders,
Product Managers, and Entrepreneurs. He has served as CIO of MIT’s
Sloan School of Management, Director of Analytics and Innovation at
D2L, and Vice President of R&D at McGraw-Hill Education. His current
research interests are in computational models of learning and applying AI
Large Language Models to a variety of educational challenges, including
closing the achievement gap.

Dennis Foung is a writing teacher at the School of Journalism, Writing and Media at The University of British Columbia, Canada. He has a keen inter-
est in computer-assisted language learning and learning analytics.

Kristen Fox has spent 20 years working at the intersection of education, innovation, digital learning equity, and workforce development and is a
frequent author and presenter. She has been an advisor to institutions and
ed-tech companies. Her work has included the development of a frame-
work for equity-centered ed-tech products and she has published national
research on the impact of the pandemic on digital learning innovation.
Kristen previously worked as a Special Advisor at Northeastern University,
where she led innovation initiatives. Prior to that, she was a Managing Vice
President at Eduventures where she led research and advised institutions
on online learning.
John Fritz is Associate Vice President for Instructional Technology and New
Media at the University of Maryland, Baltimore County, USA, where he
is responsible for leading UMBC’s strategic efforts in teaching, learning,
and technology. As a learning analytics researcher and practitioner, Fritz
focuses on leveraging student use of digital technologies as a plausible proxy for engagement that can nudge responsibility for learning. Doing so
also helps identify, support, and scale effective pedagogical practices that
can help. As such, Fritz attempts to find, show, and tell stories in data that
can inspire the head and heart of students and faculty for change.

Colm Howlin is Chief Data Scientist at a leading cybersecurity company, where he leads a team applying machine learning and artificial intelligence
to solve complex business and technology problems. He has nearly 20
years of experience working in research, data science and machine learn-
ing, with most of that spent in the educational technology space.

Phil Ice is currently Senior Product Manager for Data Analytics and
Intelligent Experiences at Anthology. In this capacity, he works with various
parts of the organization to create predictive and prescriptive experiences
to improve learning outcomes, engagement, and institutional effective-
ness. Prior to joining Anthology, Phil worked as faculty at West Virginia
University (WVU) and University of North Carolina, Charlotte (UNCC),
Vice President of R&D at APUS, Chief Learning Officer for Mirum, and
Chief Solutions Officer at Analytikus. Regardless of the position, he has
remained passionate about using analytics and data-driven decision-mak-
ing to improve Higher Education.

Kitty Kautzer is Chief Academic Officer at Herzing University, USA, and has previously been appointed as Provost of Academic Affairs. Prior to
joining Herzing University, Kautzer served as the Chief Academic Officer
at an educational corporation. She also had an 11-year tenure with another
institution, where she held several leadership positions, including Vice
President of academic affairs and interim Chief Academic Officer.

Lucas Kohnke is Senior Lecturer at the Department of English Language Education at The Education University of Hong Kong. His research interests
include technology-supported teaching and learning, professional develop-
ment using information communication technologies, and second language
learning/acquisition. His research has been published in journals such as
the Journal of Education for Teaching, Educational Technology & Society,
RELC Journal, The Asia-Pacific Education Researcher, and TESOL Journal.

Kari Goin Kono is a senior user experience designer with over 10 years of
experience in online learning and designing digital environments within
higher education. She has an extensive research agenda geared toward
supporting faculty with inclusive teaching practices including digital
accessibility and co-construction as a practice in equitable design. She has been published in the journals Online Learning, Current Issues in
Emerging ELearning, the Journal of Interactive Technology and Pedagogy,
and the NW Journal of Teacher Education.

Andrea Leonard spent nearly a decade designing and teaching both online
and hybrid chemistry courses at the University of Louisiana at Lafayette
before joining the Office of Distance Learning as an Instructional Designer
in 2019. Andrea’s education and certification include a BSc in chemis-
try from UL Lafayette and an MSc in chemistry with a concentration in
organic chemistry from Louisiana State University. She is also a certified
and very active Quality Matters Master Reviewer. Her research interests
include the discovery and application of new adaptive and interactive
teaching technologies.

Catherine A. Manly is a Postdoctoral Researcher at the City University of New York, USA, and a Professor of Practice in higher education at Bay
Path University, USA. She earned her PhD in higher education from the
University of Massachusetts Amherst. She brings a social justice lens to
quantitative investigation of transformational innovation. Her research
aims to improve affordable postsecondary access and success for students
underserved by traditional higher education, particularly through the
changes possible because of online and educational technologies. Her work
has been published in journals such as Research in Higher Education and
Review of Higher Education.

Patsy D. Moskal is Director of the Digital Learning Impact Evaluation at the University of Central Florida (UCF), USA. Since 1996, she has been a liai-
son for faculty research of distributed learning and teaching effectiveness
at UCF. Patsy specializes in statistics, graphics, program evaluation, and
applied data analysis. She has extensive experience in research methods
including survey development, interviewing, and conducting focus groups
and frequently serves as an evaluation consultant to school districts, and
industry and government organizations. Currently, her research focuses on
the use of learning analytics, adaptive learning, and digital technologies to
improve student success.

James R. Paradiso is Associate Instructional Designer and the Affordable Instructional Materials (AIM) Program Coordinator for Open Education
at the University of Central Florida, USA. His main areas of research and
professional specialization are open education and adaptive learning—
with a particular focus on devising and implementing strategies to scale
open educational practices and engineering data-driven learning solutions across multiple internal and external stakeholder communities.

Thomas Penniston leverages institutional, academic, and learning analytics to inform course redesigns and improve student engagement and suc-
cess. He earned his PhD through the University of Maryland, Baltimore
County’s (UMBC) Language, Literacy, and Culture program, and has
extensive experience with quantitative, qualitative, and mixed methods
designs. Penniston has been involved in education for over two decades,
teaching students ranging in age and skill-level from early elementary to
doctoral, in both domestic and international settings (including as a Peace
Corps Volunteer in Moldova). He has worked in online and blended learn-
ing in different capacities for the majority of those years as an instructor,
builder, and administrator.

Anthony G. Picciano is Professor at the City University of New York’s Hunter College and Graduate Center, USA. He has also held administrative
appointments including that of Senior Vice President and Deputy to the
President at Hunter College. He has authored or co-authored 20 books,
numerous articles, and edited 10 special journal editions. He was a founder
and continues to serve on the Board of Directors of the Online Learning
Consortium. Picciano has received wide recognition for his scholarship and
research including being named the 2010 recipient of the Alfred P. Sloan
Consortium’s National Award for Outstanding Achievement in Online
Education by an Individual.

Timothy M. Renick is the founding Executive Director of the National Institute for Student Success and Professor of Religious Studies at Georgia
State University, USA. Between 2008 and 2020, he directed the student suc-
cess efforts of the university, overseeing a 62% improvement in graduation
rates and the elimination of all equity gaps based on students’ race, ethnic-
ity, or income level. Renick has testified on strategies for helping university
students succeed before the US Senate and has twice been invited to speak
at the White House.

Mark Jack Smith is Vice President of Human Resources at Petroleum Geo-Services (PGS) in Oslo, Norway. He has extensive experience in developing
and leading Human Resources (HR) teams, talent management processes,
and knowledge management initiatives. Mark has also contributed to learn-
ing and development through his research and publications, including a
chapter in Blended Learning Research Perspectives, Volume 3 and a lecture
at the Learning 2020 Conference. Prior to joining PGS, Mark held senior
HR and knowledge management positions at PricewaterhouseCoopers and McKinsey & Company. Mark earned an MSc degree in information
science from the Pratt Institute.

Megan Tesene is an advocate and higher education strategist who partners with a broad range of postsecondary leaders and constituencies across the
United States to support public universities in the adoption and imple-
mentation of evidence-based teaching practices and educational technolo-
gies. Her work centers on enhancing pedagogy, improving accessibility,
and building institutional capacities to support equity in higher education.
Megan is a social scientist by training with applied expertise in program
evaluation, faculty learning communities, and equitable digital learning
initiatives. She previously worked at Georgia State University, where she
managed interdisciplinary research projects leveraging educational tech-
nologies across undergraduate gateway courses.

Karen Vignare is a strategic innovator leveraging emerging technologies to improve access, equitable success, and flexibility within higher educa-
tion. Karen engages a network of public research universities committed to
scaling process improvement, effective use of educational technology, and
using data to continuously improve learning outcomes. Karen previously
served as a Vice Provost, at the University of Maryland University College,
the largest online public open access institution where she led innova-
tions in adaptive learning, student success and analytics. She has published
extensively on online learning, analytics, and open educational resources.

Morgan C. Wang received his PhD from Iowa State University in 1991.
He is the founding Director of the Data Science Program and Professor
of Statistics and Data Science at the University of Central Florida, USA.
He has published one book (Integrating Results through Meta-Analytic
Review, 1999), and over 100 articles in refereed journals and conference
proceedings on topics including big data analytics, meta-analysis, com-
puter security, business analytics, healthcare analytics, and automatic
intelligent analytics. He is an elected member of International Statistical
Association, data science advisor of American Statistical Association, and
member of American Statistical Association and International Chinese
Statistical Association.

Yanzhu Wu has over 15 years of experience in the instructional design and technology field. She earned her Master’s and Ph.D. in Instructional Design
and Technology from Virginia Tech. She currently works as an Instructional
Designer for the Office of Distance Learning at the University of Louisiana
at Lafayette. As an instructional designer, she is passionate about creating engaging and effective learning experiences that meet the needs of various
learners. Prior to joining the UL, she served as assistant director for the
Office of Instructional Technology at Virginia Tech for several years.

Jianbin Zhu is a senior biostatistician in the Research Institute of AdventHealth and a Ph.D. candidate in the Statistics and Data Science
Department of the University of Central Florida. He received his first
Ph.D. degree in Engineering Mechanics in 2011 and currently is seeking
another Ph.D. degree in Big Data Analytics. He has ten years’ experience
in statistics and data science. His research areas are statistical analysis,
big data analytics, machine learning and predictive modeling.
SECTION I

Introduction



1
DATA ANALYTICS AND
ADAPTIVE LEARNING
Increasing the odds

Philip Ice and Charles Dziuban

Data analytics and adaptive learning are two critical innovations gaining a
foothold in higher education that we must continue to support, interrogate,
and understand if we are to bend higher education’s iron triangle and realize
the sector’s longstanding promise of catalyzing equitable success. The oppor-
tunity and peril of these pioneering 21st century technologies (made availa-
ble by recent advances in the learning sciences, high performance computing,
cloud data storage and analytics, AI, and machine learning) requires our
full engagement and study. Without the foundational and advanced insights
compiled in this volume and continued careful measurement, transparency
around assumptions and hypotheses testing, and open debate, we will fail to
devise and implement both ethical and more equitable enhancements to the
student experience – improvements that are both urgent and vital for real-
izing higher education’s promise on delivering greater opportunity and value
to current and future generations of students.
—Rahim S. Rajan, Deputy Director,
Postsecondary Success, Bill & Melinda Gates
Foundation

This book contains chapters on data analytics and adaptive learning—two approaches used to maximize student success by removing obstacles
and increasing the favorable odds. Although considered separately, these
approaches are inextricably bound to each other in a complex educational
environment, realizing that information from data will improve the learn-
ing process. Analytics has an adaptive component, and the interactive
nature of adaptive learning is fundamental to analytics (Dziuban et al., 2020). They come from common pedagogical theories. Both practices address a fundamental problem in higher education. Because of the severe asymmetry (Taleb, 2012, 2020) in virtually all aspects of student life, the playing field is not even close to even.

The asymmetry problem: it is pervasive


Findings from the Pell Institute (2021) show that students living in the
bottom economic quartile have an 11 percent chance of completing a col-
lege degree. In terms of opportunity costs, the odds against these youth
are 9:1. On the other hand, students living in the top economic quartile
have a 77 percent chance of completing college. Based on financial posi-
tion alone, the odds in favor of their college success are 3:1. Friedman
and Laurison (2019) characterized this as “the following tail wind of
wealth advantage,” which positions people from the privileged class as
seven times more likely to land in high-paying, elite positions; they con-
stitute the primary candidate pool for job entry because of the inequitable
economic, cultural, social, and educational status that they inherit. In a
recent study reported in the Scientific American, Boghosian (2019) mod-
els free market economies, showing that resources will always flow from
the poor to the rich unless markets compensate for wealth advantage by
redistribution. This economic asymmetry, according to him, is the root
cause of opportunity and educational inequity.
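
As a brief aside on the arithmetic, the odds quoted above follow from the standard conversion between a probability and odds; a minimal worked sketch using the top-quartile completion figure reported by the Pell Institute:

\[
\text{odds in favor} \;=\; \frac{p}{1-p}, \qquad p = 0.77 \;\Rightarrow\; \frac{0.77}{0.23} \approx 3.3, \ \text{i.e., roughly } 3{:}1.
\]

The same conversion, applied to the bottom-quartile completion rate, yields the long odds against completion cited above.
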
Additionally, the cost of postsecondary education is crippling. Hess (2020)
puts the accumulated college debt in the United States at more than 1.7 tril-
lion dollars, which, primarily, encumbers the bottom economic quartile. If
that debt were a gross domestic product (GDP) it would be the thirteenth
largest economy in the world. Mitchell (2020) presents evidence that families
in the lowest economic quartile carry the largest proportion of the encum-
brance. Mitchell further demonstrates for the period 2016–2021 that the
largest increase (43%) impacted Black students who are overrepresented in
the lowest economic quartile. A canon of recent literature documents the
unequal opportunity (asymmetry) in our country and educational system
(Benjamin, 2019; Eubanks, 2018; Jack, 2019; McGhee, 2021; Mullainathan
& Shafir, 2013; O’Neil, 2016; Taleb, 2020; Wilkerson, 2011).

Analytics and adaptive learning: possible solutions

Data analytics: I see you


Data analytics has a long history in the corporate sector especially in cus-
tomer relations management where companies realized that understand-
ing client interests and buying preferences increased their conditional
profit margins (Chai et al., 2020; Corritore et al., 2020). Furthermore, universities all over the world are exploring analytics to improve student
success. In fact, that process is a priority for educational and professo-
rial organizations such as the Digital Analytics Association (n.d.) and the
Society for Learning Analytics Research-SoLAR (2021), both of which
emphasize the importance of analytics for data scientists. Moreover, an
EDUCAUSE Horizon Report: Data and Analytics Edition (2022) features
these topics: Artificial Intelligence, Analytics, Business Intelligence, Data
Management Planning, Data Privacy, Data Security, Data Warehouse,
Diversity, Equity and Inclusion, Digital Transformation, Enterprise
Information Systems, and Student Information Systems. This list by its
sheer comprehensiveness shows how analytics has intersected with all
aspects of higher education.
At the institutional level, analytic procedures that predict at-risk groups or individuals commonly use student information system variables to develop their models: SAT, ACT, High School GPA, Major GPA, Institution GPA, International Baccalaureate Completion, Transfer Status, Gender, Minority Status, Dual Enrollment, and others. Strategies involve methods such as logistic regression (Pampel, 2020), predicting the binary variable success or not, neural networks (Samek et al., 2021), classification and regression trees (Ma, 2018), and other predictive techniques. Data scientists must decide about the risk from errors in identifying students who genuinely need support and would benefit from augmented instruction. This is analogous to the costs of type I and type II errors in statistics. However, this is where the analysis ends and the judgment comes into play (Cornett, 1990; Setenyi, 1995; Silver, 2012). Institutions must assume responsibility for their decisions—data never make them.
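
To make that workflow concrete, here is a minimal sketch of an at-risk classifier of the kind described, written with scikit-learn; the file name, column names, and risk threshold are illustrative assumptions for the example, not any particular institution's model.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical student information system extract; categorical fields assumed encoded as 0/1.
df = pd.read_csv("sis_extract.csv")
features = ["hs_gpa", "sat_total", "transfer_status", "dual_enrollment"]
X, y = df[features], df["completed_first_year"]  # binary outcome: success or not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predicted probability of success; low values flag potentially at-risk students.
p_success = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, p_success))

# The flagging threshold is an institutional judgment call, not a statistical output:
# lowering it catches more students who genuinely need support (fewer misses)
# at the cost of more false alarms -- the type I / type II trade-off noted above.
at_risk = p_success < 0.5
print("Flagged as at-risk:", int(at_risk.sum()))

The threshold line is where the analysis ends and judgment begins, as the paragraph above emphasizes.
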
Invariably their predictive models are good; however, there is a caveat.
Often the best predictors such as institutional GPA are nontractable. They
classify very well but there is little to compensate for academic history and
a small chance that such variables will be responsive to instruction. This
is the analytics conundrum. To counteract poor academic performance,
analysts must identify surrogate variables that respond to instruction
and increase the odds of student success. This is a fundamental challenge
faced by all analytics procedures. The model that works perfectly well in
one context fails in another. The question becomes not which model but
which model for which context.
Another approach to the analytics problem has been to place control
of the process in the hands of learners through student-facing dashboards
where they can monitor their progress on a continuous basis (Williamson
& Kizilcec, 2022). They can check against a number of course markers
comparing their status either to others in the class or against predetermined
criterion values. The assumption is that when students monitor themselves, empowerment cedes to them and encourages engagement. The alternative
is to provide the dashboard metrics to instructors and advisors so they
can monitor student progress on a continual basis, identifying those that
might be at risk in certain aspects of the learning trajectory. However,
analytics in its present state is a work in progress and best characterized
by the concept of emergence in complex systems (Mitchell, 2011; Page,
2009) rather than strategically planned and designed.

Adaptive learning: let’s make it up as we go


Adaptive learning, the second area addressed in this book, increases the
odds of student success just as data analytics does (Levy, 2013; Peng &
Fu, 2022). However, it approaches the risk problem in an alternative way.
Whereas analytics identifies potentially at-risk students, adaptive learning
assesses where students are in the learning cycle and enables their achieve-
ment at the most effective pace. Carroll (1963) outlined the tenets of adaptive learning when he posited that if learning time is constant, knowledge acquisition will be variable. For instance, if all students spend one
semester in college algebra their learning outcomes will vary. However, if
the desired outcome is to have students at equivalent achievement status
at the conclusion of their studies, how long they will have to accomplish
that will differ for individuals or equivalent ability level groups. Carroll (1963) partitioned the numerator of his time-allowed-divided-by-time-needed formulation into opportunity, perseverance, and aptitude. The intersec-
tion of those elements in the numerator creates second meta-level learning
components: mediated expectations, potential progress, and likelihood of
success. All three hinge on learning potential through self-expectations,
growth, and prior odds of success. Although the notion of adaptive learn-
ing is simple, its execution is not, because the interactions among the ele-
ments are more important than the elements themselves. This is what gives
adaptive learning its emergent property found in complex systems. Essa
and Mojarad (2020) did an extensive numerical analysis of the Carroll
model validating that time was a key element in the learning process and
that perseverance increases knowledge gains. However, for that conclu-
sion to hold, it must function in a learning environment that reflects an
instructional quality threshold. As usual, there is no substitute for excel-
lent teaching.
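
A compact way to see the formulation referenced above is Carroll's ratio; this is a sketch in symbols, with the partition of the numerator following the chapter's description (the functional forms f and g are left unspecified here):

\[
\text{degree of learning} \;=\; f\!\left(\frac{\text{time actually spent}}{\text{time needed}}\right),
\qquad
\text{time spent} \;\approx\; g(\text{opportunity},\ \text{perseverance},\ \text{aptitude}).
\]

Holding the desired degree of learning constant and solving for time makes the adaptive argument explicit: learners differ mainly in how much time they need, not necessarily in what they can ultimately achieve.
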
The adaptive principle has been a foundation of effective teaching and
learning for decades, but without effective technology platforms to sup-
port the autocatalytic nature of the process, implementation was not possible. The adaptive workload for instructors, departments, or colleges simply prevented the process from happening. However, in recent years that has changed and there are platforms available for supporting adap-
tive learning. Their functioning has not been without glitches and prob-
lems but working with universities and corporate partners the systems
and their underlying algorithms have improved. One of the advantages of
contemporary adaptive technology, unlike analytic approaches, is its abil-
ity to provide instructors, advisors, instructional designers, and coaches
with real-time and continuously updated student progress (Dziuban et al.,
2018), such as:

1. baseline status
2. scores on individual learning modules
3. course activity completion rate
4. average scores across modules
5. course engagement time
6. interaction with course material
7. interaction with other students
8. revision activities
9. practice engagement
10. growth from the baseline

These markers accumulate for individual students, groups, or entire classes, creating the possibility of continuous feedback in the learning process
(Dziuban et al., 2020). Of course, depending on the platform, the metrics
for the adaptive elements will take on different formats and their avail-
ability may vary across the providers, but they create an interactive learn-
ing environment. This resonates with contemporary students who view
an effective learning experience as one that is responsive, nonambiguous,
mutually respectful, and highly interactive, and that maximizes learning latitude—all
curated by an instructor dedicated to teaching (Dziuban et al., 2007).
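As a purely illustrative sketch of how such markers might be represented in code, the record below is hypothetical and not tied to any particular platform's data model:

```python
# Hypothetical record for the real-time progress markers listed above.
# Field names are illustrative; actual adaptive platforms expose different schemas.
from dataclasses import dataclass, field


@dataclass
class LearnerProgress:
    baseline_score: float                                            # 1. baseline status
    module_scores: dict[str, float] = field(default_factory=dict)    # 2. individual learning modules
    completion_rate: float = 0.0                                     # 3. course activity completion rate
    engagement_minutes: float = 0.0                                  # 5. course engagement time
    material_interactions: int = 0                                   # 6. interaction with course material
    peer_interactions: int = 0                                       # 7. interaction with other students
    revision_activities: int = 0                                     # 8. revision activities
    practice_sessions: int = 0                                       # 9. practice engagement

    @property
    def average_module_score(self) -> float:                         # 4. average scores across modules
        if not self.module_scores:
            return 0.0
        return sum(self.module_scores.values()) / len(self.module_scores)

    @property
    def growth_from_baseline(self) -> float:                         # 10. growth from the baseline
        return self.average_module_score - self.baseline_score
```

A platform could update such a record after every learner interaction and surface it to instructors, advisors, instructional designers, and coaches.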
Metaphorically, Pugliese (2016) described adaptive learning as a
gathering storm that requires meaningful conversation among institu-
tions, vendors, and instructional stakeholders to accomplish the benefits
of personalized learning. He asserted that progress would accrue from
student-centric instructional design. At the time he framed adaptive
algorithms as machine learning, advanced algorithms, rule-based systems, and
decision trees. However, the storm he predicted, like so many innova-
tions in education, never really developed because the customary initial
enthusiasm moderated into a more considered process. The storm passed
quickly.
However, three organizations, The University of Central Florida,
Colorado Technical University, and Realizeit (adaptive learning platform
provider) followed his advice about a broader conversation and formed
a multiyear cooperative research partnership (Dziuban et al., 2017).
Through their work, they were able to show that the adaptive environ-
ment impacts students’ engagement differently and is roughly analogous
to learning styles. They demonstrated that the real-time learning variables
provided by the platform predicted student risk early in their courses.
Further, they demonstrated that the predictive lift of the student mark-
ers changed over the trajectory of a course and found critical points in a
course past which intervention is no longer effective.
Since Pugliese’s (2016) article, considerable understanding of adaptive
learning has emerged in higher education and the corporate sectors. Both
cultures have come to realize the potential of the adaptive method for
learning and training. However, like analytics, adaptive learning is a work
in progress because as Forrester (1991) cautions one can never accurately
predict how an intervention will ripple through a complex system.

“There is another”: Yoda, the three-body problem, and connections


This section begins with a consideration of a problem in physics that leads
to chaos—the three-body problem (Wood, 2021). The difficulty arises
when the gravitational forces of three bodies interact—for instance, the
earth, moon, and sun. Interestingly, Newtonian physics can easily identify
the relational pattern between two bodies such as the earth and moon.
However, introduce a third (as done in this section) and the relationship
can be more difficult, if not impossible, to identify. The system becomes
chaotic (at least in physics) with an ever-changing unpredictable cycle of
relationships. Whimsically, reports suggest that Newton said to Halley,
“You give me a headache.” The concept was the basis of an award-win-
ning science fiction trilogy by Cixin Liu (The Three-Body Problem, 2014;
The Dark Forest, 2015; Death’s End, 2016). Aliens, the Trisolarans, lived
on a planet with three suns that exerted chaotic influences on them caus-
ing complete unpredictability for such things as night and day, the sea-
sons, and weather catastrophes. One might wonder why we introduced
this concept. For one, education in the digital age is just as complicated
as problems in physics, especially when we think about our own three-body
problem—the influence that teacher, student, and content exert on each
other. The Trisolarans have nothing on educators in the twenty-first century.
Yoda’s admonition reminds us that although we address the connection
between data analytics and adaptive learning, we would be remiss in ignor-
ing the big data elephant in education. Big data comes closer to Pugliese’s
storm than either of the other two and has revolutionized our thinking
about sampling, estimation, hypothesis testing, measurement error, and
decision-making. Industry has capitalized on big data to make analytics
that result in astonishingly precise predictions about customer behavior,
hiring practices, and strategic planning (Aljohani et al., 2022; Sheehy,
2020). Floridi (2016) argues that big data analysis can identify small but
important patterns that would remain obscured with any other method. Big data has had
a similar impact on education (Dziuban et al., 2016). By examining insti-
tutional acceptance and success, registration patterns, diversity and inclu-
sion, dropout prevention, transfer status, change in major, effective major
selection and other characteristics of student life, universities have used
big data to improve marketing and enrollment projection (Social Assurity,
2020). In recent years, big data has played a critical role in understanding
and predicting trends in the COVID-19 pandemic. Without the capability
to process and simplify the data daily, the pandemic would have taken a
much greater toll on human life. There is a strong connection among big
data, data analytics, and adaptive learning (Figure 1.1).
FIGURE 1.1 The Three-Body Education Data Problem

The intersection of big data and analytics creates a new science of small
pattern prediction where traditionally those relationships were unobserv-
able. Data analytics and adaptive learning have reinforced each other
making adaptive analytics a viable alternative to more traditional meth-
ods. Combining big data with adaptive learning has enabled the identifi-
cation of learning patterns that would be unstable in most instructional
situations. Apparently, the learning trifecta creates a three-body educa-
tion data solution rather than a chaotic problem. The connections are real,
strong, meaningful, and informative. There are additional connections;
for instance, there is an almost exact correspondence among the elements
of adaptive learning and the principles of supply chain management:
supplier-instructional designer, manufacturer-instructional designer/
instructor, distributor/wholesaler-adaptive platform, retailer-instructor,
customer-student (Johnson et al., 2020). All three of our bodies, data
analytics, adaptive learning, and big data, produce results that corre-
spond closely with the epidemiological paradigm: clinical-learning issues,
patient-student, global-institutional (Tufte, 2019). A systematic mapping
of the literature shows how artificial intelligence and big data analytics
combine with adaptive systems (Kabudi et al., 2021). Adaptive learning creates the
foundation for a flipped classroom (Clark et al., 2021). Big data analytics
can identify the components of a class that will lead to a high degree of
student satisfaction (Wang et al., 2009). These are examples of the con-
nections that currently exist. More are emerging daily.

Fair and balanced


This chapter emphasizes the potential of data analytics, adaptive learning,
and the third body, big data. However, each of the processes has weath-
ered critiques and criticisms that are worthy of note.

1. Adaptive learning can isolate students, reducing instructors to the
role of facilitation, thus stripping their teaching leadership role
(Kok, 2020). Further, there have been technological difficulties
with the platforms that are at odds with instructional objectives
(Sharma, 2019). Adaptive learning requires large investments of
time and resources especially for content agnostic platforms (Kok,
2020).
2. Data analytics can overlook important variables and inappropri-
ately substitute available data for metrics of real interest—the data
bait and switch (Kahneman, 2011). Analytics can fall prey to noise
that obscures important findings (Silver, 2012). Analytic models
are simplifications of the world and can lead to misleading portray-
als (Silver, 2012). Unfortunately, most analytics algorithms are opaque
and hidden, and therefore immune to feedback (Benjamin,
2019). Data security is a perennial concern (Hartford, n.d.).
3. Big data can amount to cherry-picking at an industrial level, collecting so
many variables that spurious relationships overwhelm mean-
ingful information (Taleb, 2012). Problems and misinformation in
interpreting correlations in big data arise from collinearity (Marcus
& Davis, 2014). There is always the danger of inherent bias with no
transparency and little hope of correction (Almond-Dannenbring
et al., 2022).

All data analytics professionals, adaptive learning specialists, and big data
scientists should pay attention to and respect these criticisms and cau-
tions. The responsibility is considerable and best summarized by Tufte
(2001) when speaking to visual presentations. Paraphrased, his principles become:

1. show the data
2. ensure the reader responds to the substance of your findings, not your
methods
3. do not distort with presentation
4. use presentation space wisely
5. make large data sets understandable
6. encourage the reader to make meaningful comparisons
7. show results at multiple levels
8. make sure you have a purpose
9. integrate what you say with what the data say

Behind the curtain


Complicating the effective convergence of big data, advanced analytics,
and adaptive learning is the system that will be overlaid upon higher edu-
cation. While all organizations have unique histories and characteristics,
higher education may be one of the most complex. It is steeped in tradition
and built upon interconnected political, philosophical, and cultural layers
that are frequently not explicitly codified but, rather, are retained as tacit
knowledge by members of the Academy (Hill, 2022). Within this context,
successful implementations will require infrastructural transformation,
diligent oversight of processes, and complex change management.
Earlier, big data’s role in helping uncover trends during the COVID-
19 pandemic was noted. However, COVID-19 was also responsible for
driving the increased use of data in higher education. Traditional colleges
and universities are highly dependent upon ancillary cash flows, such as
housing, concessions, parking, etc. to meet the demands of their operat-
ing budgets (Deloitte United States, 2020). When the move to online was
necessitated by COVID-19, many institutions began to realize alarming
deficits, which, despite federal financial intervention, ultimately resulted
in numerous closures and increased scrutiny related to expenditures and
outcomes (Aspegren, 2021).
Labor in the clouds


Given the increased pressure campuses are now facing, the demand for
evidence-based approaches to both strategic and tactical decisions has
increased dramatically. However, a large percentage of institutions do not
have the ability to federate their data, because of siloed and often anti-
quated systems. In a 2021 analysis of data-related issues in higher educa-
tion, the Association of Public and Land-grant Universities interviewed
24 representatives from the Commission on Information, Measurement,
and Analysis and found that the issue of data silos was noted as a major
barrier to effective data analysis by all members, with only one institu-
tion stating that they had a centralized data repository (Nadasen & Alig,
2021). To understand why these silos persist it is necessary to examine the
two factors necessary to facilitate change: infrastructure and personnel.
Though cloud computing is seemingly ubiquitous in contemporary
society, colleges and universities have been laggards with respect to imple-
menting these systems, as a function of cost and timelines. The process of
moving from a traditional on-premises implementation to a cloud-based
solution can involve 12–18 months and costs that can quickly reach the
mid-seven-figure range. For many institutions, projects with these types of
costs and duration are simply not tenable, given already strained budgets,
making the prospects of developing an infrastructure capable of leverag-
ing the power of big data, advanced analytics, and predictive learning
grim at best.
In other instances, individual departments within an institution may
have already undertaken to move their operations to the cloud. However,
the question here becomes which cloud? AWS, Microsoft Azure, and
Google Cloud are not interoperable and require the use of a cross-cloud
replication service, such as Snowflake (Sharing Data Securely across
Regions and Cloud Platforms—Snowflake Documentation, n.d.), with
this type of integration having high costs and long timelines as well.
While costs and a high pain threshold for lengthy implementations are
the primary factors associated with infrastructure development, the rap-
idly increasing salaries of skilled IT and data science personnel are also
a limiting factor. While these groups have always been in high demand,
COVID-19 and the Great Resignation catalyzed a stratospheric increase
in salaries that will continue to accelerate as technology expands to meet
the needs of industry and society at large (Carnegie, 2022). Higher edu-
cation institutions will need to invest in individuals to fill these roles and
will likely find it difficult to fund compensation that is competitive with
other alternatives in the market. Developing both the workforce and the
infrastructure necessary to realize the promise of big data, analytics, and
adaptive learning, at scale, is already a daunting task,
which will become increasingly difficult as more institutions embark on
this journey.

Lava lamps and learning


In the movie The Big Short, a quote frequently attrib-
uted to Mark Twain is displayed at a point where it is becoming appar-
ent that no one really knows what is going on with Collateralized Debt
Obligations:

It ain’t what you don’t know that gets you into trouble. It’s what you
know for sure that just ain’t so.

As data scientists and the practitioners with whom they collaborate begin
utilizing increasingly sophisticated machine learning techniques, keeping
this warning in the front of one’s mind is critical.
By 2013, computational power had reached the point where deep learning,
a branch of machine learning based on artificial neural networks, could be
developed by connecting multiple layers of processing to extract increas-
ingly higher-level entities from data. In other words, it is doing what
humans hopefully do every day—learn by example. A classic example of
how this works is that if you show the deep learning algorithm enough
pictures of a dog it will be able to pick the pictures of dogs out of a col-
lection of animal pictures. Over time, deep learning’s capabilities have
increased and certain techniques have been able to move, to a degree,
from the specific to the general, using semi-supervised deep learning; giv-
ing the algorithm some examples and then letting it discover similar fea-
tures on its own (Pearson, 2016).
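To make the idea of learning from a few labeled examples concrete, here is a toy, hedged sketch using scikit-learn's self-training wrapper on a small digits dataset; it is a shallow stand-in for the deep, semi-supervised systems described above, not a description of any of them:

```python
# Toy semi-supervised sketch: label only a small fraction of the examples and
# let the algorithm pseudo-label the rest, loosely mirroring "giving the
# algorithm some examples and then letting it discover similar features."
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

digits = load_digits()
X, y = digits.data, digits.target

# Hide 90% of the labels; -1 marks an example as unlabeled.
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y)) > 0.10
y_partial = y.copy()
y_partial[unlabeled] = -1

model = SelfTrainingClassifier(SVC(probability=True, gamma=0.001))
model.fit(X, y_partial)

# Check how well labels propagated to the examples the model never saw labels for.
accuracy = (model.predict(X[unlabeled]) == y[unlabeled]).mean()
print(f"accuracy on initially unlabeled digits: {accuracy:.2f}")
```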
While this technology is extremely impressive and is used by virtually
every tech and marketing company, it is also problematic in that we do
not fully understand how it works. As it is modeled after the way that
biological neural networks operate, it is informative to consider how a
person thinks. They may see an apple, classify it as such based on prior
knowledge, apply both subjective and objective analysis to it, consult with
their internal systems to see if they are hungry, then make the decision of
whether to eat the apple. This scenario may take only a few seconds, but
there are billions of neurons involved in the process and even if we were
able to scan the person’s brain, while those interactions were occurring,
we could not make sense out of the data. Nor in many cases would the
person be able to tell you what factors they considered, in what order,
to arrive at the decision; mirroring the situation that data scientists find
themselves in with deep learning.
Compounding the problem, researchers at Columbia University recently
created a semi-supervised deep learning experiment in which the algorithm
detected variables that it could not describe to the researchers. To determine
if complex motion could be analyzed by artificial intelligence to surface new
variables, the researchers showed the algorithm examples of well-under-
stood phenomena, such as double pendulums, from which additional vari-
ables, not considered by humans, were detected. Then the team asked it to
examine fire, lava lamps, and air dancers. In the case of flame patterns, 25
variables were returned. However, when queried about their nature, the
program was incapable of describing them, even though the predictions of
flame action and patterns were largely accurate (Chen et al., 2022).
Even now, some of the companies engaged in machine learning applica-
tions for higher education have publicly stated that they do not fully under-
stand the models that are created (Ice & Layne, 2020). While disconcerting,
at least the variables that are being fed to machine learning algorithms can
be understood and analyzed using more conventional statistical techniques
to provide some degree of interpretability. However, imagine a scenario in
which a deep learning model was trained on data that are traditionally used
to predict phenomena such as retention, then was allowed to ingest all the
data that exist on campus, and then went on to produce highly accurate
predictions related to which students were at risk, but was unable to pro-
vide researchers with an explanation of what it was measuring. While this
scenario may, at first, appear theoretical, the speed with which deep learn-
ing is beginning to understand complex systems is quite stunning.
What if the deep learning model above was trained on data that had
inherent, though unintentional, biases? The algorithm does not have a
sense of bias or an understanding of ethics, it simply applies what it has
learned to new data, even if that means transference of bias. While deep
learning is certainly the current darling of the analytics world, the collec-
tion, analysis, and reporting processes historically have relied on models,
categorization, and labeling based on human input. While humans still
have some touchpoints with the model on which deep learning is trained,
it is important to remember that the complete history of human research
is distilled and embedded within those examples. This process
has the potential to result in oversimplified models and in incomplete, or
even harmful, practices that perpetuate culturally repressive stereotypes.
As such, it is imperative that practition-
ers are thoughtful about the processes and tools they are creating and
that institutions exercise appropriate oversight to ensure equity (2022
EDUCAUSE Horizon Report: Data and Analytics Edition, 2022).
Despite the potential pitfalls, it is important to note that machine
learning also has the potential to identify historically ingrained biases,
using techniques such as gradient boosting and random forest analysis.
Traditionally, linear, rule-based approaches have been good at identifying
a few highly predictive variables, when looking at issues such as retention,
but are found lacking when attempting to detect the interaction of numer-
ous, small variables that only have significant predictive power when con-
sidered holistically (Using Machine Learning to Improve Student Success
in Higher Education, 2022).
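A brief, synthetic illustration of that point (ours, not a reproduction of the cited analysis): when an outcome depends only on interactions among individually weak variables, a linear model finds little signal while a random forest does considerably better.

```python
# Synthetic sketch: the outcome depends only on pairwise interactions, so no
# single column is predictive on its own. Compare a linear model with a forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
signal = X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3] + X[:, 4] * X[:, 5]
y = (signal + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)

linear = LogisticRegression(max_iter=1000)
forest = RandomForestClassifier(n_estimators=300, random_state=0)

print("logistic regression:", round(cross_val_score(linear, X, y, cv=5).mean(), 2))
print("random forest:      ", round(cross_val_score(forest, X, y, cv=5).mean(), 2))
```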

If you build it, they still might not come


Higher education has a long history of resisting change. The tripartite
requirements of research, teaching, and service are demanding for tenure
track faculty, while those in clinical positions and community colleges
typically have an often-onerous class load. These pressures cause faculty
to resist change, while their nonacademic counterparts within the uni-
versity are similarly engaged in multitasking. Among the areas with the
greatest resistance to adoption is technology integration. Aside from tak-
ing time away from other roles, this is an area in which faculty, admin-
istrators, and staff often find themselves in novice status, thus creating
significant discomfort (Conklin, 2020).
With respect to big data, analytics, and their ability to drive adaptive
learning, it is essential that users be able to derive insights related to the
problems they are addressing. However, before those insights can be gen-
erated, it is necessary for users to fully understand what data elements or
entities represent, how to interpret findings, and what constitutes ethical
use of data. For these reasons, it is imperative for institutions to under-
stand that the transformation to a data-driven culture be prefaced by data
literacy initiatives, with multiple levels, focusing on not only those who will
generate the data, but also those that will consume it or be impacted by the
decisions of those that do (2022 EDUCAUSE Horizon Report: Data and
Analytics Edition, 2022). While ultimately, the creation of a data-driven
culture will result in increased efficiencies across all facets of the univer-
sity, such initiatives are costly, with timelines that can easily exceed those
related to infrastructure development. Here, maintaining commitment and
focusing on positive outcomes are key to long-term success.

Three times two


To understand the immensity of successfully transforming higher educa-
tion, it is necessary to step back and conceptualize the three-body data
problem within the context of the three institutional forces that are
responsible for creating, maintaining, and governing the integration of
big data, analytics, and adaptive learning. However, as societal demands
on the Academy continue to increase, it is imperative that these forces
be reconciled. It is our hope that this book is a step in that direction.
That said, perhaps this preface is best concluded with another quote from
Master Yoda.

Do. Or do not. There is no try.

References
2022 EDUCAUSE horizon report: Data and analytics edition. (2022, July 18).
EDUCAUSE. https://library​.educause​.edu ​/resources​/2022​/ 7​/2022​- educause​
-horizon​-report​-data​-and​-analytics​- edition
Aljohani, N. R., Aslam, M. A., Khadidos, A. O., & Hassan, S.-U. (2022). A
methodological framework to predict future market needs for sustainable
skills management using AI and big data technologies. Applied Sciences,
12(14), 6898. https://doi​.org​/10​. 3390​/app12146898
Almond-Dannenbring, T., Easter, M., Feng, L., Guarcello, M., Ham, M.,
Machajewski, S., Maness, H., Miller, A., Mooney, S., Moore, A., & Kendall,
E. (2022, May 25). A framework for student success analytics. EDUCAUSE.
https://library​ . educause​ . edu​ / resources​ / 2022​ /5​ /a​ - framework​ - for​ - student​
-success​-analytics
Aspegren, E. U. T. (2021, March 29). These colleges survived World Wars, the
Spanish flu and more: They couldn’t withstand COVID-19 pandemic. USA
TODAY. https://eu​.usatoday​.com​/story​/news​/education​/2021​/01​/28​/covid​-19​
-colleges​- concordia​-new​-york​- education​/4302980001/
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim
Code. Polity.
Boghosian, B. M. (2019). The inescapable casino. Scientific American, 321(5),
70–77.
Carnegie, M. (2022, February 11). After the great resignation, tech firms are
getting desperate. WIRED. https://www​.wired​.com ​/story​/great​-resignation​
-perks​-tech/
Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64(8),
723–733.
Chai, W., Ehrens, T., & Kiwak, K. (2020, September 25). What is CRM
(customer relationship management)? SearchCustomerExperience. https://
www​.techtarget​.com ​/sea​rchc​u sto​mere​x perience ​/definition ​/CRM​- customer​
-relationship​-management
Chen, B., Huang, K., Raghupathi, S., Chandratreva, I., Du, Q., & Lipson, H.
(2022, July 25). Automated discovery of fundamental variables hidden in
experimental data. Nature Computational Science, 2(7), 433–442. https://doi​
.org ​/10​.1038​/s43588​- 022​- 00281-6
Clark, R. M., Kaw, A. K., & Braga Gomes, R. (2021). Adaptive learning: Helpful
to the flipped classroom in the online environment of COVID? Computer
Applications in Engineering Education, 30(2), 517–531. https://doi​.org​/10​
.1002​/cae​. 22470
Conklin, S. (2020, November 30). ERIC - EJ1311024 - Using change management
as an innovative approach to learning management system, online journal of
distance learning administration, 2021. https://eric​.ed​.gov/​?id​=EJ1311024
Cornett, J. W. (1990). Teacher thinking about curriculum and instruction: A case
study of a secondary social studies teacher. Theory and Research in Social
Education, 28(3), 248–273.
Corritore, M., Goldberg, A., & Srivastava, S. B. (2020). The new analytics of
culture. Harvard Business Review. https://hbr​.org ​/2020​/01​/the​-new​-analytics​
-of​- culture
Deloitte United States. (2020, May 29). COVID-19 impact on higher education.
https://www2 ​.deloitte​.com ​/us ​/en ​/pages​/public​-sector​/articles ​/covid​-19​-impact​
-on​-higher​- education​.html
Digital Analytics Association. (n.d.). https://www​.dig​ital​anal​y tic​sass​ociation​
.org/
Dziuban, C., Howlin, C., Johnson, C., & Moskal, P. (2017). An adaptive learning
partnership. Educause Review. https://er​.educause​.edu ​/articles​/2017​/12​/an​
-adaptive​-learning​-partnership
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Griffin, R., & Hamilton,
C. (2020). Adaptive analytics: It’s about time. Current Issues in Emerging
Elearning, 7(1), Article 4.
Dziuban, C., Moskal, P., Parker, L., Campbell, M., Howlin, C., & Johnson,
C. (2018). Adaptive learning: A stabilizing influence across disciplines
and universities. Online Learning, 22(3), 7–39. https://olj​.onl​inel​earn​ingc​
onsortium​.org ​/index​.php​/olj​/article ​/view​/1465
Dziuban, C. D., Hartman, J. L., Moskal, P. D., Brophy-Ellison, J., & Shea, P.
(2007). Student involvement in online learning. Submitted to the Alfred P.
Sloan Foundation.
Dziuban, C. D., Picciano, A. G., Graham, C. R., & Moskal, P. D. (2016).
Conducting research in online and blended learning environments: New
pedagogical frontiers. Routledge.
Essa, A., & Mojarad, S. (2020). Does time matter in learning? A computer
simulation of Carroll’s model of learning. In Adaptive instructional systems
(pp. 458–474). https://doi​.org​/10​.1007​/978​-3​- 030​-50788​- 6​_ 34
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police,
and punish the poor. St. Martin’s Press.
Floridi, L. (2016). The 4th revolution: How the infosphere is reshaping human
reality. Oxford University Press.
Forrester, J. W. (1991). System dynamics and the lessons of 35 years. In K. B. D.
Greene (Ed.), Systems-based approach to policymaking. Kluwer Academic.
http://static​. clexchange​.org ​/ftp ​/documents ​/system​- dynamics ​/ SD1991​- 04S​
Dand​L ess​onso​f35Years​.pdf
Friedman, S., & Laurison, D. (2019). The class ceiling: Why it pays to be
privileged. Policy Press.
Hartford, L. (n.d.). Predictive analytics in higher education. CIO Review.
https://education​.cioreview​.com​/cioviewpoint​/predictive​-analytics​-in​-higher​
-education​-nid​-23571​- cid​-27​.html
Hess, A. J. (2020, June 12). How student debt became a $1.6 trillion crisis.
CNBC. https://www​.cnbc​.com​/2020​/06​/12​/ how​-student​-debt​-became​-a​
-1point6​-trillion​- crisis​.html
Hill, J. (2022, June 24). The importance of knowledge management in higher
education. Bloomfire. https://bloomfire​.com​/ blog​/ knowledge​-management​-in​
-higher​- education/
Ice, P., & Layne, M. (2020). Into the breach: The emerging landscape in higher
education, students. In M. Cleveland-Innes & R. Garrison (Eds.), An
introduction to distance education: Understanding teaching and learning in a
New Era. Harvard University Press. https://www​.taylorfrancis​.com ​/chapters​/
edit ​/10​.4324​/9781315166896 ​-8​/ breach​-ice​-layne
Jack, A. A. (2019). The privileged poor: How elite colleges are failing disadvantaged
students. Harvard University Press.
Johnson, A., Dziuban, C., Eid, M., & Howlin, C. (2020, January 26). Supply
chain management and adaptive learning. Realizeit Labs. https://lab​
.realizeitlearning​.com ​/research ​/2020​/01​/26​/Supply​- Chain​-Management/
Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning
systems: A systematic mapping of the literature. Computers and Education:
Artificial Intelligence, 2. https://doi​.org ​/10​.1016​/j​.caeai​. 2021​.100017
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus, and Giroux.
Kok, M.-L. (2020, February 14). Strengths and weaknesses of adaptive learning:
A case study. ELearning Industry. https://elearningindustry​ .com​
/strenghts​
-weaknesses​-adaptive​-learning​-paths​- case​-study
Levy, J. C. (2013). Adaptive learning and the human condition. Pearson.
Liu, C. (2014). The three-body problem. Head of Zeus.
Liu, C. (2015). The dark forest. Head of Zeus.
Liu, C. (2016). Death’s end. Head of Zeus.
Ma, X. (2018). Using classification and regression trees: A practical primer.
IAP.
Marcus, G., & Davis, E. (2014, April 6). Eight (no, nine!) problems with big data.
The New York Times. https://www​.nytimes​.com​/2014​/04​/07​/opinion​/eight​
-no​-nine​-problems​-with​-big​-data​.html
McGhee, H. (2021). The sum of US: What racism costs everyone and how we can
prosper together. One World.
Mitchell, J. (2020, December 7). On student debt, Biden must decide whose loans
to cancel. Wall Street Journal. https://www​.wsj​.com​/articles​/on​-student​-debt​
-biden​-must​-decide​-who​-would​-benefit​-11607356246
Mitchell, M. (2011). Complexity: A guided tour. Oxford University Press.
Mullainathan, S., & Shafir, E. (2013). Scarcity: Why having too little means so
much. Times Books.
Nadasen, D., & Alig, J. (2021, May). Data analytics: Uses, challenges, and best
practices at public research universities. National Association of Public Land-
grant Universities.
O’Neil, C. (2016). Weapons of math destruction: How big data increases
inequality and threatens democracy. Crown.
Page, S. E. (2009). Understanding complexity. The Great Courses.
Pampel, F. C. (2020). Logistic regression: A primer, 132. Sage Publications.
Pearson, J. (2016, July 6). When AI goes wrong, we won’t be able to ask it why.
https://www​.vice​ . com ​ /en ​ /article ​ / vv7yd4​ /ai​ - deep ​ - learning​ - ethics​ - right​ - to​
-explanation
Peng, P., & Fu, W. (2022). A pattern recognition method of personalized adaptive
learning in online education. Mobile Networks and Applications, 27(3),
1186–1198. https://doi​.org ​/10​.1007​/s11036 ​- 022​- 01942-6
Pugliese, L. (2016, October 17). Adaptive learning systems: Surviving the storm.
Educause Review. https://er​.educause​.edu​/articles​/2016​/10​/adaptive​-learning​
-systems​-surviving​-the​-storm
Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021).
Explaining deep neural networks and beyond: A review of methods and
applications. Proceedings of the IEEE, 109(3), 247–278. https://doi​.org​/10​
.1109​/ JPROC ​. 2021​. 3060483
Setenyi, J. (1995). Teaching democracy in an unpopular democracy. Paper
presented at the what to teach about Hungarian democracy conference,
Kossuth Klub, Hungary.
Sharing data securely across regions and cloud platforms—Snowflake
documentation. (n.d.). https://docs​.snowflake​.com​/en​/user​-guide​/secure​-data​
-sharing​-across​-regions​-plaforms​.html
Sharma, G. (2019, January 19). Discussing the benefits and challenges of adaptive
learning and education apps. ELearning Industry. https://elearningindustry​.com​/
benefits​-and​-challenges​-of​-adaptive​-learning​-education​-apps​-discussing
Sheehy, M. D. (2020, February 4). Using big data to predict consumer behavior
and improve ROI. Brandpoint. https://www​.brandpoint​.com​/ blog​/using​-big​
-data​-to​-predict​- consumer​-behavior​-and​-improve​-roi/
Silver, N. (2012). The signal and the noise: Why so many predictions fail–But
some don’t. Penguin Books.
Social Assurity. (2020, February 2). The emergence of big data and predictive
analytics in college admissions decisions. Social Assurity. https://socialassurity.
university/blog/big-data-and-predictive-analytics-college-admissions
Society for Learning Analytics Research (SoLAR). (2021, March 24). https://
www​.solaresearch​.org/
Taleb, N. N. (2012). Antifragile: Things that gain from disorder. Random House.
Taleb, N. N. (2020). Skin in the game: Hidden asymmetries in daily life. Random
House Trade Paperbacks.
The Pell Institute. (2021). Indicators of higher education equity in the United
States. Penn AHEAD.
Tufte, E. R. (2001). The visual display of quantitative information. Graphics
Press.
Tufte, E. R. (2019). Beautiful evidence. Graphics Press LLC.
Using machine learning to improve student success in higher education. (2022,
August 1). McKinsey & Company. https://www​ .mckinsey​ .com​ /industries​ /
education ​/our​-insights​/using​-machine​-learning​-to​-improve​-student​-success​-in​
-higher​- education
Wang, M. C., Dziuban, C. D., Cook, I. J., & Moskal, P. D. (2009). Dr. Fox rocks:
Using data-mining techniques to examine student ratings of instruction. In M.
C. Shelley II, L. D.Yore, & B. Hand (Eds.), Quality research in literacy and
science education: International perspectives and gold standards. Dordrecht,
the Netherlands: Springer.
Wilkerson, I. (2011). The warmth of other suns: The epic story of America’s
Great Migration. Vintage.
Williamson, K., & Kizilcec, R. (2022). A review of learning analytics dashboard
research in higher education: Implications for justice, equity, diversity, and
inclusion. LAK22: 12th International Learning Analytics and Knowledge
Conference. https://doi​.org ​/10​.1145​/3506860​. 3506900
Wood, C. (2021, May 5). Physicists get close to taming the chaos of the ‘three-
body problem’. LiveScience. https://www​.livescience​.com ​/three​-body​-problem​
-statistical​-solution​.html
SECTION II

Analytics



2
WHAT WE WANT VERSUS
WHAT WE HAVE
Transforming teacher performance analytics
to personalize professional development

Rhonda Bondie and Chris Dede

We want educators to be self-directed, critically reflective, lifelong learn-
ers. However, in the current teacher education system, educators are
rarely asked to analyze their own teaching in order to direct their own
professional learning. Starting with teacher preparation and continuing
through ongoing professional development (PD) throughout a teaching
career, teachers have little control over the required topics of PD and typi-
cally engage in large group “one-size-fits-some” experiences. These indus-
trial PD experiences may not seem relevant to the school context, student
learning, or individual teacher needs. Our current career-long teacher
education exemplifies what Argyris and Schön (1992/1974) refer to as
beliefs that do not match the theory in use. In short, the methods used for
teacher learning are antithetical to the teaching practices we desire P–12
students to experience.
What we need instead is to create burden-free, interactive, and collabo-
rative learning environments; engage teachers in recognizing how their
prior experiences may shape perceptions and actions; adjust learning to
meet the individual strengths and needs of teachers; provide teachers with
immediate and useful feedback; and develop agile teacher thinking—the
ability to think critically to address challenges as student learning unfolds.
PD must model and reinforce how teachers should use teaching practices
as tools to guide their interactions with students. In addition to learning
teaching practices, PD must examine the potential impact of these strate-
gies on students as human beings, adapt to the context where teaching
practices are implemented, and explore how students’ interactions with
the teacher and peers will in turn impact their learning, motivation, and
identity formation. However, differentiated, personalized, and adaptive
learning are infrequently used or researched in teacher PD. We know very
little about how to get what we want.
To address this shortfall, the authors have conducted novel studies
through immersive technologies that use analytics showing both the prom-
ise of achieving what we want in teacher education and the challenges that
must be overcome. This chapter explores moving from what we have to
what we want through the purpose, perils, and promise of three analytic
approaches to examining teaching expertise using simulated classroom
interactions delivered via immersive technologies that generate rich behav-
ioral data streams.

Mixed reality simulations to evoke data-rich teacher performances
Mixed reality simulations (MRS) offer opportunities to practice teach-
ing through interactions with avatar students in a virtual classroom
(Bondie & Dede, 2021). The avatars, controlled by a human simulation
specialist, respond to teaching practices and may also initiate challenges,
feedback, and coaching. Teaching practices learned through experiences
in the virtual classroom may build confidence and skills that transfer to
interactions with real students. For PD providers, MRS provide benefits,
such as a standardized experience for assessing growth in skills (Bondie
et al., 2019). By leveraging the technology’s affordances (e.g., online
access, immersive learning, standard challenges, and pausing or restart-
ing), MRS can redefine and transform field experiences by increasing
opportunities for differentiated instruction, personalization, and forma-
tive assessments in ways that are not possible through in-person field
experiences (Bondie et al., 2021).

Data analyses we have: our experimental study


Our experimental study used block randomization to measure the effect
of alternative interventions (differentiated coaching versus written self-
reflection prompts) to increase the use of student-centered high-informa-
tion feedback for teachers at different points in their career trajectory.
Participants comprised 203 teachers (76% female) from 3 different types
of teacher PD programs (suburban preservice university preparation (n =
68), rural school district early career induction (n = 66), and urban expe-
rienced teacher PD through a not-for-profit education organization (n =
68)). The programs engaged teachers at different points along a contin-
uum of teaching experience while also representing different geographic
areas. Our PD was embedded into existing courses held by each program
during the 2020–2021 school year.
Like many teacher educators (e.g., Dieker et al., 2017; Cohen et al.,
2020), we sought to test the potential of PD through MRS. However, our
goals were to use the technology affordances of MRS—such as the capac-
ity to erase or speed up time and instantly change locations—to imple-
ment differentiated and personalized professional learning. Following the
simulations, we explored how data analytics can be used to answer what
worked for whom under what conditions and what teachers should do
next in their professional learning.

Results: what we have—behavior without context


A common method for analyzing teacher performance in MRS is for
researchers to count the frequency of desired teacher behaviors and cre-
ate a score for each teacher (Dieker et al., 2017; Mikeska et al., 2019;
Cohen et al., 2020). For example, Dieker and colleagues (2017) used the
Danielson framework and the Teacher Practice Observation Tool (TPOT)
(Hayes et al., 2013) to score teacher performance, both in real classrooms
and the virtual simulator, using a scale of “level 1 being limited observa-
tion of the skill to level 4 being mastery of the skill” (p. 68). Cohen et
al. (2020) created an “overall quality score (ranging from 1 to 10) that
reflected the extent to which teacher candidates supported student avatars
in creating classroom norms … and redirecting off-task behaviors” (p.
16). Mikeska et al. (2019) used a rubric to measure preservice teacher
skill in facilitating argument-focused discussions considering five dimen-
sions of the teacher practice (e.g., attending to student ideas responsively
and equitably, and facilitating a coherent and connected discussion) and
components of those dimensions (e.g., incorporates key ideas represented
in students’ prework, elicits substantive contributions from all students,
makes use of student thinking and overall coherence of the discussion,
making the content storyline explicit to students, connecting/linking ideas
in substantive ways) (pp. 132–133).
These rubrics break down a complex teaching practice into observable
component parts. All components are assumed to have equal value for all
avatar students and are expected to be used in teaching situations across
contexts. Rather than tallying the frequency of specific teaching behaviors
in relation to student needs, a wholistic score is given for an overall per-
formance. Wholistic rubrics are commonly used in teacher education to
gauge overall changes in the use of a teaching practice (Danielson, 2013).
However, our current approach to measuring teacher performance lacks
an account of the types of actions within the teaching practice that changed
or a recognition that component parts may be more challenging for a teacher
to learn or more beneficial for students to receive. These overall scores are
useful measures for researchers and policymakers, but do not provide enough
specificity to differentiate future PD. Further, there is no recognition of the
responsiveness of the teaching practice to individual student needs; only a
count of teacher actions is provided, rather than the series of interactions.
Given the wholistic data analytics many studies are using, it is difficult for
teacher educators to provide personally focused PD that is needed to improve
teaching practices and responsiveness to student learning.
In this chapter, we demonstrate how using analytics to evaluate a series
of teacher–student interactions provides the ability to examine teacher
growth in offering specific types of feedback and responsiveness to indi-
vidual student avatars. For our study (Bondie et al., 2022), we coded each
teacher feedback utterance with a value including, low information (100),
clarifying student response (200), valuing student response (300), correct-
ing (400), and finally, pushing student thinking beyond the task (500). Our
dependent variable was a weighted mean of the different types of feedback
teachers offered during the 7-minute (500 second) simulations. To create the
weighted mean, we multiplied the frequency of each type of feedback times
the feedback type value (100 to 500), added the feedback scores together
and then divided by the total number of feedback utterances. We added
greater value to the types of feedback that were more beneficial for stu-
dent learning because the feedback provided students with specific action-
able information (Hattie & Timperley, 2007). We found that the weighted
mean provides a more nuanced score than an overall holistic score.
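A minimal sketch of this computation (ours, with hypothetical counts, not the study's code):

```python
# Weighted mean of feedback types, as described above: multiply the frequency
# of each feedback type by its code value (100-500), sum, and divide by the
# total number of feedback utterances.
FEEDBACK_VALUES = {
    "low_information": 100,
    "clarify": 200,
    "value": 300,
    "correct": 400,
    "prompt_thinking": 500,
}


def weighted_mean(counts: dict[str, int]) -> float:
    """counts maps feedback type to the number of utterances in one simulation."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(FEEDBACK_VALUES[kind] * n for kind, n in counts.items()) / total


# Hypothetical 7-minute simulation dominated by low-information feedback.
example = {"low_information": 5, "clarify": 2, "value": 1, "correct": 1, "prompt_thinking": 1}
print(weighted_mean(example))  # 210.0
```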

Differences between the interventions


Using data analysis techniques, we were able to determine that the treat-
ment of differentiated coaching, in contrast to the control of differentiated
written self-reflection prompts, significantly improved teachers’ use of high-
information feedback (Bondie et al., 2019). Figure 2.1 highlights the impact
of coaching on teacher use of high-information feedback and also illustrates
teacher growth over multiple exposures of MRS. All teachers participated in
a baseline simulation, called Sim 0, prior to engaging in the PD. Following the
PD and prior to Simulation 1 (Sim 1), the teachers were randomly selected for
interventions of differentiated coaching or self-reflection. After these interven-
tions, Simulation 2 (Sim 2) reflects the outcome of the teacher’s personalized
practice decision (i.e., to restart the simulation, continue where the simulation
was paused, or ask a new question). Figure 2.1 shows that the control group
(i.e., self-reflection) improves over three exposures and continues to increase
the use of high-information feedback with each additional exposure.
FIGURE 2.1 Weighted Mean of Teacher Feedback Type in Mixed Reality Simulation by Treatment (Coaching versus No Coaching)

FIGURE 2.2 Weighted Mean of Teacher Feedback Type in Mixed Reality Simulation by Teaching Experience (Preparation, Early Career, and In-service)

Figure 2.2 displays the total sample disaggregated by level of teaching
experience to highlight how teachers at different points in their career
trajectory grew in the skill of offering high-information feedback across
the three simulations. Figure 2.2 differentiates among preservice teachers
participating in a university preparation program, school district early
career educators in an induction program, and in-service teachers to show
that each group benefited from the MRS in different ways. We expected
preservice teachers to have the lowest mean of high-quality feedback,
experienced teachers to have the highest mean, and early career teachers
to be in the middle, close to their preservice peers. As expected, experi-
enced teachers began the simulation with higher weighted means than
preparation and early career educators. Surprisingly, early career teachers
demonstrated on average lower weighted means of high-information feed-
back than both pre- and in-service teachers.

What we want—specific teacher interactions


The weighted mean offers the opportunity to add more personalization to
the analysis of teacher performance. For example, teachers could be invited
to set goals for their feedback practice by determining the weight of each dif-
ferent type of feedback. As an illustration, teachers may want to work valuing
the student perspective; therefore, they would prioritize valuing students with
the highest weight × 5 and clarifying questions with × 4 because clarifying
questions ensures teachers understand the student perspective. This simple
change allows teachers to set personalized goals while engaging in the same
MRS. With this approach, researchers are modeling co-construction of the
valuing system for teacher performance and are demonstrating how data ana-
lytics can change the power structure of a learning experience. In this way,
both teachers and researchers can set meaningful goals and use the MRS as a
means to measure changes in teaching practices.
The opportunity to set the value system of the weighted mean is a step
forward. However, to transform teacher education, we need data analyt-
ics that illustrate teaching practices as interactions with students. Figure
2.3 displays the mean frequency of each type of feedback teachers offered
to students across three trials by treatment condition. This display ena-
bles us to see the specific changes teachers made in both frequency and
types of feedback as they moved from one simulation trial to the next.
Importantly, we can see that teachers in both the treatment and control
conditions were able to increase feedback that prompted student thinking
and to decrease low-information feedback. Interestingly, teachers in the
treatment group (coaching) also decreased the total frequency of feedback
offered. This suggests that coaching may have helped teachers to be more
precise and specific with their feedback.
Figure 2.4 displays stacked bar graphs of the frequency of the types of
feedback teachers offered by level of teaching experience. We see increases
in prompting student thinking (dark gray) and decreases in low-informa-
tion feedback (light gray). We also see preservice and experienced teachers
offering fewer feedback utterances with each exposure. The middle section
displays how early career teachers increased the total number of feedback
utterances across the three MRS exposures. The stacked bar graphs provide
a window into the types of feedback that teachers offered and specific areas
where additional PD or coaching might increase the quality of teacher feed-
back. For example, future PD might address prompting thinking and valu-
ing the student perspective—the two types of high-information feedback that
remained infrequent across all three MRS exposures. While this data analysis
enables teacher educators to provide differentiated instruction focusing on
the specific types of feedback that teachers need to practice most, we are still
missing the essence of teaching: the interactions with students.

FIGURE 2.3 Teacher Feedback Types across Three Trials by Condition (Coaching versus Self-reflection)

What we need—lessons measuring adaptive teaching through student interactions

Purpose
Researchers and course instructors could use the outcomes from the
checklists of observed behaviors during the simulations to tailor future
instruction to better meet the needs of the teachers. Using algorithms, PD
could adapt and personalize to support teachers in expanding their teach-
ing strategy repertoire.

FIGURE 2.4 Frequency of Each Type of High Information Feedback by Teacher Experience Level (Preparation, Early Career, and In-service)

Perils
As demonstrated in the weighted mean analytics, researchers often meas-
ure teaching behaviors without considering the alignment with the teach-
er’s identity and unique strengths, the values of the teacher’s practice, and
the particular teaching and learning context. In addition, studies rarely
assess the impact on student learning or use rubrics that generalize teach-
ing strategies, as if the teaching behavior is the end goal rather than P–12
student growth. One way to tackle these challenges is to report teacher
performance reflecting the distribution of feedback given to individual
avatar students. Figure 2.5 illustrates this approach for the entire sample,
displaying the percentage of each type of feedback individual avatars
received across three simulations.

FIGURE 2.5 Given the Total Teacher Sample, the Frequency of Each Feedback Type Offered to Individual Avatar Students

The researcher value system weighted prompting student thinking with
the highest weight, yet teachers used that specific type of feedback the
least. For example, although Ethan received the most feedback, he received
primarily low information and clarifying feedback. This may be a result
of the simulation script. We had three standardized challenges, where at a
planned time in the simulation (e.g., 15 seconds, 1 minute, and 2 minutes)
a specific avatar student interrupts the teacher with a question, “Why am
I in this group?” or a statement, “I don’t understand why my answer is
wrong.” Ethan is the first student to interrupt the teacher; therefore, he
may receive more feedback due to that interruption. These data analyt-
ics begin to provide a dashboard of student learning. For example, we
see that Savannah received more valuing feedback than any other student.
Researchers could investigate this to determine if being on the far left drew
visual attention as teachers read from left to right, or if Savannah as a White
girl reflected the teacher sample and therefore teachers valued her think-
ing more than the other avatars, or perhaps Savannah’s initial response
planned by researchers in the MRS script provided more opportunity for
teachers to use valuing feedback. It is difficult to determine the factors that
may have led to differences in the feedback that students received during
the MRS. We need greater context in the teacher performance dashboard.

Promise—automated data analyses


Computer programming languages such as Python along with natural
language processing can be used to automate the analyses of recorded
interactions in the mixed-reality classroom. For example, every teacher
utterance can be coded based on the language used to communicate the
teacher’s intentions during simulations, identified as spoken to individual
students, and organized by time elapsed and duration. Figure 2.6 pro-
vides a detailed view of how teaching practices changed over repeated
practice. The x-axis displays elapsed time in the simulation and the y-axis
indicates the type of feedback offered to students (i.e., 100=low informa-
tion, 200=clarify, 300=value, 400=correct, and 500=prompt thinking).
This graph enables us to see when during the simulation individual ava-
tars received feedback, the duration, and type of the feedback provided.
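As a rough sketch of what such automation might look like (the keyword rules, field names, and example utterances below are hypothetical stand-ins for a trained natural language processing classifier):

```python
# Hypothetical sketch: code each timestamped teacher utterance and organize the
# results by avatar student and elapsed time. A production pipeline would swap
# the keyword rules for a trained NLP model.
from dataclasses import dataclass

CODES = {"low_information": 100, "clarify": 200, "value": 300,
         "correct": 400, "prompt_thinking": 500}


@dataclass
class Utterance:
    start_seconds: float    # elapsed time in the simulation
    duration_seconds: float
    student: str            # avatar the utterance was addressed to
    text: str


def code_utterance(text: str) -> int:
    """Toy rule-based coder standing in for an NLP classifier."""
    t = text.lower()
    if "why do you think" in t or "what would happen if" in t:
        return CODES["prompt_thinking"]
    if "not quite" in t or "the correct answer" in t:
        return CODES["correct"]
    if "i like how" in t or "great point" in t:
        return CODES["value"]
    if "can you say more" in t or "what do you mean" in t:
        return CODES["clarify"]
    return CODES["low_information"]


def timeline_by_student(utterances: list[Utterance]) -> dict[str, list[tuple[float, float, int]]]:
    """Per avatar, (start, duration, feedback code) tuples sorted by elapsed time."""
    timeline: dict[str, list[tuple[float, float, int]]] = {}
    for u in sorted(utterances, key=lambda u: u.start_seconds):
        timeline.setdefault(u.student, []).append(
            (u.start_seconds, u.duration_seconds, code_utterance(u.text)))
    return timeline


example = [Utterance(15.0, 4.0, "Ethan", "Can you say more about your grouping?"),
           Utterance(62.0, 6.5, "Savannah", "I like how you explained your reasoning.")]
print(timeline_by_student(example))
```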
This type of analysis measures teacher growth by their interactions
with students, exploring issues of equity in terms of quality, frequency,
duration, and priority of teacher feedback given the need presented in the
avatar student’s initial response to the assigned task. This transforms the
outcome of teacher education from an expectation of an overall score of a
standard demonstration of teaching practices to a responsive demonstra-
tion of a series of teaching interactions with students.
Given the ability to measure equitable teacher–student interactions as
the new outcome of teacher education, we can reimagine teacher education
more broadly. For example, this data analysis could be used live, during
simulations, where algorithms could be used to evoke avatar students to
provide feedback and challenges individualized to the strength and needs
that the teacher is currently demonstrating. The avatar students could
also support teachers in successfully implementing a teaching strategy.
Finally, returning to the weighted mean discussion, each teacher along
with PD developers could establish the weighting system to value the type
of feedback the teacher most needed to develop in their teaching practice.
Clearly, we are on the journey from what we have to what we need.

Factors that complicate these comparisons


The early career teachers were required to participate in our professional
development as part of their induction program, with the PD taking place
during school hours. The preparation teachers engaged in our PD as
assignments in their required course work for state teacher certification,
and the in-service teachers were required to complete PD hours; how-
ever, they chose our PD out of a catalog of many options. So, while the
pre- and in-service teachers were required to complete our PD as part of
a larger program, there was some choice in registering for courses. In
addition, other logistical factors may have contributed to differences in
teacher learning in the MRS, such as early career teachers’ stress dur-
ing the school day in engaging in PD before teaching. The time period
between Simulation 0 and Simulation 1 was also extended for early career
teachers, who completed the baseline simulation in the summer and then
the PD, Sim 1, and Sim 2 at the start of the school year. The differences
in early career teacher performance in the MRS and their logistics raise
questions for further exploration about the effectiveness of required PD
that takes place during the school day on teacher learning of instructional
practices for beginning teachers (Bondie et al., 2022).

FIGURE 2.6 Measuring Change in Teacher Feedback to Individual Students as Time Elapsed During Three MRS Exposures

In addition to logistics, other factors may have contributed to the
lower levels of high-information feedback for early career teachers, such
as adjusting to the responsibilities of classroom teaching. Perhaps early
career teachers felt greater dissonance between their current students in
their classroom and the avatar students in the virtual classroom. Teachers
in preparation had limited classroom experiences against which to compare the
avatar students, and experienced teachers are accustomed to adapting to different
classes of students; therefore, a future study might look more closely at how
early career teachers feel about the avatar students and the simulated classroom. Post-
simulation surveys suggested that teachers at all experience levels enjoyed
the simulation experience and found the MRS experience to be relevant
and useful. However, early career teachers did not increase their use of
high-information feedback in ways similar to preparation and experienced
teachers. It is important to note that in our study, all early career teach-
ers were teaching in the same school district, so our data are limited and
not generalizable; however, even the limited data help researchers form
new questions regarding how teachers at different points in their career
trajectory may benefit from using MRS as a means for professional learn-
ing. For example, researchers might explore ways to give teachers greater
autonomy, even within a required PD topic and format for learning. In
addition, given greater use of online meetings and learning, options in
logistics for required PD might also be explored.

Tailoring PD to teachers at different career points


Examining teacher performance at different career points when engag-
ing in the same simulation begins to illuminate how teaching practices
may increase in quality during preparation programs when teachers are
not yet responsible for students and may decrease in quality during the
early career stage as teachers adjust to their new responsibilities and then
increase again as teachers gain experience throughout their career. These
data may also be used to develop PD interventions that aim to move
preparation and early career teacher performance closer to the perfor-
mance of experienced teachers and to reduce a dip in teaching practices
as teachers transition from the support of preparation programs to teach-
ing on their own in their early career. Unlike a teaching observation with
real students, MRS offer a classroom performance assessment that can
be returned to as a standard measure of teacher performance over time.
Teachers typically improve dramatically in their first years of teaching
(Papay & Laski, 2018); however, many teachers leave the profession dur-
ing that time (Ingersoll et al., 2018). By leveraging data analytics and
MRS as an assessment tool, we may be able to design more effective PD,
reducing the years previously required to develop teaching expertise and
increasing the retention of early career teachers.
This data analysis (Bondie et al., 2019) is based on the frequency of
specific types of feedback offered during the 7-minute simulation, with
weights that increase with the degree to which a feedback type prompts
student thinking. For example, clarifying student responses was weighted
lower (× 2) than prompting a student’s metacognition (× 4). Using the
weighted mean score as a dependent variable for data analysis draws on
frequencies of specific types of feedback, which moves closer to what we
need to begin to differentiate professional development based on specific
aspects of teacher performance.
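A minimal sketch of this weighted mean calculation, in Python, follows. Only the ×2 weight for clarifying and the ×4 weight for prompting student thinking are stated above; the remaining weights and the counts are illustrative assumptions.

# Weighted mean of feedback events for one 7-minute simulation. Only the x2
# (clarify) and x4 (prompt thinking) weights come from the chapter; the other
# weights and the counts are made up for illustration.
feedback_counts = {"low_information": 6, "clarify": 4, "value": 3,
                   "correct": 2, "prompt_thinking": 5}
weights = {"low_information": 1, "clarify": 2, "value": 2,
           "correct": 3, "prompt_thinking": 4}

def weighted_mean_score(counts, weights):
    """Return the weighted mean feedback score across all coded events."""
    total_events = sum(counts.values())
    weighted_total = sum(counts[k] * weights[k] for k in counts)
    return weighted_total / total_events if total_events else 0.0

print(round(weighted_mean_score(feedback_counts, weights), 2))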
This analysis begins to answer the question of “what works for whom”:
specifically, did teachers with different experience levels change more or learn
more during repeated practice in MRS? However, we need more analytic
techniques than what we have to understand the changes in teacher learning
for the treatment or coached group from Sim 1 to Sim 2, as well as the steady
growth of early career teachers across the three exposures versus the growth
and decline of preservice teachers. More detailed data analytics is needed to
understand how to use MRS to rapidly develop teaching expertise so that pre-
service and early career teachers’ performance can be closer to that of their in-service
peers. Reaching higher quality teaching more quickly could have profound
effects in the field, such as greater job satisfaction and retention. Given the
high rate of teachers leaving the field, understanding how to efficiently build
teaching expertise has never been more urgent. Lessons from our study illu-
minate the promise of future data analysis techniques.

Analytics for adaptive immersive teacher learning


We want transformative professional learning that adapts to teacher
strengths, needs, and specific personalized goals. Educators need to feel
control over their professional learning as shapers of their career goals.
Teacher learning experiences must expand knowledge and skills in the
context of interactions with students—the essence of teaching. PD needs
to provide teachers with meaningful feedback that helps teachers reflect on
their current performance and monitor change in their teaching through-
out their career.
However, what we have is a one-size-fits-some pattern of PD that is
often not seen by teachers as relevant to their own students and teach-
ing practices. Further, our current data analyses use metrics that do not
provide the specificity needed to differentiate PD based on individual
teacher learning needs or to provide actionable feedback that will sup-
port teachers in transferring learning into daily teaching practices. Even
more troublesome is the lack of common measures used throughout a teacher’s
career that would let teachers track growth in their teaching practices from
their preparation program, through early career induction, and throughout
their in-service experiences with students.
Given the limited access to schools for student teaching during the pan-
demic and the promise of a safe practice without the concern of harming
real children, the body of knowledge based on research using MRS in
teacher education is rapidly growing. For example, Cohen et al. (2020)
used an experimental design with repeated MRS exposures, enabling the
researchers to make causal claims regarding the impact of coaching on
changes in teacher behavior. However, we don’t know how the specific
types of feedback teachers offered may have changed (e.g., valuing the
student perspective, offering more specific corrections, or challenging stu-
dent thinking). More importantly, we don’t know the extent to which teacher
feedback was responsive to student learning needs and distributed equita-
bly during the MRS.
If researchers embrace new data analytics that center teacher–student
interactions and teachers’ own values about their practices, then teacher edu-
cators can push toward a transformation in teacher education that reflects
our goals for student learning. The learning analytics necessary are pos-
sible with modern immersive technologies; researchers simply need to
use them. We have demonstrated that, with Python-powered data analytics,
we can provide teachers with the exact feedback type offered to each
avatar student as learning unfolded during the MRS. Analytics are key
to getting what we want and are increasingly attainable if we choose to
take this path.

References
Argyris, C., & Schön, D. A. (1992 [1974]). Theory in practice: Increasing professional effectiveness. Jossey-Bass.
Bondie, R., Dahnke, C., & Zusho, A. (2019). Does changing “one-size fits all” to differentiated instruction impact teaching and learning? Review of Research in Education, 43(1), 336–362. https://doi.org/10.3102/0091732X18821130
Bondie, R., & Dede, C. (2021). Redefining and transforming field experiences in teacher preparation through personalized mixed reality simulations. What Teacher Educators Should Have Learned from 2020, 229. https://www.learntechlib.org/primary/p/219088/
Bondie, R., Mancenido, Z., & Dede, C. (2021). Interaction principles for digital puppeteering to promote teacher learning. Journal of Research on Technology in Education, 53(1), 107–123. https://doi.org/10.1080/15391523.2020.1823284
Bondie, R., Zusho, A., Wiseman, E., Dede, C., & Rich, D. (2022). Can differentiated and personalized mixed reality simulations transform teacher learning? Technology, Mind, and Behavior. https://doi.org/10.1037/tmb0000098
Cohen, J., Wong, V., Krishnamachari, A., & Berlin, R. (2020). Teacher coaching in a simulated environment. Educational Evaluation and Policy Analysis, 42(2), 208–231. https://doi.org/10.3102/0162373720906217
Danielson, C. (2013). The framework for teaching evaluation instrument. The Danielson Group.
Dieker, L. A., Hughes, C. E., Hynes, M. C., & Straub, C. (2017). Using simulated virtual environments to improve teacher performance. School–University Partnerships, 10(3), 62–81.
Gabriel, R. (2010). The case for differentiated professional support: Toward a phase theory of professional development. Journal of Curriculum and Instruction, 4(1), 86–95. https://doi.org/10.3776/joci.2010.v4n1p86-95
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Hayes, A. T., Straub, C. L., Dieker, L. A., Hughes, C. E., & Hynes, M. C. (2013). Ludic learning: Exploration of TLE TeachLivE™ and effective teacher training. International Journal of Gaming and Computer-Mediated Simulations (IJGCMS), 5(2), 20–33.
Ingersoll, R. M., Merrill, E., Stuckey, D., & Collins, G. (2018). Seven trends: The transformation of the teaching force. Consortium for Policy Research in Education, Research Report #2018-2.
Mikeska, J. M., Howell, H., & Straub, C. (2019). Using performance tasks within simulated environments to assess teachers’ ability to engage in coordinated, accumulated, and dynamic (CAD) competencies. International Journal of Testing, 19(2), 128–147. https://www.tandfonline.com/doi/full/10.1080/15305058.2018.1551223
Papay, J. P., & Laski, M. (2018). Exploring teacher improvement in Tennessee: A brief on reimagining state support for professional learning. TN Education Research Alliance.
3
SYSTEM-WIDE MOMENTUM
Tristan Denley

Momentum year strategies


Over the last decade, there has been a growing focus on strategies to
improve not only access to higher education but also the successful com-
pletion of credentials of value. These efforts have ranged from the intro-
duction of new classroom pedagogy approaches to the creation of new
models of higher education. Throughout much of the last decade, I too
have focused on these strategies, but from the unique perspective of what
can be done at the level of a state’s higher education system. First at the
Tennessee Board of Regents (2013–2017) then at the University System of
Georgia (USG) (2017–2021), I worked with the faculty, staff, and leaders
of those institutions to study what works in systems of higher education
and conversely what does not; to diagnose what the root causes of inequi-
ties in higher education are, and how they can be removed; and to identify
the barriers we have historically, and often unknowingly, placed in the
way of our students’ successes, and in what ways we might take their
journey to new heights.
To carry out this work across a whole system of institutions requires
knitting evidence-based initiatives and research findings into a coher-
ent framework of change and working with campuses to implement that
shared vision. The work I undertook, that has come to be known as the
Momentum Year (Denley, to appear), was the first comprehensive stu-
dent success strategy to address American college completion at a state-
wide scale.
In this chapter I will discuss in detail three threads of the over-
all Momentum Year strategy, the theoretical framework behind these
threads, and the strategies we developed and implemented, based on that
theory, to radically impact student success.

A structural approach to system-wide curricula


Any modern higher education system must teach a vast array of course-
work to meet the curricular requirements of all its programs of study.
For instance, in the University System of Georgia’s 26 institutions, each
semester the system’s roughly 300,000 undergraduate students study over
4,600 different courses. However, more than half of the 2 million student-
course enrollments lie in approximately 30 of those classes. This observa-
tion in no way implies that the other 4,570 courses have no role to play, or
that we need far fewer courses—if anything we probably need more. But
it does point to a peculiarity of curricular structure: course enrollment is
highly concentrated.
Over the last 20 years there has been a growing understanding of the
structure of complex networks like that of coursework in higher educa-
tion (Watts & Strogatz, 1998; Albert & Barabási, 2002). These small-
world networks can be used to analyze a wide range of systems in nature,
technology and societal phenomena—from the World Wide Web to virus
spread. One feature of this type of network is the existence of hub verti-
ces—nodes in the network with an overabundance of connections. These
hubs play a disproportionately large role in the connective structure of the
overall network, both enabling effective flow around the network, and
also fragmenting the network when they are damaged or removed (Pastor-
Satorras & Vespignani, 2001; Albert et al., 2000).
By studying the course transcripts of graduates across three state
systems, I was able to establish that the system’s course structure itself
forms a small-world network. The highly enrolled classes that com-
prise the lion’s share of the enrollment are the hubs in this network.
But consequently, as well as being highly enrolled, they also play a dis-
proportionately critical curricular role in the overall learning structure
of the system—successful learning in these classes disproportionately
leads to further success; lack of success in these classes leads to failures
elsewhere.
More formally, we define the graduate-transcript graph $G_S$ with vertex set $V$ equal to the set of courses that appear on the undergraduate transcript of any graduate from institution or system $S$, and edge set $E$ defined by joining course $c_1$ to course $c_2$ if $c_1$ and $c_2$ appear together on a graduate’s transcript.
Network small-worldliness has been quantified by the coefficient $\sigma$, calculated by comparing the average clustering coefficient $C$ and average path length $L$ of the graph to those of a random graph with the same number of vertices and the same average degree, $C_r$ and $L_r$:

\[
\sigma = \frac{C / C_r}{L / L_r}
\]

The degree to which $\sigma > 1$ measures the graph’s small-worldliness.
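The sketch below, using the networkx library in Python, shows how σ could be computed for a graduate-transcript graph built from a handful of hypothetical transcripts. The course identifiers and the use of a same-size random graph for comparison are illustrative assumptions rather than the exact procedure used in the TBR and USG analyses.

# Build a graduate-transcript graph from hypothetical transcripts and compute
# sigma = (C/Cr) / (L/Lr) against a random graph with the same number of
# vertices and edges (hence the same average degree).
import itertools
import networkx as nx

transcripts = [
    {"ENGL1101", "MATH1111", "PSYC1101", "POLS1101"},
    {"ENGL1101", "MATH1111", "CHEM1211", "HIST1111"},
    {"ENGL1101", "PSYC1101", "BIOL1107", "MATH1111"},
]

G = nx.Graph()
for courses in transcripts:
    # Join c1 and c2 whenever they appear together on a transcript.
    G.add_edges_from(itertools.combinations(sorted(courses), 2))

def clustering_and_path_length(graph):
    """Average clustering and shortest path length on the largest component."""
    giant = graph.subgraph(max(nx.connected_components(graph), key=len))
    return nx.average_clustering(giant), nx.average_shortest_path_length(giant)

C, L = clustering_and_path_length(G)
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=42)
Cr, Lr = clustering_and_path_length(R)

sigma = (C / Cr) / (L / Lr) if Cr > 0 else float("inf")
print(f"C={C:.3f} Cr={Cr:.3f} L={L:.3f} Lr={Lr:.3f} sigma={sigma:.2f}")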
We constructed the graduate-transcript graph for both the Tennessee
Board of Regents (TBR) institutions and the institutions in the University
System of Georgia (USG). For TBR, σ = 15.15; for USG, σ = 123.57.
Figure 3.1, which displays the degree distribution of the TBR system, shows
that it follows an inverse power law, characteristic of small-world net-
works (Denley, 2016). The USG distribution follows a similar pattern.
This structural curricular analysis suggests that deepening student
learning in these “hub” classes will reach across the student body very
quickly and will also influence student success across the breadth of the
curriculum. We have used this theoretical observation to steer several

FIGURE 3.1 Degree Distribution (x-axis: Degree Value; y-axis: Count)


system-level curricular transformation initiatives, designed to impact stu-
dent success at scale.
For instance, in 2016, the Tennessee Board of Regents began an initiative
to increase accessibility to instructional materials for students with disabili-
ties. To ensure maximum impact, the work concentrated on these “hub”
vertices. We carried out this structural analysis at each of the 19 universities
and colleges and identified the 30 hub courses at each. We provided training
to faculty representatives, from each of the 30 hub classes at each institu-
tion, to apply an accessibility rubric to the delivery of that class on their
campus. The combined data from these analyses enabled a coordinated sys-
tem approach to increasing accessibility that was recognized with an award
from the National Federation of the Blind and has also enabled faculty in
each course to frame accessibility implementation plans for their courses.
This small-world network approach has also informed course redesign
strategies. Beginning in Fall 2013, TBR invited teams of faculty from
across the system to propose a re-envisaged structure for one of these
highly impactful “hub” classes on their campus. They were asked to pro-
pose a new approach, together with an assessment structure, with the
intention that this new format could enable more students to learn more
material more deeply, and consequently would be more successful in that
class. Support was provided for the successful teams from funds provided
by the system office as both recognition and incentive for the work to
be done. Between 2013 and 2015, more than 80 teams of faculty were
supported in this way to develop their strategy, implement their pilots,
and collect their impact data. The projects involved over 14,000 students
and 160 faculty at 18 campuses, in courses from 16 disciplines. Faculty
designed new course structures from the full spectrum of contemporary
techniques, including supplementary instruction, learning communities,
flipped and hybrid classroom models as well as using technologies in a
variety of learning settings.
In the University System of Georgia, I built on this approach to course
redesign, as part of the Gateway to Completion project, utilizing the
small-world network approach to identify the courses in which redesign
promised the greatest opportunity for impact.
We define a catapult course as a course for which there is evidence
that deepening the learning in that course has an impact on the student’s
graduation rate, as well as the outcomes in that course itself.
Let $g_c : \{A, B, C, D, F\} \to [0, 1]$ be the grade distribution for a course $c$, and let $b_c : \{A, B, C, D, F\} \to [0, 1]$ be the conditional probability that a student who takes course $c$ and earns that grade graduates. So,

\[
b_c(x) = P(\text{student graduates} \mid \text{student takes course } c \text{ and earns grade } x)
\]

Let $g'_c : \{A, B, C, D, F\} \to [0, 1]$ be defined by $g'_c(A) = g_c(A) + g_c(B)$, $g'_c(B) = g_c(C)$, $g'_c(C) = g_c(D)$, $g'_c(D) = g_c(F)$, and $g'_c(F) = 0$.
We define the learning impact for course $c$ by

\[
I(c) = \frac{\sum_{x \in \{A, B, C, D, F\}} g'_c(x)\, b_c(x)}{\sum_{x \in \{A, B, C, D, F\}} g_c(x)\, b_c(x)}.
\]

Ideally, to study catapult courses, we would like to study the impact of quality learning on graduation rates. Instead, to carry out this study at a system scale, we used grades as a proxy. The function $b_c$ measures how the graduation rate varies across the students who take the course $c$ and earn the various grades. The function $g'_c$ models the grade distribution in a class where some intervention has been implemented; the intervention results in each student receiving a grade one letter grade higher than they would otherwise have received. This is intended to model the effect of deepening learning in that course. The learning impact $I(c)$ then measures how much the graduation rate would increase for an average student in the deeper-learning version of the course.
To identify those courses where there is the greatest opportunity to impact graduation rates through this approach, we define $e : \text{All courses} \to [0, 1]$ so that $e(c)$ is the proportion of graduates who take course $c$ at some point in their degree. In this way we can identify the courses that have the maximal opportunity to impact student success by deepening learning. These catapult courses are the courses for which $I(c) \cdot e(c)$ is the greatest.
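The following Python sketch computes I(c) and the overall impact I(c) · e(c) for a single hypothetical course directly from the definitions above; the grade distribution, grade-conditional graduation rates, and e(c) value are made-up numbers used only for illustration.

# Learning impact I(c) for one hypothetical course, following the definitions
# in the text. g is the observed grade distribution, b the graduation
# probability conditional on each grade, and e the share of graduates who
# take the course. All numbers are illustrative.
GRADES = ["A", "B", "C", "D", "F"]

def learning_impact(g, b):
    """Ratio of expected graduation under the shifted distribution g' to
    expected graduation under the observed distribution g."""
    g_prime = {"A": g["A"] + g["B"], "B": g["C"], "C": g["D"], "D": g["F"], "F": 0.0}
    shifted = sum(g_prime[x] * b[x] for x in GRADES)
    observed = sum(g[x] * b[x] for x in GRADES)
    return shifted / observed

g = {"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.10, "F": 0.15}
b = {"A": 0.75, "B": 0.65, "C": 0.50, "D": 0.35, "F": 0.20}
e = 0.60  # proportion of graduates who take the course

impact = learning_impact(g, b)
print(f"I(c) = {impact:.3f}; overall impact I(c)*e(c) = {impact * e:.3f}")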
We carried out this analysis for each course at each institution in the
University System of Georgia, using student-course-level data from 2011
to 2018. Using this approach, we were able to identify the learning impact
for each course at each institution.
In 2017, the University System of Georgia began its second cohort of a
course redesign initiative, Gateways to Completion, in partnership with
the John Gardner Institute. In this cohort, we used the learning impact
approach to enable campuses to select which courses would be most stra-
tegic to engage in a redesign strategy to deepen learning. To this end, we
provided a bubble-chart visualization to each campus. Each bubble rep-
resents a course $c$. The y-axis represents the learning impact of the course $I(c)$; the x-axis represents $e(c)$, that is, the proportion of graduates who took that course; and the size of the bubble represents the overall impact $I(c) \cdot e(c)$.
The bubble chart highlights those courses in which there is the greatest
opportunity to increase graduation rates in a particular subject area by
FIGURE 3.2 Campus-Wide Course Impact (y-axis: Deeper Course Learning Impact; x-axis: Student Enrollment Impact; labeled courses include English Composition I, American Government, College Algebra, and Intro to Psychology)

deepening learning, and the greatest opportunity to increase graduation
rates across a whole campus.
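A minimal matplotlib sketch of such a bubble chart follows; the course names echo the labels in Figure 3.2, but the plotted values are illustrative and are not USG data.

# Bubble chart in the style described above: x = e(c), y = I(c), bubble area
# scaled by I(c)*e(c). Values are illustrative only.
import matplotlib.pyplot as plt

courses = ["English Composition I", "College Algebra",
           "Intro to Psychology", "American Government"]
e_vals = [0.95, 0.70, 0.65, 0.55]   # proportion of graduates taking the course
i_vals = [1.10, 1.22, 1.08, 1.12]   # learning impact I(c)
sizes = [1500 * e * i for e, i in zip(e_vals, i_vals)]  # area ~ I(c)*e(c)

fig, ax = plt.subplots()
ax.scatter(e_vals, i_vals, s=sizes, alpha=0.5)
for name, x, y in zip(courses, e_vals, i_vals):
    ax.annotate(name, (x, y), ha="center", fontsize=8)
ax.set_xlabel("Student Enrollment Impact e(c)")
ax.set_ylabel("Deeper Course Learning Impact I(c)")
plt.savefig("campus_course_impact.png", dpi=150)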
Campuses used the visualization to select the courses on their cam-
pus that would lead to the greatest campus-wide impact within the con-
text of their faculty and student body (Figure 3.2). This work involved
over 75 teams of faculty in multiple disciplines. The Gardner Institute
provided technical support and a network improvement community
environment for these teams of faculty to explore evidence-based strate-
gies that would deepen learning and close equity gaps. Many of these
course redesigns resulted in just those outcomes. The specifics of the
interventions that these faculty teams implemented are captured in
detailed case studies (Gateways to Completion Case Studies, n.d.). All
told, Gateways to Completion impacted more than 750,000 students’
learning experiences.

The centrality of English and Mathematics


A meta-analysis of the catapult courses shows that while the complete
list varies somewhat from institution to institution, there are two sub-
ject areas that appear consistently: English Composition and Introductory
Mathematics. The small-world graph structure suggests that success and
failure alike spread like a pandemic from these two learning areas across
the curriculum. Consequently, success, or lack thereof, in this course-
work is about much more than passing a math or an English course. It
is about graduating or not. This theory is borne out in real data. In the
University System of Georgia, before 2018, students who were successful
in Freshman Math and Freshman Writing in their first year were more
than ten times more likely to graduate (66 percent) than those who were
not (6 percent). These differences in graduation rates remained true even
when the data were further disaggregated by student demographic vari-
ables and by preparation. Suffice it to say, improving student success in
Introductory Mathematics and English courses played an enormous role
not only in improving Mathematics and English outcomes, but also in a
student’s likelihood to complete credentials of value.
Although there is a rich collection of strategies that can improve stu-
dent learning in Mathematics and English courses (Guide to Evidence-
based, n.d.; Understanding and Teaching Writing, 2018), ironically the
most fundamental barrier is the traditional approach to developmental
education. Despite the intention of providing a supportive pathway for
less well-prepared students, the long sequence structure of traditional
remediation means that those students rarely complete the credit-bearing
courses required for graduation (Remediation, 2012). It is the structure
that creates a roadblock to student success.
Changes in remedial education that place students directly into gate-
way courses accompanied by intensive tutoring in conjunction with the
courses being taken for credit (corequisite remediation) have been shown
to significantly improve these outcomes, in some cases doubling or even
tripling student success rates in Freshman Math and English courses
(Denley, 2021b; Denley, 2016; Increasing Student Success, 2019; Logue et
al., 2019). This was certainly the case in both Tennessee’s and Georgia’s
colleges and universities.

Scaling the corequisite model


Galvanized by the centrality of student success in English and Mathematics,
the USG undertook a detailed analysis of the data comparing the effec-
tiveness of three approaches to developmental education that were being
used across the system in both English and Mathematics. The three
approaches were a traditional developmental sequence; the Foundations
model in which students enroll in a single semester of remediation requir-
ing successful completion prior to enrolling in a college-level course; and
the corequisite model. To compare the effectiveness of these approaches,
we compared the rates at which students were able to successfully com-
plete a college-level English course and a college-level Mathematics course
(college algebra, quantitative reasoning, or math modeling) within one
academic year.
Because the preparations of incoming student populations across the
system vary considerably, we disaggregated the data using common uni-
form measures of preparation: ACT math/writing sub-scores; SAT math/
writing sub-scores; and high-school GPA. The results were striking and
mirrored results of a similar analysis from the Tennessee Board of Regents
(Denley, 2016).
We will discuss both English and Mathematics but will begin with
Mathematics. The results disaggregated by ACT math sub-score are dis-
played in Figure 3.3. These data include both students who entered with a
standardized test score, and those who did not. Students without a score
are included in the “Total” percentage rates. Throughout, we will refer to
“passing a college level class” as earning at least a “C” grade.
As Figure 3.3 shows, the students who were educated using the coreq-
uisite model were more than twice as likely to complete a college-level
class with a grade of “C” or better when compared with their peers who
used either of the other two prerequisite approaches. Indeed, while the
success rates more than doubled overall, the gains were not only for the
most prepared students. In fact, the largest gains in success rates were
experienced by students with the weakest preparation. The data for the
other measures of preparation were similarly compelling.
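The comparison behind these figures amounts to disaggregating student-level records by developmental-education model and ACT sub-score band and computing pass rates, as in the Python sketch below; the records and column names are hypothetical.

# Pass rates for a college-level math course within one academic year, broken
# out by developmental-education model and ACT math sub-score band. The
# student records below are hypothetical.
import pandas as pd

df = pd.DataFrame([
    {"model": "Traditional", "act_math": 16, "passed_college_math": 0},
    {"model": "Foundations", "act_math": 16, "passed_college_math": 0},
    {"model": "Corequisite", "act_math": 16, "passed_college_math": 1},
    {"model": "Corequisite", "act_math": 13, "passed_college_math": 1},
    {"model": "Traditional", "act_math": 20, "passed_college_math": 1},
])

# Band the ACT math sub-score the way the figures do (<14, 14, ..., 21).
bins = [0, 13, 14, 15, 16, 17, 18, 19, 20, 21]
labels = ["<14", "14", "15", "16", "17", "18", "19", "20", "21"]
df["act_band"] = pd.cut(df["act_math"], bins=bins, labels=labels)

pass_rates = (df.groupby(["model", "act_band"], observed=True)["passed_college_math"]
                .mean()
                .mul(100)
                .round(1))
print(pass_rates)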

FIGURE 3.3 USG System-Wide Comparison of Mathematics Developmental Education Models (y-axis: percentage of students who passed a college-level math class within one academic year; x-axis: ACT Math Sub-score; series: 2013 Traditional DevEd, 7,214 students; 2015–17 Foundations, 8,813 students; 2015–17 Corequisite, 9,089 students)
FIGURE 3.4 USG System-Wide Comparison of Mathematics Developmental Education Models (African-American Students) (y-axis: percentage of students who passed a college-level math class within one academic year; x-axis: ACT Math Sub-score; series: 2013 Traditional DevEd, 3,590 students; 2015–17 Foundations, 4,718 students; 2015–17 Corequisite, 4,694 students)

While the improvement in the results for the overall student popula-
tion were impressive, so, too, was the corequisite model’s effectiveness in
improving success rates for all student subpopulations and in eliminating
equity gaps. This is illustrated by the results for African-American students
who studied Mathematics using the three models, as displayed in Figure
3.4. Those students who took a corequisite Mathematics course successfully
earned a “C” or better at more than twice the rate compared to their peers
who used the foundations or traditional approach. Once again, there were
significant improvements in success rates for students at every preparation
level with the largest gains for those with the lowest preparation scores.
These results are analogous to the improvements achieved by other
racial groups and student subpopulations, such as low-income students
and adult learners. For each category, we saw a doubling in the rates at
which students successfully completed a college-level Mathematics course.
Moreover, that doubling holds true for students across the preparation
spectrum, whether using ACT, SAT, or high-school GPA as the measure
of preparation.
The analysis for gateway English course success followed a very similar
pattern. We show these results in Figure 3.5. Once again, the students who
were educated using the corequisite model were almost twice as likely to
earn at least a “C” grade in their college-level English class when compared
with their peers who used either of the other two prerequisite approaches.
FIGURE 3.5 USG System-Wide Comparison of English Developmental Education Models (y-axis: percentage of students who passed a college-level English class within one academic year; x-axis: ACT Writing Sub-score; series: 2013 Traditional DevEd, 1,905 students; 2015–17 Foundations, 3,673 students; 2015–17 Corequisite, 5,043 students)

As with Mathematics, the gains in success rates were apparent all across
the preparation spectrum, producing very similar success rates regardless
of incoming high-school preparation or student demographic. The data
for the other measures of preparation were similarly compelling.
In light of these results, all 26 University System of Georgia universities
and colleges moved entirely to the corequisite model of developmental
education for college Mathematics and English beginning Fall 2018.
While there are a variety of ways to implement the corequisite model,
we chose a scaling approach that followed three design principles:

1. all students enroll directly into a college-level Mathematics or English
course that satisfies a general education requirement
2. corequisite students are required to also attend a 1–3 credit hour coreq-
uisite course that is aligned with, and offered alongside, the appropri-
ate college-level course
3. the corequisite course is designed specifically to help students master
the skills and knowledge required for success in the accompanying col-
lege-level course

Within these design parameters, institutions were free to make decisions
concerning more granular aspects of the structure of their corequisite
implementation, such as composition of the student body in the credit-
bearing class, whether the same instructor or different instructors taught
the two instructional experiences, and the number of credit hours in the
corequisite class.
By analyzing the full implementation for academic years 2018–2019
and 2019–2020 we have been able to shed some light on which more
granular combinations of strategies produce better results (Denley, 2021a;
“Corequisite works,” 2021). Here, we will concentrate on providing the
results of the two full academic years of full corequisite implementation in
mathematics and English courses and comparing them to the results from
the threefold comparison that led to moving to full scale. These results are
shown in Figures 3.6 and 3.7.
The success results for the full-scale implementation closely mirrored
those that we had seen in the earlier smaller scale trials of the corequisite
model. In Mathematics, the success rate of 67 percent slightly exceeded
the overall corequisite success rate in the previous three years and more
than doubled the best outcomes from either of the other two approaches.
In English, the pattern was much the same. The overall success rate of
69 percent was not quite as strong as the 74 percent of the previous three
years’ partial implementation but was still a very substantial increase over
either of the other two approaches.
Once again, it is worthy of notice that in both Mathematics and English
these impressive overall gains were achieved for students across the full
FIGURE 3.6 USG System-Wide Comparison of Mathematics Developmental Education Models (y-axis: percentage of students who passed a college-level math class within one academic year; x-axis: ACT Math Sub-score; series: 2013 Traditional DevEd, 7,214 students; 2015–17 Foundations, 8,813 students; 2015–17 Corequisite, 9,089 students; Corequisite Full Implementation 2018–20, 18,455 students)
FIGURE 3.7 USG System-Wide Comparison of English Developmental Education Models (y-axis: percentage of students who passed a college-level English class within one academic year; x-axis: ACT Writing Sub-score; series: 2013 Traditional DevEd, 1,905 students; 2015–17 Foundations, 3,673 students; 2015–17 Corequisite, 5,043 students; Corequisite Full Implementation 2018–20, 7,613 students)

preparation spectrum. Indeed, the most substantial gains were achieved
for those students with the lowest high-school test scores.
There has been considerable interest in whether all students might benefit
from the corequisite approach and whether an implementation should
employ a preparation floor score (Increasing Student Success, 2019). Our
results do not suggest that any such placement restrictions are required.
The results show very similar success results for students in Freshman
Writing regardless of incoming preparation. Mathematics shows a very
similar pattern with similarly strong passing rates across the preparation
spectrum. In Mathematics, the passing rates for those students who arrive
most weakly prepared are not quite as strong as their better-prepared col-
leagues, but even the lowest success rate of 57 percent (for students with
ACT math scores below 14) is still twice the overall success rate for the
foundations approach, and almost three times the traditional approach.
In English, even the least-prepared students are passing their credit-bear-
ing class at a success rate in the 70s. Indeed, the improvements that we
achieved by fully implementing the corequisite model were statistically
significant at 99 percent confidence over the previous models at every level
of preparation.
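One conventional way to check a claim of this kind is a two-proportion z-test comparing pass rates under two models, sketched below in Python. The group sizes match the Ns reported in the figures, but the passer counts are hypothetical, and this is not necessarily the exact test used in the USG analysis.

# Two-proportion z-test comparing pass rates under the Foundations model and
# the full corequisite implementation. Passer counts are hypothetical; group
# sizes follow the figure legends (8,813 and 18,455 students).
from statsmodels.stats.proportion import proportions_ztest

passed = [2900, 12360]    # hypothetical numbers of passers in each group
enrolled = [8813, 18455]  # group sizes for Foundations and full corequisite

z_stat, p_value = proportions_ztest(count=passed, nobs=enrolled)
print(f"z = {z_stat:.2f}, p = {p_value:.2e}")
print("Significant at 99% confidence" if p_value < 0.01 else "Not significant at 99%")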
We see the same striking improvements we saw with the earlier analysis
when we further disaggregate the full implementation results by race. The
disaggregation analysis was carried out for all IPEDS ethnicity classifica-
tions. Figures 3.8 and 3.9 show this disaggregation for Black, Hispanic/
Latino and White students for English and Mathematics corequisite students during the full implementation phase, since these are the largest racial groups among the USG student population.
FIGURE 3.8 USG System-Wide Comparison of Success in Credit-Bearing Mathematics Classes During Full Implementation of Corequisite (y-axis: percentage of students who passed a college-level math class within one academic year; x-axis: ACT Math Sub-score; series: Black students, 9,688; Latinx students, 2,747; White students, 4,493)
FIGURE 3.9 USG System-Wide Comparison of Success in Credit-Bearing English Classes During Full Implementation of Corequisite (y-axis: percentage of students who passed a college-level English class within one academic year; x-axis: ACT Writing Sub-score; series: Black students, 3,439; Latinx students, 841; White students, 1,194)
While at each preparation level there is some variation between the suc-
cess rates of students of differing races, the differences are almost never
close to statistically significant. That said, there remains a statistically
significant difference between the overall success rates for Black students
when compared with their White and Hispanic/Latino colleagues. This
total success rate incorporates the data from those students with stand-
ardized scores, as well as those without scores.

Academic mindset and deeper learning


I also led research at the University System of Georgia examining con-
nections between aspects of academic mindset and deepened classroom
learning. In Fall 2017, we began a detailed exploration of the connec-
tions between various academic mindsets held by incoming freshmen and
their eventual academic success. To this end, the system developed and
deployed a survey instrument—the “Getting to Know You” survey. This
instrument was composed of roughly 80 questions that covered a broad
spectrum of attitudinal and perception information about each student,
ranging from questions of belonging, self-efficacy, and family attitudes to
questions of perceived purpose and growth mindset in connection with
gateway Mathematics and English courses. All of these aspects have been
shown to have connections with student educational outcomes (Tibbetts
et al., 2022; Logue et al., 2019; Broda et al., 2018; Lazowski & Hulleman,
2016; Yeager et al., 2019). Since 2017, this survey has been given to the
entire incoming freshmen class at each institution during the Fall semester
both within the first three weeks of class and during the last three weeks
of class. Students are free to participate or not, but those who choose to
provide the survey information are asked to also provide their institu-
tional ID number so that their mindset data can then be combined with
student-level course outcomes.
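Operationally, this linkage can be as simple as joining the two survey waves to course outcomes on the institutional ID, as in the hypothetical Python sketch below; the field names are assumptions.

# Join beginning- and end-of-semester survey responses to course outcomes
# through the student's institutional ID. All data and field names are
# hypothetical.
import pandas as pd

survey_start = pd.DataFrame({
    "institution_id": [101, 102, 103],
    "math_useful_start": ["Agree", "Disagree", "Agree"],
})
survey_end = pd.DataFrame({
    "institution_id": [101, 102, 103],
    "math_useful_end": ["Agree", "Agree", "Disagree"],
})
outcomes = pd.DataFrame({
    "institution_id": [101, 102, 103],
    "math_grade": ["A", "B", "D"],
})

merged = (survey_start
          .merge(survey_end, on="institution_id", how="inner")
          .merge(outcomes, on="institution_id", how="inner"))
merged["passed"] = merged["math_grade"].isin(["A", "B", "C"])
print(merged)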
Over the last five years, this approach has provided a dataset composed
of more than 40,000 students at the beginning of their Fall semester and
almost 4,500 of those students at the end of their Fall semester. While analysis of
these data has already provided invaluable insights into the way in which
our students identify as learners and interact with the learning environ-
ment around them, this chapter will concentrate on how their atti-
tudes change during the semester and how those changes are correlated
with improvements or declines in academic performance.

Purpose
As part of the survey, students were asked to judge the importance of vari-
ous factors that contribute to their motivation to attend college, i.e., did
they have a purpose for being at college? By far the most prominent factor
was to “prepare for my future career,” with 82 percent of students rat-
ing that factor as “very important” on a 7-point Likert scale (1 was “not
important at all,” 7 was “very important”). Consequently, we explored
the perceived connection between coursework and their future careers,
and its possible impact on course performance.
We asked at the beginning and end of the semesters whether the students
thought that what they learned in each of their Freshman Mathematics
and English classes would “help them in their future careers.” The results
are summarized in Tables 3.1 and 3.2.
While there was not a statistically significant difference between the
pass rates (at least a C grade) of those who agreed and disagreed about
the utility of what they were going to learn in their Mathematics course
at the beginning of the semester, this was far from true at the end of the
semester. The pass rates of those who agreed at the end of the semester
that what they had learned in their Mathematics would be useful in their
future career were 8 percentage points (pp) higher than those who disa-
greed. Those who agreed at the end of the semester also outperformed
their disagreeing colleagues by almost half of a letter grade on average and
earned more than 12 pp more A grades (effect size 8.8).

TABLE 3.1 What I Learn in My Math Classes will Help Me in My Future Career

Semester Beginning   Semester End   Pass Rate (%)   N   Effect Size   Sig Level (%)

All students
Agree 84.1 4089
Agree Agree 86.1 3281 5.9 99
Agree Disagree 76.6 808
Disagree 82.6 1079
Disagree Agree 85.6 390 2 95
Disagree Disagree 80.9 689
Agree 86 3671 6.1 99
Disagree 78.6 1497
Corequisite students
Agree 77.8 983
Agree Agree 79.6 788 2.5 99
Agree Disagree 70.6 195
Disagree 76.4 250
Disagree Agree 81.1 95 1.4 90
Disagree Disagree 73.5 155
Agree 79.8 883 2.8 99
Disagree 71.9 350
TABLE 3.2 What I Learn in My English Classes will Help Me in My Future Career

Semester Beginning   Semester End   Pass Rate (%)   N   Effect Size   Sig Level (%)

All students
Agree 91.4 3568
Agree Agree 91.9 3110 2.4 99
Agree Disagree 88.1 458
Disagree 91.2 658
Disagree Agree 91.7 356 0.5
Disagree Disagree 90.5 302
Agree 91.9 3466 2.3 95
Disagree 89.1 760
Corequisite students
Agree 83.2 478
Agree Agree 84.1 426 1.2
Agree Disagree 76.4 52
Disagree 77.4 66
Disagree Agree 78.4 39 0.2
Disagree Disagree 76.0 27
Agree 83.6 465 1.4 90
Disagree 76.3 79
* = significant at 90% level
** = significant at 95% level
*** = significant at 99% level

These differences are also apparent for student subpopulations. For
instance, Black students who agree at the end of the semester that their
math class will be useful in their future career are 8.7 pp (effect size 3.1)
more likely to pass the course with at least a C and 8.4 pp more likely to
earn an A grade (effect size 3.5).
These differences remain even when we control for preparation. For
instance, focusing specifically on those students who took a corequisite
Mathematics course, those who agreed at the end of the semester that
what they had learned in their math course would be helpful in their
future career outperformed their disagreeing colleagues by 8 pp in pass
rate and earned 10 pp more A grades (effect size 3.7).
The opinions that students hold about the utility of their Mathematics
course is far from static across the semester, and the way in which that
opinion changes over time has significant connections to learning and
mastery.
We analyzed the differing pass rates of the four initial-final opinion
possibilities. Those students who agreed at the beginning and the end of
the semester passed at an almost 10 pp higher rate than those who moved
from agree to disagree at the end. Similarly, those who began in disagree-
ment but moved to agreement by the end of the semester outperformed
their colleagues who remained in disagreement by almost 5 pp. All of
these differences were very strongly statistically significant.
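This initial-final transition analysis amounts to grouping students by their beginning and end-of-semester responses and comparing pass rates across the four cells, as in the illustrative Python sketch below; the data and column names are hypothetical.

# Group students by their beginning and end-of-semester responses and compare
# pass rates across the four transition cells. Data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "useful_start": ["Agree", "Agree", "Disagree", "Disagree", "Agree", "Disagree"],
    "useful_end":   ["Agree", "Disagree", "Agree", "Disagree", "Agree", "Agree"],
    "passed":       [1, 0, 1, 0, 1, 1],
})

transition = (df.groupby(["useful_start", "useful_end"])["passed"]
                .agg(pass_rate="mean", n="count"))
transition["pass_rate"] = (transition["pass_rate"] * 100).round(1)
print(transition)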
The choice of mathematics course and its fit with the
student’s major is an important aspect of this. By the end of the semester,
34.6 percent of students who were not in STEM or Business majors, but
chose to study College Algebra, did not agree that the course would be
useful in their future career, compared with only 24.6 percent of their
STEM/Business colleagues (effect size 3.2). Indeed, significantly more
of the non-STEM/Business students who initially agreed changed their
minds by the end of the semester.
The picture is similar for those students who describe their perceptions
of their Freshman Writing. The results are summarized in Table 3.2. Once
again, while perceptions at the beginning of the semester are not significantly
linked with outcomes, a significantly higher proportion of students, who at
the end of the semester agreed that what they learned in the course would
help them in their career, passed the class with a C or better compared to their
colleagues who disagreed. There were much smaller effect sizes with writing
than with Mathematics. Those students who agreed at the end of the semester
were 5 pp more likely to earn an A grade (effect size 2.6).
Interestingly, while these differences remain significant when we con-
trol for preparation and or consider only White and Latinx students, they
disappear for Black students.

Growth and fixed mindset


As part of the survey, we also assessed whether students had a growth
or fixed mindset toward their study of Mathematics and their study of
English, with several items such as “Your math intelligence is something
about you that you can’t change very much.” The results are summarized
in Tables 3.3 and 3.4.
There was a statistically significant difference between the pass rates
(at least a C grade) of those who had a math growth mindset and their
colleagues with a math fixed mindset at the beginning of the semester. But
this was far overshadowed by the difference at the end of the semester. The
pass rates of those who had a growth mindset about Mathematics at the
end of the semester were 5.5 pp higher than those who had a fixed mindset
at the end. Those who had a growth mindset about math at the end of the
semester also outperformed their colleagues with a fixed mindset earning
more than 8.9 pp more A grades (effect size 6.6).
TABLE 3.3 “Your math intelligence is something about you that you can’t change
very much”

Semester Beginning   Semester End   Pass Rate (%)   N   Effect Size   Sig Level (%)

All students
Growth 84.8 3069
Growth Growth 86.6 2225 3.9 99
Growth Fixed 80.5 844
Fixed 81.9 2079
Fixed Growth 84.1 798 2.1 95
Fixed Fixed 80.5 1281
Growth 86 3023 5.1 99
Fixed 80.5 2125
Corequisite students
Growth 78.3 625
Growth Growth 80.6 426 2 95
Growth Fixed 73.3 199
Fixed 76.5 611
Fixed Growth 77.9 208 0.5
Fixed Fixed 75.9 403
Growth 79.7 634 2 95
Fixed 75 602
Black students
Growth 84.7 851
Growth Growth 86.7 583 2.2 95
Growth Fixed 80.5 268
Fixed 82.3 769
Fixed Growth 87.2 274 2.75 99
Fixed Fixed 79.7 495
Growth 86.8 857 3.7 99
Fixed 80 763

These differences are also apparent for student subpopulations. For
instance, Black students with a growth mindset about Mathematics at the
end of the semester were 6.8 pp (effect size 3.7) more likely to pass the
course with at least a C and 8.9 pp more likely to earn an A grade (effect
size 4).
Again, the differences remain when we control for preparation.
Corequisite Mathematics students who had a growth mindset at the end
of the semester outperformed their colleagues with a fixed mindset by 4.7
pp in pass rate and 6.5 pp more A grades (effect size 2.8).
We gain greater insight by studying how the students’ mindset changes
throughout the semester and how those changes are linked with course
TABLE 3.4 
“Your English intelligence is something about you that you can’t
change very much”

Semester Beginning   Semester End   Pass Rate (%)   N   Effect Size   Sig Level (%)

All students
Growth 91.8 3333
Growth Growth 92.8 2498 3.4 99
Growth Fixed 88.7 835
Fixed 89.1 938
Fixed Growth 91 431 1.7 95
Fixed Fixed 87.6 507
Growth 92.6 2929 4.2 99
Fixed 88.3 1342
Corequisite students
Growth 83.9 380
Growth Growth 85.9 253 1.4 90
Growth Fixed 79.8 127
Fixed 79.1 166
Fixed Growth 87.5 66 2.3 95
Fixed Fixed 73.7 100
Growth 86.2 319 2.7 99
Fixed 77.1 227
Black students
Growth 88.2 1049
Growth Growth 89.4 750 1.8 95
Growth Fixed 85.2 299
Fixed 83.2 341
Fixed Growth 86.4 137 1.3 90
Fixed Fixed 81.1 204
Growth 89 887 2.8 99
Fixed 83.5 503

outcomes. Much has been written about interventions that enable a stu-
dent to have a growth mindset during the semester and links to gains in
academic performance, especially for students of color (Broda et al., 2018;
Yeager et al., 2019). We see those same gains mirrored here with students
who began the semester with a fixed mindset but ended with a growth
mindset, increasing their pass rate by 4 pp (effect size 2.1) and by 7.5
pp (effect size 2.75) for Black students. What is novel in this study is the
apparent impact on those students who move from a growth mindset at
the beginning of the semester to a fixed mindset at the end. Those students
undergo a decrease of 6.1 pp in their pass rate (effect size 3.9) and are 7.7
pp less likely to earn an A grade. This effect remains apparent when we
further disaggregate by race or by preparation.
The connection between outcomes and English growth mindset fol-
lows a very similar pattern to what we have already seen. The results
are summarized in Table 3.4. While the differences in outcome based on
growth mindset perceptions at the beginning of the semester are modest,
they are more substantial at the end, with 4.3 pp more students with a
growth mindset about English passing the course (effect size 4.2) and 12.1
pp more earning A grades (effect size 7.6). Once again, these significant
differences hold true when we disaggregate by race and by preparation.
For instance, 5.5 pp more Black students who ended the semester with a
growth mindset about English passed the course than those who ended
with a fixed mindset, earning 10.3 pp more A grades (effect size 4.2).
In a similar vein, while students who move from a fixed mindset to a
growth mindset have 4 pp higher pass rates, movement from a growth
mindset to fixed is associated with a 4 pp decline.
These students see a very pronounced decline in the proportion of A
grades, with 11.6 pp (effect size 4.7) fewer students who move from a
growth mindset to a fixed earning A grades than those who retain their
growth mindset. These declines are just as strongly felt for Black students
and corequisite students.
While there are established strategies that create an environment to
enable students to move from a more fixed to a more growth-oriented
mindset during the semester, we are not currently aware of the mechanism
which might move a student in the other direction. Given the prevalence
of this change and the apparent size of its impact, this certainly warrants
further investigation.

The impact of the momentum year


The work outlined here was part of a larger body of student success
strategies. This Momentum Year work involved thousands of faculty
and staff across every campus in these systems and has already shown
its success. Through the full implementation of the corequisite model
in both Tennessee and Georgia, success rates in Freshman Math and
English courses have doubled. All groups of students, including Black
and Latinx students, students from low-income households, and first-
generation college students, are passing these crucial gateway courses at
similar rates.
Graduation and retention rates also grew to an all-time high. In
Tennessee, there was a 42 percent increase in community college three-
year graduation rates, with an 87 percent increase for minority students,
and a 26 percent increase in university four-year graduation rates with a
51 percent increase for minority students.
In Georgia, four-year graduation rates rose by 20 percent and gradu-
ation rates for African-American students and first-generation students
increased by more than 30 percent. Graduation rates at Georgia’s HBCUs
increased by 50 percent over that same time period. The increases in
these and other key metrics were not limited to only some schools, but
were realized across the full spectrum of institutions in both states, from
Tennessee’s open-access community colleges to the University of Georgia
and Georgia Tech.
The number of degrees conferred in Georgia set a record (72,929) in
2021, an increase of 12 percent since 2017 when the Momentum work
began, and of 33 percent over the previous decade. The gains were par-
ticularly marked for African-American students seeking Bachelor’s
degrees, with a 54 percent increase in the number of degrees conferred,
and for Hispanic and Latinx students seeking Bachelor’s degrees, who
witnessed a 270 percent increase in their number over a 10-year period.
These increases far outstripped the increase in enrollment over those same
periods.
I certainly want to take this opportunity to thank the faculty, staff,
and administrators in those two systems and their 45 institutions for their
Herculean efforts to make changes for student success. Their innovation,
efforts, and results have not only impacted the lives of hundreds of thou-
sands of students, but they have brought so much insight into what can be
achieved by a higher education system focused on this work.

References
Albert, R., & Barabási, A.-L. (2002). Statistical mechanics of complex networks.
Reviews of Modern Physics, 74(1), 47–97.
Albert, R., Jeong, H., & Barabási, A.-L. (2000). Error and attack tolerance of
complex networks. Nature, 406(6794), 378–382.
Broda, M., Yun, J., Schneider, B., Yeager, D. S., Walton, G. M., & Diemer, M.
(2018). Reducing inequality in academic success for incoming college students:
A randomized trial of growth mindset and belonging interventions. Journal
of Research on Educational Effectiveness, 11(3), 317–338. https://doi.org/10.1080/19345747.2018.1429037
Complete College America. (2021). Corequisite works: Student success models at
the university system of Georgia. https://files.eric.ed.gov/fulltext/ED617343.pdf
Denley, T. (to appear). The power of momentum, recreating multi-campus
university systems: Transformations for a future reality (J. R. Johnsen, ed.).
Johns Hopkins University Press.
Denley, T. (2021a). Comparing co-requisite strategy combinations, university
system of Georgia academic affairs technical brief no. 2.
Denley, T. (2021b). Scaling developmental education, university system of Georgia
academic affairs technical brief no. 1.
Denley, T. (2016). Co-requisite remediation full implementation analysis, TBR
technical brief no. 3.
Gateways to completion case studies. (n.d.). University system of Georgia.
Guide to evidence-based instructional practices in undergraduate mathematics.
(n.d.). Mathematical Association of America.
Increasing student success in developmental mathematics: Proceedings of the
workshop. (2019). The National Academies of Sciences, Engineering, and
Medicine.
Lazowski, R. A., & Hulleman, C. S. (2016). Motivation interventions in
education. Review of Educational Research, 86(2), 602–640. https://doi.org/10.3102/0034654315617832
Logue, A. W., Douglas, D., & Watanabe-Rose, M. (2019). Corequisite
mathematics remediation: Results over time and in different contexts.
Educational Evaluation and Policy Analysis, 41(3), 294–315. https://doi.org/10.3102/0162373719848777
Pastor-Satorras, R., & Vespignani, A. (2001). Epidemic spreading in scale-free
networks. Physical Review Letters, 86(14), 3200.
Remediation: Higher education’s bridge to nowhere. (2012). Complete college
America.
Tibbetts, Y., DeCoster, J., Francis, M. K., Williams, C. L., Totonchi, D. A., Lee,
G. A., & Hulleman, C. S. (2022). Learning mindsets matter for students in
corequisite courses (full research report). Strong Start to Finish, Education
Commission of the States.
Understanding and teaching writing: Guiding principles. (2018). National
Council of Teachers of English.
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’
networks. Nature, 393(6684), 440–442.
Yeager, D. S., Hanselman, P., Walton, G. M., Murray, J. S., Crosnoe, R., Muller,
C., Tipton, E., Schneider, B., Hulleman, C. S., Hinojosa, C. P., Paunesku, D.,
Romero, C., Flint, K., Roberts, A., Trott, J., Iachan, R., Buontempo, J., Yang,
S. M., Carvalho, C. M., … Dweck, C. S. (2019). A national experiment reveals
where a growth mindset improves achievement. Nature, 573(7774), 364–369.
https://doi.org/10.1038/s41586-019-1466-y
4
A PRECISE AND CONSISTENT
EARLY WARNING SYSTEM FOR
IDENTIFYING AT-RISK STUDENTS
Jianbin Zhu, Morgan C. Wang, and Patsy Moskal

Introduction
The purpose of this study was to identify academically at-risk students
using big data analytics each semester in order to give instructional per-
sonnel enough lead time to provide support and resources that can help
reduce a student’s risk of failure. Such a system should be consistent and
have high precision (i.e., the students identified as at-risk should be accurately
defined as those with a high probability of failure) (Smith et al., 2012;
Wladis et al., 2014). The consequences of course failure for students
include, but are not limited to, (1) prolonging time to graduation; (2)
increasing the student’s financial burden; and (3) increasing the chance
of dropping out entirely (Miguéis et al., 2018; Simanca et al., 2019). In
addition to the impact on students, these consequences also have a sig-
nificant negative and financial impact on the university through students’
increased time to completion and decreased retention rates.
There have been several data-analytic attempts to identify at-risk students using a variety of methods, with the most effective approaches incorporating precision rather than accuracy as a methodological foundation. Most published studies, however, use accuracy as the primary selection criterion. The following example illustrates why model precision provides more useful results. Assume that one class has 250 students with a 30% failure rate. The confusion matrices for Model A and Model B are shown in Table 4.1. The accuracy rate for Model A is 80%, which is higher than the 78% accuracy rate for Model B. However, Model B correctly identifies 45 of the 75 truly at-risk students (60%), 20 more than Model A.
TABLE 4.1 Confusion Matrices for Two Different Prediction Models

True State      Model A Prediction                 Model B Prediction
                Pass     Fail     Row Total        Pass     Fail     Row Total
Pass            175      0        175              150      25       175
Fail            50       25       75               30       45       75
Column Total    225      25       250              180      70       250

This cohort is unbalanced in its distribution because a considerably larger percentage of students fall into the no- or low-risk categories. In using precision rather than accuracy, there exists the possibility of identifying some students as at-risk when they are not, but this kind of classification error is much less costly than missing students who would benefit from instructional intervention. A second feature of many published studies is that both the training and validation data were collected during the same semester. However, seemingly effective predictive models cannot be completely built at the beginning of any semester because the target variable (i.e., the final grade) is not known at that time. A useful model should be built for one semester and then validated using data collected at least one semester later. Otherwise, the reported performance measure is not honest, because it may provide a biased estimate of the model's precision.
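As a concrete check of the argument above, the short Python sketch below (an illustration only; the study itself was carried out in SAS) recomputes accuracy, precision, and the share of truly at-risk students identified for the two confusion matrices in Table 4.1, treating "fail" as the positive class. It makes the trade-off explicit: Model A wins on accuracy, but Model B recovers 60% of the truly at-risk students rather than 33%, and the extra false alarms it produces are far less costly than the missed students.

```python
# Illustrative recomputation of the Table 4.1 comparison ("fail" is the positive class).
def summarize(tp, fp, fn, tn):
    """Return accuracy, precision, and the share of truly at-risk students identified."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # all correct predictions / all students
    precision = tp / (tp + fp)                   # flagged students who truly fail
    recall = tp / (tp + fn)                      # truly failing students who get flagged
    return accuracy, precision, recall

# Model A flags 25 students as at-risk; all 25 truly fail, but 50 failing students are missed.
acc_a, prec_a, rec_a = summarize(tp=25, fp=0, fn=50, tn=175)
# Model B flags 70 students; 45 truly fail, 25 are false alarms, and 30 failing students are missed.
acc_b, prec_b, rec_b = summarize(tp=45, fp=25, fn=30, tn=150)

print(f"Model A: accuracy {acc_a:.0%}, precision {prec_a:.0%}, at-risk students found {rec_a:.0%}")
print(f"Model B: accuracy {acc_b:.0%}, precision {prec_b:.0%}, at-risk students found {rec_b:.0%}")
```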
In this study, baseline data were collected in Fall 2017 to develop the initial predictions, and additional data collected in Fall 2018 were used to test the findings. Because all procedures were validated using data from a later semester, the reported prediction and precision results are honest and unbiased. A practical constraint, however, is that the university has limited resources that can be allocated to support academically at-risk students. Table 4.2 presents the relationship between class size and an estimated reasonable number of students who can obtain additional assistance.

TABLE 4.2 The Relationship between Class Size and Available Resource Metrics

Class Size      Estimates of Student Support Cohort Size
0–50            10
51–100          12
101–200         25
201+            50

Methodology
The data mining process used in this study identified previously unobserved informational patterns that cannot be detected with more traditional analysis procedures. For this work, six courses, three STEM (Statistics, Physics, and Chemistry) and three non-STEM (Psychology, Political Science, and Western Civilization), from Fall 2017 were selected for model development. The same offerings from the Fall 2018 term were used for validation. In the predictive model-building process, the authors developed a baseline model at the beginning of the courses and then built five progressive models, one for each week from week 2 to week 6. The models were built for STEM courses and non-STEM courses separately; for each course, there are a total of six predictive models. The baseline models were built from student information system (SIS) data, including demographics, academic type and level, grade point average (GPA), and standardized test scores (ACT, SAT). The progressive models were built using both SIS and learning management system (LMS) data, augmented with course examination and assignment scores from the LMS gradebook. The authors then applied the six models to make weekly predictions for the same courses in the Fall 2018 semester.
Note that the baseline models were built with fewer data elements and are therefore expected to be less precise. The weekly progressive models, on the other hand, enhance precision because they utilize additional data elements from the LMS. This means that an intervention program can help students identified as at-risk engage with support more quickly and increase their likelihood of success.

Datasets
This study incorporated the two previously mentioned data sources. The first originated from the university SIS and contained both demographic and academic variables. Most students have either an SAT or an ACT score; the authors combined these into one composite through a cross-imputation technique before performing the analysis. Demographic variables included gender, ethnicity, and birth date, along with indicators for academic level, full-/part-time status, first-generation college student status, and transfer status. SIS data were used to build the baseline model, which predicts a student's likelihood of failing a course in the first week of the semester. The second dataset originated from the LMS and contains students' activities during the semester, including quiz scores, engagement time and date for all quizzes, exam results, and assignment completion metrics. The information also includes students' interactions with the LMS, such as the duration of login time, the number of logins, and the number of times students participated in chat room discussions.
FIGURE 4.1 At-Risk Predictions for Fall 2017. Percentage of students predicted at-risk (vs. no-risk) in each course: Statistics 23.70% (N = 1,363), Physics 10.44% (N = 862), Chemistry 9.34% (N = 364), Psychology 9.95% (N = 1,709), Political Science 11.07% (N = 542), Western Civilization 7.97% (N = 477).

However, these LMS interaction attributes had a very weak relationship with final performance and were not used in building the progressive models. The two datasets were merged and used to build the final solution. Descriptive information for the datasets is shown in Figures 4.1 and 4.2 as well as Tables 4.3 and 4.4; the total number of students in each course subject, across all sections, is displayed in the tables and figures. Figures 4.1 and 4.2 show the percentage of at-risk students in the six courses for Fall 2017 and Fall 2018, respectively.

FIGURE 4.2 At-Risk Predictions for Fall 2018. Percentage of students predicted at-risk (vs. no-risk) in each course: Statistics (N = 1,112), Physics (N = 883), Chemistry (N = 528), Psychology (N = 1,590), Political Science (N = 962), and Western Civilization (N = 535); the at-risk percentages shown range from 4.17% to 24.01%.



TABLE 4.3 Student Demographic Variables for Fall 2017

Variables Fall 2017


STEM Non-STEM
Total Fail Total Fail
N % No Yes N % No Yes

Study Population 2,589 100.00 2,142 447 2,728 100.00 2,142 447
Gender
Female 1,448 55.93 1,204 244 1,448 55.93 1,204 244
Male 1,141 44.07 938 203 1,141 44.07 938 203
Race
White 1,235 47.70 1,060 175 1,235 47.70 1,060 175
Black 293 11.32 207 86 293 11.32 207 86
Hispanic 684 26.42 559 125 684 26.42 559 125
Asian 176 6.80 151 25 176 6.80 151 25
Other 201 7.76 165 36 201 7.76 165 36
Academic year
Freshman 609 23.52 542 67 609 23.52 542 67
Sophomore 1,035 39.98 876 159 1,035 39.98 876 159
Junior 627 24.22 484 143 627 24.22 484 143
Senior 318 12.28 240 78 318 12.28 240 78
Full/Part time
Full time 2,319 89.57 1,944 375 2,319 89.57 1,944 375
Part time 270 10.43 198 72 270 10.43 198 72
Transfer
Yes 694 26.81 524 170 694 26.81 524 170
No 1,895 73.19 1,618 277 1,895 73.19 1,618 277
First Generation
Yes 589 22.75 455 134 589 22.75 455 134
No 2,000 77.25 1,687 313 2,000 77.25 1,687 313

Tables 4.3 and 4.4 display the data for students in the courses included in the study, broken out by STEM and non-STEM status. Demographic frequencies and percentages are included for gender, race, academic year, and full-/part-time, transfer, and first-generation status for courses in Fall 2017 and Fall 2018, respectively.
Tables 4.5 and 4.6 show descriptive statistics for the numerical variables, including age, term course load (semester hours), UCF GPA, high school GPA, SAT total, ACT total, weekly assignment and quiz scores, and the first exam grade, for STEM and non-STEM courses in Fall 2017 and 2018, respectively. Missing values for high school GPA, SAT total, and ACT total were replaced with their respective mean values.

TABLE 4.4 Student Demographic Variables for Fall 2018

Variables                Fall 2018
                  STEM                                      Non-STEM
                  Total                Fail                 Total                Fail
                  N        %           Yes      No          N        %           Yes      No

Study Population  2,756    100.00      451      2,305       3,087    100.00      275      2,812
Gender
Female 1,514 54.93 240 1,274 1,638 53.06 128 1,510
Male 1,242 45.07 211 1,031 1,449 46.94 147 1,302
Race
White 1,304 47.31 178 1,126 1,402 45.42 100 1,302
Black 303 10.99 78 225 333 10.79 45 288
Hispanic 744 27.00 133 611 800 25.92 72 728
Asian 200 7.26 30 170 187 6.06 7 180
Other 205 7.44 32 173 365 11.82 51 314
Academic year
Freshman 473 17.16 61 412 1,402 45.42 108 1,294
Sophomore 1,287 46.70 209 1,078 958 31.03 71 887
Junior 675 24.49 130 545 545 17.65 81 464
Senior 321 11.65 51 270 182 5.90 15 167
Full/Part time
Full time 2,498 90.64 388 2,110 2,818 91.29 239 2,579
Part time 258 9.36 63 195 269 8.71 36 233
Transfer
Yes 778 28.23 160 618 734 23.78 91 643
No 1,978 71.77 291 1,687 2,353 76.22 184 2,169
First Generation
Yes 557 20.21 107 450 562 18.21 52 510
No 2,199 79.79 344 1,855 2,525 81.79 223 2,302

Assignment, quiz, and exam scores were calculated using grade weights and students' performance in the current and all previous weeks; that is, the score computed for a given week reflects cumulative performance through that week. If a student had no scores in a particular week, the value was set to zero.
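A minimal pandas sketch of this cumulative weekly scoring is shown below. The dataframe layout and column names (student_id, week, item_type, weighted_score) are assumptions made for illustration, since the chapter does not describe the actual LMS export format.

```python
import pandas as pd

# Hypothetical LMS gradebook extract: one row per student, graded item, and week,
# with each score already multiplied by its grade weight.
grades = pd.DataFrame({
    "student_id":     [1, 1, 1, 2, 2],
    "week":           [2, 3, 4, 2, 4],
    "item_type":      ["quiz", "quiz", "assignment", "quiz", "assignment"],
    "weighted_score": [4.0, 3.5, 6.0, 2.0, 5.0],
})

weeks = range(2, 7)  # the progressive models cover weeks 2 through 6

# For each week w, sum all weighted scores earned in weeks <= w, so the week-w
# feature reflects cumulative performance; missing students/items default to zero.
cumulative = {
    w: (grades[grades["week"] <= w]
        .pivot_table(index="student_id", columns="item_type",
                     values="weighted_score", aggfunc="sum", fill_value=0.0)
        .add_suffix(f"_week{w}"))
    for w in weeks
}

features = pd.concat(cumulative.values(), axis=1).fillna(0.0)
print(features)
```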

Model building and results


After the Fall 2017 training data and the Fall 2018 progressive combination data were prepared in SAS 9.4, the information was imported into SAS Enterprise Miner Workstation 15.1 for predictive model building.

TABLE 4.5 Descriptive Statistics for Fall 2017

Variable Fall 2017


STEM (N = 2,589) Non-STEM (N = 2,728)
Mean Std Dev Min Max Mean Std Dev Min Max

Age 20.20 2.90 16.00 57.00 19.70 3.40 17.00 57.00


Term course load 12.70 2.20 3.00 19.00 12.70 2.10 3.00 19.00
UCF GPA 3.00 0.70 0.00 4.00 3.00 0.70 0.00 4.00
High school GPA 3.80 0.50 1.70 4.80 3.80 0.40 1.40 4.80
SAT total 1103.40 103.00 590.00 1560.00 1094.90 86.00 560.00 1600.00
ACT total 24.10 3.00 6.00 35.00 24.20 3.00 12.00 36.00
Assignment week 2 8.10 11.50 0.00 35.00 2.70 4.90 0.00 15.00
Quiz week 2 8.40 8.10 0.00 20.00 13.80 18.40 0.00 85.00
Assignment week 3 7.90 11.10 0.00 35.00 2.60 4.80 0.00 15.00
Quiz week 3 8.20 7.90 0.00 20.00 13.70 18.40 0.00 85.00
Assignment week 4 7.60 10.60 0.00 35.00 2.60 4.80 0.00 15.00
Quiz week 4 8.10 7.80 0.00 20.00 13.70 18.20 0.00 85.00
Assignment week 5 7.40 10.20 0.00 35.00 2.60 4.70 0.00 15.00
Quiz week 5 7.80 7.50 0.00 20.00 14.10 18.40 0.00 85.00
Assignment week 6 7.30 10.10 0.00 35.00 2.40 4.60 0.00 14.90
Quiz week 6 7.70 7.40 0.00 20.00 14.30 18.90 0.00 84.20
First Exam week 6 62.70 22.70 0.00 100.00 63.70 22.40 0.00 100.00
TABLE 4.6 Descriptive Statistics for Fall 2018

Variable Fall 2018


STEM (N = 2,756) Non-STEM (N = 3,087)
Mean Std Dev Min Max Mean Std Dev Min Max

Age 20.10 2.70 17.00 61.00 19.80 3.50 16.00 62.00


Term course load 12.90 2.10 3.00 20.00 12.80 2.10 3.00 18.00

UCF GPA 3.10 0.70 0.00 4.00 3.10 0.70 0.00 4.00
High school GPA 3.80 0.40 1.80 4.80 3.80 0.40 1.60 4.80
SAT total 1076.00 76.60 660.00 1490.00 1074.00 62.20 510.00 1570.00
ACT total 24.10 2.80 7.00 35.00 24.30 2.90 11.00 36.00
Assignment week 2 12.40 7.40 0.00 24.00 10.40 12.10 0.00 40.00
Quiz week 2 11.60 7.50 0.00 25.00 12.00 19.90 0.00 81.20
Assignment week 3 12.30 7.30 0.00 24.00 9.60 11.50 0.00 40.00
Quiz week 3 11.70 7.40 0.00 25.00 12.60 22.20 0.00 100.00
Assignment week 4 12.30 7.40 0.00 25.00 9.50 11.50 0.00 40.00
Quiz week 4 11.50 7.10 0.00 25.00 12.50 22.10 0.00 100.00
Assignment week 5 12.20 7.40 0.00 25.00 9.80 11.60 0.00 40.00
Quiz week 5 11.40 7.00 0.00 25.00 12.40 22.10 0.00 100.00
Assignment week 6 12.10 7.30 0.00 25.00 9.80 11.70 0.00 40.00
Quiz week 6 11.40 7.00 0.00 25.00 12.30 21.80 0.00 100.00
First Exam week 6 54.80 18.50 0.00 93.60 58.50 28.00 0.00 100.00

FIGURE 4.3 Model Building Procedure. Flow diagram: Fall 2017 STEM and non-STEM course samples are generated and used to build the baseline SIS models and the week 2-6 progressive models; Fall 2018 STEM and non-STEM course samples are then generated and scored against these models, followed by SAS code generation and model completion.

For the baseline model, the dependent variable is at-risk status, and the independent variables are course subject, gender, race, academic year, full-/part-time status, transfer status, first-generation status, age, term course load, UCF GPA, high school GPA, SAT total, and ACT total. From the baseline results, a set of significant variables emerged that served as controls in the weekly progressive models. In the weekly progressive analysis, the dependent variable was again at-risk status, and the independent variables were the significant variables from the baseline model plus the weekly assignment and quiz scores, with the first exam grade added in week 6. An example model diagram is shown in Figure 4.3.
The nodes of the modeling workflow (sampling method, data partition, regression method, model scoring, SAS code, and model performance) are described as follows:

(1) Training data set node: the 2017 STEM or non-STEM course training data, including all of the variables. The target variable is at-risk:
    STEM course target: at-risk = 1: 447 (17.27%); no-risk = 0: 2,142 (82.73%).
    Non-STEM course target: at-risk = 1: 269 (9.82%); no-risk = 0: 2,460 (90.18%).
    The target is unbalanced, making the prediction a rare-event analysis, so a down-sampling method was used to balance the data.
(2) Sample node: a down-sampling method was used, with stratified sampling so that the number of target non-events (at-risk = 0) equaled the number of target events (at-risk = 1). The sample size for the STEM courses was 894 (447 at-risk and 447 no-risk); the sample size for the non-STEM courses was 538 (269 at-risk and 269 no-risk).
(3) Data partition node: 70% for training and 30% for validation.
(4) Model regression node: after the data partition node, the diagram was divided into six branches, each representing a model incorporating hierarchical logistic regression. First, the baseline model was used to find significant control variables, which were then carried into the weekly progressive models. As mentioned previously, the baseline model considered only student demographic and academic background variables; no assignment, quiz, or exam variables were included. This method used logistic regression with backward variable selection to identify the significant variables. The logistic regression for the baseline model is described in equation (1):

    \log\left(\frac{p}{1-p}\right) = \beta_0 + \sum_{i=1}^{m} \beta_i x_i \qquad (1)

    where the x_i are the significant student demographic and academic background variables and the \beta_i are their coefficients.
    In the weekly progressive models, the significant variables from the baseline model were treated as control variables, with the weekly assignment and quiz score variables added for each week and the exam score added in week 6. The logistic regression for the weekly progressive model is described in equation (2):

    \log\left(\frac{p}{1-p}\right)_{\text{week } j} = \beta_0 + \sum_{i=1}^{m} \beta_i x_i + \beta_A\,\text{Assig}_{\text{week } j} + \beta_Q\,\text{Quiz}_{\text{week } j} + \beta_E\,\text{Exam}_{\text{week } 6} \qquad (2)

    where j runs from 2 to 6, Assig and Quiz are the cumulative assignment and quiz scores for week j, and the Exam term (the first exam score) enters only in the week 6 model.
    Variable roles for this node were set according to which predictors were used in each model. Only the main effects of the categorical variables were considered, ignoring interactions (a simplified, open-source sketch of this sampling and regression workflow appears after this list).
(5) Score data set node: the corresponding test data from Fall 2018 were placed in the score data set.
(6) Score node: the score data set was scored using the fitted models.
(7) SAS code node: SAS code was developed to output the results.
(8) Model comparison node: in this node, model performance measures such as the receiver operating characteristic (ROC) curve, misclassification and model fit statistics, and the event classification table were displayed.
(9) Last SAS code node: Used to execute nodes together by batch mode.
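The numbered workflow above maps naturally onto open-source tooling as well. The Python sketch below is a hypothetical re-implementation of the core steps (down-sampling the no-risk class, a 70/30 partition, a main-effects logistic regression, and scoring the following term), not the authors' SAS Enterprise Miner flow. The dataframe column names, the 0/1 at_risk coding, and the predictor list are assumptions; the backward variable-selection step is omitted for brevity, and categorical predictors are assumed to be numerically encoded already.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

def fit_and_score(train_2017: pd.DataFrame, score_2018: pd.DataFrame,
                  predictors: list[str], target: str = "at_risk") -> LogisticRegression:
    """Down-sample, split 70/30, fit a logistic regression, and score the later term."""
    # (2) Down-sample: keep every at-risk student and an equal-sized random sample
    #     of no-risk students so the training target is balanced.
    at_risk = train_2017[train_2017[target] == 1]
    no_risk = train_2017[train_2017[target] == 0].sample(n=len(at_risk), random_state=42)
    balanced = pd.concat([at_risk, no_risk])

    # (3) 70% training / 30% validation partition.
    X_train, X_valid, y_train, y_valid = train_test_split(
        balanced[predictors], balanced[target],
        test_size=0.30, stratify=balanced[target], random_state=42)

    # (4) Main-effects logistic regression.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Validation AUC:", roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1]))

    # (5)-(6) Score the following term and report the precision of the at-risk flags.
    flags = model.predict(score_2018[predictors])
    print("Next-term precision:", precision_score(score_2018[target], flags))
    return model
```

The same function could be reused for each course group (STEM and non-STEM) and for each weekly feature set simply by changing the predictor list passed in.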

The modeling coefficients and their significance levels are shown in Tables 4.7 and 4.8 for the STEM and non-STEM courses. Receiver operating characteristic (ROC) curves were calculated for the baseline model and each weekly progressive model for both the STEM and non-STEM analyses. However, the baseline and progressive ROC curves correspond so closely at each iteration of the study that Figure 4.4 presents a combined prototype of all six curves for both the STEM and non-STEM courses. ROC curves display the overall correct classification of students as at-risk or not yielded by the logistic regression procedures. The educational value added comes from being able to compare the occurrences of true and false at-risk designations. In a perfect classification system, the curve would simply be a point in the upper left corner of the plot, indicating no classification errors.

TABLE 4.7 Modeling Coefficients for 2017 STEM Courses

2017 STEM Modeling


Variables Base Weekly Progressive Model
Line
Week 2 Week 3 Week 4 Week 5 Week 6
Model

Intercept 10.41¹ 11.16¹ 11.15¹ 11.41¹ 11.49¹ 12.93¹


UCF_GPA –2.84¹ –2.93¹ –2.92¹ –2.94¹ –2.96¹ –2.53¹
Academic –1.30¹ –1.39¹ –1.38¹ –1.40¹ –1.42¹ –1.67¹
year
Freshman
Junior 0.24 0.24 0.23 0.26 0.26 0.43
Senior 0.93¹ 1.13¹ 1.11¹ 1.10¹ 1.10¹ 1.09¹
ACT total –0.13¹ –0.13¹ –0.13¹ –0.14¹ –0.14¹ –0.15²
score
Transfer 0.41¹ 0.38¹ 0.37¹ 0.35² 0.34² 0.30³
No
Assignment –0.02 –0.02 –0.02 –0.01 –0.04³
Quiz –0.07¹ –0.07¹ –0.09¹ –0.09¹ –0.15¹
Exam –0.05¹
Valid: ROC 0.87 0.87 0.88 0.88 0.89 0.89
(AUC)
¹ Sig .0001
² Sig .001
³ Sig .01

TABLE 4.8 Modeling Coefficients for 2017 Non-STEM Courses

2017 Non-STEM Modeling


Variables Base Weekly Progressive Model
Line
Week 2 Week 3 Week 4 Week 5 Week 6
Model

Intercept 12.65¹ 12.61¹ 12.65¹ 12.65¹ 12.69¹ 13.99¹


UCF_GPA –3.66¹ –3.64¹ –3.66¹ –3.66¹ –3.66¹ –3.44¹
Academic year –0.01 –0.02 –0.00 –0.00 –0.02 –0.21
Freshman

Junior 0.74² 0.74³ 0.74³ 0.74³ 0.75² 0.84²


Senior 0.01 0.05 0.03 0.03 0.04 –0.32³
ACT total score –0.15² –0.15² –0.15² –0.15² –0.15² –0.20²

Transfer 0.59² 0.59² 0.59² 0.59² 0.59² 0.47³


No
Assignment –0.00 –0.00 –0.00 –0.01 –0.02
Quiz –0.00 –0.01 –0.01 –0.01 –0.03¹
Exam –0.05¹
Valid: 0.92 0.92 0.92 0.92 0.92 0.93
ROC(AUC)
¹ Sig .0001
² Sig .001
³ Sig .01

However, by plotting the true and false at-risk classification rates at multiple cut points (0.2, 0.4, 0.6, 0.8, etc.), the ROC curve emerges. The objective is for the curve to rise toward and approach the upper left-hand corner as closely as possible, and Figure 4.4 indicates that this happened for the baseline and progressive analyses for both STEM and non-STEM courses. A prototype curve that rises steeply and then flattens out near the top of the plot indicates that increasing the true at-risk classification rate has minimal impact on the false at-risk designation rate. A second indicator of ROC curve quality is also presented in Figure 4.4: the area under the curve (AUC) index, which increases as the curve approaches the upper left corner. The larger the AUC, the more effective the predictive analysis. Figure 4.4 therefore reports the AUC for the baseline and progressive analyses through each phase of the study.
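For readers who wish to reproduce this kind of diagnostic, the short sketch below shows how an ROC curve and its AUC can be computed in Python; the scores here are simulated for illustration and are not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Simulated outcomes (1 = at-risk, 0 = no-risk) in which at-risk students tend to
# receive higher predicted probabilities of failure.
y_true = np.concatenate([np.ones(100), np.zeros(500)])
y_score = np.concatenate([rng.beta(5, 2, 100), rng.beta(2, 5, 500)])

# roc_curve sweeps over cut points (such as 0.2, 0.4, 0.6, 0.8) and returns the
# false and true at-risk rates at each one; AUC summarizes the whole curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))
```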
The major findings of this study are:

(1) In the baseline model, four variables (UCF GPA, academic year, trans-
fer status, and ACT total) are significant. These variables were used as
control variables in the weekly progressive calculations.

FIGURE 4.4 Prototypical Receiver Operating Characteristic (ROC) Curves and Area Under the Curve (AUC) Analyses for Included STEM and Non-STEM Courses, Fall 2017. Both panels plot the true at-risk rate against the false at-risk rate. (a) STEM courses: AUC = 0.867 (baseline), 0.873 (week 2), 0.875 (week 3), 0.881 (week 4), 0.885 (week 5), 0.894 (week 6). (b) Non-STEM courses: AUC = 0.923 (baseline and weeks 2 through 5), 0.925 (week 6).

(2) Among all the variables, UCF GPA is the most important baseline predictor and remains so in the weekly progressive models. In the baseline model, the coefficients for UCF GPA are –2.84 in the STEM model and –3.66 in the non-STEM model, which means that, holding all other predictors constant, each one-point increase in UCF GPA decreases the log odds of being at-risk by 2.84 units in the STEM model and 3.66 units in the non-STEM model (see the worked conversion to odds following this list). For both STEM and non-STEM models, the effect of UCF GPA remains virtually constant from week 2 to week 5 and decreases slightly in week 6 when the exam variable is added. Fundamentally, UCF GPA is the most important variable for identifying at-risk students in the early weeks of a course.
(3) Other academic background variables, such as academic year, transfer status, and ACT total, show significant effects that help identify at-risk status. In the STEM model, holding all other predictors constant, freshmen have log odds that are 1.30 units lower than sophomores (the reference group), seniors have log odds that are 0.93 units higher than sophomores, and juniors do not differ significantly from sophomores. In the non-STEM model, juniors have log odds that are 0.74 units higher than sophomores, and there are no significant differences between freshmen and sophomores or between seniors and sophomores. From the coefficients of both models, it may be concluded that the higher the academic year, the higher the probability of accurately and precisely identifying at-risk students; at-risk students in the freshman year are the most difficult to identify. Additionally, ACT total and transfer student status present challenges for at-risk identification.
(4) In the weekly models, although the coefficients for the assignment variables were negative (higher assignment scores correspond to lower risk), they were generally not significant in either the STEM or the non-STEM models. The quiz variable is significant in all of the weekly STEM models, but for the non-STEM courses it is significant only in the week 6 model. Thus, quiz scores appear more useful than assignment scores for identifying at-risk students, in the early weeks for STEM courses and in week 6 for non-STEM courses.
(5) The exam score variable is useful for identifying at-risk students because it is significant in both the STEM and non-STEM models.
(6) AUC for the ROC analysis increases from the baseline model to
weekly progressive models in STEM modeling, but remains almost
constant in non-STEM modeling.
(7) In the early prediction phase, significant student demographic and academic background variables can be used effectively in both the STEM and non-STEM models, and quizzes provide additional information for STEM courses from week 2 to week 5. Since the first exam is typically administered around the sixth week in many courses, this early identification of at-risk students can be accomplished before the first exam is taken.
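As a worked illustration of how these log-odds coefficients translate into odds (an interpretive aid based on the coefficients reported in Tables 4.7 and 4.8, not an additional result of the study), the UCF GPA effect in the baseline models converts as

    e^{\beta_{\text{GPA}}} = e^{-2.84} \approx 0.058 \quad (\text{STEM}), \qquad e^{-3.66} \approx 0.026 \quad (\text{non-STEM}),

so, holding the other predictors constant, each additional UCF GPA point multiplies a student's odds of being classified as at-risk by roughly 0.06 in the STEM model and 0.03 in the non-STEM model.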

Model prediction and results


For the model test, the authors first generated the confusion matrices between predicted and true outcomes for each model and calculated the corresponding accuracy and precision (Table 4.9). Next, all six models were compared to produce agreement tables for the at-risk students identified by the baseline models (Table 4.10). Finally, the models were applied to individual courses with different cut-off numbers of at-risk students in order to increase prediction precision (Tables 4.11 and 4.12).
Table 4.9 displays the confusion matrices for the model predictions on the Fall 2018 STEM and non-STEM datasets, along with the calculated accuracies and precisions. For the STEM courses, accuracy is 86.9% for the baseline model; the week 2 through week 5 progressive models show marginal improvement (87.2%, 87.1%, 87.4%, and 87.3%, respectively), and the week 6 progressive model increases accuracy to 88.4%. Precision follows the same pattern, rising from 65.2% at baseline to 73.7% for the week 6 progressive model.

TABLE 4.9 Confusion Matrices, Accuracy, and Precision for Predictions in Fall 2018

Predictions in Fall 2018


Model STEM Non-STEM
At-risk (True) At-risk (True)
1 0 1 0

Prediction Base Line 1 193 103 175 137


0 258 2202 100 2675
Accuracy: 86.9% Accuracy: 92.3%
Precision: 65.2% Precision: 56.1%
Week 2 1 181 83 176 138
Progressive 0 270 2222 99 2674
Accuracy: 87.2% Accuracy: 92.3%
Precision: 68.5% Precision: 56.1%
Week 3 1 179 83 174 136
Progressive 0 272 2222 101 2676
Accuracy: 87.1% Accuracy: 92.3%
Precision: 68.3% Precision: 56.1%
Week 4 1 179 76 174 136
Progressive 0 272 2229 101 2676
Accuracy: 87.4% Accuracy: 92.3%
Precision: 70.2%
Week 5 1 178 77 175 131
Progressive 0 273 2228 100 2681
Accuracy: 87.3% Accuracy: 92.5%
Precision: 69.8% Precision: 57.2%
Week 6 1 207 74 178 121
Progressive 0 244 2231 97 2691
Accuracy: 88.4% Accuracy: 92.9%
Precision: 73.7% Precision: 59.5%

TABLE 4.10 Prediction Agreement between the Baseline and Week 6 Progressive Models

                               Prediction in Week 6 Progressive Model
                               STEM Courses                    Non-STEM Courses
                               1        0        Agreement     1        0        Agreement
Prediction in baseline    1    213      83       93.43%        265      47       97.37%
                          0    98       2,362                  34       2,741

TABLE 4.11 Precision Using Highest At-risk Students from Baseline and Weekly Progressive Predictive Data

Course True Base Line Week 2 Week 3 Week 4 Week 5 Week 6

STEM At-risk 62 62 62 62 63 64
STA2023 No-risk 8 8 8 8 7 6
Precision 88.57% 88.57% 88.57% 88.57% 90% 91.4%
Non- At-risk 40 41 41 41 41 46
STEM No-risk 10 9 9 9 9 4
PSY2012 Precision 80% 82% 82% 82% 82% 92%

TABLE 4.12 Top At-risk Students from Baseline Who Remain At-risk Through Week 6

Week 2 Week 3 Week 4 Week 5 Week 6


Base 1 0 1 0 1 0 1 0 1 0
Line

STEM 1 70 0 70 0 69 1 68 2 60 10
STA2023
Non-STEM 1 50 0 50 0 50 0 50 0 50 0
PSY2012

Overall, the STEM models perform well in identifying at-risk students in the early weeks. Even at the beginning of the course, the baseline model exhibits excellent performance, and prediction can be enhanced further by using the week 6 progressive results.
For the non-STEM courses, accuracy is 92.3% for the baseline model and remains at 92.3% for the following three weekly models, showing little improvement until weeks 5 and 6, which reach 92.5% and 92.9%. Precision follows the same pattern. Compared with the STEM models, the non-STEM models have higher accuracy but lower precision. Overall, the non-STEM models also exhibit an excellent ability to identify at-risk students in the early weeks; even at the beginning of the course, the baseline model has acceptable performance, although the prediction does not improve across the weekly progressive models and shows only a small improvement at week 6.
Table 4.10 presents the agreement between the predictions of the baseline model and those of the week 6 progressive model. The results show excellent agreement, with values of 93.43% for the STEM courses and 97.37% for the non-STEM courses.
To obtain a high-precision model, the authors examined the most at-risk students identified by the baseline model and by the weekly progressive models for each course. The cut-off number of most at-risk students was based on the at-risk rate and the available student sample sizes shown in Table 4.3. The STEM statistics course and the non-STEM psychology course are used as examples, with cut-off sizes of 70 for statistics and 50 for psychology. The resulting precisions are shown in Table 4.11. The results indicate that precision is higher, ranging from 88.57% to 91.4% in the statistics course when the top 70 at-risk students are flagged, and from 80% to 92% in the psychology course when the top 50 at-risk students are flagged.
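A minimal sketch of this top-N selection is shown below; the dataframe, its column names, and the synthetic predicted probabilities are illustrative assumptions rather than the study's actual output.

```python
import pandas as pd

def top_n_at_risk(predictions: pd.DataFrame, n: int) -> pd.DataFrame:
    """Return the n students with the highest predicted probability of failure."""
    return predictions.nlargest(n, "p_fail")

def precision_of_flagged(flagged: pd.DataFrame) -> float:
    """Share of flagged students who actually failed (the chapter's precision measure)."""
    return flagged["failed"].mean()

# Synthetic example: 200 students ranked by predicted risk, 60 of whom truly fail.
preds = pd.DataFrame({
    "student_id": range(1, 201),
    "p_fail":     [i / 200 for i in range(200, 0, -1)],
    "failed":     [1 if i <= 60 else 0 for i in range(1, 201)],
})

flagged = top_n_at_risk(preds, n=70)
print(f"Precision among the top 70 flagged students: {precision_of_flagged(flagged):.0%}")
```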
The same most at-risk students identified by the baseline model were then examined for each course to determine whether they remained at-risk in the weekly progressive models (Table 4.12). The 70 at-risk students identified in the baseline prediction for the statistics course decrease to 60 by week 6; that is, at the end of the semester roughly 85% of them remain at-risk. For psychology, 100% of the at-risk students identified in the baseline prediction remained at-risk at the end of the semester. Therefore, if interventions for these students, such as tutoring or additional instruction, helped even 50% of them, roughly 35 students in statistics and 25 in psychology would ultimately pass these courses.

Conclusion and future possibilities


In this study, the authors created an integrated at-risk student predictive data domain from student information system (SIS) and learning management system (LMS) data. They found that the precision rate for the baseline model using SIS data alone was approximately 85% (i.e., the percentage of students initially identified as at-risk by the baseline model who went on to fail the course). The finding that predictions emerge for students prior to their involvement in a course can benefit faculty members by identifying those students who might benefit from immediate intervention. Further, the fact that grade point average (GPA), a nontractable variable that instructors cannot influence during the course, was the dominant predictor highlights the need to identify surrogate measures that respond to instruction and enhance the chances of student success. That was the intent of this study with the integration of student progress measures from the LMS. With those additional data, the precision rate in weeks 2 through 6 was equivalent to that of the baseline model. However, the LMS data, particularly the exam and quiz results for the STEM courses, did improve precision over the GPA-based prediction. Further, the baseline model was effective for both STEM and non-STEM courses, although the accuracy for non-STEM courses was 5% to 10% higher. This indicates an important interaction between predictive analytic models and subject-matter disciplines, demonstrating that generalized predictions are less effective than specifically tailored approaches. These findings show that accurate and precise predictions can result from data that are readily available in educational environments.
These results reinforce the adage that predicting is one thing but explaining is quite another. Certainly, it would be easy to give up and say that GPA is the best predictor and nothing else can be done. This is not the case. Instructional measures will help explain why students are at risk. This leads to the conclusion that future predictive analytic models such as this one, based on integrated data domains, will require much more development. Instructors, advisors, instructional designers, researchers, technology specialists, and other support personnel must become involved in the process to refine predictive models and to develop and evaluate effective interventions. Predictive analytics offers a wonderful opportunity to help students succeed, but it requires an entire educational community to be effective. Unfortunately, no matter our efforts, we cannot guarantee that every student will succeed, but if we remove obstacles and thereby increase their chances of success, we will have done our jobs.

References
Miguéis, V. L., Freitas, A., Garcia, P. J., & Silva, A. (2018). Early segmentation
of students according to their academic performance: A predictive modelling
approach. Decision Support Systems, 115, 36–51.
Simanca, F., González Crespo, R., Rodríguez-Baena, L., & Burgos, D. (2019).
Identifying students at risk of failing a subject by using learning analytics for
subsequent customised tutoring. Applied Sciences, 9(3), 448.
Smith, V. C., Lange, A., & Huston, D. R. (2012). Predictive modeling to forecast
student outcomes and drive effective interventions in online community college
courses. Journal of Asynchronous Learning Networks, 16(3), 51–61.
Wladis, C., Hachey, A. C., & Conway, K. (2014). An investigation of course-
level factors as predictors of online STEM course outcomes. Computers and
Education, 77, 145–150.
5
PREDICTIVE ANALYTICS, ARTIFICIAL
INTELLIGENCE AND THE IMPACT
OF DELIVERING PERSONALIZED
SUPPORTS TO STUDENTS FROM
UNDERSERVED BACKGROUNDS
Timothy M. Renick

Chapter overview
This chapter explores the promise of analytics, predictive modeling, and
artificial intelligence (AI) in helping postsecondary students, especially those
from underserved backgrounds, graduate from college. The chapter offers a
case study on the nature and impact of these approaches, both inside and out-
side of the classroom, at Georgia State University. Over the past decade and
a half, Georgia State has broadened its admissions criteria, doubled the num-
ber of low-income students it enrolls (to 55%), and significantly increased
its minority-student populations (to 77%). At the same time, it has raised its
graduation rates by 81% and eliminated equity gaps in its graduation rates
based on race, ethnicity, and income level. Georgia State now graduates more
African Americans than any nonprofit college or university in the nation.
A critical factor in Georgia State’s attainment of improved and more equi-
table student outcomes has been its use of analytics and technology to deliver
personalized services to its students. The university has consistently been at
the leading edge of adopting new technologies in the higher-ed sector, in some
cases helping to develop them. These efforts include the scaling of adaptive
learning approaches, the design and adoption of a system of proactive aca-
demic advising grounded in predictive analytics, and the development of an
AI-enhanced chatbot designed to support student success.
This chapter will share research from a series of randomized control
trials and other independent studies that have been conducted regarding
Georgia State’s innovations in these three areas. It will include key lessons
learned in implementing analytics-based solutions to student-success chal-
lenges at a large, public institution.

Equity gaps and postsecondary resources


Today, it is eight times more likely that an individual in the top quartile of
Americans by annual household income will hold a college degree than an
individual in the lowest quartile (Pell Institute, 2015). Nationally, White
students graduate from college at rates more than 10 percentage points
higher than Hispanic students and are more than twice as likely to gradu-
ate with a 4-year college degree when compared to Black students (US
Department of Education, 2014). Students from low-income backgrounds
(as defined by their eligibility for Federal Pell grants) have a 6-year gradu-
ation rate of 39% (Horwich, 2015), a rate that is 20 points lower than the
national average (US Department of Education, 2014).
While differences in educational outcomes based on race, ethnicity,
and income level—equity gaps—predate students’ enrollment in college,
historically, American higher education has done little to reverse the
trends. In fact, it can be argued that the current system only exacerbates
the problem. At 38 of the most elite colleges in America, more students
come from the top 1% of American households by annual income than
from the entire bottom 60% (New York Times, 2017). The wealthiest
students with the strongest academic backgrounds fill the Ivy Leagues
and other elite institutions, where they benefit from low student-to-faculty
ratios, incredible facilities, and a wealth of resources and opportunities.
In contrast, less than one-half of 1% of students from the bottom quintile
of American households by average annual income attend these same 38
elite colleges. In fact, fewer than half of low-income students attend col-
lege at all, and those that do are most likely to attend large, nonselective
public universities, community colleges, and for-profit institutions where
student-to-faculty ratios are high, advisors and counselors are scarce, and
resources are strained (New York Times, 2017).
In short, we send students with the strongest academic and financial
backgrounds to colleges where they will receive the very best resources
and personalized supports, and we send the students with the weakest
academic and financial backgrounds to colleges that offer the least amount
of personalized attention and support. Is it a surprise that national equity
gaps persist?

A move to data and analytics at Georgia State University


When it comes to the students it enrolls and the resources it has available,
Georgia State University is far from elite. Located in Atlanta’s urban core,
Georgia State is a large minority-serving public institution that enrolls 53,000
students every semester. Just under 30,000 of these students come from low-
income backgrounds. In 2003, the institutional graduation rate stood at 31%
and underserved populations were foundering. Graduation rates were 22%
for Latinos, 25% for African Americans, and 18% for African American
males (Complete College America, 2021).
Over the last decade, Georgia State has become a national outlier when
it comes to student success. The graduation rate for Bachelor’s degree-
seeking students has improved by 25 percentage points (81%) since 2003.
Equity gaps are gone. Rates are up 35 points for Latinos (to 57%) and
32 points for African Americans (to 57%). Both groups now graduate at
higher rates than White students. Pell-eligible students now graduate at a
slightly higher rate than non-Pell students. In fact, over the past six years,
African American, Hispanic, and Pell-eligible students have, on average,
graduated from Georgia State at rates at or above the rate of the student
body overall (Complete College America, 2021).
What did Georgia State do to bring about this unlikely transforma-
tion? A critical factor was the university’s turn to analytics-informed
and technology-enabled student supports. Of course, Georgia State is far
from alone among postsecondary institutions in increasing its use of data.
According to a recent survey, 91% of colleges and universities are currently
expanding their use of data and 89% are deploying predictive analytics
at least in some capacity (Jones et al., 2018). What distinguishes Georgia
State is the timing and extent of its use of data to promote improved stu-
dent outcomes. The university has consistently been at the leading edge
nationally in the adoption of new data-based and technology-enhanced
student-support initiatives, and it has implemented these systems at scale.

• Beginning in 2005, Georgia State was among the first institutions in
the US to scale the use of adaptive and interactive learning platforms in
all introductory mathematics courses. The university retrofitted class-
rooms so that all students had their own computer workstations. It
scheduled two to three contact hours per week for every introductory
math course in these labs, with the students working on adaptive and
interactive curricula under the supervision of instructors and graduate
assistants. By 2016, 12,000 students a year were taking introductory
math courses using this approach.
• In 2012, Georgia State began to deploy predictive analytics in aca-
demic advising and has now tracked every undergraduate student for
more than 800 data-based risk factors every day for the past 10 years.
The program has resulted in more than 900,000 proactive interven-
tions with students over the past decade.
• In 2016, Georgia State became one of the first universities in the
nation to deploy AI for student-success purposes by developing an
AI-enhanced chatbot—an automatic texting platform—that answers
students’ questions about financial aid, registration, and other issues
24 hours a day, 7 days a week. The chatbot had more than 180,000
text-based interactions with students in its first 3 months of operation.

Collectively, these and other initiatives have allowed Georgia State to
accomplish a goal that was previously believed to be attainable only by
small, elite institutions with low faculty and staff-to-student ratios: to
deliver personalized and timely supports to students at scale. Rather than
waiting for students to diagnose their own problems and to seek out help—
a feat particularly challenging for low-income and first-generation college
students who often lack the context to know when they have gone off path
and the bandwidth to carve out time to get help—these new data-based
platforms are continuously analyzing students individually and, with the
help of trained staff, proactively delivering personalized support. They
are, in effect, leveling the playing field between Georgia State and elite
colleges where personalized supports have been the norm for decades.
To better understand the nature and impacts of these approaches,
we will turn to an examination of the three interventions and some of
the research that has been conducted to determine and quantify their
effectiveness.

Adaptive learning in introductory Mathematics


In 2005, the non-pass rates in Georgia State’s introductory math courses—
College Algebra, Introduction to Statistics, and Pre-calculus—were unac-
ceptably high. Forty percent of the roughly 12,000 students who enrolled
in these courses each year received grades of D, F, or W (Withdrawal).
Because successful completion of a course in introductory math is a grad-
uation requirement at Georgia State, non-completers had to retake the
courses in subsequent semesters, delaying their academic progress and
increasing their loan debt. Further complicating matters, introductory
math is typically taken by first-year students, meaning that non-passing
grades in math had a large impact on students’ overall GPAs, in some
cases making students at risk for falling out of compliance with Federal
grant and loan requirements or short of the 3.0 GPA needed to maintain
eligibility for Georgia’s HOPE scholarship. Hundreds of Georgia State
students were dropping out—or having to drop out—every year because
of introductory math.
In 2005, Georgia State began to pilot adaptive learning across sections
of introductory math courses. Early attempts were not successful. With
80% of Georgia State undergraduate students holding part- or full-time
jobs, most of them off campus, the initial thinking was that the online,
adaptive aspects of the course would be completed by students on their
own computers and on their own time, at home or at work. This would
allow students to complete adaptive modules when most convenient to
their schedules and save the university costs of retrofitting classrooms to
serve the courses. Within a semester, it became clear that the model was
not working. Too many students were not completing the adaptive exer-
cises at all, and, even for those who did, there was no easy way for them
to get timely help when they hit a roadblock. Non-pass rates in the courses
did not budge.
After a team of faculty members from the math department visited
Virginia Tech to learn about its emporium model for math instruction,
Georgia State began to pilot adaptive sections of its math courses that
require students to meet one hour a week with their instructors in tradi-
tional classrooms and two to three hours a week in classrooms retrofit-
ted to allow students to complete the adaptive learning components of the
course while their instructors, teaching assistants, and/or near-peer tutors
are in the room. Student attendance in the computer classrooms, dubbed
the MILE (Math Interactive Learning Environment) labs, was officially
tracked and made part of the students’ final grades, and students were sur-
rounded by resources to provide support and guidance when they faltered.
Almost immediately, Georgia State began to see improvements in course
outcomes.
MILE course sections began to regularly show 20% drops in DFW
rates when compared to their face-to-face counterparts. The results were
particularly promising for minority and Pell-eligible students. DFW rates
for these groups of students declined by 11 percentage points (24%)
(Bailey et al., 2018).
Achieving positive results took trial and error. Lower outcomes were
typical in the first few semesters of implementation, when faculty were
often still experimenting with course formats. Faculty dedicated hundreds
of hours to the iterative process of improvement. According to one math
faculty member: “You need to invest a lot of time considering the learn-
ing objectives and how they map to one another. I worked for 40 hours a
week for six weeks to build a viable course” (Bailey et al., 2018, p. 51). In
response, Georgia State adopted strategies to build and maintain faculty
buy-in for the adaptive courses. First, it regularly collected and shared
outcome data with the faculty—including longitudinal data showing
how students perform in subsequent math and STEM courses. Second, it
allowed faculty members to select the specific adaptive courseware used
in their courses. Many platforms and products were tried, from commer-
cial to open source, with data collected to see which tools produced the
best results. Georgia State also offered professional development training
and a range of incentives, including stipends and fellowships, to under-
line to faculty members the importance of the initiative.
These modifications paid off over time: shifting several math classes
from a format with two hours of class time and two optional hours of lab
time, to a format with one hour in class and three mandatory hours in the
lab contributed to an additional 6-percentage-point drop in DFW rates in
college algebra and precalculus (Bailey et al., 2018). Boston Consulting
Group reported in 2018, “In all courses that have existed in hybrid form
for more than a few years, student outcomes are improving” (Bailey et al.,
2018, p. 51).
Because of these efforts and outcomes, enrollment in math adaptive
courses increased at a 12% annual rate between 2005 and 2016, from
2,162 students during the 2005–2006 academic year to 7,003 students in
the Fall term alone of 2016 (Bailey et al., 2018). Drops in DFW rates have
averaged about 10 percentage points across sections.
Additional MILE labs have been built at Georgia State to accommo-
date the growth in enrollments, a further cost. In courses that enroll more
than 12,000 a year, though, a 25% drop in DFW rates
represents more than a thousand additional students a year who pass the
course, staying on track for degree completion without additional wasted
credit hours, time to degree, and debt. These students are also more likely
to stay enrolled, thus preserving tuition-and-fee revenues that might oth-
erwise have been lost.
There were unanticipated benefits of the program, as well. Faculty mem-
bers in other departments learned about the use of adaptive approaches
from colleagues in the math department and began to pilot adaptive
learning courses in other disciplines. With a grant from the Association of
Public and Land Grant Universities, faculty members in political science,
psychology and economics led an effort between 2016 and 2018 to pilot
introductory courses in their own disciplines using adaptive approaches.
Some of the results were striking. Student performance in one adaptive online course in economics exceeded performance in traditional face-to-face sections of the same course, with DFW rates 12 percentage points lower than those for the face-to-face version (Bailey et al., 2018).
The Covid-19 pandemic has created setbacks for Georgia State’s adap-
tive courses. While the approach was continued, there were several semes-
ters during which the in-person lab experiences—the very ingredient that
helped bring success to the program—were suspended. The culture of trial
and error and experimentation will once again have to be deployed to
determine the next iterations for adaptive courses at Georgia State.

Predictive analytics in advising


As recently as 2010, 5,700 students were dropping out of Georgia State
every year, and the university did not know what prompted most of these
students to leave. Georgia State’s move to predictive analytics in advising
started with a simple and practical question: What would an advising sys-
tem look like that was designed to identify and address the factors leading
to student attrition before the students dropped out?
In 2011–2012, Georgia State became one of the first partners with the
Education Advisory Board (EAB). With the help of EAB, the university
used ten years of its own data, 144,000 student records and 2.5 million
Georgia State grades in a big data project. The goal was to find identifiable
academic behaviors by students that correlate in a statistically significant
way to their dropping or flunking out of the university. The projection was
that the university might be able to identify a few dozen such behaviors.
In fact, it found more than 800 identifiable student behaviors that cor-
related statistically to students dropping or flunking out of Georgia State.
For the last ten years, the university has been tracking every student every
night for each of these 800 risk factors, including such issues as students
registering for courses out of sequence, not attending their courses, doing
poorly on quizzes early in the semester, and underperforming in prereq-
uisite courses.
Georgia State launched GPS Advising in August 2012. Since then, stu-
dent outcomes have improved remarkably. In addition to improved gradu-
ation rates and closed equity gaps, students are graduating almost a full
semester more quickly than was the case in 2010, and they are more likely
to graduate in some of the most difficult disciplines. The number of STEM
degrees awarded annually has increased by 150%. Are any of these gains
attributable to GPS Advising?
Under the US Department of Education’s “First in the World” pro-
gram, GPS Advising at Georgia State was assessed as part of a large,
multiyear, randomized control trial of advising approaches at 11 public
universities. I served as principal investigator for the project, and Ithaka
S+R served as independent evaluator. For the Georgia State component
of the study, researchers at Ithaka randomly selected and assigned 1,040
incoming, first-time college students for the Fall term of 2016. All stu-
dents selected had to be low-income (as defined by eligibility for Federal
Pell grants), first in their families to attend college, or both. After consent
was secured from students, Ithaka assigned 492 students to the control
group and 502 students to the treatment group.
By the time of the launch of the RCT in the Fall of 2016, GPS Advising
was already in its fourth year of scaled operations at Georgia State—a fact
which significantly shaped the research design. Control group students
were, of course, not denied advising during the study or limited to the
advising supports that would have been afforded to students at Georgia
State prior to GPS Advising. Rather, control group students received the
university’s “business as usual” practices as of 2016, which included ana-
lytics-based tracking and proactive outreach. Given that GPS Advising had
already been shown to be effective by many internal measures at Georgia
State, its supports could not morally be withheld from students. The treat-
ment group students received all of these supports but with greater inten-
sity and more assured fidelity. As such, for there to be any statistically
significant findings from the RCT, treatment group students would have
to show improvements in excess of those already documented by the exist-
ing deployment of GPS Advising. This was a tall order.
Despite these constraints, the study documented statistically significant
impacts on GPS Advising, with the strongest outcomes being enjoyed by
students from underserved backgrounds. After the first academic year
of the intervention, treatment group students had GPAs that were 0.17
points higher than control group students, and they had successfully accu-
mulated more credit hours (Alamuddin et al., 2019). By year two, the
treatment group students had completed an average of 2.19 additional
credit hours when compared to the control group, and the GPA increase of
0.17 points was maintained, with benefits particularly strong for students
in the lower half of the GPA distribution (Alamuddin et al., 2019). Across
the study, treatment group students also reported higher levels of institu-
tional know-how—the ability to navigate the university and its practices.
By the end of four academic years, the treatment group students had
GPAs that were 0.16 higher than control group students. GPAs of Pell-
eligible treatment-group students were 0.17 higher, underrepresented
minorities 0.14 higher, and first-generation students 0.19 higher. By the
end of year four, treatment group students had accumulated nearly six
additional credit hours. Even though interventions were not targeted by
race, Black students in the treatment group had GPAs that were 0.22 points higher than control group students and had accumulated 12 more credit hours. Most significantly, Black students in the treatment group (about 40% of the total group) had graduation rates that were 8 percentage points higher and a persistence rate (i.e., students who either graduated or were still enrolled) that was 12 points higher than the control group (Rossman et al., 2021).
Some critics had suggested that early-alert systems like GPS Advising
steer underrepresented students, especially Black students, away from
STEM degrees and toward “easier” majors. Ithaka researchers found no
evidence of this effect: “Black students at Georgia State were just as likely
to have a STEM major in the Spring 2020 term as other Georgia State stu-
dents (16%), across both the entire study sample and the treatment group”
(Rossman et al., 2021, p. 16).
While much attention has fallen on the novelty of Georgia State’s intro-
duction of predictive analytics in advising, the bulk of the time and effort
in launching and sustaining the program has been dedicated to struc-
tural and cultural changes to advising at the university. The university
has taken years to develop the administrative and academic supports that
allow the data to have a positive impact on students. Academic advising
was totally redesigned by Georgia State to support the new, data-based
approach. New job descriptions for advisors were approved by HR, new
training processes were established, and a new centralized advising office
representing all academic majors at Georgia State was opened. Roughly
40 additional advisors have been hired. More than this, new support sys-
tems for students had to be developed. When the university was blind
to the fact that hundreds of students were struggling in the first weeks
of certain courses, it was not compelled to act. Once hundreds of alerts
began to be triggered in the first weeks of Introduction to Accounting or
Critical Thinking, the university needed to do more than note that a prob-
lem had emerged. It needed to help. Advisors not only had to let students
know they were at risk of failing these courses but they also had to offer
some form of support. As a result, Georgia State has implemented one of
the largest near-peer tutoring programs in the country. In the current aca-
demic year, the university will offer more than one-thousand undergradu-
ate courses that have a near-peer tutor embedded in the course. Advisors
can refer struggling students identified by the GPS system to these “sup-
plemental instructors.” The university has also opened a new STEM tutoring
lab that serves all STEM disciplines, including math, in one location and
delivers most of these supports virtually as well as face-to-face.
These efforts cost money, but they generate additional revenues as well.
An independent study by the Boston Consulting Group looked specifically
at Georgia State’s GPS Advising initiative with attention to impacts and
costs (Bailey et al., 2019). It found that “the direct annual costs to advise
students total about $3.2 million, or about $100 per student” (Bailey
et al., 2019, p. 16). This is about three times the cost to advise students
before the initiative. Another $2.7 million has been invested by Georgia
State in new academic supports, such as the near-peer tutoring program.
But the return on investment of these programs must be factored in, as
well. Each one-percentage-point increase in success rates at Georgia State
generates more than $3 million in additional gross revenues to the uni-
versity from tuition and fees. Since the launch of GPS Advising, success
rates have increased by 8 percentage points. While the additional costs
of the GPS program are close to $5 million, GPS Advising has helped the
university to gross $24 million in additional revenues. The annual costs
of the technology needed to generate the predictive analytics, by the way,
represent only about $300,000 of the $5 million total.
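A back-of-the-envelope version of this calculation can be sketched in a few lines, using only the approximate figures cited above and assuming (as implied) that the pre-initiative advising cost was roughly one-third of the current $3.2 million; the exact internal budget figures are not reproduced here.

```r
# Rough ROI sketch for GPS Advising, using the approximate figures cited above.
advising_cost_now   <- 3.2e6                    # current annual advising cost
advising_cost_prior <- advising_cost_now / 3    # "about three times the cost" before
academic_supports   <- 2.7e6                    # near-peer tutoring and related supports

additional_cost <- (advising_cost_now - advising_cost_prior) + academic_supports
# roughly $4.8 million, i.e., "close to $5 million"

revenue_per_point  <- 3e6                       # gross revenue per 1-point gain in success rates
points_gained      <- 8
additional_revenue <- revenue_per_point * points_gained   # $24 million

net_gain <- additional_revenue - additional_cost
cat(sprintf("Additional cost: $%.1fM; additional revenue: $%.0fM; net gain: $%.1fM\n",
            additional_cost / 1e6, additional_revenue / 1e6, net_gain / 1e6))
```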

AI-enhanced chatbot
In 2015, Georgia State had a growing problem with “summer melt”—the
percentage of confirmed incoming freshmen who never show up for fall
classes. This is a vexing challenge nationally, especially for students from
underserved, urban backgrounds. Lindsay Page and Ben Castleman esti-
mate that 20–25% of confirmed college freshmen from some urban
school districts never make it to a single college class (Castleman & Page,
2014). They simply melt out of the college-going population in the sum-
mer between finishing high school and starting college.
Ten years ago, Georgia State’s summer melt rate was 10%. By 2015,
with the large increase in low-income and first-generation college students
applying at Georgia State, the number had almost doubled to 19%. Once
again, Georgia State turned to data to understand the growing problem.
Using National Student Clearinghouse data to track the students, the uni-
versity found 278 students scheduled to start at Georgia State in the Fall
semester of 2015 who, one year later, had not attended a single day of
postsecondary work anywhere across the US. These students were 76%
non-White and 71% low income. Equity gaps in higher education are, in
effect, beginning before the first day of college classes.
In preparation for the Fall of 2016, Georgia State conducted an analy-
sis of all the bureaucratic steps it requires students to complete during
the summer before their freshmen year—a time when students are no
longer in contact with high-school counselors but not yet on college cam-
puses—and documented the number of students who had been tripped
up by each step. There were students who never completed the FAFSA,
the federal application for financial aid. Others completed the FAFSA but
did not comply with follow-up “verification” requests from the federal
government for additional documentation. Still others were tripped up
by requests for proof of immunization, transcript requirements, deposits,
and so forth. In each case, the data showed that the obstacles had been
disproportionately harmful to low-income and first-generation students—
students who typically lack parents and siblings who have previously navi-
gated these bureaucratic processes and who can offer a helping hand.
With better understanding of the nature of the problem, Georgia State
partnered with a start-up technology company, Mainstay (then known
as Admit Hub), to deploy one of the first AI-enhanced student-support
chatbots in the nation. What is a chatbot? It is an automatic texting plat-
form. Using data from previous summers about the types of questions that
had come from incoming freshmen, Georgia State developed a knowledge
base of more than 2,000 text-based answers to commonly asked questions
by incoming students—questions about financial aid, registration, immu-
nizations, housing, and so forth. Mainstay placed this knowledge base
on a smartphone texting platform so that students could text questions
24 hours a day, 7 days a week to the platform. The AI would determine
if there was an appropriate answer to the question in the knowledge base
or, alternatively, whether the applicant’s question needed to be directed to a
staff member to write an answer and add it to the knowledge base.
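The routing logic described here, matching an incoming question against a bank of prepared answers and escalating to a staff member when no good match exists, can be illustrated with a toy sketch. The function below is not the Mainstay implementation; the matching rule, threshold, and sample knowledge-base entries are all illustrative assumptions.

```r
# Toy sketch of knowledge-base routing: auto-answer when a question matches an
# existing entry well enough, otherwise escalate to a staff member.
answer_or_escalate <- function(question, kb_questions, kb_answers, threshold = 0.5) {
  tokens  <- function(x) unique(tolower(unlist(strsplit(x, "[^[:alnum:]]+"))))
  q_tok   <- tokens(question)
  overlap <- vapply(kb_questions,
                    function(k) length(intersect(q_tok, tokens(k))) / length(q_tok),
                    numeric(1))
  if (max(overlap) >= threshold) {
    kb_answers[which.max(overlap)]   # automatic reply within seconds
  } else {
    "Routed to staff: an answer will be written and added to the knowledge base."
  }
}

# Hypothetical knowledge-base entries for illustration only
kb_q <- c("How do I complete the FAFSA?", "When is the housing deposit due?")
kb_a <- c("FAFSA instructions: ...", "Housing deposit deadline: ...")
answer_or_escalate("what is the deadline for the housing deposit", kb_q, kb_a)
```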
The chatbot went live in June 2016 with projections that it might
have 5,000 to 6,000 exchanges with incoming students in the first three
months of operation. Between June and August 2016, the chatbot had
185,000 text-based exchanges with incoming freshmen. When a question
was posed by a student, the average response time was about seven sec-
onds. Suddenly, the playing field when it came to access to information
had been leveled. Students did not have to know someone with personal
knowledge of college bureaucracies in order to get help; they did not need
to show up in a particular office during business hours. They just had to
have access to their phones.
In the first year of the chatbot’s use, summer melt at Georgia State
declined by 19%. As the chatbot has been refined—the knowledge base
now includes 3,000 answers—summer melt has dropped further and is
now down 37% relative to the baseline year of 2015. Lindsay Page con-
ducted a study of the implementation of the chatbot at Georgia State, not
merely confirming the effectiveness of the chatbot in reducing summer
melt but also showing that the positive benefits were disproportionately
enjoyed by students from underserved backgrounds (Page et al., 2018).
By 2021, all Georgia State undergraduates had access to the chatbot
from before matriculation to graduation. With the tool shown to be effec-
tive in helping students navigate campus administrative processes, Georgia
State launched a randomized control trial to gauge the impact of the chat-
bot when integrated into an academic course. Could the chatbot not only
help students complete the FAFSA but also help them ace a mid-term?
A team of researchers from Brown University served as the independent
evaluators and collaborated with the Georgia State faculty and the chat-
bot staff to design and execute the study. A 500-student, asyn-
chronous online section of Introduction to American Government was
selected for the RCT. The course is required of all Georgia State under-
graduates and its online sections often have DFW rates that are above
25%.
In the study, the same Mainstay chatbot technology used to address
summer melt was deployed within the government course, with the goal of
increasing students’ course engagement and academic performance. Half
of the students in the course were selected for the control group. They
continued to receive course supports, reminders, and prompts via e-mail
and the course’s learning management system (LMS). The other half of
the students in the course were randomly selected to be in the treatment
group. The course instructor and lead graduate teaching assistant worked
with the Brown researchers and the chatbot staff to deliver the course sup-
ports to these students via the chatbot in addition to email and the LMS.
Treatment group students received text-based nudges and had the ability
to ask questions about the course and its content using the chatbot. The
chatbot’s knowledge base was seeded with answers to commonly asked
questions about course content and requirements. These questions would
be answered automatically by the chatbot’s AI. The course’s lead teach-
ing assistant monitored incoming texts and provided responses via text to
questions that were not handled by the AI automatically. The team also
added a “quiz me” feature that allowed students to ask the bot to send
them multiple-choice questions about specific sections and topics in the
course, and the bot provided the students with immediate feedback on
their grasp of the material.
The study found “compelling evidence that the treatment shifted the
distribution of student grades, increasing the likelihood students earned
a grade of B or above in the course” (Meyer et al., 2022, p. 14). As evi-
denced by activity tracked in the LMS, treatment group students were
more engaged than the control group students in all major components of
the course—readings, activities, quizzes, exams. Results were particularly
strong for first-generation students.
Overall, students in the treatment group were 8 percentage points more
likely to earn a grade of B or higher than control group students. First-
generation students were 16 percentage points more likely to earn a grade
of B or higher. “Treated first-generation students earned reading grades
12 points higher, in-person activity grades 17 points higher, and higher
exam grades, scoring between 9–10 points higher on three of the four
exams” (Meyer et al., 2022, p. 16). As a result, the final grades in the
course for first-generation students in the treatment group were 11 points
higher than first-generation students in the control group—a full letter
grade improvement (Meyer et al., 2022).
Why were the benefits of the chatbot intervention particularly strong
for first-generation students? Some answers were suggested in focus-group
discussions that Georgia State conducted with students who use the chat-
bot. University staff heard from more than one first-generation student
that they were, at times, more likely to ask a question of the chatbot than
the course instructor or graduate assistant because they did not have to
feel “embarrassed” or “judged” for not knowing the answer. There was
even an instance where a student confided to the chatbot that the student
was depressed, and Georgia State was able to connect the student with
mental health supports. Ninety-two percent of treatment group students
said they support expanding the use of the tool to other courses (Meyer et
al., 2022), and Georgia State will be launching additional pilots using the
chatbot in high-enrollment courses during the Fall of 2022.
When the Brown researchers were asked why the chatbot intervention
in Introduction to American Government was so impactful, their response
was telling: “The AI chatbot technology enabled the instructional team
to provide targeted, clear information to students about their course per-
formance to date and necessary tasks to complete to ensure success in the
course” (Meyer et al., 2022). In short, it allowed the instructors, even in
an asynchronous online course of 500, to deliver personalized attention
to students.

Conclusion
Georgia State University’s student-success transformation, including an
81% increase in graduation rates and the elimination of equity gaps, has
been grounded in its development of innovative and scaled student sup-
port approaches that leverage analytics, AI, and technology to deliver
personalized supports to students. Emerging evidence from a series of
research studies demonstrates that these approaches can result in signifi-
cantly improved student outcomes and that they disproportionately ben-
efit students from underserved backgrounds.

References
Alamuddin, R., Rossman, D., & Kurzweil, M. (2019). Interim findings report:
MAAPS advising experiment. Ithaka S+R.
Bailey, A., Vaduganathan, V., Henry, T., Laverdiere, R., & Jacobson, M. (2019).
Turning more tassels: How colleges and universities are improving student
and institutional performance with better advising (2019). Boston Consulting
Group.
Bailey, A., Vaduganathan, V., Henry, T., Laverdiere, R., & Pugliese, L. (2018).
Making digital learning work: Success strategies from six leading universities
and community colleges. Boston Consulting Group.
Castleman, B., & Page, L. C. (2014). Summer melt: Supporting low-income
students through the transition to college. Harvard Education Press.
Complete College America. (2021). Corequisite works: Student success models
at the university system of Georgia. https://files.eric.ed.gov/fulltext/ED617343.pdf
Horwich, L. (2015, November 25). Report on the federal pell grant program.
NASFAA. http://www​.nasfaa​.org ​/uploads​/documents​/ Pell0212​.pdf
Jones, D., Brooks, D. C., Wesaw, A., & Parnell, A. (2018). Institutions use data
and analytics for student success: Results from a landscape analysis. NASPA.
Meyer, K., Page, L., Smith, E., Walsch, T., Fifiled, C. L., & Evans, M. (2022).
Let’s chat: Chatbot nudging for improved course performance. Ed Working
Papers. https://www​.edworkingpapers​.com ​/ai22​-564
New York Times. (2017, January 18). Some colleges have more students from the
top 1 percent than the bottom 60.
Rossman, D., Alamuddin, R., Kurzweil, M., & Karon, J. (2021). MAAPS advising
experiment: Evaluation findings after four years. Ithaka S+R.
The Pell Institute. (2015). Indicators of higher education equity in the United
States: 45 year trend report (2015 revised edition). http://www​.pellinstitute​
.org​/downloads​/publications​- Indicators​_ of​_ Higher​_ Education​_ Equity​_ in​
_the​_US ​_45​_Year​_Trend​_ Report​.pdf
U.S. Department of Education. (2014). Institute of education sciences, national
center for education statistics (2014). Table 326.10. https://nces​.ed​.gov​/
programs​/digest​/d14​/tables​/dt14​_ 326​.10​.asp
6
PREDICTING STUDENT SUCCESS
WITH SELF-REGULATED BEHAVIORS
A seven-year data analytics study on a
Hong Kong University English Course

Dennis Foung, Lucas Kohnke, and Julia Chen

Introduction
The advancement of educational technology has allowed blended multi-
modal language learning content to become richer and more diverse. In
Hong Kong and beyond, institutions are increasingly adopting blended
components in the design of English for Academic Purposes (EAP) courses
(authors; Terauchi et al., 2018). Blended learning allows students to flex-
ibly, punctually, and continuously participate within the learning man-
agement system (LMS) (e.g., Blackboard, Canvas, Moodle) (Rasheed
et al., 2020). In this autonomous learning environment, self-regulation
becomes a critical factor for students to succeed in their studies (see Lynch
& Dembo, 2004). Students who cannot regulate their learning efficiently
may be less engaged with the learning content, leading to dissatisfaction
and unsatisfactory course grades (authors). As higher education institu-
tions are increasingly relying on blended learning to improve the quality
of learning and enhance and accelerate student success, exploring which
engagement variables are more predictive of student outcomes is critical.
Recently, learning analytics (LA) has emerged with the aim of allowing
stakeholders (e.g., administrators, course designers, teachers) to make
informed strategic decisions to create more effective learning environ-
ments and improve the quality of education (Sclater, 2016). By employing
LA, it is possible to understand, optimize, and customize learning, and
thus bridge pedagogy and analytics (Rienties et al., 2018).
This study investigates how students’ self-regulated behaviors are asso-
ciated with their outcomes via a LA approach. Learners in this study were
students in an EAP course at a university in Hong Kong that included a
DOI: 10.4324/9781003244271-8
mandatory multimodal language learning package. This study is an essen-
tial contribution to the field as large-scale data analytics studies such as
this (i.e., with 17,000 students across seven years) are rare in the educa-
tional context.

Literature Review
Generally, LA can be defined as “the measurement, collection, analysis,
and reporting of data about learners and their contexts, for purposes of
understanding and optimizing learning and the environments in which
it occurs” (Siemens & Long, 2011, p. 32). LA provides various benefits
for higher education institutions and stakeholders as it produces summa-
tive, real-time, and predictive models that provide actionable informa-
tion during the learning process (Daniel, 2015). Hathaway (1985, p. 1)
argues that the “main barrier to effective instructional practice is lack of
information.” LA data can also facilitate the detailed analysis and moni-
toring of individual students and allow researchers to construct a model
of successful student behaviors (Nistor & Hernandez-Garcia, 2018).
This includes recognizing a loss of motivation or attention, identifying
potential problems, providing personalized intervention for at-risk stu-
dents, and offering feedback on progress (Avella et al., 2016; Gašević
et al., 2016).
Recently, several articles were published on how LA can be used to gain
insight into students’ learning (Du et al., 2021). One study documented
an early intervention approach used to predict students’ performance
and report outcomes directly to them via email (Arnold & Pistilli, 2012).
This “Course Signal” intervention collected various data points, such
as demographics, grades, academic history, and engagement with learn-
ing activities. The personalized email informed students of their learn-
ing performance to date and included additional information, such as
changes required for at-risk students. LA can make complex data digest-
ible for students and teachers by breaking it down and using visuals to
show meaningful information (e.g., trends, patterns, correlations, urgent
issues) through a dashboard (Sahim & Ifenthaler, 2021). Schwendimann
et al. (2017), in a review of articles published between 2010 and 2015,
found that dashboards often rely on data from a single LMS and utilize
simple pie charts, bar charts, and network graphs. However, in Brown’s
(2020) study, teachers were bewildered by how data were presented in the
dashboard, which they suggested undermined their pedagogical strate-
gies. While LA is essential for improving education as it helps students
and teachers make informed decisions (Scheffel et al., 2014), more atten-
tion should be paid to the delivery of data (see Vieira et al., 2018). It is
essential to find a balance between technology and learning to predict
student behaviors and spot potential issues (Bienkowski et al., 2012).
As higher education institutions turn to blended learning because of the
increase in demand (Brown, 2016), LA has become instrumental in assess-
ing how students interact with content (e.g., quizzes, discussion boards,
or forums; see Golonka et al., 2014) and how teachers analyze students’
work (Atherton et al., 2017; authors). The least sophisticated analytics
simply identify students who participate less than others and notify them
using the dashboard. More sophisticated versions consider “listening” and
“talking” in the forums (Wise et al., 2013), the quality of contributions
(Ferguson & Shum, 2011), or the development of students’ written ideas
using natural language processing software (Larusson & White, 2014).
The implementation of LA has enabled institutions to use adaptive
learning systems (Greller & Drachsler, 2012) that respond to students’
interactions with the LMS (Kerr, 2016). One system, developed by Hsieh
et al. (2012), recommended texts suitable for individual language learn-
ers to improve and sustain their learning interests. Moreover, a personal-
ized LMS provides means to encourage an inclusive learning environment
(Clow, 2013; Nguyen et al., 2018). Siemens and Long (2011) proposed a
model in which students were encouraged to self-regulate their learning,
as the frequency with which they accessed and used LMS tools (e.g., dis-
cussion boards) influenced their final grades.
In a review of articles published between 2008 and 2013, Papamitsiou
and Economides (2014) found that virtual learning environments were
the most common settings for the analysis of student data. They iden-
tified system logs, open datasets, and questionnaires as the most com-
mon data sources while classification, clustering, and regression were the
most commonly used methods. In an examination of articles published
between 2000 and 2015, Avella et al. (2016) discovered that data analy-
sis and social network analysis were the most frequently used models.
Furthermore, the study found that the main benefits of LA were improv-
ing curriculum design, enhancing students’ and teachers’ performance
and providing personalized services.
However, empirical studies on predicting student outcomes have pro-
vided conflicting information. For example, Macfadyen and Dawson
(2012) discovered a significant correlation between the use of discus-
sion forums and LMS-based email and students’ performance in blended
learning. Similarly, Fidalgo-Blanco et al. (2015) reported that the num-
ber of messages students posted on discussion forums correlated with
their grades for teamwork. However, Wolff and Zdrahal (2012) reported
that changes in online activity—rather than absolute levels—were more
appropriate indicators of at-risk students. In contrast, Iglesias-Pradas et
al. (2015) found no correlation between online activity indicators and the
degree of commitment or teamwork. Thus, the efficacy of LA in predict-
ing and measuring student outcomes warrants further investigation.
As more students enroll in large-scale EAP courses in higher education
institutions around the world, it is becoming increasingly important to pre-
dict their outcomes based on their interaction with course content on LMSs
(author). EAP students face challenges in improving their academic language
proficiency and communicative competence so as to succeed in their academic
studies (Hyland, 2006). Although studies have confirmed the positive ben-
efits of LMS and blended learning for EAP students (Harker & Koutsantoni,
2005), more research is needed to shed light on which measures of engage-
ment can support students’ efforts and help them succeed.
Through careful analysis of data collected by an LMS, LA offers insti-
tutions and EAP teachers opportunities to make adjustments and improve-
ments to the content and delivery of their courses. Although studies in other
disciplines have analyzed LMS data to make sense of learning behavior and
its impact on learning and persistence in the learning process (e.g., Choi et al.,
2018; Tempelaar et al., 2018), LA is still an emerging field in EAP.
With surging interest in LA, especially for predicting student outcomes,
many studies are interested in comparing the effectiveness of various
machine learning algorithms (e.g., Yagci, 2022; Waheed et al., 2020; Xu et
al., 2019). Within higher education, algorithm comparison studies include
sample sizes ranging from around 100 (Burgos et al., 2018) to more than
10,000 (Waheed et al., 2020; Cruz-Jesus et al., 2020). However, large-
scale comparison (with sample size > 10,000) is still very rare. Some com-
monly assessed algorithms include Logistical Regression, Support Vector
Machine (SVM), Artificial Neural Network (ANN), and Classification
Tree. The metric most commonly used to evaluate the effectiveness of
algorithms is the accuracy rate, which is simply the percentage of cases
predicted correctly. Some studies also exam-
ined the F1 ratio, which balances recall and precision (e.g.,
Yagci, 2022). In the educational context, most studies achieve a predictive
accuracy rate of between 70% and 80% (Yagci, 2022: 70–75%; Waheed
et al., 2020: 80%). The F1 ratio varies from 0.69 to 0.87. Numerous stud-
ies agree that SVM can achieve a high accuracy rate and may perform
better than other algorithms, whether it is compared with ANN
(Xu et al., 2019) or with Logistical Regression/Classification Tree (Yagci,
2022). However, SVM does not always outperform other algorithms.
For example, Nieto et al. (2018) found no significant difference between
ANN and SVM in their context. Also, some algorithms, such as ANN and
SVM, resemble black boxes with low interpretability (Cruz-Jesus
et al., 2020). Not many studies have directly compared only Logistical
Regression, ANN, and Classification Tree. This suggests the need for a
large-scale comparison between algorithms in the educational context. In
particular, this chapter aims to address this knowledge gap by answering
the following questions:
1. In what ways can three common machine learning algorithms
(Logistical Regression, ANN, and Classification Tree) be used to pre-
dict students’ grades?
2. Among the three algorithms (Logistical Regression, ANN, and
Classification Tree), which one performs best when employed for adap-
tive learning design?

Methodology

Context of study
The study was conducted with students in a Hong Kong university taking
an English for Academic Purposes course (EAPC). While the university
requires students to complete the university EAP requirement with differ-
ent entry and exit points, EAPC, with an annual intake of around 1,000, is
a mandatory course for all students. Some students with a lower English
proficiency may take a proficiency-based course before taking EAPC, but
most students will take EAPC as the first university English course fol-
lowed by an advanced course.
EAPC is a 3-credit, 13-week course with more than fifty sections each
academic year. The course is delivered in a blended mode: classes meet
in person every week and are accompanied by a sup-
plementary multimodal learning package. This study used a dataset from
the university learning management system that includes students from 7
years and 14 cohorts. This dataset runs from the first year that this course
included the blended learning package, to right before the Fall semester
in 2019 when the course delivery mode was temporarily changed due to
the social unrest in Hong Kong and the COVID-19 pandemic. The course
mode remains in its altered form at the time of writing.
While the dataset includes several cohorts and course sections, a strict
and well-established quality assurance mechanism made comparison across
cohorts/sections possible. All cohorts and sections completed the same assess-
ment tasks with the same assessment weighting and grading descriptors. All
cohorts and sections used standardized notes and online resources distrib-
uted through the university learning management system. There were only
very minor changes to the assessment guidelines and notes during the study
period, e.g., removal of typographical errors. While teachers have the flexibil-
ity to upload more materials, standardized notes and materials are used as the
backbone of teaching in all classes. To ensure consistent grading standards,
all new and existing teachers have to complete a grading moderation exercise
each semester. New teachers also received support from subject leaders on
teaching and grading, so that teaching and assessment standards are compa-
rable across cohorts and sections.
EAPC has three assignments: an in-class essay-writing assignment (30%),
a take-home essay-writing assignment (30%), and a pair academic presen-
tation (40%). The writing assignments were assessed with four assessment
components: Content Development, Organization, Language (Style and Use
of English), and Referencing. The presentation was assessed with the compo-
nents of Content Development, Delivery Skills, Language, and Pronunciation
and Fluency. Standardized grading descriptors were used and a moderation
exercise was required for all teachers each year for each assessment. The final
outcome of the course is the overall course grade. Each assessment compo-
nent and the overall course grade are provided on a rating scale from 0 to 4.5
and will be used for analysis in the current study.
Other than the formal assessments described above, students have to com-
plete a multimodal learning package (MLP). There is a mark penalty on over-
all course grades for students not meeting the MLP completion requirement:
no penalty for at least 50% completion; a 0.5-grade deduction for 26% to 49%
completion; a 1-grade deduction for 0% to 25% completion. The MLP across
the years included numerous
learning sessions, and each session included a video or an online resource sup-
plemented by a quiz hosted on the learning management system. All videos,
resources, and quizzes were tailor-made by the course developer with the aim
of supplementing in-class learning. While MLP of earlier cohorts included
more learning sessions (e.g., one video with multiple quizzes), MLP in gen-
eral covered academic style, referencing, genre knowledge of essays, and tips
for academic presentation. Students, regardless of their cohorts, were all
expected to complete their MLP activities each week to supplement what they
had learned in class. All MLP quizzes were hosted on the university learning
management system and contained multiple-choice questions or gap-filling
exercises. Students could attempt the activities as many times as they wished.
While there are differences in MLPs across cohorts, we believe that the MLPs
in different cohorts were comparable, and proper data processing was con-
ducted to focus on only students’ comparable interactions with the MLP.

Data processing and cleansing


The dataset for this study was retrieved from the university learning man-
agement system after obtaining approval through the university Data
Governance Framework. The dataset includes two files: (1) the assess-
ment component grades and overall grades of all students; and (2) an
event log with all events in the course, including the date and time of
attempts at MLP quizzes and scores for each MLP attempt. To process
the data for analysis, the two datasets were merged: each row of data
represents one student with the respective overall course grade, the day
the student started completing a quiz in MLP, the percentage of quizzes
that the student completed, the number of days between the first and the
last attempt at any MLP quiz, the average score of first attempts at MLP
quizzes, and the average score of last attempts at MLP quizzes. Retrieving
and computing these variables allows all rows to be
comparable despite variations in the number of quizzes across cohorts.
After merging the two datasets, further data processing and cleansing
began. First, students who did not complete the course (with no final grade)
were removed. Then, all MLP score variables were transformed to a standard
score for easy analysis (i.e., the average of the first attempt, that of the last
attempt, and differences between attempts). The days-related variables were
not transformed for easy understanding (i.e., the start day in using MLP, the
number of days until the first attempt at an MLP quiz, the number of days
between first and last attempt, etc.). The choice of predictors was made for
convenience, as the goal of the analysis is to identify predictors that are read-
ily available on the learning management system for further adaptive learn-
ing design. The overall grades (which range from 0 to 4.5) were transformed
into a binary variable, Good or At-Risk Performance, with 3.0 being the
cut-off point. The cut-off point was decided because (1) the notation for 3.0
was “Good” according to the university grading scheme, and (2) the subject
leader considers this cut-off point to be pedagogically meaningful.
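A minimal sketch of this merging and feature construction, written in R (the language used for the analyses reported below) with the dplyr package for convenience, might look as follows. The column names (student_id, overall_grade, quiz_id, attempt_time, score) and the semester_start and quiz_deadline dates are assumptions; this is not the authors' actual processing script.

```r
library(dplyr)

# grades: one row per student who completed the course (student_id, overall_grade on 0-4.5)
# events: one row per MLP quiz attempt (student_id, quiz_id, attempt_time, score)
# semester_start and quiz_deadline are assumed Date objects for the relevant cohort.

features <- events %>%
  arrange(student_id, quiz_id, attempt_time) %>%
  group_by(student_id) %>%
  summarise(
    start_day            = as.numeric(min(as.Date(attempt_time)) - semester_start),
    days_before_deadline = as.numeric(quiz_deadline - min(as.Date(attempt_time))),
    duration             = as.numeric(max(as.Date(attempt_time)) - min(as.Date(attempt_time))),
    quizzes_done         = n_distinct(quiz_id),
    first_attempts       = mean(score[!duplicated(quiz_id)]),                   # earliest attempt per quiz
    last_attempts        = mean(score[!duplicated(quiz_id, fromLast = TRUE)]),  # latest attempt per quiz
    .groups = "drop"
  ) %>%
  mutate(improvement = last_attempts - first_attempts)

dataset <- grades %>%
  filter(!is.na(overall_grade)) %>%                 # drop students with no final grade
  inner_join(features, by = "student_id") %>%
  mutate(across(c(first_attempts, last_attempts, improvement),
                ~ as.numeric(scale(.x))),           # score variables as standard scores
         outcome = factor(ifelse(overall_grade >= 3.0, "Good", "At-Risk")))
```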

Participants
After the data cleansing and processing procedures described above, there
were a total of 17,968 participants in the dataset. While the original data-
set did not include students’ demographic information, students taking this
course came from various disciplines including Applied Science, Engineering,
Design, Tourism and Hospitality, Nursing, and Rehabilitation Science.

Data analysis
This study tested three types of machine learning techniques to predict
students’ grades. To maximize comparability across the results provided
by the three predictive algorithms, the dataset was first randomly divided
into a training (n = 12,000) and testing (n = 5968) dataset, and all three
algorithms were trained with the same dataset. The three machine learning
techniques were tested with the same target variable, the binary variable
on student performance, and all seven predictors as introduced in the previ-
ous section. To evaluate the three algorithms, the accuracy rate and the F1
ratio were compared. Accuracy rate and F1 ratio are commonly used in
comparison studies to evaluate predictive performance. Predictive
accuracy is the percentage of cases whose grades are correctly predicted
by a given algorithm. While the predictive accuracy and F1 ratios
reported in the results section are for the testing dataset, the two
indicators were also examined for the training dataset. Substantial
differences in accuracy between the two datasets would indicate
overfitting; however, this problem was not present for any of the three
algorithms in the current study. In addition, the results were analyzed in
terms of how the predictors can be interpreted to help with adaptive design.
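The split and evaluation setup can be sketched as follows, assuming the dataset object from the processing sketch above. The split sizes mirror those reported (12,000 training, 5,968 testing), while the random seed and the evaluate() helper name are illustrative assumptions.

```r
set.seed(2022)                                   # assumed seed; not reported in the study
train_idx <- sample(seq_len(nrow(dataset)), 12000)
train <- dataset[train_idx, ]
test  <- dataset[-train_idx, ]

# Accuracy, precision, recall, and F1 for a vector of predicted classes,
# treating "Good" as the positive class
evaluate <- function(pred, actual, positive = "Good") {
  tp <- sum(pred == positive & actual == positive)
  fp <- sum(pred == positive & actual != positive)
  fn <- sum(pred != positive & actual == positive)
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  c(accuracy  = mean(pred == actual),
    precision = precision,
    recall    = recall,
    f1        = 2 * precision * recall / (precision + recall))
}
```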
All analyses were conducted with R version 4.0.3, with libraries and
functions set to their default settings. The Classification Tree was built
with the rpart() library (see Therneau et al., 2022), which implements the
classification and regression tree (CART) algorithm. After building the
Classification Tree, the rpart.plot() function was used to plot it (see
Milborrow, 2022). As is common practice, all predictors were entered for
analysis, and it was left to the Classification Tree algorithm to decide
whether to retain a predictor. The Logistical Regression was conducted
with glm() (see The R Core Team, 2022, p. 1453). We recorded and
presented the results for all predictors without removing any
nonsignificant predictors. The Artificial Neural Network (ANN) was fitted
with the nnet() function (see Ripley & Venables, 2022). To determine the
number of hidden layers, we examined the sum of squared errors when
running nnet() with 1–10 hidden layers; the sum of squared errors was
smallest with two hidden layers, so two hidden layers were used for
analysis.
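Under those defaults, the model-fitting step can be sketched as follows, reusing the train, test, and evaluate() objects from the sketches above. The explicit formula, the 0.5 classification threshold for the logistic model, and the maxit value are assumptions rather than reported settings.

```r
library(rpart)
library(rpart.plot)
library(nnet)

# An explicit formula avoids leaking identifiers or the raw grade into the models
form <- outcome ~ start_day + days_before_deadline + duration + quizzes_done +
  first_attempts + last_attempts + improvement

# Classification Tree (CART)
tree_fit  <- rpart(form, data = train, method = "class")
rpart.plot(tree_fit)                                    # plot, as in Figure 6.1
tree_pred <- predict(tree_fit, test, type = "class")

# Logistical Regression (models the probability of "Good")
glm_fit  <- glm(form, data = train, family = binomial)
glm_prob <- predict(glm_fit, test, type = "response")
glm_pred <- ifelse(glm_prob >= 0.5, "Good", "At-Risk")  # 0.5 cut-off assumed

# Artificial Neural Network, size = 2 per the tuning described above
nn_fit  <- nnet(form, data = train, size = 2, maxit = 500, trace = FALSE)
nn_pred <- predict(nn_fit, test, type = "class")

rbind(`Classification Tree`   = evaluate(tree_pred, test$outcome),
      `Logistical Regression` = evaluate(glm_pred,  test$outcome),
      `ANN`                   = evaluate(nn_pred,   test$outcome))
```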

Results
This study aims to examine whether the three common machine learning
algorithms, Classification Tree, Logistical Regression, and ANN, can be
used to predict students’ grades for adaptive learning purposes. Whether
students can attain a “Good” grade or not was used as an indicator of suc-
cess, and seven self-regulated learning indicators were used as predictors
for each predictive algorithm.

Performance of models
All three models were successfully established, with Table 6.1 showing
their overall accuracy. All three algorithms achieved satisfactory accuracy,
with an overall accuracy of 80% or above. While there is no one common
TABLE 6.1 Evaluation Indicators for Various Techniques

                           Overall Accuracy   Precision   Recall   F1
                           (Testing) (%)      (%)         (%)
Classification Tree        85.74              78.34       76.29    0.7730
Logistical Regression      80.10              59.10       72.61    0.6516
Artificial Neural Network  81.52              58.46       78.04    0.6809

Note: Accurate Prediction = Accurate Prediction of Students Achieving Good Grades

threshold for overall accuracy, 80% accuracy will mean that most of the
students will be classified as Good (expected to obtain a grade of 3.0 or
above) or At-Risk (likely to achieve below 3.0) accurately, which seems
to be operationally acceptable. An overall accuracy rate of 80% is also
comparable to Yagci (2022) but only the accuracy rate for Classification
Tree was as high as that from Waheed et al. (2020) with an accuracy of
85%. While there are differences in performance (e.g., Precision, Recall,
F1) to be discussed later, in general, all three models seem to be effective
in predicting students’ performance from self-regulated behaviors.
While the overall accuracy of all three models seems satisfactory, there
is a need to further evaluate the performance of each model. As a joint
measure of precision and recall rate, the F1 ratio provides information on
whether a model can successfully predict Good grades. The Classification
Tree with an F1 ratio of 0.77 performs better than Logistical Regression
(F1: 0.65) and ANN (F1: 0.68). This is also better than some past stud-
ies, such as Yagci (2022) with 0.69–0.72. Further consideration of these
parameters seems to suggest that the Classification Tree performs slightly
better than the other algorithms.

Details of predictors
The Classification Tree identifies only the average final score in quizzes
and improvement between attempts as predictors. (see Figure 6.1 for
the Classification Tree’s results.) In the first layer, the average final score
was used as the predictor and the cut-off was 0.41. In the second and
third layers, students were classified based on their improvement between
attempts. For example, if the average final score of a student is 0.3 and the
difference between attempts was –0.1, the student would be classified as At-Risk.
The adequacy of using two predictors in adaptive learning design deserves
further discussion.
FIGURE 6.1 Useful Information for Adaptive Design: Classification Tree Developed
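Operationally, a fitted tree of this kind reduces to a handful of if/else rules that an LMS could apply directly. The sketch below is one plausible reading of the description above: it uses the reported first-layer cut-off of 0.41 on the standardized average final score, while the improvement cut-off is a placeholder, since the exact lower-layer split points appear only in the figure.

```r
# Illustrative decision rule derived from a classification tree of this shape.
# avg_final and improvement are standard scores, as in the study's predictors.
classify_student <- function(avg_final, improvement, improvement_cutoff = 0) {
  # improvement_cutoff is a hypothetical split value, not taken from the chapter
  if (avg_final >= 0.41) {
    "Good"
  } else if (improvement >= improvement_cutoff) {
    "Good"
  } else {
    "At-Risk"
  }
}

classify_student(avg_final = 0.3, improvement = -0.1)   # "At-Risk", as in the example above
```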

Unlike the Classification Tree, the Logistical Regression model identi-
fies all input variables as statistically significant predictors (at a p<0.05
level), albeit sometimes with only marginal impact. Table 6.2 presents the
details of these predictors. With a given estimate of the predictors and the
mean scores of the predictors, it is possible to estimate the probability of
a student achieving a Good grade. The baseline probability in this model
was 25.44%. In other words, if a student achieves a mean score in all self-
regulated items, that student will have a 25.44% chance of achieving a
Good grade. This can be used as baseline probability to understand how
the change of predictors can affect the likelihood of a student achieving a
Good grade. For example, when other items are held constant (i.e., mean
score) and a student completes one more quiz, the student will increase
his/her probability of achieving a Good grade by 1.91 percentage points, to 27.35%.
These changes in probability deserve further discussion in terms of their
implications for adaptive learning.
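The probabilities in Table 6.2 follow directly from the logistic model: a predictor's estimate, scaled by the hypothesized change, is added to the baseline log-odds and converted back to a probability. The brief sketch below reproduces two of the table's values, treating one additional quiz as +0.1 on the completion predictor (ten quizzes assumed).

```r
baseline_prob  <- 0.2544
baseline_logit <- qlogis(baseline_prob)            # log-odds at the mean of all predictors

# One more quiz out of an assumed ten = +0.1 on the completion predictor
round(100 * plogis(baseline_logit + 0.9841 * 0.1), 2)   # 27.35, matching Table 6.2

# One standard deviation higher average final-attempt score
round(100 * plogis(baseline_logit + 1.3051), 2)          # 55.72, matching Table 6.2
```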
The ANN provides the least information on predictors. (See Figure 6.2 for a
visual representation of the ANN.) Due to the nature of
the algorithm, all predictors enter through the input layer. While there
are weights for each hidden layer, the impact of individual predictors on
TABLE 6.2 Useful Information for Adaptive Design: Strength of Predictors from Logistical Regression

Predictor | Mean | Estimate | Hypothesized Change | New Probability of Being a Good Student (%) | Change in Probability (%) | Implications
Start Day | 24.56 days | 0.0078 | Ten days later | 26.95 | 1.51 | Later better
nth Day Before Deadline for First Attempt | 19.76 days | 0.0291 | Ten days later | 20.32 | –5.12 | Earlier better
Duration | 47.53 days | 0.0079 | Ten days longer | 26.97 | 1.53 | Longer better
No. of Quizzes Submitted | 8.11 quizzes | 0.9841 | One more quiz (10 quizzes assumed) | 27.35 | 1.91 | More better
Average Score of First Attempt (in standard score) | 0.00 | 0.3954 | One standard deviation higher | 33.60 | 8.16 | Higher better
Average Score of Final Attempt (in standard score) | 0.00 | 1.3051 | One standard deviation higher | 55.72 | 30.28 | Higher better
Improvement between First and Final Attempt (in standard score) | 0.00 | –2.0061 | One standard deviation more improvement | 4.39 | –21.05 | No improvement better

Baseline probability: If a student achieves the mean score for all items, the probability of obtaining a Good grade will be 25.44%.
FIGURE 6.2 Visual Representation of Artificial Neural Network

the model remains unknown. The lack of information on the impact of
predictors has important implications for adaptive learning design, as will
be discussed later.

Discussion

Adopting data analytics for adaptive learning


In this section, we discuss the contribution of this research and its
implications in the light of the results of LA and adaptive design in the
context of a large-scale EAP course at a higher education institution in
Hong Kong (Greller & Drachsler, 2012). In particular, the results help us
understand how LA can be used to predict student outcomes and provide
stakeholders with valuable insight, enabling them to identify and monitor
students’ learning progress through an LMS (see Hinkelman, 2018). As
in past studies that built predictive models for students’ success, such
as Nistor and Hernández-García (2018) and Siemens and Long (2011),
this study successfully established models to predict students’ success via
the three algorithms of Classification Tree, Logistical Regression, and
ANN. One possible way to apply this finding is to establish an adap-
tive warning system. For example, based on the results from Logistical
Regression that late starting predicts risk of failure, if students start
quizzes too close to the deadline, the LMS can send them a message
to remind them, with a link for the quiz. The same can be done based
on the Classification Tree finding predicting reduced success if students
complete a quiz and then wait more than a week to retake it. Students
could be sent reminders after a week if they scored low on a quiz and had
not yet revisited it. Such adaptive intervention is similar to those intro-
duced in other disciplines (see Arnold & Pistilli, 2012; Daniel, 2015;
Picciano, 2012). Messages could also be sent to teachers, enabling them
to follow up with students regarding their online behaviors. Depending
on the way these models are implemented in the LMS, the recommenda-
tion/reminder message may serve as a good source of feedback to help
students and teachers in the teaching and learning process (Siemens &
Long, 2011). Also, while all these indicators are relevant in general to
certain aspects of self-regulated learning (which are beyond the scope of
this chapter on data analytics and adaptive learning), the incorporation
of these student-success models into the LMS will, to a certain degree,
promote self-regulated learning (Siemens & Long, 2011) and improve the
learning experience of students.
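As a concrete illustration of such an adaptive warning rule, the sketch below flags students whose quiz behavior matches the risk patterns just described (starting too close to the deadline, or a low score left unrevisited for a week). The thresholds and argument names are illustrative assumptions, not values prescribed by this study.

```r
# Hypothetical nudge rules built on the engagement features used in this study.
flag_for_nudge <- function(days_before_deadline, days_since_low_score, last_score,
                           deadline_threshold = 3, retake_threshold = 7,
                           low_score_cutoff = 0.5) {
  nudges <- character(0)
  if (!is.na(days_before_deadline) && days_before_deadline < deadline_threshold) {
    nudges <- c(nudges, "Reminder: start this week's MLP quiz (link enclosed).")
  }
  if (!is.na(last_score) && last_score < low_score_cutoff &&
      days_since_low_score >= retake_threshold) {
    nudges <- c(nudges, "Suggestion: retake the quiz you scored low on last week.")
  }
  nudges   # messages the LMS (or the teacher) could send to the student
}

flag_for_nudge(days_before_deadline = 1, days_since_low_score = 8, last_score = 0.4)
```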
As a study to model the outcomes of students’ online behaviors for the
purpose of better adaptive learning design, this study confirms existing
findings that these behaviors are important predictors of outcomes. While
past studies examine participatory variables such as forum participation
(Fidalgo-Blanco et al., 2015; Macfadyen & Dawson, 2012), this study
used participatory indicators from online quizzes, such as Start Day, nth
Day Before Deadline for First Attempt, Duration, Number of Quizzes
Submitted, Average Score of First Attempt and at Final Attempt, and
Improvement between Attempts. The results seem to suggest that par-
ticipatory variables for online activities, such as participation in forums
or online quizzes, can be good predictors of students’ success. Also,
this study indicates that changes in online activities (e.g., improvement
between scores) can be an important predictor of success, a conclusion
that aligns with Wolff and Zdrahal’s (2012) findings. These indicators can
help teachers focus on particular aspects of online learning when identify-
ing at-risk and/or outstanding students.

Algorithm comparison: modeling students’ success


The second research question aimed to find out whether one of the algo-
rithms performs better than the others. We found that all algorithms
performed fairly well with an accuracy rate of over 80% and all were com-
parable to success rates in past studies, such as Yagci (2022) and Waheed et
al. (2020). Despite comparable accuracy rates, there is more variation in
the F1 ratio across the three algorithms, and the higher the ratio the better
the balance the algorithm strikes between predicting positive cases correctly and
not flagging too many false positives. From this perspective, the Classification
Tree seems to perform better, and in fact, the Classification Tree achieved
the highest predictive accuracy in the current study as well. While most
past studies identified SVM (not tested here) to have better performance,
Classification Tree is also seldom identified as the best compared with
ANN or Logistical Regression, making the findings of the present study
unusual (e.g., Xu et al., 2019). However, in Xu et al. (2019) the accuracy
rates of the compared algorithms were lower, with 62.3% for Classification
Tree and 70.95% for ANN. This may also indicate that Classification Tree
outperforms ANN with a large sample size but not with a smaller one.
Another interesting finding was that no interpretation can be made with
ANN except on whether a student is predicted to perform well or not. Like
SVM, ANN works as a “black box” that provides limited information
about its calculations (Cruz-Jesus et al., 2020). In contrast, Classification
Tree and Logistical Regression provide useful information to practitioners
on how and in what ways a student may or may not perform well. This
advantage in interpretability can be important for decision-making in the
educational context (see Nieto et al., 2018 on how decisions can be made
in an academic context with these algorithms). Therefore, we argue that
Classification Tree and Logistical Regression seem to be more appropriate
for an educational context that aims for adaptive design.

Conclusion
This study examined how learners’ online interactions in an English for
Academic Purposes course delivered at a Hong Kong university can be
used to predict student success and which algorithm performs better in
this context. We successfully established three algorithms using Logistical
Regression, Artificial Neural Networks, and Classification Tree. While
all these algorithms performed well in terms of predictive accuracy,
Classification Tree had a superior F1 ratio. Both Logistical Regression
and Classification Tree produce useful insights for adaptive learning.
Therefore, we suggest the more extensive use of Classification Tree to pro-
mote adaptive learning.

References
Arnold, K., & Pistilli, M. D. (2012). Course signals: Using learning analytics to
increase student success. In Proceedings of the 2nd international conference
on learning analytics and knowledge (pp. 267–270). ACM.
Atherton, M., Shah, M., Vazquez, J., Griffiths, Z., Jackson, B., & Burgess, C.
(2017). Using learning analytics to assess students engagement and academic
outcomes in open access enabling programme. Open Learning: The Journal
of Open and Distance Learning, 32(2), 119–136. https://doi​.org​/10​.1080​
/02680513​. 2017​.1309646
Avella, J. T., Kebritchi, M., Nunn, S. G., & Kanai, T. (2016). Learning analytics
method, benefits, and challenges in higher education: A systematic review.
Online Learning, 20(2), 13–29.
Bienkowski, M., Feng, M., & Means, B. (2012). Enhancing teaching and
learning through educational data mining and learning analytics: An issue
brief. Washington, DC: U.S. Department of Education, Office of Educational
Technology. http://www​.ed​.gov​/technology.
Brown, M. (2020). Seeing students at scale: How faculty in large lecture courses
act upon learning analytics dash-board data. Teaching in Higher Education,
25(4), 384–400. https://doi​.org​/10​.1080​/13562517​. 2019​.1698540
Brown, M. G. (2016). Blended instructional practice: A review of the empirical
literature on instructors’ adoption and use of online tools in face-to-face
teaching. Internet and Higher Education, 31, 1–10. https://doi​.org​/10​.1016​/j​
.iheduc​. 2016​.05​.001
Burgos, C., Campanario, M. L., Peña, D., Lara, J. A., Lizcano, D., &
Martínez, M. A. (2018). Data mining for modeling students’ performance: A
tutoring action plan to prevent academic dropout. Computers & Electrical
Engineering, 66, 541–556. https://doi​.org ​/10​.1016​/j​.compeleceng​. 2017​.03​
.005
Choi, S. P. M., Lam, S. S., Cheong Li, K., & Wong, B. T. M. (2018). Learning
analytics at low cost: At-risk student prediction with clicker data and
systematic proactive interventions. Journal of Educational Technology and
Society, 21(2), 273–290.
Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education,
18(6), 683–695. https://doi​.org​/10​.1080​/13562517​. 2013​.827653
Cruz-Jesus, F., Castelli, M., Oliveira, T., Mendes, R., Nunes, C., Sa-Velho, M.,
& Rosa-Louro, A. (2020). Using artificial intelligence methods to assess
academic achievement in public high schools of a European Union country.
Heliyon, 6(6), e04081.
Daniel, B. (2015). Big data and analytics in higher education: Opportunities
and challenges. British Journal of Educational Technology, 46(5), 904–920.
https://doi​.org​/10​.1111​/ bjet​.12230
Du, X., Yang, J., Shelton, B. E., Hung, J.-L., & Zhang, M. (2021). A systematic
meta-review and analysis of learning analytics research. Behavior and
Information Technology, 40(1), 49–62. https://doi​.org​/10​.1080​/0144929X​
.2019​.1669712
Ferguson, R., & Shum, S. B. (2011, February 27). Learning analytics to identify
exploratory dialogue within synchronous text chat. In Association of
Computing Machinery: Proceedings of the 1st international conference on
learning analytics and knowledge (LAK’11) (pp. 99–103).
Fidalgo-Blanco, Á., Sein-Echaluce, M. L., García-Peñalvo, F. J., & Conde, M. Á.
(2015). Using learning analytics to improve teamwork assessment. Computers
in Human Behavior, 47, 149–156.
Gašević, D., Dawson, S., Rogers, T., & Gašević, D. (2016). Learning analytics
should not promote one size fits all: The effects of instructional conditions in
predicting academic success. Internet and Higher Education, 28, 68–84.
Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S.
(2014). Technologies for foreign language learning: A review of technology
types and their effectiveness. Computer Assisted Language Learning, 27(1),
70–105. https://doi​.org​/10​.1080​/09588221​. 2012​.700315
Greller, W., & Drachsler, H. (2012). Translating learning into numbers: A generic
framework for learning analytics. Educational Technology and Society, 15(3),
42–57.
Harker, M., & Koutsantoni, D. (2005). Can it be as effective? Distance versus
blended learning in a web-based EAP programme. ReCALL, 17(2), 197–216.
https://doi​.org​/10​.1017​/S095834400500042X
Hathaway, W. E. (1985). Hopes and possibilities for educational information
systems. Paper presented at the invitational conference Information Systems
and School improvement: Inventing the future. UCLA Center for the Study of
Evaluation.
Hinkelman, D. (2018). Blending technologies in second language classrooms.
Valencia, SP: Palgrave Macmillan.
Hsieh, T. C., Wang, T. I., Su, C. Y., & Lee, M. C. (2012). A fuzzy logic-based
personalized learning system for supporting adaptive English learning. Journal
of Educational Technology and Society, 15(1), 273–288.
Hyland, K. (2006). English for academic purposes: An advanced resource book.
Routledge.
Iglesias-Pradas, S., Ruiz-de-Azcárate, C., & Agudo-Peregrina, Á. F. (2015).
Assessing the suitability of student interactions from Moodle data logs as
predictors of cross-curricular competencies. Computers in Human Behavior,
47, 81–89.
Kerr, P. (2016). Adaptive learning. ELT Journal, 70(1), 88–93.
Lynch, R. & Dembo, M. (2004). The relationship between self-regulation and
online learning in blended learning context. International Review of Research
in Open and Distance Learning, 5(2), 1–16.
Larusson, J. A., & White, B. (2014). Learning analytics: From research to
practice. Springer.
Macfadyen, L. P., & Dawson, S. (2012). Numbers are not enough: Why
e-learning analytics failed to inform an institutional strategic plan. Journal of
Educational Technology and Society, 15(3), 149–163.
Milborrow, S. (2022). Package ‘rpart.plot’. https://cran.r-project.org/web/
packages/rpart.plot/rpart.plot.pdf
Nguyen, A., Gardner, L. A., & Sheridan, D. (2018). A framework for applying
learning analytics in serious games for people with intellectual disabilities.
British Journal of Educational Technology, 49(4), 673–689.
Nieto, Y., García-Díaz, V., Montenegro, C., & Crespo, R. G. (2018). Supporting
academic decision making at higher educational institutions using machine
learning-based algorithms. Soft Computing (Berlin, Germany), 23(12), 4145–
4153. https://doi​.org​/10​.1007​/s00500​- 018​-3064-6
Nistor, N., & Hernández-García, Á. (2018). What types of data are used in
learning analytics? An overview of six cases. Computers in Human Behavior,
89(1), 335–338. https://doi​.org​/10​.1016​/j​.chb​. 2018​.07​.038
Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational
data mining in practice: A systematic literature review of empirical evidence.
Journal of Educational Technology and Society, 17, 49–64.
Picciano, A. G. (2012). The evolution of big data and learning analytics in
American higher education. Journal of Asynchronous Learning Networks,
16(3), 9–20.
Rasheed, R. A., Kamsin, A., & Abdullah, N. A. (2020). Challenges in the
online component of blended learning: A systematic review. Computers and
Education, 144. https://doi​.org​/10​.1016​/j​.compedu​. 2019​.103701
Rienties, B., Lewis, T., McFarlane, R., Nguyen, Q., & Toetenel, L. (2018).
Analytics in online and offline language environments: The role of learning
design to understand student online engagement. Computer Assisted Language
Learning, 31(3), 273–293. https://doi​.org​/10​.1080​/09588221​. 2017​.1401548
Ripley, B., & Venables, W. (2022). Package ‘nnet’. https://cran​.r​-project​.org​/web​
/packages​/nnet ​/nnet​.pdf
Sahim, M., & Ifenthaler, D. (2021). Visualisations and dashboards for learning
analytics. Springer.
Scheffel, M., Drachsler, H., Stoyanov, S., & Specht, M. (2014). Quality indicators
for learning analytics. Educational Technology and Society, 17(4), 117–132.
Schwendimann, B. A., Rodriques-Triana, M. J., Vozniuk, A., Prieto, L. P.,
Boroujeni, M. S., Holzer, A., Gillet, D., & Dillenbourg, P. (2017). Perceiving
learning at a glance: A systematic literature review of learning dashboard
research. IEEE Transactions on Learning Technologies, 10(1), 30–41. https://
doi​.org​/10​.1109​/ TLT​. 2016​. 2599522
Sclater, N. (2016). Developing a code of practice for learning analytics. Journal of
Learning Analytics, 3(1), 16–42. https://doi​.org​/10​.18608​/jla​. 2016​. 31.3
Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and
education. EDUCAUSE Review, 46(5), 30–40.
Tempelaar, D., Rienties, B., Mittelmeier, J., & Nguyen, Q. (2018). Student
profiling in a dispositional learning analytics application using formative
assessment. Computers in Human Behavior, 78, 408–420. https://doi​.org​/10​
.1016​/j​.chb​. 2017​.08​.010
Terauchi, D. T., Rienties, B., Mittelmeier, J., & Nguyen, Q. (2018). Student
profiling in a dispositional learning analytics application using formative
assessment. Computers in Human Behavior, 78, 408–420. https://doi​.org​/10​
.1016​/j​.chb​. 2017​.08​.010
The R Core Team. (2022). R: A language and environment for statistical
computing. https://cran​.r​-project​.org ​/doc​/manuals​/r​-release​/fullrefman​.pdf
Therneau, T., Atkinson, B., & Ripley, B. (2022). Package ‘rpart’. https://cran​.r​
-project​.org​/web​/packages​/rpart​/rpart​.pdf
Vieira, C., Parsons, P., & Byrd, V. (2018). Visual learning analytics of educational
data: A systematic literature review and research agenda. Computers and
Education, 122(1), 119–135. https://doi​.org​/10​.1016​/j​.compedu​. 2018​.03​.018
Predicting student success with self-regulated behaviors 109

Waheed, H., Hassan, S., Aljohani, N. R., Hardman, J., Alelyani, S., & Nawaz, R.
(2020). Predicting academic performance of students from VLE big data using
deep learning models. Computers in Human Behavior, 104, Article 106189.
https://doi​.org​/10​.1016​/j​.chb​. 2019​.106189
Wise, A. F., Zhao, Y., & Hausknecht, S. N. (2013, April 8–12). Learning analytics
for online discussions: A pedagogical model for intervention with embedded
and extracted analytics. In Society for Analytics Research: In Proceedings of
the third international conference on learning analytics and knowledge (pp.
48–56).
Wolff, A., & Zdrahal, Z. (2012). Improving retention by identifying and
supporting ‘at-risk’ students. Educause Review Online. http://er​.educause​.edu​
/articles ​/ 2012 ​/ 7​/ improving​-retention​-by​-identifying​- and​- supporting​- atrisk​
-students
Xu, X., Wang, J., Peng, H., & Wu, R. (2019). Prediction of academic performance
associated with internet usage behaviors using machine learning algorithms.
Computers in Human Behavior, 98, 166–173. https://doi​.org​/10​.1016​/j​.chb​
.2019​.04​.015
Yağcı, M. (2022). Educational data mining: Prediction of students’ academic
performance using machine learning algorithms. Smart Learning
Environments, 9(1), 1–19. https://doi​.org ​/10​.1186​/s40561​- 022​- 00192-z
7
BACK TO BLOOM
Why theory matters in closing
the achievement gap

Alfred Essa

Introduction
Closing the achievement gap is the holy grail in education. Everything has
been tried, but nothing has succeeded at scale and with replicability. In
this chapter, I go back to basics. I go back to Bloom. Benjamin Bloom is
well known in educational research for his learning taxonomy, his state-
ment of the 2 Sigma problem, and his design of mastery learning. But his
most important contribution to educational research is neglected and all but
forgotten.
In this chapter, I present Bloom’s theory of learning and show how it
can be applied to solving the achievement gap. I explicate Bloom’s theory
at two levels. First, I examine its structure. I will argue that the achieve-
ment gap is insoluble without this structure. In Bloom’s words, by struc-
ture we mean a “causal system in which a few variables may be used
to predict, explain, and determine different levels and rates of learning”
(Bloom, 1976, p. 202). The key idea is that a group of variables must be
identified, and their interactions studied together in an ongoing set of
experiments. His theory of learning as structure is empirically and peda-
gogically neutral, allowing for multiple instantiations and approaches. At
the second level, I examine Bloom’s specific theory of mastery learning as
an exemplary instance of the theoretical structure. His theory of learning
as mastery learning generates specific hypotheses subject to empirical con-
firmation or refutation. In Bloom’s words, mastery learning “is a special
case of a more general theory of school learning” (Bloom, 1976, p. 6).
I explicate Bloom’s theory with a formal model, enacted as a computer
simulation. The model, which I call learning kinematics, serves both a
heuristic and probative purpose. As a heuristic tool, the formal model
will sharpen our understanding of why good students fail. As a probative
tool, the simulation will allow us to probe how changes in one variable
affect other variables and how different runs of the simulation affect the
distribution of final outcomes.
Bloom had very little to say about technology. But recent advances
in educational technology, particularly adaptive learning systems, have
paved the way for scalable implementations of Bloom’s theoretical ideas.
I discuss specific features of adaptive learning systems and map them to
Bloom’s learning theory. My central thesis is that closing the achievement
gap at scale will require combining human intelligence with artificial
intelligence, particularly in the form of adaptive learning systems. The
bridge is Bloom’s theory. I conclude by discussing a critical limitation in
Bloom’s approach and suggest a way to overcome it.

Why do good students fail?


Let’s begin with a basic question for understanding the achievement gap:
Why do good students fail? The question will set the stage for understand-
ing Bloom’s theory.
In Galileo in Pittsburgh (2010), Clark Glymour, a renowned statisti-
cian and philosopher of science, tells the story of his first year teaching
at Princeton University. The year was 1969, which was also the first
year that the university admitted Black students. Glymour was assigned
to teach Introductory Mathematical Logic to a class of about seventy
students. He chose a well-regarded textbook in the field, crafted a set
of lectures, and administered homework assignments, regular quizzes,
a mid-term examination, and a final examination. The students also
received weekly tutorials led by graduate students. Glymour did what
every good instructor does.
Or, so he believed. “One morning at the end of the semester my grad-
ers appeared in my office to tell me I had failed every black student in the
course, all seven of them” (Glymour, 2010, p. 28). Why had all Black stu-
dents failed? What had happened? Were all Black students in the course
“bad” students? Glymour thought otherwise.

These had to be pretty smart kids, or they would not have been admit-
ted; they had to be ambitious kids, or they would not have wanted
to attend an Ivy; they had to be brave kids, or they would not have
wanted to attend a snobbish, traditionally racist college, which was
what Princeton was trying to cease to be in 1969.
(Glymour, 2010, p. 28)

Glymour concluded that the problem was most likely not the students,
but “me or my course or both.” He then conducted a thought experiment.
Suppose that the students who had failed, including the White student,
were less well prepared than other students in the class. Would that have
mattered? Also, suppose that they were not as good at taking timed tests
under pressure: not because of ability, but because they had not practiced
under those conditions. Would that have made a difference in their per-
formance? In short, did the course design give underprepared students an
opportunity to catch up?
Based on his thought experiment, Glymour conjectured that his course
design did not meet the needs of all his students. Glymour hypothesized
that grit (“sweat equity”) alone cannot make up for lack of preparation.
As a good scientist, Glymour designed his next course offering as an
experiment.

I found a textbook that divided the material into a large number of very
short, very specific chapters, each supplemented with a lot of fairly easy
but not too boring exercises. For each chapter I wrote approximately
fifteen short exams, any of which a prepared student could do in about
fifteen minutes. I set up a room staffed for several hours every weekday
by a graduate student or myself where students could come and take a
test on a chapter. A test was immediately graded; if failed, the student
would take another test on the same material after a two-day wait.
No student took the very same test twice. I replaced formal lectures
with question-and-answer sessions and extemporaneous mini-lectures
on topics students said they found difficult. I met with each student for
a few minutes every other week, chiefly to make sure they were keeping
at it. Grading was entirely by how many chapter tests were passed; no
penalty for failed tests.
(Glymour, 2010, pp. 28–29)

What was the result of Glymour’s new course design? The following
semester approximately the same number of Black students enrolled. This
time, however, half of them earned an A and the others all received a B
grade. Glymour believed he was on to something.

What I had discovered was that a lot of capable students need informa-
tion in small, well-organized doses, that they often do not know what
they do not know until they take a test, that they need a way to recover
from their errors, that they need immediate feedback, that they learn at
different rates, that they need a chance to ask about what they are read-
ing and trying to do, and that, if given the chance, motivated students
can and will successfully substitute sweat for background. This turns
out to be about all that education experts really know about college
teaching, and of course it is knowledge that is systematically ignored.
(Glymour, 2010, p. 29)

Unfortunately, Glymour also learned something else. He had to abandon


his new approach: “I also discovered that it is not physically possible to
teach that way: I was ready for the hospital by the end of term” (Glymour,
2010, p. 29).

Learning kinematics
We began with the question: Why do good students fail? Although we
should be wary of drawing conclusions from a single example, Glymour’s
experience at Princeton suggests that there are major cracks in the tradi-
tional model of education. One such crack is the assumption that under-
prepared students can make up for lost ground by doing more: more effort,
more grit, and more mindset. It’s not that these concepts are not impor-
tant for learning. They are. Glymour’s experiment suggests though that
placing the burden of success exclusively on the shoulders of the student is
misdirected. It is also dangerous. Why do good students fail? Glymour’s
experiment suggests that the primary culprit is flawed course design.
In this section, I present a formal model of learning, calling it “learn-
ing kinematics.” It is enacted in the form of a computer simulation. The
model seeks to confirm Glymour’s hypothesis formally. The advantage of
using a simulation is that it allows us to study the learning environment as
a causal system in which several key variables interact to bring about an
outcome. The model also lays the conceptual groundwork for explicating
Bloom’s theory.
The word “kinematics” comes from physics, where it means the study
of motion. In physics, motion is defined principally in terms of posi-
tion, velocity, acceleration, and time. The starting point for applying
kinematics to learning is the analogy of a foot race. Suppose we invite
a group of individuals to run a marathon. Suppose also that the run-
ners are drawn randomly from the population. We can safely assume
that the runners will have different levels of aptitude and motivation.
By aptitude, I mean a set of permanent attributes favorable to running
success. For example, elite runners have some natural advantages (e.g.,
greater VO2 max) over average runners. By motivation, I will mean a
set of malleable psychological attributes favorable to running success.
Distance running requires, for example, a high level of motivation and
persistence.

But unlike a typical marathon race, suppose we enforce two atypical
constraints. First, each runner starts at a different initial position in the
26.2-mile course. The starting position is meant to indicate prior prepara-
tion and training. Second, all runners are given the same amount of time
to finish. Success in the race is defined as finishing, or reaching the finish
line as closely as possible, within the allotted time.
Such a race is obviously absurd. In a typical foot race, everyone begins
at the same starting position and takes as much time (within limits) as
they need to finish. But our atypical constraints illustrate how a typical
classroom or learning “race” is set up. In a typical classroom, each student
comes in with a different initial position (prior knowledge). Yet we expect
everyone will reach the same final position (knowledge mastery) in the
same amount of time.
Of course, we don’t need a formal model to predict the outcome. Given
a starting inequality of aptitude, motivation, and prior preparation, we
should not be surprised by a finishing inequality of outcomes. What use is
a formal model then? We can use a formal model to test various assump-
tions and generate hypotheses for closing the achievement gap.
In the formal model, we simulate a group of learners instead of run-
ners. We start with (n) number of learners (e.g., n = 10,000). The goal of
the learning race is to reach the destination (final position = 100) during
some fixed time interval (e.g., t = 50).

• Initial position. Each learner is randomly assigned an initial starting
position from a normal distribution (mean = 40, standard deviation =
10). The initial starting position is meant to indicate the learner’s prior
knowledge of the material.
• Aptitude. Each learner is also randomly assigned a different aptitude
score from a normal distribution (mean = 0.5, standard deviation =
0.15). Aptitude is meant to capture a set of permanent attributes which
contribute to learning success.
• Motivation. Each learner is randomly assigned a motivation score
from a normal distribution (mean = 0.5, standard deviation = 0.15).
Motivation is meant to capture a malleable attribute which contributes
to learning success.
• Learning rate. We then calculate learning rate as an average of aptitude
and motivation. The greater the aptitude and motivation, the greater the
learning velocity (a minimal code sketch of this setup follows the list).
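To make the setup concrete, here is a minimal sketch of the learner population in Python. The distributions follow the description above; the kinematic update (position advances by learning rate times a scaling constant k per unit of time) and the value of k are my own assumptions, since the chapter does not specify the exact step, so the outputs will only approximate the figures reported below.

```python
import numpy as np

rng = np.random.default_rng(42)

n, T, TARGET = 10_000, 50, 100

# Learner entry characteristics, as described in the list above.
initial_position = rng.normal(40, 10, n)   # prior knowledge
aptitude = rng.normal(0.5, 0.15, n)        # permanent attributes
motivation = rng.normal(0.5, 0.15, n)      # malleable attributes

# Learning rate as the average of aptitude and motivation.
learning_rate = (aptitude + motivation) / 2

# Model 1 (traditional instruction): every learner gets the same time T and
# advances at a constant velocity proportional to learning rate. The scaling
# constant k is a hypothetical choice, not specified in the chapter.
k = 1.2
final_position = initial_position + learning_rate * k * T

r = np.corrcoef(initial_position, final_position)[0, 1]
print(f"Correlation between initial and final position: r = {r:.2f}")
```

Run under these assumptions, the sketch produces a high initial-to-final correlation in the same spirit as the Model 1 result reported below, though the exact value depends on the arbitrary constant k.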

Figure 7.1 illustrates the initial position of the learners as a distribution.
The initial position represents prior knowledge of the course material.

FIGURE 7.1 Initial Position of Learners Represented as a Distribution

Model 1: Traditional instruction


When we run the simulation with the assumptions of traditional instruc-
tion, the results are not surprising: those who start ahead, finish ahead.
Those who start in the middle, finish in the middle. Those who start
behind, finish behind. This is the case despite different distributions for
aptitude and motivation. As Glymour found with his traditional teaching
approach, for most learners “sweat equity” by itself is simply not enough
to overcome lack of preparation.
We can see this formally in the high correlation (r = 0.88) between
initial position and final position. Figure 7.2 displays the final position of
the learners as a distribution.
As an analog to the concept of social mobility, we can also use “learn-
ing mobility” to see how those who started behind fare in the learning
race. Table 7.1 shows that 91% of the students who started in the low
initial position group finished in the low final group; only 8% were able to
climb to the next rung of the middle final position group. Similarly, of the
students who started in the middle initial position group only 17% were
able to climb to the high final position group.
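A mobility table of this kind can be computed directly from the simulated positions. The sketch below reuses the initial_position and final_position arrays from the Model 1 sketch earlier and splits both into terciles; the tercile cut points are my assumption, since the chapter does not state how its low/middle/high groups were formed, so the percentages will differ somewhat from Table 7.1.

```python
import pandas as pd

# Reuses initial_position and final_position from the Model 1 sketch above.
initial_group = pd.qcut(initial_position, 3, labels=["Low", "Middle", "High"])
final_group = pd.qcut(final_position, 3, labels=["Low", "Middle", "High"])

# Column-normalised cross-tabulation: for each initial group, the percentage
# of learners who end up in each final group.
mobility = pd.crosstab(final_group, initial_group, normalize="columns") * 100
print(mobility.round(0))
```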
What can we conclude from our first model (Model 1), which mimics
the traditional model of instruction? As Bloom observes:

It becomes evident that students in schools are to a large degree being
judged more on the previous learning or achievement they bring to the
course or subject than on their learning within the academic term or
course for which the teacher’s grade or achievement measure is being
used. The student is being judged on his previous history of related
learning to a large degree than on his learning within the course.
(Bloom, 1976, p. 43)

FIGURE 7.2 Final Position of Learners Represented as a Distribution

TABLE 7.1 Traditional Instruction Tied to Low Learning Mobility Rates

                          Low Initial Position (%)   Middle Initial Position (%)
Low Final Position                  91                          51
Middle Final Position                8                          32
High Final Position                  1                          17

Learning velocity
Let’s look more deeply at the connection between initial position and final
position. At a simple level (when we assume velocity is constant), the gov-
erning equation for particle motion is:

d = v × t

The variable d refers to distance, v refers to velocity, and t refers to time.
Distance is directly proportional to velocity and time. In order to cover a
greater distance, a runner must either run faster or have more time. The
analogy to learning is obvious. If all students are given the same amount
of time and some students need to cover a larger distance in that time,
the only means available to them is to increase their learning velocity. In
an earlier paper (Essa, 2020), I presented a more elaborate formal model
which also investigates the significance of time in learning success.
Figure 7.3 shows three different learners with three different initial
positions (prior knowledge). In order to reach the same target in the same
amount of time, v_2 needs to be greater than v_3, and v_1 needs to be greater
than v_2. Learning velocity corresponds to the slope of the line.

FIGURE 7.3 Learning Velocities to Reach Achievement Target Based on Initial Position

In reality, the situation is even more problematic for those who enter
the learning race with low prior knowledge. It’s not just a matter of rela-
tive starting distances. Each course implicitly assumes a reference velocity
(v_r) based on a reference learner. What is a reference learner? A reference
learner is someone who has the prerequisite knowledge for the course. As
shown in Figure 7.4, a learner who doesn’t have the prerequisite knowledge
for the course must maintain a target velocity (v_t) greater than the
reference velocity (v_t > v_r). This means that someone without the
prerequisite knowledge for the course must cover the material at a faster
learning rate than the reference learner. And the larger the gap in
prerequisite knowledge, the faster the required learning rate.
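Stated with the kinematic relation above, the target velocity an entering learner must sustain and the reference velocity assumed by the course can be written as follows; the symbols p_0 and p_ref for the learner's and the reference learner's entry positions are my own notation, not the chapter's.

```latex
v_t = \frac{100 - p_0}{T}, \qquad v_r = \frac{100 - p_{\mathrm{ref}}}{T}
```

For example, with the simulation's time budget T = 50 and an illustrative reference entry position p_ref = 60, the reference velocity is v_r = 0.8 position units per time step, while a learner entering at p_0 = 40 must sustain v_t = 1.2, a rate 50% higher, for the entire course.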
The situation takes on the characteristics of Zeno’s paradox. The reference
velocity assumes that the learner is ready to master the learning objectives
of the first unit. But a learner without the prerequisite knowledge will have
an actual velocity (v_a) in the first unit not just lower than their needed
target velocity, but lower than the reference velocity (v_r). As time elapses,
the knowledge gap increases and actual learning velocity decreases even
further (see Figure 7.5).

FIGURE 7.4 Reference Velocity vs Target Velocity

FIGURE 7.5 Actual Velocity Decreases from Unit to Unit Based on Compounding Error
A way to visualize the phenomenon is to think of a rocket launched
into space. To reach orbit the rocket needs to reach an escape velocity
to overcome the gravitational force. A learner is in a similar situation.
Lack of prerequisite knowledge drags the learner down at the beginning.
And as the course progresses, insufficient mastery in each unit pulls the
learner further and further down toward failure. The error keeps
compounding, making it inevitable that the student will fail.
As Bloom observes,

One of the most important elements in accounting for individual differ-
ences in school learning is the centrality of instruction for groups of learn-
ers. Instruction provided to a group of twenty to seventy learners is likely
to be very effective for some learners and relatively ineffective for other
learners. This aspect of the process of schooling is likely to be replete with
errors which are compounded over time (emphasis mine). Unless there are
ways of identifying and correcting the flaws in both the teaching and the
learning, the system of schooling is likely to produce individual differences
in learning which continue and are exaggerated over time.
(Bloom, 1976, p. 9).

Traditional learning design, therefore, poses an impossible and insoluble
conundrum for underprepared students. An underprepared student must
not only learn at a faster rate overall than the “reference learner,” but their
initial handicap, even if it’s very small, is likely to compound over time.

Model 2: Optimized instruction


Armed with a better insight about learning rate and learning velocity, let’s
rerun the simulation by varying one of our main assumptions. In Model 2,
the learner characteristics (aptitude and motivation) remain the same as in
our first model. We also allocate the same amount of time for each student
as before. However, we increase the learning velocity for underprepared
students by improving the quality of instruction.
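One way to operationalize this change, continuing the Model 1 sketch from earlier, is to give each learner a velocity boost proportional to the ground they need to make up, as a feedback-corrective mechanism would. The functional form and the coefficient below are my own illustrative assumptions, not the chapter's specification, so the resulting correlation will approximate rather than reproduce the figures that follow.

```python
import numpy as np  # reuses initial_position, learning_rate, k, T, TARGET from the earlier sketch

# "Improved quality of instruction" is modelled here as a velocity boost
# proportional to each learner's knowledge gap; 0.35 is an illustrative coefficient.
gap = (TARGET - initial_position) / TARGET
boosted_rate = learning_rate + 0.35 * gap

final_position_2 = initial_position + boosted_rate * k * T

r2 = np.corrcoef(initial_position, final_position_2)[0, 1]
print(f"Model 2 correlation between initial and final position: r = {r2:.2f}")
```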
The results of the second simulation are very different from the first.
First, the correlation (r) between initial and final position is now reduced
from 0.88 to 0.66. This means, in part, that individual motivation and
aptitude can play a greater role in learning success. Second, the final dis-
tribution of achievement is considerably different when compared with
the first model. We can see visually in Figure 7.6 that a significant number
of learners are now able to catch up.

FIGURE 7.6 Final Scores Distribution. Model 1 vs Model 2.



TABLE 7.2 Optimized Instruction Improves Learning Mobility Rates

                          Low Initial Position (%)   Middle Initial Position (%)
Low Final Position                  64                          27
Middle Final Position               35                          36
High Final Position                  1                          37

Table 7.2 also shows a drastic improvement in learning mobility. Of
those who started in the low initial position group, 35% were able to
climb to the middle final group (compared to only 8% with traditional
instruction). Of those who started in the middle initial group, 37% were
able to climb to the high final group (compared to only 17% with tradi-
tional instruction).
It will be said, of course, that we are begging the question of how we
increased the learning rate and learning mobility of the underprepared
students. The simulation does not specify how we increase the quality of
instruction to begin to close the achievement gap. For that, we must turn
to Bloom’s theory of learning.

Bloom’s theory of learning


It’s not an exaggeration to say that Bloom is a godfather of equity in learn-
ing. Throughout his entire career Bloom put forward the hypothesis that
all students, with few exceptions, are capable of high achievement.
In “The 2 Sigma Problem,” for example, Bloom argued that the paramount goal
of educational research is to design learning environments in which all
students, not just a few, can achieve a high degree of learning.

Can researchers and teachers devise teaching-learning conditions that
will enable the majority of students under group instruction to attain
levels of achievement that can at present be reached only under good
tutoring conditions?
(Bloom, 1984, pp. 4–5).

Most of Bloom’s work in educational research sought to demonstrate that
most students can achieve a high degree of learning.

At this point, we need to pause to consider an important question about
measurement. How do we know that the achievement gap has been closed?
Bloom answers the question by putting forward an operational criterion
which I call the “Bloom Effect.” According to Bloom, an effective learn-
ing environment should not only raise the mean of final test scores, but
also decrease the standard deviation. We can define an optimal learning
environment operationally as:
Definition (Optimal Learning Environment): At least 80% of the stu-
dents can reach the criterion level of attainment reached by only 20% of
students in a traditional learning environment.
In statistical terms, the effect size of an optimal learning environment
is one standard deviation (sigma) or greater compared to the baseline tra-
ditional environment. If we compare the final distributions of a standard
learning environment (control) versus an optimized learning environment
(treatment), we see the signature Bloom Effect in Figure 7.7.
An important consequence of the Bloom Effect is that in an optimized
environment students at the low end of the distribution begin to catch up
with those at the high end. Chi and VanLehn refer to this phenomenon
as aptitude-treatment interaction: “both groups should learn, and yet the
learning gains of the low students should be so much greater than those of
the high ones that their performance in the post-test ties with that of the high
ones” (Chi & VanLehn, 2008, p. 604). We should also note that “seeing
the Bloom Effect” is not a one-time measurement. It requires setting up a
baseline and conducting an ongoing set of experiments.
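The criterion can be checked mechanically. The sketch below uses two invented final-score distributions purely for illustration (the means and standard deviations are not from the chapter); it computes the attainment level reached by the top 20% under traditional instruction, the share of optimized-environment learners at or above that level, and the effect size in baseline standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final-score distributions, for illustration only.
control = rng.normal(70, 10, 10_000)    # traditional instruction (baseline)
treatment = rng.normal(85, 7, 10_000)   # optimized instruction

# Criterion level: the score attained by the top 20% under traditional instruction.
criterion = np.percentile(control, 80)

share_at_criterion = (treatment >= criterion).mean()
effect_size = (treatment.mean() - control.mean()) / control.std()

print(f"Share of optimized-environment learners at/above criterion: {share_at_criterion:.0%}")
print(f"Effect size relative to baseline sigma: {effect_size:.2f}")
```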
FIGURE 7.7 Bloom Effect: Final Distributions of Traditional vs Optimized Instruction

Let’s now turn to Bloom’s theoretical structure. Bloom’s main thesis
is that “individual differences in learning is an observable phenomenon
which can be predicted, explained, and altered in a great variety of ways”
(Bloom, 1976, p. 8). The goal, therefore, of any theory of learning is to
account for differences in achievement. In the language of statistics, our
target is to explain the distribution of, or variance in, achievement.
Bloom elaborates his theory with three secondary theses. First, any
theory must explain learning achievement in terms of a small number of
variables. Second, the variables are of two types. The first type describes
characteristics of the learner and their learning history. The second type
describes characteristics of the learning environment. The focal variable(s)
of the learning environment is learning quality. Learning, therefore, takes
place in a causal system in which achievement results from the interaction
of a small number of variables. Third, the typical learning environment is
an “error-full” system. It is replete with errors where even small errors can
compound over time. An important task of learning design, therefore, is
to create an “error-free” system in which teaching and learning errors
are detected and corrected continuously.
The causal system can be portrayed diagrammatically in three parts as
shown in Figure 7.8. The first part is what the learner brings into the learn-
ing context. Bloom calls these learner entry characteristics. It includes
the learner’s cognitive characteristics such as prior knowledge, but also
noncognitive factors such as interest, motivation, and attitudes toward the
learning task. It also encompasses what today we would call self-regulation.
The second part is the learning environment itself. The most important
characteristic of the learning environment is the quality of instruction.
Bloom defines Quality of Instruction as the

degree to which the presentation, explanation, and ordering of elements


of the task to be learned approach the optimum for a given learner.
Implicit in this definition is the assumption that each student can learn
if the instruction approaches the optimum for him or her.
(Bloom, 1976, p. 11)

The third and final part is learner outcomes or achievement. Parallel to
learner entry characteristics, learner outcomes also emphasize both cog-
nitive and noncognitive growth.

FIGURE 7.8 Bloom’s Theory has Tripartite Structure



According to Bloom’s theoretical structure, “variations in learning
and the level of learning of students are determined by the student’s
learning history and the quality of instruction they receive” (Bloom,
1984, p. 16).

Bloom’s theory of mastery learning


Let us turn now to Bloom’s application of his theory of learning as
“mastery learning.” Bloom’s formulation of mastery learning emerged
in a series of papers in the 1960s and 1970s. Its principal concepts are
drawn from the works of Washburne (1922), Morrison (1926), and
Carroll (1963).
What then are the elements of mastery learning and how does it work?
Let’s begin by reviewing the conventional model of instruction:

• the teacher selects a textbook; each instructional unit corresponds
roughly to a chapter in the textbook
• for each chapter, the teacher delivers generic instruction in the form of
one or more lectures
• students are graded (summative assessment) through assignments,
homework, quizzes, and exams

By comparison, the mastery learning model incorporates generic and dif-
ferentiated instruction, along with formative assessments, throughout the
learning path. In its main aspects, mastery learning corresponds to what
we saw earlier in Glymour’s modified learning design (a minimal sketch of
the cycle appears after the list below).

• the teacher organizes a knowledge domain into a set of instructional
units; as in traditional learning, each unit corresponds roughly to a
chapter in a textbook
• each unit of instruction is further decomposed into a set of learning
objectives
• corresponding to each learning objective, the teacher prepares a set of
formative assessments
• the teacher delivers initial instruction (generic) covering the topic or
learning unit.
• following the initial generic instruction, the instructor administers a
brief formative assessment
• based on the results of the formative assessment, the teacher designs
and delivers differentiated instruction; the aim of the differentiated
instruction is to target and correct sources of error picked up by the
formative assessment

• once the differentiated instruction is complete, the teacher administers
another formative assessment to verify the learning gains
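The list above can be read as a loop over learners and units. Here is a minimal, runnable sketch of that cycle; the mastery threshold, the simple knowledge-update rules, and the noisy assessment are all illustrative assumptions rather than Bloom's own specification.

```python
import random

random.seed(1)
MASTERY_THRESHOLD = 0.8

def formative_assessment(knowledge):
    """A noisy probe of the learner's current knowledge (diagnostic, not graded)."""
    return min(1.0, max(0.0, knowledge + random.uniform(-0.1, 0.1)))

def corrective_instruction(knowledge):
    """Feedback and correction close part of the remaining gap to mastery."""
    return knowledge + 0.5 * (1.0 - knowledge)

def teach_unit(entry_knowledge):
    # Stage 1: generic, undifferentiated instruction for the whole group.
    knowledge = entry_knowledge + 0.3 * (1.0 - entry_knowledge)
    score = formative_assessment(knowledge)
    attempts = 1
    # Stage 2: differentiated instruction until the knowledge check is passed.
    while score < MASTERY_THRESHOLD:
        knowledge = corrective_instruction(knowledge)
        score = formative_assessment(knowledge)   # verify the learning gains
        attempts += 1
    return knowledge, attempts

print(teach_unit(entry_knowledge=0.4))   # an underprepared learner needs more cycles
```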

In a mastery learning setting, instruction occurs in two stages. The ini-
tial instruction is generic and undifferentiated. It is followed by a forma-
tive assessment, which serves as a diagnostic and knowledge check. True
instruction begins during the differentiated instruction phase. It consists
of individualized feedback, correction, and enrichment. The initial forma-
tive assessment helps to identify what students have learned up to that
point and what they need to learn better. Contrast that with traditional
learning, in which the only instruction is generic and undifferentiated.
The core of differentiated instruction is feedback, correction, and enrich-
ment. Initial feedback lets students know what they have learned and what
they have yet to master. To be effective, the formative assessment must not
only diagnose points of weakness but match them with a set of correctives.

Adaptive learning
In this section, we turn to adaptive learning systems. We saw earlier that
the key idea of mastery learning is differentiated instruction. Different
learners need different instructional resources and strategies during their
learning journey. Adaptive learning systems are intelligent systems that
can scaffold the learning process by supporting differentiated real-time
feedback, correction, and enrichment for each learner. They can be used
for self-paced learning or in mastery learning mode as a supplement to
instructor-led classrooms.
In his monograph A Decision Structure for Teaching Machines, Richard
Smallwood (1962) gives one of the earliest and clearest statements of the
workings of “teaching machines” based on adaptive principles.

• each student proceeds at his own individual pace; this self-pacing can
apply across the entire learning path or within a single unit of instruction
• a student is presented with instructional resources adequate to their
level of learning
• by answering questions at the end of each block, a student provides
evidence of mastery before going to the next block
• the student finds out immediately whether he has answered a question
correctly and so is able to correct any false impressions at once
• a complete record of the student’s performance is available, as the basis
of feedback to the student but also for making improvements on the
program itself

The first four properties embody core principles of mastery learning. The
fifth property anticipates embedded learning analytics, or the continuous
collection of data to update and improve the quality of learning models.
According to Smallwood, the fundamental “desirable” property of a
teaching machine, however, is that it be able to vary its presentation of
learning material based on the individual characteristics and capacities
of each learner. Smallwood observes that “this adaptability requires that
the device be capable of branching—in fact, one would expect the poten-
tial adaptability of a teaching machine be proportional to its branching
capability” (Smallwood, 1962, p. 11). As Smallwood notes, the idea of
branching is fundamental to teaching machines as adaptive systems. Until
recently, branching has been achieved through a limited set of ad hoc,
finite rules programmed by humans. The application of machine learning
to teaching machines eliminates this restriction entirely.
Adaptive learning systems have evolved rapidly since Smallwood’s for-
mulation. At a high level of generality, all adaptive learning systems can
be said to rely on five interacting formal models (Essa, 2016). The domain
model specifies what the learner needs to know. What the learner needs
to know is further decomposed into a set of concepts, items, or learn-
ing objectives. The domain model is also referred to as the knowledge
space. Once we have specified explicitly what the learner needs to know
or master, the learner model represents what the learner currently knows.
The learner model is also called the knowledge state, which is a subset of
the knowledge space. The assessment model is how we infer a learner’s
knowledge state. Because the learner’s knowledge state is inferred through
assessment probes, adaptive learning systems are inherently probabilis-
tic. The transition model determines what the learner is ready to learn
next, given the learner’s current knowledge state. Finally, the pedagogi-
cal model specifies the activities to be performed by the learner to attain
the next knowledge state. The pedagogical model can encompass a wide
range of activities, from watching a video to engaging in a collaborative
exercise with peer learners.
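These five models can be pictured as interacting components. The sketch below is schematic only: the class names, the probability update, and the readiness rule are my own illustrative choices under the description above (Essa, 2016), not the interface of any particular adaptive learning system.

```python
from dataclasses import dataclass, field

@dataclass
class DomainModel:
    """Knowledge space: what the learner needs to know, expressed as objectives."""
    objectives: list
    prerequisites: dict          # objective -> list of prerequisite objectives

@dataclass
class LearnerModel:
    """Knowledge state: a probabilistic picture of what the learner knows now."""
    mastery: dict = field(default_factory=dict)   # objective -> estimated P(mastered)

class AssessmentModel:
    """Infers the knowledge state from assessment probes."""
    def update(self, learner, objective, correct):
        prior = learner.mastery.get(objective, 0.5)
        # Crude illustrative update; a real system might use Bayesian knowledge tracing.
        learner.mastery[objective] = min(1.0, max(0.0, prior + (0.2 if correct else -0.2)))

class TransitionModel:
    """Determines what the learner is ready to learn next."""
    def ready_to_learn(self, domain, learner, threshold=0.8):
        return [
            obj for obj in domain.objectives
            if learner.mastery.get(obj, 0.0) < threshold
            and all(learner.mastery.get(p, 0.0) >= threshold
                    for p in domain.prerequisites.get(obj, []))
        ]

class PedagogicalModel:
    """Selects the activity intended to move the learner to the next knowledge state."""
    def next_activity(self, objective):
        return f"worked example and practice set for '{objective}'"
```

In use, the transition model would consult the learner model after each assessment-model update and hand the chosen objective to the pedagogical model, which selects the next activity.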
Because a sophisticated adaptive learning system depends upon an
accurate assessment of each learner’s knowledge, such systems can provide
a fine-grained picture of each learner’s knowledge state. Next-generation
adaptive systems can also pinpoint precisely which topics a learner is ready
to learn next, and which topics a learner is likely to struggle with, given
the learner’s history and profile. Even if an instructor chooses not to use
adaptive systems for instruction, their built-in formative assessment capa-
bility can serve as a powerful instrument for diagnostics and remediation.
In summary, well-designed adaptive learning systems allow an instruc-
tor to carry out differentiated instruction and formative assessments
efficiently, thus addressing one of the central challenges of implementing
mastery learning at scale.

Conclusion
Bloom’s singular ambition, throughout his long and distinguished career,
was to find methods for closing the achievement gap. As I have tried to
show, his approach was to outline a theoretical structure which tries to
explain variation in outcomes in terms of the learners’ previous charac-
teristics and the quality of instruction. With his theory of mastery learn-
ing, Bloom went one step further by demonstrating a learning design that
incorporates formative assessments and feedback-corrective procedures at
its core. But as Glymour realized, implementing such a system is not scal-
able without some form of machine intelligence.
While adaptive learning systems have considerable promise, two major
obstacles remain in realizing Bloom’s vision: the first is technical and the
second is institutional. Bloom envisioned a “causal system in which a few
variables may be used to predict, explain, and determine different lev-
els and rates of learning” (Bloom, 1976). But traditional statistical tech-
niques, as well as new methodologies in machine learning, are entirely
restricted to the realm of correlation. True interventions require an under-
standing of causal mechanisms. Unfortunately, the study of causality has
been hampered by its reliance on Randomized Control Trials (RCTs), which are often
impractical, expensive, laborious, and ethically problematic in education.
Fortunately, recent advances in causal modeling have opened approaches
to causality using observational data.
The second barrier to implementing Bloom’s theoretical ideas is insti-
tutional. Technical advances in statistics, machine learning, and causal
modeling has outstripped the ability of nontechnical educators to apply
the new techniques to advance educational goals.

We believe that much of the issue springs from a lack of genuine col-
laboration between analysts (who know and understand the methods
to apply to the data) and educators (who understand the domain, and
so should have influence over what critical questions are asked by
Learning Analytics researchers).
(Hicks et al., 2022, p. 22)

If Bloom’s ideas are to be tested and put into practice, educators and prac-
titioners will need the ability to “think with causal models.” Fortunately,
the learning research team at the University of Technology Sydney have
developed a visual formalism and accompanying set of techniques to think
with causal models. In an important recent paper, the UTS team makes a
convincing case that

a graphical causal structure depicting the causal relationships between
variables shows promise as a form of diagrammatic reasoning, ena-
bling a more diverse set of stakeholders to participate in a genuine and
collaborative definition of LA model, a highly abstract process that is
typically left to analysts to code in opaque notation that excludes oth-
ers (even if they have been consulted).
(Hicks et al., 2022)

It’s been nearly forty years since Bloom published his landmark paper
“The 2 Sigma Problem: The Search for Methods of Group Instruction as
Effective as One-to-One Tutoring.” The paper challenged researchers to
devise methods of instruction which would allow most students to attain
a high degree of learning. Most of the elements are now in place to finally
realize Bloom’s vision of equity in education.

References
Bloom, B. S. (1976). Human characteristics and school learning. McGraw-Hill.
Bloom, B. S. (1984). The 2 Sigma problem: The search for methods of group instruction
as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64(8),
723–733.
Chi, M., & VanLehn, K. (2008). Eliminating the gap between the high and
low students through meta-cognitive strategy instruction. In International
conference on intelligent tutoring systems (pp. 603–613). Springer. https://link.springer.com/chapter/10.1007/978-3-540-69132-7_63
Essa, A. (2016). A possible future for next generation adaptive learning systems.
Smart Learning Environments, 3(1), 16.
Essa, A. (2020). Does time matter in learning: A computer simulation of Carroll’s
model of learning. In International conference on human computer interaction
(pp. 458–474). Springer. https://dl.acm.org/doi/abs/10.1007/978-3-030-50788-6_34
Glymour, C. (2010). Galileo in Pittsburgh. Harvard University Press.
Hicks, B., Kitto, K., Payne, L., & Shum, S. B. (2022). Thinking with causal
models: A visual formalism for collaboratively crafting assumptions. In
LAK22: 12th international learning analytics and knowledge conference (pp.
250–259). https://cic.uts.edu.au/wp-content/uploads/2022/02/LAK22_Hicks_etal_Thinking_with_Causal_Models.pdf
Morrison, H. C. (1926). The practice of teaching in the secondary school (2nd
ed.). University of Chicago Press.
Smallwood, R. (1962). A decision structure for teaching machines. MIT Press.
Washburne, C. W. (1922). Educational measurement as a key to individual
instruction and promotion. Journal of Educational Research, 5, 195–206.
8
THE METAPHORS WE LEARN BY
Toward a philosophy of learning analytics

W. Gardner Campbell

One of the first observations to be made about the various separate disci-
plines of science and technology is that each has built up its own framework
or culture represented by preferred ways of looking at the world, preferred
methodology and preferred terminology, preferred ways of presenting
results, and preferred ways of acting…. The natural philosophers were …
unhampered by the accumulated culture of science. They built their own
frameworks for understanding the world. They were primarily engaged in
“search” as opposed to research. They developed ways of discovering how
to think about problems as opposed to getting answers to questions posed
within an established or traditional framework. Searching appears to be the
traditional function of the philosopher. With the sharp line drawn in mod-
ern times between philosophy and science, searching for the better frame-
work in science seems to be confined to the very few philosophers of science
and technology—the many do research.
—Kennedy, J.L. and Putt, G. H. “Administration
of Research in a Research Corporation.” RAND
Corporation Report No. P-847, April 20, 1956.

The most grotesque model of analytics and assessment I have yet encoun-
tered resides in the title of an “occasional paper” from the National
Institute for Learning Outcomes Assessment: A Simple Model for
Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig (Fulcher et al.,
2014). The “pig” in the title is a metaphor for students, and the weigh–
feed–weigh is a metaphor for analytics, pedagogical intervention, and
assessment. If, as Lakoff and Johnson (2003) argue, our lives are framed
and guided by metaphor, the way of life implied in the metaphor of
“weigh pig, feed pig, weigh pig” is truly brutal. We might call it “slops to
chops,” the primary function of the feedlot. (For the role of the teacher,
Orwell’s swinish hierarchy in Animal Farm may convey its own sinister
implications.)
The abhorrent feedlot metaphor has a counterpart, one lodged so firmly
in the public discourse that we scarcely even register it as a metaphor any-
more: the Internet as an “information superhighway.” For serious work,
for the heavy labor of civilization, “information superhighway” evokes
massive convoys of 18-wheelers delivering material success by means of
“content delivery.” For over 20 years, this metaphor has helped to erase
the dreams of distributed cognition as a purposeful and beneficial oppor-
tunity for good.
Feedlots. Information superhighways. The malignant spread of these
diseased and sickening metaphors might have been challenged by higher
education. Instead, higher education continued to breed its own toxic
metaphors. Witness the deadly metaphor of the “learning management
system” that began life branded as an application called “Web Course
In A Box” (A box of webs: is no one paying attention to the metaphors
we live by?). Boxes and systems led to “x-Moocs,” “next-gen learning
environments,” and the hype cycles around analytics, assessment, and
“evergreen” online courses that offer truly and sadly “remote” instruc-
tion, sometimes from beyond the grave, as a student learned when he
asked a question of the “professor” who was teaching his course, only to
find the professor had died two years earlier (Kneese, 2021). As online
learning becomes increasingly designed, engineered, templated, modular-
ized, scaled, and sustained within the business model of selling credits for
clicks, all these web courses in boxes become the shipping containers that
the cranes of instructional design lift onto the trucks delivering content to
learner after learner in replicable, standardized, weighed, and measured
cargoes of easily analyzed “outcomes,” supported by analytics that aim to
“optimize” learning, just as highways optimize speedy consumption and
feedlots optimize flesh markets.
Indeed, the idea of optimization emerged quickly in the young field
of learning analytics. In “Learning Analytics: The Emergence of a
Discipline,” George Siemens quotes the definition of the field proposed at
the 1st International Conference on Learning Analytics, in 2011:

Learning analytics is the measurement, collection, analysis, and report-
ing of data about learners and their contexts, for the purposes of
understanding and optimizing learning and the environments in which
it occurs.
(Siemens, 2013, p. 1382)
Later in this essay, Siemens (2013) briefly considers what he terms “the
dark side” of learning analytics, and in doing so narrows the defini-
tion of analytics to the practice of “identifying and revealing what
already exists” rather than “the generation of new ideas, approaches,
and concepts” related to the process of learning (Siemens, 2013, p.
1395). Yet the word “optimization,” identified from the outset as one
of the primary purposes of learning analytics, lingers throughout the
essay. As we shall see, the word belies the apparently value-free foren-
sic mode Siemens appears to advocate for this new discipline. For the
goal of optimization, easily forgotten within methodologies and “big
data,” necessarily implies a philosophy of the optimum, the “best,”
which must always be part of a philosophy of human flourishing. Yet
the philosophical underpinnings of this new discipline vanish into a
cloud of methodologies.
In System Error: Where Big Tech Went Wrong and How We Can
Reboot, Rob Reich, Mehran Sahami, and Jeremy Weinstein (2021) devote
an entire chapter to “The Imperfection of the Optimization Mindset.”
The chapter should be required reading for all those working in learning
analytics and assessment. Among the many bracing observations in this
chapter, several seem especially pertinent to the topics of analytics and
assessment. For example, Weinstein et al. (2021) discuss the problem of
proxies, and what proxies inevitably exclude:
Technologists are always on the lookout for quantifiable metrics.
Measurable inputs to a model are their lifeblood, and like a social sci-
entist, a technologist needs to identify concrete measures, or “prox-
ies,” for assessing progress. This need for quantifiable proxies produces
a bias toward measuring things that are easy to quantify. But simple
metrics can take us further away from the important goals we really
care about, which may require complicated metrics or be extremely
difficult, or perhaps impossible, to reduce to any measure. And when
we have imperfect or bad proxies, we can easily fall under the illusion
that we are solving for a good end without actually making genuine
progress toward a worthy solution.
The problem of proxies results in technologists frequently substitut-
ing what is measurable for what is meaningful. As the saying goes.
“Not everything that counts can be counted, and not everything that
can be counted counts.”
(Weinstein et al., 2021, p. 18)

These proxies also inspire a relentless focus on methodology rather
than goals, frustrating or blocking deep and open discussion of basic,
essential values (Weinstein et al., 2021, p. 15). Moreover, the very prac-
tice of optimization represents a priori assumptions of “what problems
are worth solving” (Weinstein et al., 2021, p. 17)—to which I would add
that whoever makes the choice of what to optimize is not only selecting
but defining those problems. Optimization can conceal or erase the real-
ity of competing values and the messy trade-offs they require (Weinstein
et al., 2021, p. 20). And it can result in what Reich, Sahami, and Weinstein
term “success disasters,” in which “success in solving for one particu-
lar task has wide-ranging consequences for other things we care about”
(Weinstein et al., 2021, p. 20).

Think, for example, of the amazing technological advances in agri-
culture that have massively improved productivity. Factory farming
has not only transformed the practices of growing vegetables but also
made possible the widespread and relatively cheap availability of meat.
Whereas it once took fifty-five days to raise a chicken before slaughter,
it now takes thirty-five, and an estimated 50 billion chickens are killed
every year—more than 5 million every hour of every day of the year.
But the success of factory farming has generated terrible consequences
for the environment (massive increases in methane gasses that are con-
tributing to climate change), our individual health (greater meat con-
sumption is correlated with heart disease), and public health (greater
likelihood of transmission of novel viruses from animals to humans
that could cause a pandemic).
(Weinstein et al., 2021, pp. 20–21)

Indeed.
Higher education might have been the Jane Jacobs of the digital age,
fighting to preserve the idea of networked villages offering a scaled ser-
endipity of cultural, intellectual, and emotional linkages. Jacobs’ vision
of networked neighborhoods is analogous to the dreams of those who
imagined and helped to build cyberspace in the 1960s and 1970s, dream-
ers and designers such as J. C. R. Licklider, Doug Engelbart, Ted Nelson,
Bob Taylor, Alan Kay, and Adele Goldberg. Instead, higher education
emulated Jacobs’ implacable opponent Robert Moses, whose worship of
cars and the revenues generated by stacked and brutal “development” of
the environments where people lived and worked led him to envision a
lower Manhattan Thruway that would have utterly destroyed Soho and
Greenwich villages, two of the most culturally rich and vibrant areas in
New York City. Higher education’s mission proclaims Jacobs’ values, but
its business model, particularly when it comes to our use of the Internet and
the Web, consistently supports Moses’ more remunerative and destructive
“vision” (For the story of Jacobs’ heroic and finally successful struggle to
block Robert Moses’ destructive plans, see the documentary Citizen Jane:
The Battle for the City, Altimeter Films, 2017).
It is difficult to say what sector within higher education might provide
an effective intervention. Centers for Teaching and Learning very often
constitute themselves as anti-computing, or at least as retreats from the
network, emphasizing mantras like “active learning” and “problem-based
learning” and “Bloom’s Taxonomy” (now in several varieties) and per-
petuating catchy, reductive, and empty jingles like “not the sage on the
stage but the guide at the side,” thereby insulting sages, guides, and learn-
ers all at once. The IT departments are busy with Student Information
Systems that retrieve data from, and in turn replenish, the LMS contain-
ers. Various strategies for “assessment” combine IT, teaching-learning
centers, offices of institutional research, and so-called “student success”
initiatives to demonstrate institutional prowess with specious measures
of what constitutes such “success” rather than the kind of transforma-
tive learning that curriculum scholar Lawrence Stenhouse (1975) argues
will produce unpredictable outcomes, not quantifiable “optimization” (p.
82). Faculty in their departments might yet make a difference, but most
faculty appear to have resigned themselves to the various strategies for
success, scaling, and sustainability generated by their administrators. This
is so perhaps because for now those strategies appear to preserve some
space for faculty autonomy—though the “evergreen” courses faculty help
design, particularly in the fraught area of general education (that arena of
FTE competition), may someday erase faculty autonomy altogether.
Two recent documents—one concerning information literacy, the other
concerning networked scholarship, offer better metaphors to learn by,
better ideas to think with, better ideas about true communities of learn-
ing—and how to analyze, assess, and promote their formation. One is
the ACRL’s Framework for Information Literacy for Higher Education
(2016). The other is Jean Claude Guédon’s Open Access: Toward the
Internet of the Mind (2017). Each insists that the value and life of the net-
worked mind implicit in the idea of a university is a dream that need not
die. The lessons drawn from each document are about levels of thought
and modes of understanding, abstractions generalized into principles and
particularized into actions. Both documents imagine and argue for some-
thing comprehensive and conceptually rich, a mode of understanding like
the one that computer pioneer Doug Engelbart (1962) called “a way of
life in an integrated domain where hunches, cut-and-try, intangibles, and
the human ‘feel for a situation’ usefully co-exist with powerful concepts,
streamlined terminology and notation, sophisticated methods, and high-
powered electronic aids.” Guédon’s (2017) vision is of scholarly commu-
nity and communication, while the ACRL Framework (2016) concerns
teaching and learning. Yet both consider the university as a community
of learning, on multiple levels of expertise, enacted across time and space.
Although the two documents were composed separately—in fact, nei-
ther mentions the other—both Guédon’s (2017) “Open Access: Toward the
Internet of the Mind” and the ACRL’s (2016) Framework for Information
Literacy for Higher Education attempt to embrace the larger significance
and meaning of library in the service of the life of the networked mind.
Of course, minds have always been networked—one may argue that the
very idea of “mind” presupposes and indeed derives from the experience
of distributed cognition in social groups. Yet the danger today is that
the mechanics and metrics of distribution will reduce cognition to neo-
Skinnerian models. Both documents seek to ameliorate that danger, and
both argue for more comprehensive understandings of what “informa-
tion literacy” means: far more than a set of searching skills or document
manipulation techniques, far other than can be meaningfully quantified
by clicks, circulation, or “engagement.”
Here’s one way to understand how important the conceptual frame-
works are in both documents. Ask someone to define “literature.” Chances
are that nearly everyone will describe either a collection of published writ-
ings or a certain set of literary “qualities” or characteristics of such writ-
ings. Look up “literature” in Samuel Johnson’s (1755) famous Dictionary,
however, and you will find a surprise: Johnson defines “literature” as the
process of becoming well read. The Oxford English Dictionary gives this
very definition as the first definition of “literature,” noting its source in
Johnson’s work:

1. Familiarity with letters or books; knowledge acquired from read-
ing or studying books, esp. the principal classical texts associated with
humane learning (see HUMANE adj. 2); literary culture; learning,
scholarship. Also: this as a branch of study. Now hist. The only sense
in Johnson (1755).

In other words, literature is not a set of writings, but a quality of being,
cultivated over time.
Similarly, we think of “information” as a set of data, perhaps, or
answers to questions, or facts. Like “literature” in Johnson’s dictionary,
however, “information” is also defined as a quality of being acquired over
time. The Oxford English Dictionary (1924) notes that this meaning
is “now rare,” a sad and damaging state of affairs that, as I will argue
below, the ACRL Framework for Information Literacy (2016) can help to
ameliorate. Look up the OED definition of “information,” and this entry
appears at the top of the list:

I. The imparting of knowledge in general.

1. a. The shaping of the mind or character; communication of
instructive knowledge; education, training; †advice (obs.). Now
rare.
1526 Bible (Tyndale) Eph. vi. 4 Brynge them vppe with the norter
and informacion off the lorde.
1813 T. JEFFERSON Writings (1830) IV. 182 The book I have
read with extreme satisfaction and information.
1851 U. GREGORY Let. 6 Oct. in F. W. Shearman Syst. Public
Instr. & Primary Sch. Law Michigan (1852) 579 The literary
and scientific institution contributes to the discipline and gen-
eral information of the mind.
1901 H. MÜNSTERBERG Amer. Traits iii. 44 The community
ought to see to it that both free election and the pedagogical
information of the teachers were furthered.

The Jefferson quotation is of particular interest, as it clearly indicates the
way in which “information” was once understood as the experience of a
specific state of being—or becoming. To be “informed” is not simply to
have been fed content. No: for Jefferson, who after all founded a univer-
sity, the idea is that books are read with satisfaction and information. The
preposition changes everything. To read with information is to have had
one’s mind or character shaped—to be inwardly formed.
Of course, information-as-object makes analytics and assessment
neater and more straightforward. By contrast, the judgment required to
analyze or assess qualities of being cultivated over time involves metrics
of mind, of character. These metrics involve conversation, shared under-
standing, appraisals of context and meaning, and, inevitably, inconsisten-
cies, along with a great deal of fuzzy words and messy, time-consuming
practices. Metrics of mind and character are also themselves contextually
situated and must be contextually responsive, while at the same time being
grounded on enough shared principles and understanding to be something
other than idiosyncratic, something more just than special pleading. In
such instances, the best guide is a compass, not a map. And compass is
a word for wayfinding as well as comprehension. One uses a compass to
encompass one’s largest surroundings, and the journeys themselves will
help to draw the maps.
The ACRL Framework (2016) is just such a compass. As the document
makes clear, the word “framework” entails larger principles of learning,
larger practices of reading with information:

At the heart of this Framework are conceptual understandings that
organize many other concepts and ideas about information, research,
and scholarship into a coherent whole …. Two added elements illustrate
important learning goals related to those concepts: knowledge prac-
tices, which are demonstrations of ways in which learners can increase
their understanding of these information literacy concepts, and dispo-
sitions, which describe ways in which to address the affective, attitu-
dinal, or valuing dimension of learning. The Framework is organized
into six frames, each consisting of a concept central to information
literacy, a set of knowledge practices, and a set of dispositions.
(pp. 7–8)

Crucially, the ACRL Framework (2016) envisions “metaliteracy” as a
high-level concept organizing the other concepts and making them gener-
alizable among many different learning contexts and learning occasions,
across a lifetime of learning. For this Framework to be meaningful and
effective, certain demands must be attended to, and answered:

Metaliteracy demands behavioral, affective, cognitive, and metacog-
nitive engagement with the information ecosystem. This Framework
depends on these core ideas of metaliteracy, with special focus on meta-
cognition, or critical self-reflection, as crucial to becoming more self-
directed in that rapidly changing ecosystem.
(p. 8)

The question then becomes how those demands might be articulated by
the instructor, understood by the student, and framed within an ecosys-
tem in which mere compliance becomes, instead, discipline, the kind of
discipline that in a new taxonomy of learning stems from, and leads to,
interest, insight, and love (Campbell, 2015).
Jean Claude Guédon’s (2017) essay “Open Access: Toward the Internet
of the Mind” illuminates that very question by analyzing scholarly com-
munication, which he argues is a special case of the aim of higher educa-
tion itself, broadly considered. Guédon’s work takes stock of the 15 years
after the “position paper,” which Guédon terms a “kind of manifesto,” of
the 2002 Budapest Open Access Initiative (BOAI) (Guédon, 2017, p. 1).
The 2002 BOAI manifesto identified three main areas of concern, accord-
ing to Guédon: “the slowness of the editorial process, the high price of
journals, and the failure to take advantage of the Internet” (Guédon,
2017, p. 1). In 2002, as in 2017, the fundamental failure Guédon diagnoses
is the failure to take advantage of the Internet. While there was apparently
at least as much heat as light generated from the “divergent” conversation
preceding it, the BOAI managed to bring a choir out of the chaos, with
Peter Suber’s opening declaration as the cantus firmus, so to speak: “An
old tradition and a new technology have converged to make possible an
unprecedented public good” (quoted in Guédon, 2017, p. 1).
Some current analysts, including many in the academy, might criticize
Suber’s statement, as well as the BOAI’s overarching goal of “the deployment
of an optimal communication system for scholarly research,” as “cyberu-
topian” at best, technologically deterministic at worst. Nevertheless, and
commendably, Guédon’s (2017) “Toward the Internet of the Mind” rejects
such easy critiques. Instead, Guédon’s argument reaches toward some-
thing not far from Teilhard de Chardin’s “noösphere” (Chorost, 2011,
p. 18) or indeed what Michael Chorost memorably describes in his book
World Wide Mind: The Coming Integration of Humanity, Machines, and
the Internet as an “autopoietic” system:

The most profound possibility of all is feeding the system’s output level
back into each individual. If everyone is aware of the system’s behavior
it could be “autopoietic”, that is, self-creating. Local changes in one
area—say, a death in a community—would affect the whole, and the
whole would affect the community in turn. One gets the possibility
of nonlinear feedback loops generating complex, unpredictable behav-
ior …. Multiple energies come together and form more than the sum
of their parts. The outcome is constantly surprising and new. Many-
to-many networks generate these kinds of events, and if the system is
robust enough, the feedback loop never stops. It regenerates itself like
a living flame.
(Chorost, 2011, pp. 177, 178)

We may seem to be at the farthest verge of Guédon’s argument here, but I
believe that, on the contrary, we are near its center, as well as the center of
the true importance of information literacy. If information literacy entails
not only informed judgment about what to read and how to understand it
but also a conceptual framework comprehensive enough to imagine and
create such purposeful communication across time and space, Guédon’s
(2017) open access essay and the ACRL Framework (2016) are aligned
both in letter and spirit.
Guédon’s analysis focuses on two related questions, both of which are
vital for understanding the dream he stubbornly and brilliantly insists
on throughout the 2017 essay: What are the purposes of scholarly com-
munication? And with regard to those purposes, what substantial, indeed
defining changes have been wrought by the advent of widespread digi-
tal communication? The answer to the first question is not new, though
Guédon’s analysis furnishes intriguing evidence that the answer has
been lost or obscured within institutional practices, practices that have
led to much of the (often-willed) new ignorance in answering the second
question.
In Guédon’s analysis, scholars are a pronounced instance of the gen-
eral human impulse to study things carefully and rigorously. Scholars
devote themselves to discovering or creating knowledge, a process that
relies on unfettered communication. In higher education, libraries facili-
tate and liberate effective scholarly communication, serving their com-
munities of learning by the thoughtful stewardship and support of that
communication:

Libraries, traditionally, have stood between publishers and users
(researchers, teachers, students), and they have managed the acquisi-
tion budgets that justified the erecting of the magnificent temples of
knowledge that continue to dominate university campuses until now.
The McLennan Library at McGill University echoes this ambition
when it upholds (engraved in stone) that “Beholding the bright counte-
nance of truth in the quiet and still air of delightful studies”, is its rai-
son d’être. The next fifty years, alas, have eroded much of this superb
stance.
(Guédon, 2017, p. 12)

Guédon reads this “superb stance” not as ivory-tower nostalgia, but
once again as a special case of a generalizable human desire and need.
Universities and their libraries exist as exemplars of not only a possible,
but also an essential public good:

Whenever access to documents is impeded, the quality of human com-
munication suffers …. The reach of a distributed system of human
intelligence is ultimately limited by the constraints placed on human
communication. The notion of Open Access aims precisely at describ-
ing how far a distributed system of human intelligence, unencumbered
by artificial shackles, may reach.
(Guédon, 2017, p. 3)

To the extent, then, that universities model and empower “distributed
system[s] of human intelligence,” universities serve essential human goods.
Now to the second question: What difference does the advent of wide-
spread digital communication make for scholarly communication? What
possibilities for “unprecedented public good” emerge from the “new tech-
nology” of networked digital communication, and what blocks those pos-
sibilities from being accomplished realities? Guédon’s answer is clear: a
failure of knowledge and imagination, in many cases sustained by defense
of outmoded and damaging business plans, has overtaken the contempo-
rary academy. Contemporary higher education, as practiced across the
wide range of institutions in the US and globally as well, neither beholds
“the bright countenance of truth in the quiet and still air of delightful
studies” nor engages the protean varieties of new knowledge within the
light-speed conveyances and connections of a global telecommunica-
tions network. Instead, the academy has fallen between two stools. For
Guédon, the question of whether the academy can recover either or both
of these possibilities (he implies they are linked) will depend upon recov-
ering a sense of scale—not the scale of credit hours to sell or vendors (of
technologies or of “content”) to handcuff ourselves to, but the scale of
possibility and commitment represented by higher education at its best,
and in particular, its libraries.
In an age in which one hears over and over that higher education is
a “business” (as if all businesses were alike) and that resources must be
allocated not on the basis of judgment and mission but on the basis of
“performance” or “productivity” (the optimization mindset that all too
often drives analytics and assessment), libraries are particularly vulner-
able. Libraries do not typically award degrees or generate billable credit
hours. They stand for what those degrees and credit hours themselves
are supposed to represent, and they exist to empower learners through-
out the environment. The move to “responsibility center management” or
“responsibility-based budgeting,” as Michael Katz noted in the late 1980s,
“ties the allocation of resources to the enrollment of individual schools and
departments and thereby fosters competitive, entrepreneurial activity on
individual campuses” (Katz, 1987, p. 177). Katz observes that such budg-
eting reflects “the precarious situation of universities caught by inflation,
decreased government funding, and a smaller cohort from which to draw
students,” but goes on to argue that “the application of the law of supply
and demand to internal decisions” inevitably determines “educational and
scholarly worth” merely by “market value,” with the accompanying result
that “faculty scholarship” becomes little more than “a commodity” (Katz,
1987, p. 177). It is not hard to see that libraries, reduced to the role of
“servicing” the “production” of scholarship, degrees, credit hours, and all
the other “outputs” in the contemporary university, are at risk, not only in
terms of resources, but also in terms of leadership within the university and
higher education generally. Yet their vulnerability, in plain sight, demon-
strates the ongoing rot at the core of higher education, a rot accelerated by
optimization mindsets that substitute for deep philosophical consideration
of what kinds of education are most meaningful for human flourishing.
People want maps and methodologies. We tolerate complication much
more readily than complexity. Yet teaching and learning are among the
most complex activities in which human beings engage, and in my view
both teaching and learning should be distinguished not only by a toler-
ance of complexity but by an appetite for it. Information literacy, con-
sidered as the anthropology of cultures of information, must therefore
have a complex understanding of the idea of information. This idea of
information and information literacy must include a primary emphasis on
the worth of purposeful, ethical communication in human society. Here
the ideas of teaching, learning, librarianship, and scholarly communica-
tion join hands. Here Guédon’s (2017) concept of the “internet of the
mind” becomes a compass with which the principles, ideas, and values
in the ACRL Framework (2016) may be more fully explored and real-
ized. In my experience, such explorations must at some point take on a
certain oblique, enigmatic, or even riddling character if they are to be
truly empowering for the learner. The liminal states one encounters at
every threshold concept, every disjunctive new level of learning, are anti-
deterministic, and thus potentially liberating—though liberation always
includes both elation and anxiety, which is one of the ways one knows one
has encountered true freedom. By contrast, too often the culture of ana-
lytics and assessment elides its purposes behind the word data, a strangely
anodyne word that appears to transform thinkers into users. But data are
not information.
In his great ethical treatise on the information age, The Human Use of
Human Beings, cyberneticist Norbert Wiener (1967) offers this precise
and bracing definition of information:

Information is a name for the content of what is exchanged with the
outer world as we adjust to it, and make our adjustment felt upon it.
The process of receiving and of using information is the process of our
adjusting to the contingencies of the outer environment, and of our liv-
ing effectively within that environment. The needs and the complexity
of modern life make greater demands on this process of information
than ever before, and our press, our museums, our scientific laborato-
ries, our universities, our libraries and textbooks, are obliged to meet
the needs of this process or fail in their purpose. To live effectively is to
live with adequate information.
(pp. 26–27)

Adjustment, but with effect; content, but also process. Wiener’s definition
specifies information as “content,” but as we have seen, “content” alone
is not enough to explain or explore the nature of information—as indeed
Wiener’s subsequent elaboration acknowledges. The Information Literacy
Framework also acknowledges and embraces a wider and more inclusive
view of information as phenomenon, activity, and commitment. This view
is itself informed by learning science, philosophy, and the wisdom that
comes from practicing what anthropologist and cyberneticist Gregory
Bateson (1972/2000) calls “the business of knowing.”
Bateson’s (1972/2000) intriguing phrase emerges in a “Metalogue,”
his name for conversations with his daughter Mary Catherine that he
published from time to time. These conversations recall Socratic dia-
logues and portray the mysterious ways in which naivete and sophistica-
tion can find an unexpected and sometimes revelatory reciprocity in the
arts of inquiry. Indeed, Bateson’s definition of “metalogue” has suggestive
connections with the nature and purpose of the Information Literacy
Framework:

A metalogue is a conversation about some problematic subject. This
conversation should be such that not only do the participants discuss
the problem but the structure of the conversation as a whole is also
relevant to the same subject.
(Bateson, 1972/2000, p. 1)

The metalogue titled “How Much Do You Know?” includes an especially
fascinating exchange, as father and daughter ask questions about quan-
tifying and assessing knowledge, questions that soon become questions
about various kinds of knowledge and the process of knowing itself:

D [daughter]. That would be a funny sort of knowledge, Daddy. I mean
knowing about knowledge—would we measure that sort of knowing
the same way?
F [father]. Wait a minute—I don’t know—that’s really the $64
Question on this subject. Because—well, let’s go back to the game of
Twenty Questions. The point that we never mentioned is that those
questions have to be in a certain order. First the wide general ques-
tion and then the detailed question. And it’s only from answers to the
wide questions that I know which detailed questions to ask. But we
counted them all alike. I don’t know. But now you ask me if knowing
about knowledge would be measured the same way as other knowl-
edge. And the answer must surely be no. You see, if the early ques-
tions in the game tell me what questions to ask later, then they must
be partly questions about knowing. They’re exploring the business of
knowing.
(Bateson, 1972/2000, p. 23)

The authors of the ACRL Framework (2016), like Jean-Claude Guédon,
like any serious artist of inquiry in this explosive digital age, explore this
business of knowing. Guided by the Framework, librarians as anthropolo-
gists of the cultures of inquiry (and thus cultures of information) explore
the business of knowing by asking the wide questions in a certain order.
The Framework prompts librarians, like all educators, to ask the early
questions—questions about knowing—so that the later questions may
promote effective living within an information environment that is itself
altered by every inquiry.
For me, after a quarter-century of work in teaching and learning
within environments of networked personal computers, the most excit-
ing aspects of the ACRL Framework (2016) are the recognition that the
human impulse toward communication, not skills per se, is the core of
literacy, and the noble, thoughtful aspiration throughout the document
toward what Jerome Bruner terms a “heuristics of inquiry”:

[W]hoever has taught kindergarten and the early primary grades or
has had graduate students working with him on their theses—I choose
the two extremes for they are both periods of intense inquiry—knows
that an understanding of the formal aspect of inquiry is not sufficient.
Rather, several activities and attitudes, some directly related to a par-
ticular subject and some fairly generalized, appear to go with inquiry
and research. These have to do with the process of trying to find out
something and, though their presence is no guarantee that the product
will be a great discovery, their absence is likely to lead to awkwardness
or aridity or confusion. How difficult it is to describe these matters—
the heuristics of inquiry. There is one set of attitudes or methods that
has to do with sensing the relevance of variables—avoiding immersion
in edge effects and getting instead to the big sources of variance. This
gift partly comes from intuitive familiarity with a range of phenomena,
sheer “knowing the stuff.” But it also comes out of a sense of what
things among many “smell right,” what things are of the right order of
magnitude or scope or severity.
(Bruner, 1979, p. 93)

The Framework for Information Literacy (2016) aspires toward just
such a heuristics of inquiry within the complexity of the digital age. This
Framework, Bruner’s (1979) heuristics of inquiry, Guédon’s (2017) Internet
of the mind, Bateson’s (1972/2000) Metalogues, Engelbart’s (1962) way
of life in an integrated domain: these concepts and dreams form an eternal
golden braid within which strands are also woven by teacher and student,
by library and librarian, to the benefit of all.
In that brief but telling acknowledgement of the “dark side” of analyt-
ics, George Siemens quotes the philosopher and theologian Jacques Ellul:

The potential of LA to provide educators with actionable insight into
teaching and learning is clear. The implications of heavy reliance on
analytics are less clear. Ellul (1964) stated that technique and technical
processes strive for the “mechanization of everything it encounters” (p.
12). Ellul’s comments remind us of the need to keep human and social
processes central in LA activities. The learning process is essentially
social and cannot be completely reduced to algorithms.
(Siemens, 2013, p. 1395)

I believe Siemens takes Ellul’s words to heart. Yet as we have seen, “LA
activities” and the “actionable insights” they produce are never value-
neutral, for those activities not only collect but use data in the service of
an optimization mindset. Instead of a great cloud of witnesses, we have
another kind of cloud, one that swallows up the questions and concerns of
Ellul and all those who aspire to the life of the networked mind. Sadly, by
the time the optimization mindset wreaks its destruction, the activities of
optimization come to seem self-justifying. Tragically, what Bateson terms
“the early questions” seem already to have been answered in the service of
the “business” of education.
In 1997, Doug Engelbart received computer science’s highest honor, the
Turing Award. Named for pioneering information and computer scientist
Alan Turing, this award is often called the “Nobel Prize for computer
science.” During the Q&A following his Turing lecture, Engelbart was
asked how long he thought it would take for his larger vision of human-
computer coevolution to be understood and acted on, given that it had
taken 32 years for something as basic as the mouse to win widespread
acceptance. Engelbart replied that he didn’t know—but that
he did know that the goal would not be reached by simply following the
marketplace. (The marketplace, unfortunately, is now itself a primary
metaphor for universities and scholarship.) Speaking to an audience of
distinguished information scientists, Engelbart pointed out that optimiz-
ing for a multidimensional environment—that is, a complex set of inter-
dependent variables—faced a particular danger:
Any of you that knows about trying to find optimum points in a multi-
dimensional environment … you know that your optimizing program
can often get on what they call a “local maximum”. It’s like it finds a
hill and gets to the top of it, but over there ten miles there’s a much big-
ger hill that it didn’t find.
(Engelbart, 1998)

Teaching and learning are among the most complex multidimensional
environments we experience as human beings. Are we now in that dark
cloud of local maxima, where conclusions are foregone and the full meas-
ure of human flourishing can no longer be imagined?

References
Altimeter Films. (2017). Citizen Jane: Battle for the city. Directed by Matt Tyrnauer.
Association of College and Research Libraries (ACRL). (2016). Framework for
information literacy for higher education. ACRL. https://www.ala.org/acrl/standards/ilframework
Bateson, G. (1972). Steps to an ecology of mind. University of Chicago Press.
Reprinted with a new Foreword by Mary Catherine Bateson, 2000.
Bruner, J. (1979). The act of discovery. In On knowing: Essays for the left hand.
Harvard University Press.
Campbell, G. (2015). A taxonomy of student engagement. YouTube. https://www.youtube.com/watch?v=FaYie0guFmg
Chorost, M. (2011). World wide mind: The coming integration of humanity,
machines, and the internet. Free Press.
Ellul, J. (1964). The technological society. Vintage Books.
Engelbart, D. (1962). “Augmenting human intellect: A conceptual framework.”
SRI Summary Report AFOSR-3223. Prepared for Director of Information
Sciences, Air Force Office of Scientific Research, Washington DC. http://www.dougengelbart.org/pubs/Augmenting-Human-Intellect.html
Engelbart, D. (1998, November 16). Bootstrapping our collective intelligence.
Turing Award Lecture at the 1998 ACM Conference on Computer-Supported
Cooperative Work. https://youtu.be/DDBwMkN6IKo
Fulcher, K. H., Good, M. R., Coleman, C. M., & Smith, K. L. (2014). A simple
model for learning improvement: Weigh pig, feed pig, weigh pig. Occasional
Paper #23. National Institute for Learning Outcomes Assessment, December
2014. https://files.eric.ed.gov/fulltext/ED555526.pdf
Guédon, J.-C. (2017). Open access: Toward the Internet of the mind. http://www.budapestopenaccessinitiative.org/boai15/Untitleddocument.docx
Johnson, S. (1755). A dictionary of the English language.
Katz, M. (1987). Reconstructing American education. Harvard University Press.
Kennedy, J. L., & Putt, G. H. (1956, December). Administration of research in a
research corporation (RAND Corporation Report No. P-847, April 20, 1956).
Administrative Science Quarterly, 1(3), 326–339.
Kneese, T. (2021, January 27). How a dead professor is teaching a university
art history class. Slate. https://slate.com/technology/2021/01/dead-professor-teaching-online-class.html
Lakoff, G., & Johnson, M. (2003). Metaphors we live by. University of Chicago
Press.
Siemens, G. (2013). Learning analytics: The emergence of a discipline. American
Behavioral Scientist, 57(10), 1380–1400.
Simpson, J. A. (1924). The Oxford English Dictionary. Clarendon Press.
Stenhouse, L. (1975). An introduction to curriculum research and development.
Heinemann Educational.
Weinstein, J. M., Reich, R., & Sahami, M. (2021). System error: Where big tech
went wrong and how we can reboot. Hodder & Stoughton.
Wiener, N. (1967). The human use of human beings: Cybernetics and society.
Avon Books (Discus edition; original work published 1954).
SECTION III

Adaptive Learning



9
A CROSS-INSTITUTIONAL SURVEY
OF THE INSTRUCTOR USE OF DATA
ANALYTICS IN ADAPTIVE COURSES
James R. Paradiso, Kari Goin Kono, Jeremy Anderson,
Maura Devlin, Baiyun Chen, and James Bennett

One of the major benefits of an adaptive learning system (ALS) is the
richness of data collected and available to support educators’ decisions
around teaching and learning (Peng et al., 2019). From the data provided,
an instructor can tell if and when students are learning specific concepts
or struggling with course content (Attaran et al., 2018). ALS data dash-
boards also allow instructors to tailor their educational activities to meet
the needs of both individual students and homogeneous subgroups
of the class as a whole (Rincón-Flores et al., 2019). Recent literature,
however, suggests that research about ALS data (learning) analytics is
often focused on technology and engineering fields rather than education
(Guzmán-Valenzuela et al., 2021). Therefore, our study was designed to
add value and insight into instructors’ use of ALS data to inform their
educational practices. To situate our research, we focus on three specific
areas: How do educators use ALS data dashboards? How do instructors
use data dashboards to inform their pedagogy? What role does self-effi-
cacy play in the process?

Instructors’ general use of ALS data dashboards


Despite the potential benefits of using ALS data dashboards to enhance the
educational experience, instructors express feelings of being overwhelmed
and/or incapable of utilizing data analytics (Amro & Borup, 2019).
Shabaninejad et al. (2020) addressed the fundamentals of learning ana-
lytics dashboards, and how they might be used to empower instructors
with direct, detailed information about the teaching and learning process.
Their study detailed a new style of dashboard designed to return mean-
ingful analytics to instructors through “insights” provided by learning
process drill-downs—by using filtering rules to identify subcohorts of stu-
dents by gender, performance, and more.
While the work was insightful about the information available to
instructors, it also uncovered a surprising instance of diminishing
returns:

We presented the reported drill-down recommendations and the pro-
cess visualizations to the instructor of the course to capture their feed-
back and comments on the findings …. While the instructor had access
to “Course Insights” throughout the semester, they rarely used it and
generally found it to be overwhelming. They considered the large num-
ber of potential drill-down options within the platform to be the main
reason that using the platform was overwhelming.
(Shabaninejad et al., 2020, p. 495)

While extremely robust dashboards—powered by large amounts of
data—may provide helpful details that inform teaching and learning inter-
ventions, overly complex dashboards may remain unused, particularly if
instructors do not feel comfortable retrieving or interpreting the data.
An additional study by Knoop-van Campen and Molenaar (2020)
analyzed dashboard-prompted feedback as it relates to human-prompted
feedback (i.e., initiated by the student or instructor). The researchers
identified value in dashboard-prompted feedback—explaining how dash-
board-prompted feedback elicits richer, more diverse feedback directed at
students’ process, compared to human-prompted feedback that tends to
be predominantly personal or task-oriented (which has proven to be less
effective). A notable conclusion highlighted by Knoop-van Campen and
Molenaar (2020) was the criticality of having instructors work dashboard
use into their professional routines. This study observed that 20% of the
instructors did not offer any dashboard-prompted feedback to students,
indicating zero uptake into their instructional routines.

Instructors’ pedagogical decisions to use data from ALSs


Building the use of ALS data dashboards into instructors’ existing peda-
gogical “routines” can be facilitated by establishing an intentional con-
nection between the tool and practice (Molenaar & Knoop-van Campen,
2017; Norman, 1990). This value carries forward into scholarly literature
on educators’ instructional tendencies when applying analytics from ALS
dashboards to improve their own practices and enhance student learning.
Insights delivered through dashboards provide relevant and timely
information about individual-student and whole-class learning trajecto-
ries, but these data are of little use to instructors if not leveraged to adjust
their instructional approach(es) (Schwendimann et al., 2016). For instance,
Keuning and van Geel (2021) investigated the impact that instructors’
use of ALS dashboards had on implementing differentiated instruction.
They found the principles of differentiation—(1) being goal-oriented; (2)
continually monitoring students’ progress and understanding; (3) adapt-
ing instruction and assignments to students’ needs; (4) being ambitious
and challenging all students on their own level; (5) stimulating students’
self-regulation (p. 202)—remained true in the context of ALSs. Although
instructors may feel more “up-to-date” with real-time analytics, research-
ers noted that the ultimate responsibility still falls on the teachers to dif-
ferentiate student feedback and instruction.
The more intelligent a system becomes, the more it may cause instruc-
tors to feel as though natural components of their teaching process are
being replaced by ALSs. Therefore, creating a sense of ownership within
the adaptive learning framework by actively engaging faculty members
and seeking input from them in the curricular decisions informed by the
ALS data analytics is a key factor for success in teaching and learning
(Johnson & Zone, 2018). Additionally, establishing professional develop-
ment programming and a community of practice (CoP) among instructors
using ALSs can mitigate the loss of the natural components of teaching
and provide increased motivation to adopt new practices (Lave & Wenger,
1991; Johnson & Zone, 2018). The successes of those who have improved
their instructional practices through the use of ALS dashboards, while
variable, can serve as models for educators who may not
have a commensurate level of confidence using ALSs.

Instructors’ self-efficacy in using ALSs to
inform pedagogical decisions
Applying learning analytics in educational settings requires “intentional
integration” by instructors (O’Sullivan et al., 2020). This integration also
assumes that instructors believe they can use (or learn to use) such plat-
forms, grasp the platform functionalities designed to enhance student
success, and adapt their teaching craft. However, it is unclear whether
instructors’ roles while leveraging ALSs are mediated by their own atti-
tudes and beliefs about using the system (Bandura, 1977; Bandura, 1993;
Bandura, 1994; Dweck & Leggett, 1998).
According to Albert Bandura’s theoretical model (1977), teachers can
develop efficacy in their own abilities in four ways. The most effective
way to develop a perspective about one’s abilities is successful experien-
tial application, in other words, learning content or mastering a skill or
task. A second way is through social modeling or “vicarious experiences”;
seeing someone else master something gives others a sense that they can
master it as well. Social persuasion (i.e., being influenced by others’ per-
ceptions that one could master something) is a third way. Lastly, reducing
someone’s stress response and anxiety to learning a new task may also
facilitate increased self-efficacy. We concentrate on the first two aspects,
which focus on the agency of the person (instructor) who faces a new task.
Past research confirms that self-efficacy can be encouraged through
training and support. In a meta-analysis of 82 empirical studies on
teacher self-efficacy, researchers reported that teachers gained a sense that
they could do something when modeled by others (Morris et al., 2017).
Another researcher asserted,

A sense of efficacy is a valuable outcome of early teaching experiences
and can be fostered with specific training that provides needed peda-
gogical knowledge, a variety of forms of feedback, and social support
that normalizes the predictable fears of novice teachers.
(Hoy, 2003–2004, p. 1)

Others have argued for the importance of teaching tools and their situated
utilizations for enhancing teachers’ self-efficacy (Pajares, 2006)—which
may partially explain the potentially intimidating incorporation of ALS
dashboards into one’s pedagogical routine(s).
In a recent dissertation, Ngwako (2020) identified a positive correlation,
significantly different from zero (p < 0.001), between the Teacher Self-
Efficacy (TSE) scale and the Perception of ALS survey. Furthermore, the
study found that the number of courses taught with ALSs significantly
moderated the relationship between TSE and ALS perception, support-
ing one component of Bandura’s model (1977) that “successful mastery
breeds more successful mastery.”
Additional evidence on the growth of instructor self-efficacy can be
found in an experimental study of teacher candidates by Wang et al.
(2004), which focused on vicarious experiences. They found that among
280 preservice teacher candidates assigned to either treatment or control
course sections of an introductory educational technology course, vicari-
ous learning experiences and goal setting (considered independently) had
positive impacts on teachers’ self-efficacy for using technology. When
both vicarious learning experiences and goal setting were present, even
stronger senses of self-efficacy for integrating technology in teaching were
found (cf. Wang et al., 2004, p. 239).
Our current study focuses on (1) instructors’ self-efficacy when using
ALSs and (2) whether self-efficacy is a mediating factor when utilizing ALS
dashboards to optimize teaching and learning in post-secondary courses.
We adopt Bandura’s definition of self-efficacy: “Perceived self-efficacy is
defined as people’s beliefs about their capabilities to produce designated
levels of performance that exercise influence over events that affect their
lives” (Bandura, 1994, p. 1). In our study, “events that affect [instructors’]
lives” are educators’ pedagogical decisions when using ALSs. We hypoth-
esize that instructors’ positive self-efficacy in mastering and leveraging
ALS data dashboards influences the decisions they make and practices
they implement when teaching with an ALS. Furthermore, we hypothesize
that instructors’ positive self-efficacy in leveraging ALS data dashboards
positively correlates with taking part in a CoP.

About the study


Scholars from four institutions—University of Central Florida, Dallas
College, Bay Path University, and Portland State University—and one
independent researcher came together to form a learning community
around dashboard usage among instructors using ALSs. As a group, we
identified inconsistencies in understanding how our instructors were lev-
eraging analytics for teaching. We created a survey tool to address ques-
tions around instructor self-efficacy, how often instructors review ALS
dashboards and make changes to their curriculum based on data from the
ALS dashboards, instructor communication with students related to ana-
lytics, and how instructors iterated their curriculum based on analytics.
Given varying academic operations across our institutions, we would like
to note that some instructors in our study self-select to teach with ALSs,
while others work in a centralized course management model. This means
that they teach from master courses designed and developed by academic
experts and instructional designers that leverage ALSs (and other educa-
tional technologies).

Survey distribution
The survey was distributed to higher education institutions across
the United States—including Bay Path University, University of Central
Florida, Dallas College, and Northern Arizona University. Bay Path
University is a small private university in Massachusetts that serves over
3,300 students each year. University of Central Florida is a large pub-
lic research institution with a student population of around 70,000.
Dallas College is a large public community college in Texas serving over
70,000 annually. Northern Arizona University is a large public research
institution that serves approximately 29,000 students each year. Two
methods of survey distribution were utilized: initially the survey was dis-
tributed by the researchers at their respective institutions, and then by the
Association of Public and Land-grant Universities (APLU), Personalized
Learning Consortium (PLC), and Realizeit (an ALS vendor) to their part-
ner institutions across the United States.

Participants
Participants were university and college instructors who had used ALS
data analytics to inform their teaching and learning practices. The par-
ticipants came from seven public and private universities in the
United States, with the majority of them from Bay Path University (n =
17), Northern Arizona University (n = 14), University of Central Florida
(n = 14), and Dallas College (n = 5). These participants represent various
disciplines, such as Mathematics and Statistics (n = 8), Business (n = 8),
Foreign Languages (n = 5), Biology (n = 4), History (n = 4), Psychology (n
= 4), and others. Table 9.1 illustrates the ALSs used by our respondents;
more than half of them used either Realizeit (n = 29) or ALEKS (n = 11).

TABLE 9.1 Adaptive Learning Systems Used by Respondents

Adaptive Learning System    Respondents

Acrobatiq    1
ALEKS    11
Cengage MindTap    4
CogBooks    1
Hawkes Learning    1
InQuizitive    2
Knewton    2
Learning Management System    1
Lumen Waymaker    1
Macmillan Achieve    1
Other/Not Reported    1
Pearson MyLab/Mastering    7
Realizeit    29
Smartbook    5
Total    67

Method
To gain an understanding of how instructors use data analytics in adap-
tive software, we examined instructor confidence (i.e., self-efficacy) about
ALS dashboards. First, we adapted a survey instrument of quantitative and
qualitative questions by Wang et al. (2004) that was based on Bandura's
theory of self-efficacy. Our adaptation consisted of 10 Likert-scale ques-
tions directed at determining post-secondary instructors’ perceived con-
fidence using an ALS for educational purposes. The questions related to
self-efficacy consisted of two subscales: adaptive technology capabili-
ties and influences of adaptive technology use. Reliability was high for
the scale (α = 0.94), the adaptive technology capabilities subscale (α =
0.92), and the influences of adaptive technology use subscale (α = 0.88).
Additionally included in the instrument were questions pertaining to an
educator’s frequency and granularity of ALS data dashboard usage. Three
free response questions were included for participants to elaborate on their
approach(es) to using the ALS dashboard(s).
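
For readers who want to see how a reliability estimate of this kind is typically computed, the sketch below shows one common approach: Cronbach's alpha over a scale's item responses. It is illustrative only—the item names and responses are hypothetical, and this is not the instrument analysis used in the study.

```python
# A minimal sketch (not the authors' code) of a Cronbach's alpha estimate
# for a Likert scale. Column names and responses are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one scale."""
    items = items.dropna()                      # listwise deletion, for illustration
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to three Likert items (1-5) from five instructors.
demo = pd.DataFrame({
    "se_item1": [4, 5, 3, 4, 2],
    "se_item2": [4, 5, 2, 4, 3],
    "se_item3": [5, 5, 3, 4, 2],
})
print(round(cronbach_alpha(demo), 2))
```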

Data analysis
Researchers conducted a convergent mixed methods data analysis
process, employing separate quantitative and qualitative analyses and
then comparing results from the findings (Creswell & Creswell, 2018;
Merriam & Tisdell, 2016). Quantitative analysis examined self-efficacy
as an independent variable and its relationship with the dependent vari-
ables of faculty members’ frequency and granularity of analytics use in
ALSs. A separate analysis sought to determine if there was a relation-
ship between self-efficacy or the presence of a CoP and frequency and
granularity of ALS analytics use. Qualitative analysis addressed the three
free-response survey questions using a three-step qualitative coding pro-
cess to identify categories and themes from the data (Saldaña, 2016).
Results from the qualitative analysis were then compared to quantitative
findings about self-efficacy as it relates to frequency and granularity of
dashboard usage; qualitative analysis clarified how instructors used ALS
data dashboards.

Quantitative analysis
Data preparation consisted of identifying and compensating for missing
values in responses and constructing scale scores for the independent and
dependent variables. One respondent had valid responses on nine of ten
items for the self-efficacy scale. In this case, the person mean imputa-
tion approach was taken whereby the mean of available responses for
the individual was used (Heymans & Eekhout, 2019). Seven respondents
were deleted from the analysis of frequency and granularity and nine
were omitted from the analysis of community of practice due to a lack of
responses on all items for those respective scales.
Subsequently, authors calculated scale scores for self-efficacy and gran-
ularity of dashboard use. The authors treated each of the scales, including
the two subscales for self-efficacy, as continuous variables by summing
the scores of the items in the scales. The three items relating to frequency
of dashboard use served as separate categorical variables for comparison-
of-means analyses. An inspection of the distribution of the data demon-
strated that several assumptions of linear models were violated, and so
nonparametric tests were more appropriate for their robustness (Field,
2018). Kendall’s tau served as the correlation test statistic for all analyses
that included continuous variables—self-efficacy and granularity of ALS
data dashboard use.
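
As an illustration of the preparation steps described above—person mean imputation, summed scale scores, and Kendall's tau—the following sketch shows how they might be carried out with pandas and SciPy. The column names and values are hypothetical stand-ins for the actual survey export, not the study's data or code.

```python
# A minimal sketch (not the authors' code) of person mean imputation,
# summed scale scores, and a Kendall's tau correlation. Hypothetical data.
import pandas as pd
from scipy.stats import kendalltau

def person_mean_impute(items: pd.DataFrame) -> pd.DataFrame:
    """Fill a respondent's missing item with the mean of their own answered items."""
    return items.apply(lambda row: row.fillna(row.mean()), axis=1)

# Hypothetical survey export: ten self-efficacy items plus a granularity scale score.
item_data = {f"se_item{i}": [4, 5, 3, 4, 2] for i in range(1, 11)}
item_data["se_item4"] = [4, 5, 3, None, 2]      # one respondent skipped one item
survey = pd.DataFrame({**item_data, "granularity": [10, 12, 6, 8, 5]})

se_cols = [f"se_item{i}" for i in range(1, 11)]
survey[se_cols] = person_mean_impute(survey[se_cols])
survey["self_efficacy"] = survey[se_cols].sum(axis=1)   # summed scale score

tau, p = kendalltau(survey["self_efficacy"], survey["granularity"])
print(f"Kendall's tau = {tau:.3f}, p = {p:.3f}")
```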

Qualitative analysis
The qualitative portion of the analysis utilized the coding software Atlas.ti,
in which researchers conducted line-by-line coding of the data. Open-ended ques-
tions from the survey were uploaded and coded using several a priori codes related
to the goals of the study, which included searching for a fuller under-
standing of instructor use of ALS dashboard reporting, how instructors
adapted teaching practices, and how instructors interacted with learn-
ers (Charmaz, 2006; Saldaña, 2016). Using an initial coding practice,
additional protocols were added from the text (Charmaz, 2006; Saldaña,
2016). Responses were double-coded to confirm and validate their status.
The final results were subjected to a categorization and thematic analysis
process, which was then developed into narrative form to inform the quan-
titative component of our methodological approach and to identify instruc-
tor best practices for dashboard use of student data (Merriam & Tisdell,
2016; Saldaña, 2016).

Results
Results are shared for each of the three research inquiries, using a combination of
quantitative and qualitative data to support our findings.

RQ1: What is the relationship between instructor self-efficacy and
the use of system dashboard and learning analytics reports?
To answer this first research question, the researchers imported the for-
matted results from the survey into SPSS 25 for analysis. Correlation anal-
ysis for self-efficacy and granularity of dashboard use was computed using
Kendall’s tau. The independent-samples t-test provided results for the rela-
tionship between self-efficacy and frequency of dashboard use. Results of
the quantitative statistical analyses are presented in Table 9.2. In all cases,
self-efficacy and its subscales were positively related with granularity of
dashboard use. Effect sizes were in or approaching the moderate range
suggested by Cohen (1988).

TABLE 9.2 Correlation Analysis Results for Self-Efficacy and Granularity of Dashboard Use

Relationship Tested    n    Kendall's τ    p

Self-efficacy, granularity of dashboard use    54    0.285    0.006
Self-efficacy adaptive technology capabilities subscale, granularity of dashboard use    54    0.354    0.001
Self-efficacy influences of adaptive technology use subscale, granularity of dashboard use    54    0.338    0.001

The Kruskal-Wallis ANOVA test was completed comparing means of
self-efficacy scores across categories for the three items pertaining to fre-
quency of dashboard use. These items addressed the use of dashboards
for informing teaching practices and communication with students. Table
9.3 contains descriptive statistics by question and category.

TABLE 9.3 Descriptive Statistics and Self-Efficacy Ratings for Granularity of Dashboard Items and Responses

Dashboard Use Item    Response Type    Sample n    Mean Self-efficacy    Standard Deviation

General purposes    Daily    9    45.56    5.15
    Weekly    28    40.89    10.86
    Once a month    8    40.38    8.81
    1–2 times a semester    9    38.28    4.51
    Never    0    –    –
For teaching    Daily    4    46.25    6.24
    Weekly    16    40.81    12.57
    Once a month    8    44.88    4.67
    1–2 times a semester    17    43.41    4.42
    Never    9    31.94    6.54
For communicating with students    Daily    7    45.71    4.92
    Weekly    22    41.14    11.54
    Once a month    12    41.04    6.68
    1–2 times a semester    11    40.64    6.42
    Never    2    29.00    8.48

There was a trend across all three questions for those reporting more
frequent uses of dashboards to exhibit higher mean self-efficacy scores than those with less
frequent use. However, only the differences in means across the categories
for the use of dashboards for teaching purposes were significant (H =
15.78, p = 0.003, df = 4, n = 54). Differences between categories for gen-
eral use of dashboards approached significance (H = 7.04, p = 0.071, df =
3, n = 54), while differences in using the dashboards to inform communi-
cation to students were not significant (H = 7.67, p = 0.105, df = 4, n = 54).
The relationships between having a CoP and the other variables
under study represented the second line of analysis. Respondents com-
pleted a single item pertaining to CoP by selecting or not selecting five
optional responses about the types of roles involved in the CoP: course
instructors, instructional designers, administrators, other roles, and
no CoP. The authors treated selection of one of the responses as a score
of “1” and then summed the scores for a range of zero to four (no CoP
also was treated as zero). The scale score served as an ordinal categori-
cal variable. Kruskal-Wallis ANOVA analyses were derived to compare
the mean scores for self-efficacy, granularity of dashboard use, and
frequency of dashboard use since the CoP variable was ordinal. None
of these ANOVA analyses yielded significant results. To further inves-
tigate, the researchers attempted to determine if a particular type of
member in the CoP yielded different results. They categorized presence
of course instructors and presence of instructional designers into yes/
no groups. Mann-Whitney U tests again demonstrated that there was
not a significant difference in self-efficacy, granularity of dashboard
use, or frequency of dashboard use across respondents who said course
instructors were in the CoP or were not, nor across those who said
instructional designers were in the CoP or were not.
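
A compact sketch of these nonparametric comparisons—a Kruskal-Wallis test across frequency-of-use categories and a Mann-Whitney U test for a yes/no CoP split—appears below. The data frame and variable names are hypothetical and intended only to show the shape of the analysis, not to reproduce the study's results.

```python
# A minimal sketch (not the authors' code) of the Kruskal-Wallis and
# Mann-Whitney U comparisons reported above. Hypothetical data.
import pandas as pd
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical respondents: summed self-efficacy scores, how often they use the
# dashboards for teaching, and whether an instructional designer is in their CoP.
df = pd.DataFrame({
    "self_efficacy":   [46, 41, 44, 43, 32, 45, 40, 38, 30, 42],
    "teaching_freq":   ["Daily", "Weekly", "Monthly", "1-2/semester", "Never",
                        "Daily", "Weekly", "Monthly", "1-2/semester", "Never"],
    "designer_in_cop": ["yes", "yes", "no", "no", "no",
                        "yes", "no", "yes", "no", "yes"],
})

# Kruskal-Wallis: do self-efficacy scores differ across frequency-of-use categories?
groups = [g["self_efficacy"].to_numpy() for _, g in df.groupby("teaching_freq")]
h_stat, p_kw = kruskal(*groups)

# Mann-Whitney U: do scores differ by presence of an instructional designer in the CoP?
yes = df.loc[df["designer_in_cop"] == "yes", "self_efficacy"]
no = df.loc[df["designer_in_cop"] == "no", "self_efficacy"]
u_stat, p_mw = mannwhitneyu(yes, no, alternative="two-sided")

print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3f}")
```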
Quantitative results demonstrated that there were significant and posi-
tive relationships between self-efficacy and granularity of dashboard use
and between self-efficacy and frequency of dashboard use for informing
teaching practices. Nonsignificant but similarly positive relationships
were present between self-efficacy and general frequency of dashboard
use and frequency of use for communicating with students. There was
no significant relationship between presence of a CoP and self-efficacy,
granularity of dashboard use, or frequency of dashboard use.

RQ2: How do faculty use adaptive system
dashboards and learning analytics reports?
To answer the second research question, the researchers reviewed instruc-
tors’ survey responses related to granularity levels within each dashboard
use type (Table 9.4).
TABLE 9.4 Types of Dashboard Usage and Implementation at Different Levels of Granularity

Dashboard Use Type    By Course    By Student    By Lesson    By Question    Total

Topic mastery    27    37    32    14    110
Topic completion    26    36    33    9    104
Student engagement    21    40    26    7    94
Student affect    13    26    10    5    54

In the qualitative analysis, the topic mastery and topic completion dashboard
use types (Table 9.4) aligned with codes related to
student performance and student engagement data.
Instructors indicated that they used their adaptive learning dashboard(s)
to respond to student performance data in a variety of ways. One instruc-
tor replied, “I have a much more granular understanding of how students
are learning, how much they’re learning, and where they are at in learning
the concepts with [the] dashboard.” This quote is an indication that the
instructor utilized the dashboard(s) to increase their knowledge of how
students were doing with the material.
Another instructor identified questions where a high number of stu-
dents were answering incorrectly and reviewed and improved the ques-
tion for future students. “I used [data from the dashboard] that showed
a high incidence of incorrect answers to find out why and then made
necessary adjustments to improve the question/answer.” This instruc-
tor used student performance data to improve and refine their teaching
materials.
Several instructors indicated they used student performance data in the
ALS data dashboard(s) to adjust and review missed concepts in class. One
instructor shared, “If I notice that students have struggled completing a
learning module, I will adjust my teaching practices over time (weekly)
to review that content.” Another instructor said that when they review
student performance data and notice “if a group of students are strug-
gling with a particular topic, I will provide more resources and instruc-
tion in the LMS [learning management system].” In this way, instructors
personalized instruction and responded to student learning needs on a
mid-range time frame (e.g., weekly), sometimes slightly more or less
frequently.
Instructors used student engagement data from the ALS dashboards
to communicate with students who were not engaging with the course
materials. One instructor indicated, “I notice if students are not engaged
in the material and remind them to be more so.” Another instructor paid
attention to how much time students were spending within the ALS,

If students seem to be struggling in a particular area, if they are not
spending time in the system, if they haven’t completed certain activi-
ties, I will reach out to them via email and messaging to make sure they
are OK and offer assistance.

Another instructor used engagement data to loop in advising support for
students: “I have used the engagement dashboards to reach out to students
and submit early alerts to our advising team (in the case of non-engage-
ment).” This may assist students in getting help before receiving a lower
grade that may affect their academic progression.
Although a majority of faculty surveyed indicated productive use of
adaptive dashboards, there were instructors who responded they did not
use analytics effectively. For example, one instructor shared,

I do not use data analytics effectively. I know they exist and how to
use them, but I am not in the practice of doing so. I recognize that it
would benefit my students to leverage this information to tailor class to
address weaknesses in understanding.

This instructor response might indicate a need for support teams such as
instructional designers to demonstrate the benefits of using analytics in
one’s teaching practice for student learning.
In summary, instructors relied on adaptive analytics from their dash-
boards to evaluate student performance and engagement data, revise their
teaching focus, and adapt to student learning needs. Instructors utilized
student performance data to assess student understanding of concepts,
to edit areas of confusion within course materials, and to adapt to stu-
dent learning needs in the medium term (i.e., daily, weekly, or monthly).
Instructors assessed student engagement data in their dashboards and fol-
lowed up with students to offer reminders, connect about learning, and
provide additional academic assistance.

RQ3: What type of data-driven educational interventions do
faculty make from system provided dashboards/analytics?
To answer the third research question, we reviewed instructor responses
from the survey about frequency and type of dashboard use (Table 9.5).
TABLE 9.5 Frequency of Types of Dashboard Usage

Dashboard Use Type    Never    Once or Twice a Semester    Monthly    Weekly    Daily    Total

Use the data dashboard(s) provided by the adaptive system.    0    9    8    28    9    54
Use data to adjust my teaching approach.    9    17    8    16    4    54
Use data as an opportunity to communicate with students about their performance.    2    11    12    22    7    54

Nearly half of surveyed instructors used the ALS data dashboards weekly
or daily—with a majority opting to use the dashboards weekly in all cases
except adjusting their teaching approach, which was a close tie with once
or twice a semester.
The qualitative questions and answers provided more detailed examples
of how instructors incorporated data analytics into their teaching. Initial
coding and subsequent categorical analysis illustrated how instructors
adapted to varying student learning needs (within courses that utilized
adaptive learning in the curriculum) by applying an evaluative process
that either promoted the acceleration or remediation of course topics dur-
ing the term or for future iterations of the course. Instructors adjusted
their teaching practices based on student data in the ALS dashboard(s) by

• reaching out to students with lower reported engagement
• identifying questions and providing answers
• providing additional review of course content during class time
• assessing when students were ready to move to more advanced topics
• assessing student understanding of content
• gauging student involvement in course material and
• informing the frequency and type of communications with students

Some aspects of how instructors modified their teaching practices influ-
enced how they changed course materials for future course iterations.
Instructors indicated they used analytics for course revisions such as

• assessing how long it took students to understand course topics and
• adapting or revising content for future course iterations

For example, one instructor stated:

If I create supplemental content for challenging topics, I will use that content and be proactive for future sections. I may also provide feed-
back to the Program Director and recommend that we adjust the con-
tent in the adaptive system to make it easier for students to understand.

Several instructors indicated they used data to identify where students spent extra time to learn a concept. Instructors then reviewed that content
for relevance, spent time chunking content, created additional resources
and content as needed, and/or changed content order and timing to help
students to learn the material more quickly in the future.
Some instructors also noted reviewing student data at the end of the
semester to make adjustments to course content. In one case, an instructor
stated how data in the dashboard

help[ed] me [to] think about the course in the future to help adjust to
the individual learnings for students. For example, I have students who
have asked for more chapters to be covered so [I] modify the course to
integrate additional chapters.

Additionally, instructors used ALS data dashboards to pause and review concepts and to move ahead to new topics after determining students had
adequate foundational knowledge on important concepts.
In sum, to understand what type of data-driven educational interven-
tions faculty made from the ALS data dashboards, researchers reviewed
the data about the frequency of dashboard use (Table 9.5) and applied
qualitative examples to illustrate types of teaching adjustments made. The
instructors surveyed indicated they used student data from dashboards to
accelerate or remediate course content within the semester and/or to revise
course content for future iterations of the course.

Discussion
This study sought to unpack the extent to which educator self-efficacy
mediates the frequency and level of granularity with which data from
ALS dashboards and learning analytics are used to improve instructional practices. Free response questions were also incorporated into the survey
instrument to get a more complete picture of the “instructor experience”
with ALS dashboards and their applications.
Previous studies have illustrated that educators’ self-efficacy is corre-
lated with personal values (Barni et al., 2019), job satisfaction (Klassen et
al., 2009), and an openness to new ideas and methods to meet the needs
of students (Tschannen-Moran & Hoy, 2001), yet few studies explore the
relationship between instructor self-efficacy and the use of learning ana-
lytics dashboards in ALSs. This area of inquiry is “extremely narrow”
and “ever emerging” due to the technical growth and utility of ALS data
ecosystems in education (Guan et al., 2020). This dearth of research is the
impetus for this study and the motivation fueling this discussion.

Significant finding #1
A positive relationship between instructor self-efficacy and the granular-
ity of dashboard use was a significant finding from this study; however,
particular reasons as to “why” require further exploration.
A majority of surveyed instructors felt confident with respect to pos-
sessing the skills necessary to teach using an ALS, yet not quite as con-
fident in terms of (1) providing students with personalized feedback and
(2) collecting and analyzing data produced by the ALS to enhance their
instructional practices—which are both key affordances of learning ana-
lytics dashboards (Knobbout & van der Stappen, 2018).
The most common categories of data leveraged by the surveyed faculty
were “topic mastery” and “topic completion” per student. This indicates
that faculty prefer viewing performance at the individual student level,
yet their interest trends toward high-level completion and mastery data,
rather than individual question metrics or time on task, which are more
indicative of student engagement with the learning content (cf. Hussain et
al., 2018; Kahu, 2013).
This tendency for faculty to prefer high-level performance data over
more granular student engagement data is not surprising, as main-
stream learning management systems (LMSs) have historically rendered
data about student progress through a centralized gradebook (Oyarzun
et al., 2020). Therefore, faculty are accustomed to deciphering mastery
and completion data. Yet, one of the main benefits of using an ALS is
that large amounts of student engagement data are being collected and
analyzed by the system to inform content recommendations, alternative
learning materials, questions at varying difficulty levels, and customized
feedback (Knobbout & van der Stappen, 2018). By not leveraging these
data, faculty members are less equipped to make predictions as to why
their students may or may not be successfully completing certain course activities (Dziuban et al., 2020).
As noted in previous studies on self-efficacy, internal and external
determinants, such as personal motivations (Barni et al., 2019), a sense
of control/powerlessness (Ashton et al., 1983), and pedagogical training
(Moore-Hayes, 2011) among other factors contribute to instructors feel-
ing confident enough (and willing) to adopt new practices related to the
usage of learning analytics dashboards. What remains to be determined,
however, is (1) the level of intervention needed to promote these personal
attributes in faculty members and (2) which actions should or should not
be taken (and when) to enhance the student learning experience and/or
improve one’s teaching approach. While these latter two items will not be
discussed at length in this study, they are, nevertheless, important points
for future discussion.

Significant finding #2
This study also found a positive relationship between instructors’ level
of confidence using an ALS and how often they leverage ALS data dash-
boards to improve their instructional practice(s).
Checking ALS dashboards weekly was by far the most common
practice among surveyed instructors, and certainly, monitoring student
progress and mastery at that rate can be helpful in determining student
performance at mid-range. There is, however, something to be said for
having access to data that measure student achievement and engagement
in real-time (cf. Holstein et al., 2018) and considering the impact that
data (across varying timescales) can have on instructional practices and
student learning.
What is reasonable to expect from an instructor who has anywhere
between 50 and 500 students in a single course? Are just-in-time inter-
ventions even sustainable by the most well-intentioned and hard-working
educators? There may come a point when monitoring student learning
data and intervening based on that data become unsustainable and inde-
pendent of self-efficacy, principally due to the sheer scale of interventions
that overburdens the human-side workload (Kornfield et al., 2018).
A promising feature of adaptive learning technology is that it can adjust
its behavior based on the data it receives from the “actors,” which in this
case are students. If an ALS has any sort of artificial intelligence/machine
learning component(s), these algorithms can learn to communicate relevant
information “just-in-time” to enhance student achievement (Teusner et al.,
2018), while taking some of the burden off of the instructors who may not
be able to take action on ALS dashboard analytics in a timely manner.
Regardless of the extent to which we discuss these findings, one question still remains: Does the frequency/timing of using dashboards to adjust
teaching practices actually affect student achievement and engagement?
While this question was not explicitly addressed in this chapter, it pro-
vides a sense of where future research might look next as an opportunity
to connect the instructor use of ALS dashboards with student learning
outcomes.

Qualitative confirmation of significant (quantitative) findings


As noted in the quantitative analysis, highly self-efficacious instructors
more frequently used ALS data dashboards to enhance their instructional
practices. Results from Research Questions 2 and 3 illustrated how these
highly self-efficacious instructors engage in dynamic practices, such as
assessing when students are ready to advance (e.g., effort required for stu-
dents to understand certain topics), providing students additional review
content, and informing the frequency and type of communication with
students (e.g., contacting students with lower reported engagement).
Although self-efficacy was not a significant determining factor in
instructors’ use of dashboard analytics to communicate with students, the
qualitative results confirm that instructors are reaching out to students
about their (1) performance and (2) engagement. This is helpful for not-
ing that ALS dashboards can broadly complement an already “common-
place” practice (i.e., communication) linked with student success metrics
(Kaufmann & Vallade, 2020; Bolliger & Martin, 2018).

A model approach and extension to a community of practice (CoP)


While dashboards can be a helpful tool to enhance instructional practices
and student learning, a lack of best practices is problematic (Arnold &
Pistilli, 2012).
One way to better understand this problem, however, is by contextu-
alizing it within an already existing framework of instructors’ perceived
usefulness of dashboards. Verbert et al. (2013) presented a learning ana-
lytics process model that illustrates four stages of applying data: Stage
1—Awareness; Stage 2—Reflection; Stage 3—Sensemaking; and Stage
4—Impact.
This process model is not only concise, but also aligns quite well with
our current experiences supporting faculty members applying ALS data to
teaching and learning.

Awareness. This stage is concerned with just data, which can be visualized
as activity streams, tabular overviews, or other visualizations.
Reflection. Data in themselves are not very useful. The reflection stage
focuses on users’ asking questions and assessing how useful and rel-
evant these are.
Sensemaking. This stage is concerned with users’ answering the ques-
tions identified in the reflection process and the creation of new
insights.
Impact. In the end, the goal is to induce new meaning or change behavior
if the user deems it useful to do so.
(Verbert et al., 2013, p. 1501)

As was indicated through this study’s survey findings, instructor awareness (Stage 1) of student course activity (i.e., performance data) was ubiq-
uitous, yet the number of faculty members who actually used that data to
reflect (Stage 2) on their instructional practices was much smaller. In fact,
nine instructors indicated “never” using the dashboards to adjust their teaching approach, which makes sensemaking (Stage 3) quite difficult.
However, instructors reported using insights from the data dashboards
to communicate with their students—illustrating that instructors’ first
order of concern (i.e., sensemaking) might very well be fueled by “reaction” rather than “reflection,” revealing either a disconnect from the impact their instructional practices have on the student experience or a
lack of time or confidence in knowing how to make effective pedagogical
change.
This study also aligns with Verbert et al. (2013) in observing that very
few instructors expressed a desire to adjust their current level of interac-
tion with data dashboards when asked (1) how dashboards inform their
teaching, (2) how they use dashboards to enhance student learning, and (3)
how they use data to inform future iterations of the course.
The use of dashboards varies greatly, and according to our findings,
some of it has to do with instructor confidence. However, one way to
promote growth in this area is through forming communities of practice
(CoP) focused on building self-efficacy around course instruction (cf. Inel
Ekici, 2018) using the data analytics provided by the ALS.
While an attempt was made to gather high-level data around an institu-
tion’s community of practice (CoP), this study was unable to reveal any
findings as they relate to self-efficacy and the usage of learning analytics
dashboards—although further investigation is necessary.
Independent of the extent to which an institution has or maintains a
CoP, research supports the notion that highly self-efficacious instructors
can, indeed, become so through vicarious experiences—observing others
successfully completing a single task or set of tasks (Seneviratne et al.,
2019; Bandura, 2000; Bandura, 2005).
Therefore, resources invested in forming and/or sustaining a CoP that supports (1) faculty development programming and/or (2) peer-to-peer
networking centered around ALS learning analytics and the application
of such data could be advantageous for institutions looking to enhance
instructor confidence using ALS dashboards and promote iterative course
design to improve student learning outcomes (Khosravi et al., 2021;
Klassen et al., 2011).

Future research
The findings from this study reinforce the role of instructor self-efficacy
in the use of ALS data dashboards, and although a couple of signifi-
cant findings were observed, further investigation might prove helpful
not only for improving the instructor experience when working with
ALS data analytics, but also for optimizing the student learning expe-
rience. For example, replicating a study after implementing a training
specifically designed to support instructor self-efficacy and ALS data
dashboard use could reveal a different set of findings (cf. Rienties et al.,
2018).
Scholars might also draw a parallel between a more highly researched
area, such as the impact student-facing dashboards (Santos et al., 2012)
have on academic achievement through self-regulation/learner awareness
(Bodily et al., 2018), and investigate how promoting self-regulated learn-
ing (SRL) in faculty development programming might increase levels of
self-efficacy, which could translate into more deliberate and effective use
of ALS data dashboards and, ultimately, student success (Zheng et al.,
2021; Schipper et al., 2018; Van Eekelen et al., 2005).

References
Amro, F., & Borup, J. (2019). Exploring blended teacher roles and obstacles
to success when using personalized learning software. Journal of Online
Learning Research, 5(3), 229–250.
Arnold, K. E., & Pistilli, M. D. (2012, April). Course signals at Purdue: Using
learning analytics to increase student success. In Proceedings of the 2nd
international conference on learning analytics and knowledge (pp. 267–270).
https://dl.acm.org/doi/10.1145/2330601.2330666
Ashton, P., Webb, R. B., & Doda, N. (1983). A study of teacher’s sense of
efficacy: Final report to the National Institute of Education, executive
summary. Florida University (ERIC Document Reproduction Service No.
ED231 833).
Attaran, M., Stark, J., & Stotler, D. (2018). Opportunities and challenges for big
data analytics in US higher education: A conceptual model for implementation.
Industry and Higher Education, 32(3), 169–182.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and
functioning. Educational Psychologist, 28(2), 117–148.
Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia
of human behavior (Vol. 4, pp. 71–81). Academic Press. (Reprinted in H.
Friedman [Ed.], Encyclopedia of mental health. Academic Press, 1998).
Bandura, A. (2000). Self-efficacy: The foundation of agency. Control of human
behavior, mental processes, and consciousness: Essays in honor of the 60th
birthday of August Flammer. Mahwah, N.J.: Lawrence Erlbaum Associates.
Bandura, A. (2005). Guide for constructing self-efficacy scales. In T. Urdan & F.
Pajares (Eds.), Self-efficacy beliefs of adolescents (pp. 307–337). Information
Age Publishing.
Barni, D., Danioni, F., & Benevene, P. (12 July 2019). Teachers’ self-efficacy:
The role of personal values and motivations for teaching. Frontiers in
Psychology, 10. https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01645/full
Bodily, R., Kay, J., Aleven, V., Jivet, I., Davis, D., Xhakaj, F., & Verbert, K. (2018,
March). Open learner models and learning analytics dashboards: A systematic
review. In Proceedings of the 8th international conference on learning analytics
and knowledge (pp. 41–50). https://doi.org/10.1145/3170358.3170409
Bolliger, D. U., & Martin, F. (2018). Instructor and student perceptions of online
student engagement strategies. Distance Education, 39(4), 568–583.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through
qualitative analysis. SAGE Publications.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge.
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative,
quantitative, and mixed methods approaches (5th ed.). SAGE Publications.
Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation
and personality. Psychological Review, 95(2), 256–273. https://doi.org/10.1037/0033-295X.95.2.256
Dziuban, C., Howlin, C., Moskal, P., Muhs, T., Johnson, C., Griffin, R., &
Hamilton, C. (2020). Adaptive analytics: It’s about time. Current Issues in
Emerging eLearning, 7(1). https://scholarworks.umb.edu/ciee/vol7/iss1/4
Field, A. (2018). Discovering statistics using IBM SPSS. Sage.
Guan, C., Mou, J., & Jiang, Z. (2020). Artificial intelligence innovation in
education: A twenty-year data-driven historical analysis. International Journal
of Innovation Studies, 4(4), 134–147.
Guzmán-Valenzuela, C., Gómez-González, C., Rojas-Murphy Tagle, A., &
Lorca-Vyhmeister, A. (2021). Learning analytics in higher education: A
preponderance of analytics but very little learning? International Journal of
Educational Technology in Higher Education, 18(1), 1–19.
Heymans, M. W. & Eekhout, I. (2019). Applied missing data analysis with SPSS
and "R" Studio. Self-published: https://bookdown.org/mwheymans/bookmi/
Holstein, K., McLaren, B. M., & Aleven, V. (2018, June). Student learning
benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms.
In International conference on artificial intelligence in education (pp. 154–168). Springer.
Hoy, A. W. (2004). Self-efficacy in college teaching. Essays on Teaching
Excellence: Toward the Best in the Academy, 15(7), 8–11.
Hussain, M., Zhu, W., Zhang, W., & Abidi, S. M. R. (2018). Student engagement
predictions in an e-learning system and their impact on student course
assessment scores. Computational Intelligence and Neuroscience, 2018.
https://pubmed.ncbi.nlm.nih.gov/30369946/
Inel Ekici, D. (2018). Development of pre-service teachers’ teaching self-efficacy
beliefs through an online community of practice. Asia Pacific Education
Review, 19(1), 27–40.
Johnson, C., & Zone, E. (2018). Achieving a scaled implementation of adaptive
learning through faculty engagement: A case study. Current Issues in Emerging
eLearning, 5(1), 7.
Kahu, E. R. (2013). Framing student engagement in higher education. Studies in
Higher Education, 38(5), 758–773.
Kaufmann, R., & Vallade, J. I. (2020). Exploring connections in the online
learning environment: Student perceptions of rapport, climate, and loneliness.
Interactive Learning Environments, 30, 1–15.
Keuning, T., & van Geel, M. (2021). Differentiated teaching with adaptive
learning systems and teacher dashboards: The teacher still matters most. IEEE
Transactions on Learning Technologies, 14(2), 201–210. https://doi​.org​/10​
.1109​/ TLT​. 2021​. 3072143
Khosravi, H., Shabaninejad, S., Bakharia, A., & Gasevic, D. (2021). Intelligent
learning analytics dashboards: Automated drill-down recommendations to
support teacher data exploration. Journal of Learning Analytics, 8(I). https://www.learning-analytics.info/index.php/JLA/article/view/7279
Klassen, R. M., Bong, M., Usher, E. L., Chong, W. H., Huan, V. S., Wong, I. Y.,
& Georgiou, T. (2009). Exploring the validity of a teachers’ self-efficacy scale
in five countries. Contemporary Educational Psychology, 34(1), 67–76.
Klassen, R. M., Tze, V., Betts, S. M., & Gordon, K. A. (2011). Teacher efficacy
research 1998–2009: Signs of progress or unfulfilled promise? Educational
Psychology Review, 23(1), 21–43.
Knobbout, J., & van der Stappen, E. (2018). Where Is the learning in learning
analytics? In V. Pammer-Schindler, M. Pérez-Sanagustín, H. Drachsler, R.
Elferink, & M. Scheffel (Eds.), Lifelong technology-enhanced learning (pp.
88–100). Springer International Publishing. https://doi.org/10.1007/978-3-319-98572-5_7
Knoop-van Campen, C., & Molenaar, I. (2020). How teachers integrate
dashboards into their feedback practices. Frontline Learning Research, 8(4),
37–51. https://doi.org/10.14786/flr.v8i4.641
Kornfield, R., Sarma, P. K., Shah, D. V., McTavish, F., Landucci, G., Pe-Romashko,
K., & Gustafson, D. H. (2018). Detecting recovery problems just in time:
Application of automated linguistic analysis and supervised machine learning
to an online substance abuse forum. Journal of Medical Internet Research,
20(6), e10136.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design
and implementation (3rd ed.). Jossey-Bass.
Molenaar, I., & Knoop-van Campen, C. (2017). Teacher dashboards in practice:
Usage and impact. In É. Lavoué, H. Drachsler, K. Verbert, J. Broisin, & M.
Pérez-Sanagustín (Eds.), Data driven approaches in digital education. EC-TEL
2017. Lecture Notes in Computer Science, 10474. https://doi.org/10.1007/978-3-319-66610-5_10
Moore-Hayes, C. (2011). Technology integration preparedness and its influence
on teacher-efficacy. Canadian Journal of Learning and Technology. Revue
Canadienne de l’Apprentissage et de la Technologie, 37(3). https://cjlt.ca/index.php/cjlt/article/view/26351
Morris, D. B., Usher, E. L., & Chen, J. A. (2017). Reconceptualizing the sources
of teaching self-efficacy: A critical review of emerging literature. Educational Psychology Review, 29(4), 795–833. https://doi.org/10.1007/s10648-016-9378-y
Ngwako, F. A. A. (2020). Influences of students’ perception of adaptive learning
systems on academic self-efficacy [Doctoral Dissertation]. Keiser University.
Norman, D. A. (1990). Cognitive artifacts. Department of Cognitive Science,
University of California.
O’Sullivan, P., Voegele, J., Buchan, T., Dottin, R., Goin Kono, K., Hamideh,
M., Howard, W. S., Todd, J., Tyson, L., Kruse, S., de Gruyter, J., & Berg, K.
(2020). Adaptive courseware implementation: Investigating alignment, course
redesign, and the student experience. Current Issues in Emerging eLearning,
7(1), 101–137. https://scholarworks.umb.edu/ciee/vol7/iss1/6
Oyarzun, B., Martin, F., & Moore, R. L. (2020). Time management matters:
Online faculty perceptions of helpfulness of time management strategies.
Distance Education, 41(1), 106–127.
Pajares, F. (2006). Self-efficacy during childhood and adolescence. Self-Efficacy
Beliefs of Adolescents, 5, 339–367.
Peng, H., Ma, S., & Spector, J. M. (2019). Personalized adaptive learning: An
emerging pedagogical approach enabled by a smart learning environment.
Smart Learning Environments, 6(1), 1–14.
Picciano, A. G., Dziuban, C. D., Graham, C. R., & Moskal, P. D. (Eds.). (2022).
Blending learning: Research perspectives (Vol. 3). Routledge.
Rienties, B., Herodotou, C., Olney, T., Schencks, M., & Boroowa, A. (2018).
Making sense of learning analytics dashboards: A technology acceptance
perspective of 95 teachers. International Review of Research in Open and
Distributed Learning, 19(5). https://www.irrodl.org/index.php/irrodl/article/view/3493
Rincón-Flores, E. G., Mena, J., López-Camacho, E., & Olmos, O. (2019, October).
Adaptive learning based on AI with predictive algorithms. In Proceedings
of the seventh international conference on technological ecosystems for
enhancing multiculturality (pp. 607–612). https://www.researchgate.net/publication/336934297_Adaptive_learning_based_on_AI_with_predictive_algorithms
Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). SAGE
Publications.
Santos, J. L., Govaerts, S., Verbert, K., & Duval, E. (2012, April). Goal-oriented
visualizations of activity tracking: A case study with engineering students. In
Proceedings of the 2nd international conference on learning analytics and
knowledge (pp. 143–152). https://dl.acm.org/doi/10.1145/2330601.2330639
Schipper, T., Goei, S. L., de Vries, S., & van Veen, K. (2018). Developing
teachers’ self-efficacy and adaptive teaching behaviour through lesson study.
International Journal of Educational Research, 88, 109–120.
Schwendimann, B. A., Rodriguez-Triana, M. J., Vozniuk, A., Prieto, L. P.,
Boroujeni, M. S., Holzer, A., Gillet, D., & Dillenbourg, P. (2016). Perceiving
learning at a glance: A systematic literature review of learning dashboard
research. IEEE Transactions on Learning Technologies, 10(1), 30–41.
Seneviratne, K., Hamid, J. A., Khatibi, A., Azam, F., & Sudasinghe, S. (2019).
Multi-faceted professional development designs for science teachers’ self-
efficacy for inquiry-based teaching: A critical review. Universal Journal of
Educational Research, 7(7), 1595–1611.
Shabaninejad, S., Khosravi, H., Indulska, M., Bakharia, A., & Isaias, P. (2020).
Automated insightful drill-down recommendations for learning analytics
dashboards. Paper presented at the International Learning Analytics and
Knowledge (LAK) Conference. https://www.researchgate.net/publication/338449127_Automated_Insightful_Drill-Down_Recommendations_for_Learning_Analytics_Dashboards
Teusner, R., Hille, T., & Staubitz, T. (2018, June). Effects of automated
interventions in programming assignments: Evidence from a field experiment.
In Proceedings of the fifth annual ACM conference on learning at scale (pp.
1–10). https://dl.acm.org/doi/10.1145/3231644.3231650
Tschannen-Moran, M., & Hoy, A. W. (2001). Teacher efficacy: Capturing an
elusive construct. Teaching and Teacher Education, 17(7), 783–805.
Van Eekelen, I. M. V., Boshuizen, H. P. A., & Vermunt, J. D. (2005). Self-
regulation in higher education teacher learning. Higher Education, 50(3),
447–471.
Verbert, K., Duval, E., Klerkx, J., Govaerts, S., & Santos, J. L. (2013). Learning
analytics dashboard applications. American Behavioral Scientist, 57(10),
1500–1509.
Wang, L., Ertmer, P. A., & Newby, T. J. (2004). Increasing preservice teachers’
self-efficacy beliefs for technology integration. Journal of Research on
Technology in Education, 36(3), 231–250. http://doi.org/10.1080/15391523.2004.10782414
Zheng, J., Huang, L., Li, S., Lajoie, S. P., Chen, Y., & Hmelo-Silver, C. E. (2021).
Self-regulation and emotion matter: A case study of instructor interactions
with a learning analytics dashboard. Computers and Education, 161, 104061.
10
DATA ANALYTICS IN ADAPTIVE
LEARNING FOR EQUITABLE OUTCOMES
Jeremy Anderson and Maura Devlin

The United States’ higher education system is rife with inequities along race
and class “fault” lines, with students belonging to first-generation, lower-
socioeconomic, or racially diverse groups not benefiting from the promises
of higher education as much as their more privileged and affluent White
peers. News outlets commonly report on college admissions processes favor-
ing the privileged (Jack, 2020; Saul & Hartocollis, 2022); higher educa-
tion’s uneven return on investment (Carrns, 2021; Marcus, 2021); and, to
a lesser extent, college degree attainment data by race and socioeconomic
status (Edsall, 2021; Nietzel, 2021). Notable lawsuits arising from perceived
inequities in race-based admissions processes make their way to the Supreme
Court (Brown & Douglas-Gabriel, 2016; Krantz & Fernandes, 2022; Jasick,
2022c). Analyses of persistence, retention, and debt load related to social class
are becoming increasingly prevalent, at least among think tanks and public
interest groups (Brown, 2021; Rothwell, 2015; Libassi, 2018). The Varsity
Blues scandal of 2019 (Medina, Benner & Taylor, 2019) brought to light
practices that the wealthy deploy to ensure their progeny succeed in optimiz-
ing higher education. Public discourse about whether higher education is still
a force for social mobility is common (Stevens, 2019), and college scorecards
and ranking organizations (U.S. News and World Reports, n.d.) have incor-
porated measures of social mobility to their data tools.

Required data reporting focuses on equitable credential attainment
This public disclosure of higher education’s inequities has resulted in mod-
est progress in some areas aimed at leveling the playing field for those for whom higher education has historically been denied. Legacy admissions practices, which favor the offspring of those from prior classes at a given institution, have been scrutinized as an inherently inequitable practice, and
several elite liberal arts colleges have stopped this long-standing practice
(Fortin, 2021; Murphy, 2021). Other well-endowed colleges have focused
specifically on low-income students, such as raising the family income
limits for institutional funding (Merriman, 2021), providing additional
support for ad hoc costs (Jasick, 2022a), or offering grants rather than
loans (Jasick, 2022b). Philanthropists such as McKenzie Scott and the Bill
and Melinda Gates Foundation have contributed millions of dollars to
Historically Black Colleges and Universities (HBCUs) and other less well-
endowed colleges in efforts to equalize the unequal system (Anderson,
2021; Complete College America, 2021).
Systemwide data reporting and analysis have facilitated much of our
understanding of inequities in higher education and consequently many
of these incremental changes. The US Department of Education requires
higher education institutions to report persistence, retention, and attain-
ment data into its Integrated Postsecondary Education Data System (IPEDS).
The federal government provides data to researchers through the National
Center for Education Statistics (NCES). The federal government also
develops and provides college scorecards to promote equality of infor-
mation. Researchers, analysts, policymakers, and think tanks can mine
these holistic data to gauge the system’s inequities and hold postsecondary
education accountable, leading to some of the progress mentioned above.
The metrics used to gauge the equity promise across the higher educa-
tion system largely relate to access, persistence, debt load, and degree com-
pletion (Cahalan et al., 2021). Analyses based on these federal datasets
are essentially analyses of access and credential attainment. Credentials
signal to the marketplace or workforce a perceived set of desirable hiring
attributes, such as persistence, grit, self-regulation, discipline, or mastery
of an ability to acquire new knowledge. However, given the raging debate
and widely known inequities in access, credentials earned upon comple-
tion of a degree may also signal social class or racial disparities more than
merit (Bills, 2019).

Challenges to data on equitable learning


Despite federally required data reporting, there remains a significant gap
in our collective understanding of equitable learning across higher educa-
tion institutions. The US higher education system, as a somewhat chaotic
patchwork of 4,000+ institutions, requires no universal tests or consistent
measures of learning in order to grant credentials. With a decentralized
approach, faculty at each institution determine whether students are learning, with potentially inequitable consequences for students with diverse
characteristics.
At the same time, the process of learning is moving away from centu-
ries-old formats of face-to-face lecture and paper-based submissions to
digitized learning management systems and online repositories of learning
data. In addition, institutions are deploying platforms and technologies to
assist with personalized student learning. Educational technology allows
faculty to monitor formative student learning data on dashboards, often
in near-real time to intervene more rapidly and possibly change students’
learning trajectories as they progress through courses. Educational tech-
nologies that leverage instructional capacity and move beyond traditional
lecture to active learning strategies hold promise for ensuring equitable
learning across a diverse classroom of students.

Promise of educational equity using adaptive learning technology
Personalized learning has been hypothesized to bridge learning gaps among
disparate groups because it harnesses technology that allows the instruc-
tor to provide personalized, formative support to ensure that all students
learn equitably, based on individualized needs. There have been some
indications of success in achieving this promise. One meta-analysis found
that certain versions of educational technology achieved outcomes simi-
lar to one-on-one personalized tutoring (VanLehn, 2011). Early research
on student learning in ALSs in online environments, specifically, suggests
that ALSs facilitate learning at the same or better levels than in face-to-
face or hybrid courses without adaptive learning (Dziuban et al., 2017,
2018). A recent mixed-methods study using data across a wide variety of
courses and disciplines from the authors yielded mixed results related to
learning via ALSs. Quantitative learning data from blended courses using
the ALS were generally more positive than negative compared to tradi-
tional on-ground courses, while qualitative data showed favorable stu-
dent perceptions of learning with ALSs in this modality (Anderson et al.,
2021). Researchers have also found that personalized learning through an
ALS stabilized learning organization across universities among nursing
and math students (Dziuban et al., 2018). If replicable, this model could
potentially be impactful for equitable learning outcomes across the decen-
tralized system of US higher education.
The research is lacking regarding ALSs and their facilitation of aca-
demic achievement among students with diverse characteristics. Not hav-
ing this data reported on a large scale, such as through IPEDS, makes it
difficult to gauge whether tools such as adaptive learning systems impact the inequities that are so visible in admission processes and credential
earning. Research examining the impact of educational technology plat-
forms by diversity, equity, and inclusion (DEI) metrics is critically impor-
tant to gauge whether they help bridge within-course divides that may
exist along race and class lines (Devlin et al., 2021).

Purpose of this study


This study explored data from an adaptive learning system by race and
social class at a small private women’s undergraduate college in the
Northeast, Bay Path University. Because the institution is a women’s col-
lege, the researchers did not explore whether the system impacted gendered
learning. The study was undertaken in the adult-serving undergraduate
division of the college, The American Women’s College (TAWC), which
began as an accelerated (6 week courses) weekend college in 1999 and
migrated to a fully online asynchronous model starting in 2011. TAWC
enrolls approximately 1,200 adult women annually, with an average age
of 34 years and an average of 44 transfer credits (though some students
start with no prior college learning and some transfer in as many as
90 college credits). The student population is 44.7% Pell-grant eligible,
49.7% women of color, and 41.3% first-generation college goers. TAWC
uses a centralized course management model in developing and delivering
its online courses, such that each section of every course, regardless of
the assigned faculty member, uses the same learning content and activi-
ties. It began leveraging an adaptive learning system from one educational
technology company, Realizeit, in 2015, with the benefit of a federal
grant. Through its centralized approach, TAWC was able to deploy adap-
tive learning in about 25% of its courses. The courses chosen for this
study, the math sequence and English sequence, were chosen because of
the stability of their high course enrollments across sections, the relatively
advanced state of their development with the ALS, and because they are
gateway courses in the core curriculum and required in almost all of the
degree programs.
The surrogate used for learning was the students’ final grades for courses
utilizing the ALS as a means of content acquisition in every weekly course
module. In this study, diverse characteristics among students were defined
as follows: Pell grant eligibility was used as a measure for low-income
students, first-generation was used to define students with less social capi-
tal than their non-first-generation counterparts, and racial diversity was
defined by students’ ethnicity self-identification. All of these variables
were captured in the university’s student information system (SIS).
The research questions this study addressed are:

RQ #1: Does ALS use in a course increase the academic performance of Pell-eligible students when compared to comparable sections with tra-
ditional, nonadaptive learning?
RQ #2: Does ALS use in a course increase the academic performance of
first-generation students when compared to comparable sections with
traditional, nonadaptive learning?
RQ #3: Does ALS use in a course increase the academic performance of
racially diverse students when compared to comparable sections with
traditional, nonadaptive learning?

Methodology
This post-test only study with nonequivalent groups sought to answer
the research question of whether exposure to adaptive learning activities
resulted in improved learning outcomes for students who had diverse char-
acteristics. The comparison groups under study were students enrolled
in sections of courses utilizing embedded adaptive learning activities,
compared with comparable sections without adaptive learning activities.
Further, each group was disaggregated according to certain characteris-
tics—Pell eligibility, race and ethnicity, and first-generation status. Course
grades on a typical 4.0 scale (with plus or minus grading) were used as
the dependent variable. Increased average grades for Pell-eligible, first-
generation, and Black or African American and Hispanic students would
support the hypotheses and support continued investment in designing
adaptive courses at the institution and in contexts with similarly diverse
student populations.
The following hypotheses were tested using five years of course out-
comes data stored in the SIS at the college:

Hypothesis #1: The average final grade, aggregated by course, will be higher for Pell-eligible students enrolled in sections that included ALS
activities than for Pell-eligible students enrolled in traditional sections.
Hypothesis #2: The average final grade, aggregated by course, will be
higher for first-generation students enrolled in sections that included
ALS activities than for first-generation students enrolled in traditional
sections.
Hypothesis #3: The average final grade, aggregated by course, will be
higher for Black or African American and Hispanic students enrolled
in sections that included ALS activities than for Black or African
American and Hispanic students enrolled in traditional sections. (Other
races were included in the data set but did not have sufficient counts to
ensure the preservation of anonymity so were excluded from analysis.)

Preparing the data


The dependent variables in the study were average grades for each course
under study. Faculty members entered a letter grade according to the insti-
tutional grading matrix (Table 10.1) for each student who was enrolled
in a section after the drop deadline. To facilitate analysis, the research-
ers translated the letter grades to their 4.0 scale equivalents. Grades of
W (withdraw) were coded as blank and excluded from the analysis. The
resulting file contained 17,120 grades earned and 829 withdrawals across
766 sections of 62 courses. Also in the file were columns for race/ethnicity
(American Indian or Alaska Native, Asian, Black or African American,
Hispanic, Native Hawaiian or Other Pacific Islander, Non-resident alien,
two or more races, unknown, White), Pell eligibility for the term when the
student was enrolled in the target course section (yes, no), and first-gener-
ation status (yes, no; unknown status excluded from analysis). Each of the
categories within these demographic variables was recoded as a number
prior to analysis for proper grouping in SPSS. The final data element,
presence of adaptive learning in the course section, was embedded in the
course section numbering schema used by the institution. The researchers
extracted this status into a separate column with a 1 value for “contained
adaptive learning” and a 0 value for “did not contain adaptive learning.”
To preserve the anonymity of individual students, students who were
easily identified in a course section based on their demographic were
excluded. With a target of five or more students in a given demographic,

TABLE 10.1 Course Letter Grades with Grade Point Equivalents

Letter Grade      Grade Points

A                 4.00
A-                3.67
B+                3.33
B                 3.00
B-                2.67
C+                2.33
C                 2.00
C-                1.67
D+                1.33
D                 1.00
F                 0.00

courses had sufficient numbers of students who were Black or African American, Hispanic, and White. Other races and ethnicities were excluded
from analyses to preserve student anonymity. Pell eligibility and first-gen-
eration status were sufficiently large in cell size to include these
variables in analyses for all courses.
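
As a concrete illustration of these preparation steps, the following is a minimal Python sketch; it is not the authors’ actual workflow, and the column names and the section-numbering rule used to flag adaptive sections are assumptions made purely for the example.

import pandas as pd

# Letter-grade to grade-point mapping from Table 10.1.
GRADE_POINTS = {
    "A": 4.00, "A-": 3.67, "B+": 3.33, "B": 3.00, "B-": 2.67,
    "C+": 2.33, "C": 2.00, "C-": 1.67, "D+": 1.33, "D": 1.00, "F": 0.00,
}

def prepare_grades(df: pd.DataFrame) -> pd.DataFrame:
    """Translate letter grades to the 4.0 scale and flag adaptive sections."""
    out = df.copy()
    # Grades of W (withdraw) are absent from the mapping, become missing
    # values, and are excluded from analysis, as described above.
    out["grade_points"] = out["letter_grade"].map(GRADE_POINTS)
    out = out.dropna(subset=["grade_points"])
    # Placeholder rule: the institution embeds adaptive status in its section
    # numbering schema; an "A" suffix is assumed here only for illustration.
    out["adaptive"] = out["section_number"].str.endswith("A").astype(int)
    return out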

Analysis plan
In preparing data for analyses, the researchers first divided the file into a
series of six files, one per course. The split file function in SPSS allowed
for dividing each course file by the categories of independent variables
(Pell eligibility, first-generation status, race/ethnicity) addressed by each
hypothesis. The final step in setting up the analyses was to use the pres-
ence of adaptive learning as the grouping variable to compare means, sub-
tracting the mean of the subgroup without adaptive learning from the
corresponding mean of the subgroup with adaptive learning from each
course. An example analysis, therefore, was a comparison of average
grades earned by Black or African American students in those sections of
Statistics (MAT120) with adaptive learning versus those without.
The researchers used the Student’s t-test to compare means unless vari-
ances were unequal, in which case Welch’s t-test was substituted (Field,
2018), since this test has been found to be robust even if model assump-
tions are not upheld (Lumley et al., 2002). Cohen’s d was used for cal-
culating effect sizes, which were classified as small (0.2), medium (0.5), or
large (0.8) (Cohen, 1988).
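
The analyses were conducted in statistical software (SPSS); purely as an illustration, the same logic can be sketched in Python, assuming the prepared data frame from the previous example plus a hypothetical "course" column, and using Levene’s test (a common variance check the chapter does not name) to decide between the Student’s and Welch’s t-tests.

import numpy as np
from scipy import stats

def compare_subgroup(df, course, subgroup_col, subgroup_value):
    """Compare mean grades in adaptive vs. non-adaptive sections of one course."""
    sub = df[(df["course"] == course) & (df[subgroup_col] == subgroup_value)]
    adaptive = sub.loc[sub["adaptive"] == 1, "grade_points"]
    traditional = sub.loc[sub["adaptive"] == 0, "grade_points"]

    # Substitute Welch's t-test when the variances appear unequal.
    equal_var = stats.levene(adaptive, traditional).pvalue >= 0.05
    t, p = stats.ttest_ind(adaptive, traditional, equal_var=equal_var)

    # Cohen's d with a pooled standard deviation; interpreted as small (0.2),
    # medium (0.5), or large (0.8) per Cohen (1988).
    n1, n2 = len(adaptive), len(traditional)
    pooled_sd = np.sqrt(((n1 - 1) * adaptive.var(ddof=1) +
                         (n2 - 1) * traditional.var(ddof=1)) / (n1 + n2 - 2))
    d = (adaptive.mean() - traditional.mean()) / pooled_sd
    return t, p, d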

Results
Math courses
The average course grade for Pell-eligible students increased more than
non-Pell-eligible students in all three math courses after adaptive learn-
ing was adopted. All improvements in average grade are in terms of a 4.0
scale, such that a difference of 0.33 is the equivalent of one grade incre-
ment (from B to B+, e.g.). In MAT104, the difference was a 0.51-point
increase for Pell-eligible students versus 0.01 point for non-Pell-eligible; in
MAT112 it was 0.46 versus 0.39; and in MAT120 it was 0.35 versus 0.31.
The gains for Pell-eligible students in MAT112 (t(195) = 2.39, p = 0.018)
and MAT120 (t(454) = 2.93, p = 0.004) were significant for those in sec-
tions with adaptive learning, with small-to-medium and small effect sizes
(d = 0.37 in MAT112; d = 0.30 in MAT120). Findings were not signifi-
cant for MAT104, though the effect size was small to medium (d = 0.38).
There is good support for Hypothesis 1 in math courses as a result of these
analyses. In general, these trends show promise for adaptive learning’s ability to provide positive math outcomes for Pell-eligible students.
First-generation students’ average course grades trended higher in
all three math courses for those enrolled in the adaptive learning sec-
tions, though only at a higher rate than continuing generation students in
MAT112. The differences were 0.04 points for first-generation students
versus 0.50 points for continuing generation students in MAT104, 0.90
points versus 0.38 points in MAT112, and 0.37 points versus 0.59 points
in MAT120. The gains for first-generation students were significant and of large and medium effect sizes in MAT112 (t(88) = 3.43, p = 0.001, d
= 0.76) and MAT120 (t(230) = 2.30, p = 0.022, d = 0.47). The differences
were not significant in MAT104 and the effect size was small (d = 0.04).
These analyses suggest that there is good support for Hypothesis 2 in
math courses. In general, adaptive learning shows promise for increasing
first-generation students’ performance in math.
Average course grades improved for all race/ethnicity groups in all
three math courses except for a decrease of 0.28 points for White students
in MAT104 and of 0.49 points for Black or African American students
in MAT112. Otherwise, average course grades improved by 0.89 points
for Black or African American students and 0.09 points for Hispanic stu-
dents in MAT104; by 0.43 points for Hispanic students and 0.58 points
for White students in MAT112; and by 0.16 points for Black or African
American students, 0.35 points for Hispanic students, and 0.26 points for
White students in MAT120. These differences were statistically significant
for White students (t(62.0) = 2.77, p = 0.007) with a medium effect size (d
= 0.52) in MAT112 and for Hispanic students (t(111.9) = 2.00, p = 0.048)
in MAT120 with a small effect size (d = 0.30). Differences approached
significance for Black or African American students in MAT104 (t(49)
= 1.90, p = 0.063) with a medium effect size (d = 0.61) and MAT120
(t(85.7) = 1.88, p = 0.063) with a small effect size (d = 0.13). Even though
the differences were not significant, the positive trend in course grades
with small to medium effects for Black or African American and Hispanic
students in math courses lends some support to Hypothesis 3 and shows
adaptive learning’s promise for possibly increasing student success for stu-
dents across minority representation.
Full analyses and descriptive statistics for math courses are presented
in Table 10.2.

English courses
Impacts of adaptive learning in English courses were mixed for Pell-
eligible students. Average grades dropped by 0.18 points in ENG114 and
TABLE 10.2 Descriptive Statistics and t-test Results for Adaptive and Non-Adaptive Sections of Math Courses
           Disaggregated by Demographic Groups

                               Non-adaptive              Adaptive
Course   Subpopulation         n     M      SD           n     M      SD       t value   p

MAT104   Pell                  25    2.37   1.49         190   2.88   1.15     –0.02     .984
         Not Pell              12    3.14   1.24         111   3.13   1.09      1.63     .114
         First Gen             10    3.03   1.22         79    3.07   1.04      0.11     .915
         Not First Gen         5     2.67   1.83         22    3.17   0.82      0.54 a   .626
         Black/African Am      11    1.76   1.61         40    2.65   1.31      1.90     .063
         Hispanic              14    2.88   0.83         76    2.97   1.12      0.27     .790
         White                 22    3.35   0.98         145   3.07   1.10     –1.12     .265
MAT112   Pell                  57    2.37   1.32         140   2.83   1.16      2.39     .018 b
         Not Pell              25    2.59   1.27         83    2.98   1.22      1.42     .159
         First Gen             28    2.24   1.27         62    3.14   1.10      3.43     .001 b
         Not First Gen         8     2.17   1.87         26    2.55   1.39      0.63     .531
         Black/African Am      13    2.31   1.21         27    1.82   1.34     –1.12     .268
         Hispanic              28    2.50   1.19         39    2.93   1.26      1.41     .163
         White                 46    2.63   1.31         120   3.21   0.90      2.77 a   .007 b
MAT120   Pell                  132   2.56   1.24         324   2.91   1.13      2.93     .004 b
         Not Pell              144   2.91   1.16         303   3.22   1.11      2.71     .007 b
         First Gen             78    2.67   1.14         154   3.04   1.18      2.30     .022 b
         Not First Gen         24    2.64   1.40         54    3.23   1.17      1.95     .055
         Black/African Am      40    2.32   1.17         80    2.48   1.32      1.88 a   .063
         Hispanic              60    2.55   1.25         100   2.90   1.10      2.00 a   .048 b
         White                 197   3.00   1.11         356   3.26   1.02      0.92     .357

n:  observations of grades awarded, excluding nonnumeric values (grades of withdrawal, i.e.)
M:  average grade using a four-point grade point scale
a   results of a Welch’s t-test; equal variances not assumed
b   significant at p < .05

0.17 points in ENG134, though the average improved by 0.14 points in ENG124. None of these changes was significant and all had small
effect sizes ranging from 0.10 to 0.15. Decreases in average grades were
less for Pell-eligible students in ENG124 and ENG134 than they were
for non-Pell-eligible students. Still, the result of these analyses do not
support Hypothesis 1. Overall, these results did not support the use of
adaptive learning to increase student success for Pell-eligible students
in English.
First-generation students similarly experienced mixed results when
comparing adaptive and non-adaptive sections. In ENG114, the aver-
age grade improved by 0.18 points and a similar increase of 0.22 points
occurred in ENG124 with effect sizes falling in the small range at 0.12
and 0.32, respectively. First-generation students in ENG134, however,
experienced an average decline of 0.28 points while continuing gen-
eration students saw an improvement of 0.22 points with adaptive
learning. Improvements to course averages were stronger for continu-
ing generation students in ENG114 (0.83 points) and ENG124 (0.78
points). These findings, coupled with a lack of significance and small
effect sizes in all three courses for first-generation students, serve to
reject Hypothesis 2 in English courses and therefore do not support
the use of adaptive learning for increasing student success for these
students in English.
Impacts for students of different races and ethnicities were unique
to each course. In ENG114, Black or African American (–0.66 points)
and Hispanic (–0.12 points) students in adaptive learning sections saw
drops in average grades, while White students experienced an improve-
ment in course grades (0.15 points). The decrease was significant and of
a medium effect size for Black or African American students (t(98.53)
= –2.72, p = 0.008, d = 0.50). Results were more positive in ENG124
for Black or African American (0.06 point increase) and Hispanic (0.13
point increase) students than for White students (0.01 point decrease).
These differences were not significant and were of small effect sizes
ranging from 0.01 to 0.09. Finally, course grades in ENG134 dropped
for Black or African American students (–0.25 points) with adaptive
learning, improved slightly for Hispanic students (0.01 points), and
significantly dropped by 0.30 points for White students (t(311.87) =
–2.41, p = 0.017, d = 0.24). Overall, the results are mixed when exam-
ining adaptive learning’s impact on student success by ethnicity.
Full analyses and descriptive statistics for English courses are presented
in Table 10.3.
TABLE 10.3 Descriptive Statistics and t-test Results for Adaptive and Non-Adaptive Sections of English Courses
           Disaggregated by Demographic Groups

                               Non-adaptive              Adaptive
Course   Sub-population        n     M      SD           n     M      SD       t value   p

ENG114   Pell                  140   3.17   1.10         306   2.99   1.28     –1.48 a   .141
         Not Pell              90    3.00   1.41         162   3.10   1.38      0.57     .572
         First Gen             33    2.75   1.49         171   2.93   1.43      0.67     .502
         Not First Gen         6     2.61   1.50         50    3.44   1.07      1.72     .09
         Black/African Am      40    3.20   1.14         83    2.54   1.49     –2.72 a   .008 b
         Hispanic              54    2.87   1.45         123   2.79   1.36     –0.35     .727
         White                 101   3.20   1.15         220   3.35   1.13      1.12     .265
ENG124   Pell                  410   2.84   1.39         771   2.98   1.33      1.67     .096
         Not Pell              292   3.31   1.15         540   3.19   1.27     –1.33     .184
         First Gen             93    2.86   1.37         446   3.08   1.36      1.47     .143
         Not First Gen         23    2.49   1.77         158   3.27   1.19      2.04 a   .052
         Black/African Am      109   2.68   1.35         196   2.74   1.37      0.41     .679
         Hispanic              139   2.60   1.49         302   2.73   1.45      0.85     .396
         White                 388   3.34   1.11         690   3.33   1.12     –0.10     .919
ENG134   Pell                  153   2.70   1.27         319   2.53   1.30     –1.42     .157
         Not Pell              103   3.28   1.00         210   2.86   1.30     –3.11 a   .002 b
         First Gen             70    2.91   1.26         139   2.63   1.35     –1.47     .145
         Not First Gen         12    2.70   1.33         49    2.92   1.28      0.54     .593
         Black/African Am      43    2.60   1.21         89    2.35   1.22     –1.10     .273
         Hispanic              50    2.60   1.24         118   2.61   1.33      0.04     .972
         White                 138   3.12   1.14         275   2.82   1.32     –2.41 a   .017 b

n:  observations of grades awarded, excluding non-numeric values (grades of withdrawal, i.e.)
M:  average grade using a four-point grade point scale
a   results of a Welch’s t-test; equal variances not assumed
b   significant at p < .05

Implications
Implications for practice
The goal of this study was to examine whether the use of adaptive learning
can positively impact course grades for learners of diverse demographic
backgrounds. The authors compared grades for students who differed by
continuing and first-generational status, race, and ethnicity groups, and
income levels (using Pell eligibility as a proxy) for classes utilizing adaptive
learning, compared with prior nonadaptive sections. Findings for math
courses were mixed but mostly positive for Pell-eligible, first-generation,
Black or African American, and Hispanic students and often were signifi-
cant or approaching significance with effect sizes ranging from small to
large. Student outcomes were strongest in Applied College Math (MAT
112) and Statistics (MAT120). Institutions and faculty members looking
to address equity gaps in mathematics at the course level should consider
adopting adaptive learning. Remedial courses at state systems often rep-
resent a disproportionate and stagnating starting place for low income
and diverse learners (Burke, 2022). Adaptive technology in math courses,
where students can remediate within college-level, credit-bearing courses
that use the technology’s content and algorithms, can allow the faculty
member to intervene in a personalized way. More research is needed on
when and how to implement this technology effectively in these courses,
but this research shows promise.
There was a contrast in the case of English courses where most findings
were mixed or negative with grades decreasing for many groups engaged
with adaptive learning. These findings most often were not significant and
had nearly universally small effect sizes. Still, adaptive learning may not
be as effective at impacting equity in English courses, so institutions and
instructors should be selective in adopting this technology in that context.
The other implications of this study revolve around measuring the con-
cept of equity and using it to inform course design projects. Decision-
makers should define their equity framework and goals ahead of time.
Determining which approach to operationalizing equity—for example,
calculating differences in subgroup versus overall average, setting a crite-
rion threshold to compare subgroups to a reference group (e.g., top achiev-
ing), or creating an index comparing the proportions of the subgroup in the
educational outcome group versus the overall population (Sosa, 2018)—
enables administrators, faculty, and staff to work collaboratively with the
same student success goals in mind. Those involved in selecting educa-
tional technologies should use the defined equitable measure of effective-
ness in student outcomes to inform decision-making. A baseline measure
should be obtained to document achievement gaps across subpopulations.
Technologies then should be matched to equity needs by context based on
efficacy research. Conducting a post-analysis of student outcomes within
and across groups would determine if equity has improved and pinpoint
where more work is needed. Such research should be ongoing and longi-
tudinal to ensure reductions of inequities are sustained over time. Gap
scores should improve for groups after a technology is introduced to a
course. If not, the implication would be to adjust the implementation or
discontinue the adoption. Longitudinal analyses should continue to gauge
whether the technology and instructional method are improving student
outcomes. This research, when embedded within the institution’s equity
framework and goals, can also help inform course design and student and
faculty support.
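As a concrete illustration of the three ways of operationalizing equity mentioned above, the short sketch below computes a gap against the overall average, a gap against a reference group, and a Sosa (2018)-style proportionality index. All counts are hypothetical; an institution would substitute its own subgroup and outcome totals.

```python
# Hypothetical counts only (not institutional data). "Success" here means
# completing the course with a C or better; swap in whatever equitable
# measure of effectiveness the institution has defined.
subgroup_total, subgroup_success = 200, 120     # hypothetical subgroup
overall_total, overall_success = 1000, 700      # hypothetical full cohort
reference_total, reference_success = 400, 330   # hypothetical reference group

subgroup_rate = subgroup_success / subgroup_total
overall_rate = overall_success / overall_total
reference_rate = reference_success / reference_total

# 1. Difference between the subgroup rate and the overall average.
gap_vs_overall = subgroup_rate - overall_rate

# 2. Percentage-point gap against a reference (e.g., top-achieving) group.
gap_vs_reference = subgroup_rate - reference_rate

# 3. Proportionality index (Sosa, 2018): the subgroup's share of successful
#    students divided by its share of the overall population; values well
#    below 1.0 flag disproportionate impact.
prop_index = (subgroup_success / overall_success) / (subgroup_total / overall_total)

print(f"gap vs. overall:       {gap_vs_overall:+.3f}")
print(f"gap vs. reference:     {gap_vs_reference:+.3f}")
print(f"proportionality index: {prop_index:.2f}")
```

Tracking these same quantities before and after an adaptive learning implementation is one way to operationalize the baseline and post-analysis steps described above.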
Subgroup analysis in this study was enabled by the institution’s use of
a centralized course management model in delivering its courses. Each
section contained the same learning content, adaptive learning system
questions and content, and course activities, regardless of instructor. This
allowed the current researchers to aggregate subgroup populations across
course sections (Manly, 2022), though even with this research design
capability, as noted previously, certain subgroup populations did not
have representation that allowed for statistical analysis. If researchers and
higher education administrators truly want to move beyond credential-
earning, outcomes-oriented data (i.e., end-of-course grades), which are by
definition too late to help disadvantaged students change course toward
academic success, then they would be wise to look at data analytics within
courses, such as those provided within ALSs. Because individual courses at
most institutions are the purview of individual faculty members, however,
just-in-time analysis and utilization of comparable data that can address
equity gaps may be difficult. To harness data to inform equitable
learning in ways that can improve students’ academic trajectories, faculty,
instructional designers, administrators, and researchers should collabo-
rate in proactively developing and delivering courses and course sections
that optimize consistent and useful data collection.
Findings suggest that ALSs may close equity learning gaps among
disadvantaged students, particularly in some courses. Another potential
implication to practice would be to change the way higher education insti-
tutions conduct their assessment of student learning. Higher educational
institutions, often sparked by requirements promulgated by regional
accreditors, generally require faculty and department chairs to assess stu-
dent learning at one point in each academic year, typically the end of the
traditional Spring semester. However, this annual assessment cycle may be
too infrequent and too retrospective (Maki, 2017) to quickly close student
learning gaps through any significant intervention. The findings from this
study related to learning among diverse subgroups in adaptive learning
courses suggest that assessment of student learning must become a con-
tinuous analytical process.

Implications for research


A meta-analysis of adaptive learning from Xie et al. (2019) found that
impacts on cognition, including learning outcomes, were positive or
mixed in 43 studies that ranged over a variety of subject areas and levels
of learners. The current study contributes to the literature by providing a
counterexample of negative outcomes in English courses, both at the dis-
aggregated level of subpopulations and the aggregated course level. More
research is needed to determine if the findings in this study are consist-
ent in other institutions’ contexts. One possible explanation is that the
11 studies in the meta-analysis that included the impact of ALS in lan-
guage courses constituted a small sample size that was not fully represent-
ative of the outcomes in wider practice. In addition, “language courses”
is a comprehensive category that includes all forms of language education
(Fu & Hwang, 2018).
A possible confound in this study is that the implementation of adap-
tive learning may have been different between the two particular course
sequences. MAT112 has a prerequisite of Fundamentals of Algebra,
MAT104, which is also a course developed and delivered in the adap-
tive learning system. MAT120 is a standalone course, not requiring any
prerequisites, whether adaptive or not. The English courses, ENG114
and ENG124, are both adaptive courses, with ENG114 serving as a
prerequisite to ENG124. Lending support to this possibility is the fact that each
sequence had its own dedicated pairing of an academic program direc-
tor and course designer overseeing course development. It is conceivable
that each sequence of courses was designed and taught similarly because
of the consistency of personnel. Completing a qualitative investigation of
the goals and approaches of embedding adaptive learning could lend a
richer interpretation to the findings of the current study. Such a study also
would benefit from a quantitative component to examine the relationship
between the instructor of record and student outcomes.
This study also used only course grades as the outcome variable. An
extension of this study could examine course withdrawals in adaptive
courses by subgroups, since course withdrawals were omitted in the cur-
rent research project. Additional designs could take this research in new
directions. One path forward could be to look at other quantifiable out-
comes such as course retention rates, counts of cumulative credits earned,
persistence rates, and graduation rates. These data were available for the
current study’s population and would add to a more longitudinal under-
standing of the extent to which adaptive learning impacts students from
diverse backgrounds across their educational journeys.
Another potential branch to the research could be a review of student
feedback by group before and after the introduction of adaptive learn-
ing. Again, while out of scope of the current study, the institution could
match course evaluation results to individual students and join demo-
graphic data in the same way as this study did. Applying a deductive approach
would be possible as there is a schema of participant characteristics to
consider and prior research on students’ perceptions of adaptive learning
(Anderson et al., 2021). Both of these research directions would provide a
more holistic understanding of how diverse learners might experience and
benefit from adaptive learning in different courses.
Learner agency, motivation, and math self-identity within adaptive
learning courses could be another line of inquiry. Stereotype threat has
been shown to have wide implications for academic performance of
minorities and disadvantaged students (Schmader & Hall, 2014). Beyond
outcome data such as retention rates, counts of cumulative credits earned,
persistence rates, and graduation rates, researchers need to understand
the psychosocial attributes that facilitate one disadvantaged student’s
persistence and another’s attrition. Formative assessment data on stu-
dent performance within an ALS, combined with data such as time on
task, persistence on task, and student affect while engaging with learning
within the system could yield useful information on whether disadvan-
taged students’ learning agency is enhanced when using an ALS, espe-
cially when combined with qualitative data.

Conclusion
Bay Path University began implementing adaptive learning in 2015 with
the aim of improving student outcomes generally, but for a diverse student
body more specifically. A research team at the institution has tracked the
efficacy of the implementation for the last several years starting at lower
levels of granularity examining overall population outcomes in adaptive
and nonadaptive courses, and progressing to higher levels including out-
comes disaggregated by courses and now by demographic groups. The
current study sought to understand if adaptive learning benefited lower-
income (i.e., Pell-eligible), first-generation, Black or African American,
and Hispanic students when comparing course grades for these subpopu-
lations before and after ALS implementation in specific courses. Analyses
of data from the student information system (SIS) demonstrated that these
groups of students generally tended to benefit in the core math sequence,
but not in the core English sequence. Further investigation is necessary
to understand this divergence in outcomes and to determine whether it
should influence continuation and discontinuation decisions for the adop-
tion of adaptive learning. Other institutions should consider applying a
similar equity lens when evaluating the efficacy of this instructional tool
in their local contexts.

References
Anderson, J., Bushey, H., Devlin, M., & Gould, A. (2021). Efficacy of adaptive
learning in blended courses. In A. Picciano (Ed.), Blended learning research
perspectives, volume III (pp. 147–162). Taylor & Francis.
Anderson, N. (2021, June 15). MacKenzie Scott donates millions to another
surprising list of colleges. The Washington Post. https://www​.washingtonpost​
.com ​/education ​/2021​/06​/15​/mackenzie​-scott​- college​-donations​-2021/
Bills, D. B. (2019). Ch. 6. The problem of meritocracy: The belief in achievement,
credentials and justice. In R. Becker (Ed.), Research handbook on the
sociology of education. Pp. 88–105. https://china​.elgaronline​.com ​/display​/
edcoll​/9781788110419​/9781788110419​.00013​.xml
Brown, D. (2021, April 9). College isn’t the solution for the racial wealth gap. It’s
part of the problem. The Washington Post. https://www​.washingtonpost​.com​
/outlook​/2021​/04​/09​/student​-loans​-black​-wealth​-gap/
Brown, E., & Gabriel-Douglas, D. (2016, June 23). Affirmative action advocates
shocked - and thrilled. The Washington Post. https://www​.washingtonpost​
.com​/news​/grade​-point​/wp​/2016​/06​/23​/affirmative​-action​-advocates​-shocked​
-and​-thrilled​-by​-supreme​- courts​-ruling​-in​-university​-of​-texas​- case/
Burke, L. (2022, May 9). Years after California limited remediation at community
colleges, reformers want more fixes. Higher Ed Dive. https://www​.highereddive​
.com ​/news​/years​-after​- california​-limited​-remediation​-at​- community​- colleges​
-reformers ​ / 623288/​ ? utm ​ _ source ​ = Sailthru​ & utm ​ _ medium​ = email​ & utm ​ _ c​​
ampai​​g n​=Ne​​wslet​​ter​%2​​0Week​​ly​%20​​Round​​up:​%2​​0High​​er​%20​​E d​%20​​Dive:​​
%20Da​​ i ly​%2​​ 0 Dive​​ %2005​​ -14 ​-2​​ 022​&utm ​_ term​= Higher​%20Ed​%20Dive​
%20Weekender
Cahalan, M. W., Addison, M., Brunt, N., Patel, P. R., & Perna, L. W. (2021).
Indicators of higher education equity in the United States: 2021 historical trend
report. The Pell Institute for the Study of Opportunity in Higher Education.
Council for Opportunity in Education (COE), and Alliance for Higher
Education and Democracy of the University of Pennsylvania (PennAHEAD).
Carrns, A. (2021, August 14). Will that college degree pay off? New York Times.
https://www​.nytimes​.com ​/2021​/08​/13​/your​-money​/college​-degree​-investment​
-return​.html
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge
Academic.
Complete College America. (2021, October 13). Complete College America
announces $2.5 million initiative focused on digital learning innovation
at historically black colleges and universities. https://completecollege​.org​/
resource​/complete​- college​-america​-announces​-2​-5​-million​-initiative​-focused​
-on​-digital​-learning​-innovation​-at​-historically​-black​- colleges​-universities/
Devlin, M., Egan, J., & Thompson, E. (2021, December). Data driven course
design and the DEI imperative (A. Kezar, ed.). Change Magazine.
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Parker, L., & Campbell,
M. (2018). Adaptive learning: A stabilizing influence across disciplines and
universities. Online Learning, 22(3), 7–39. https://doi​.org​/10​. 24059​/olj​.v22i3​
.1465
Dziuban, C., Moskal, P., Johnson, C., & Evans, D. (2017). Adaptive learning: A
tale of two contexts. Current Issues in Emerging eLearning, 4(1), 26–62.
Edsall, T. B. (2021, June 23). Is higher education no longer the ‘great equalizer’?
The New York Times. https://www​.nytimes​.com​/2021​/06​/23​/opinion​/
education​-poverty​-intervention​.html
Field, A. (2018). Discovering statistics using IBM SPSS statistics. Sage.
Fu, Q. K., & Hwang, G. J. (2018). Trends in mobile technology-supported
collaborative learning: A systematic review of journal publications from
2007 to 2016. Computers & Education, 119, 129–143. https://doi​.org​/10​
.1016​/j​.compedu​. 2018​.01​.004
Fortin, J. (2021, October 20). Amherst College ends legacy admissions favoring
children of alumni. The New York Times. https://www​.nytimes​.com ​/2021​/10​
/20​/us​/amherst​- college​-legacy​-admissions​.html
Jack, A. A. (2020, September 15). A separate and unequal system of college
admissions. The New York Times. https://www​.nytimes​.com​/2020​/09​/15​/
books​/review​/selingo​-korn​-levitz​- college​-admissions​.html
Jaschik, S. (2022a, February 7). For students’ extra needs. Inside Higher Ed.
https://www​.insidehighered​.com ​/admissions​/article ​/2022 ​/02 ​/07​/colleges​-start​
-aid​-programs​-students​-full​-needs
Jaschik, S. (2022b, April 18). Williams gets more generous with aid.
Inside Higher Ed. https://www​.insidehighered​.com ​/admissions​/article​
/2022​/04​/18​/williams​-improves​-aid​-offerings
Jaschik, S. (2022c, May 3). Students for Fair Admissions file Supreme Court brief.
Inside Higher Ed​.com​. https://www​.insidehighered​.com ​/quicktakes​/2022 ​/05​
/03​/students​-fair​- admissions​- files​- supreme​- court​-brief​?utm​_ source​= Inside​
+Higher​ +Ed​ & utm ​ _ campaign ​ = 579e1fa972 ​ - DNU ​ _ 2021 ​ _ COPY​ _ 02 ​ & utm​
_medium​= email​&utm ​_t​​erm​= 0 ​​_1fcb​​c0442​​1​-579​​e1fa9​​72​-19​​75146​​61​&mc ​_ cid​
=579e1fa972​&mc​_ eid​= e57f762926
Krantz, L., & Fernandes, D. (2022, January 24). Supreme Court agrees to hear
Harvard affirmative action case, could cause ‘huge ripple effect’ in college
admissions. The Boston Globe. https://www​.bostonglobe​.com​/2022​/01​/24​
/metro ​ / future ​ - affirmative ​ - action​ - higher​ - education​ - limbo ​ - supreme ​ - court​
-agrees​-hear​-harvard​- case/
Libassi, C. J. (2018, May 23). The neglected race gap: Racial disparities
among college completers. The Center for American Progress. https://www​
.americanprogress​.org ​/article ​/neglected​- college ​-race ​- gap ​-racial​- disparities​
-among​- college​- completers/
Lumley, T., Diehr, P., Emerson, S., & Chen, L. (2002). The importance of the
normality assumption in large public health data sets. Annual Review of
Public Health, 23(1), 151–169. https://doi​.org​/10​.1146​/annurev​.publhealth​. 23
.100901​.140546
Maki, P. (2017). Real-time student assessment: Meeting the imperative for
improved time to degree, closing the opportunity gap, and assuring student
competencies for 21st-century needs. Stylus Publishing, LLC.
Manly, C. A. (2022). Utilization and effect of multiple content modalities in online
higher education: Shifting trajectories towards success through universal
design for learning. [Doctoral dissertation. University of Massachusetts,
Amherst]. Scholarworks.
Marcus, J. (2021, November 1). Will that college degree pay off? New public data
has the numbers. The Washington Post. https://www​.washingtonpost​.com​/
education ​/2021​/11​/01​/college​-degree​-value​-major/
Medina, J., Benner, K., & Taylor, L. (2019, March 12). Actresses, business leaders
and other wealthy parents charged in U.S. college entry fraud. The New York
Times. https://www​.nytimes​.com ​/2019​/03​/12​/us​/college​-admissions​- cheating​
-scandal​.html
Merriman, A. (2021, May 14). Dartmouth to raise income maximum for full-
tuition scholarships. Valley News. https://vtdigger​.org ​/2021​/05​/14​/dartmouth​
-to​-raise​-income​-maximum​-for​-full​-tuition​-scholarships/
Murphy, J. S. (2021, October 23). College admissions are still unfair. The
Atlantic. https://www​.theatlantic​.com​/ideas​/archive​/2021​/10​/amhersts​-legacy​
-announcement​-wont​- end​-inequity​/620476/
Nietzel, M. (2021, February 22). New from U.S. Census Bureau: Number of
Americans with a bachelor’s degree continues to grow. Forbes. https://www​
.forbes​. com ​/sites ​/michaeltnietzel ​/ 2021​/ 02 ​/ 22 ​/new​-from​- us​- census​-bureau​
-number ​ - of​ - americans ​ - with ​ - a​ - bachelors ​ - degree ​ - continues ​ - to ​ - grow/ ​ ? sh​
=6a14a33d7bbc
Rothwell, J. (2015, December 18). The stubborn race and class gaps in college
quality. Brookings Institute. https://www​.brookings​.edu​/research​/the​
-stubborn​-race​-and​- class​-gaps​-in​- college​-quality/
Saul, S., & Hartocollis, A. (2022, January 10). Lawsuit says 16 elite colleges are
part of price-fixing cartel. The New York Times. https://www​.nytimes​.com​
/2022​/01​/10​/us​/financial​-aid​-lawsuit​- colleges​.html
Schmader, T., & Hall, W. M. (2014). Stereotype threat in school and at work:
Putting science into practice. Policy Insights from the Behavioral & Brain
Sciences, 1(1), 30–37.
Sosa, G. (2018). Using disproportionate impact methods to identify equity gaps.
https://www​.sdmesa​.edu​/about​-mesa​/institutional​- effectiveness​/institutional​
-research​/data​-warehouse​/data​-reports​/ Equity​%20Calculations​%20Explained​
.pdf
Stevens, M. (2019, December 31). College in the US is at a crossroad. Will it
increase social mobility or stratification? NBC News. https://www​.nbcnews​
.com ​/think ​/opinion ​/college​-u​-s​- crossroads​-will​-it​-increase​-social​-mobility​- or​
-ncna1109071
U.S. News & World Report. (n.d.). Top performers on social mobility. https://
www​.usnews​.com ​/ best​- colleges​/rankings​/national​-universities​/social​-mobility
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent
tutoring systems, and other tutoring systems. Educational Psychologist, 46(4),
197–221. http://doi​.org​/10​.1080​/00461520​. 2011​.611369
Xie, H., Chu, H.-C., Hwang, G.-J., & Wang, C.-C. (2019). Trends and development
in technology-enhanced adaptive/personalized learning: A systematic review
of journal publications from 2007 to 2017. Computers & Education, 140,
103599. https://doi​.org​/10​.1016​/j​.compedu​. 2019​.103599
11
BANKING ON ADAPTIVE QUESTIONS TO
NUDGE STUDENT RESPONSIBILITY FOR
LEARNING IN GENERAL CHEMISTRY
Tara Carpenter, John Fritz, and Thomas Penniston

Introduction
How do we help new college students learn how to learn? If as the saying
goes, “nobody learns from a position of comfort,” can first-year students
honestly and accurately self-assess what they currently know, understand,
or can do? And be disciplined enough to put in the time and effort to develop
or strengthen what they may lack? Perhaps more importantly, can they be
nudged into taking responsibility for doing so, especially if their initial
interest or ability alone is insufficient to be successful? If so, what can
faculty do, in the design and delivery of their courses, to help initiate or
accelerate student engagement? Finally, what, if anything, can learning
analytics and adaptive learning do to help both faculty and students in
their teaching and learning roles?
In this case study from the University of Maryland, Baltimore County
(UMBC), we explore these questions through one of the university’s larg-
est courses, CHEM 102 “Principles of Chemistry II,” with a typical Spring
enrollment of 550–600 students (just under 200 in Fall). After 18 years’ expe-
rience in teaching the course as well as guiding, even exhorting students in
how to learn and succeed in it, Tara Carpenter wondered if students simply
did not know how to do so independently. So, in the middle of Spring 2021—
armed with a pedagogical theory of change and innovative, pandemic-driven
digital skills and experience—she designed an incentive model and personal-
ized learning environment in UMBC’s Blackboard LMS to help students not
only pass the exams, but also take responsibility for preparing for them.
Using an approach frequently known as “spaced practice,” in which students
have time to study, forget, re-acquire, and reorganize new knowledge or
content, Carpenter leveraged large pools or banks of questions to guide students
in their “time on task” practice and application of key concepts needed to
perform well on high-stakes, summative exams. She also incorporated her
spaced practice pedagogy into a pilot of the Realizeit adaptive learning
platform in Fall 2021 and Spring 2022. In addition to scaling her bird’s
eye view on student engagement, Carpenter and her chemistry department
colleagues are also taking a closer look at whether—and how—students are tak-
ing lessons learned in her course to the next one that requires it, CHEM
351 “Organic Chemistry” (Carpenter, 2022).

Pedagogical influences
To better understand Carpenter’s methodology of spaced practice and the
role learning analytics and adaptive learning have played in implementing
and refining it, we first need to understand her pedagogical influences.
To do so, we will focus on how we learn to learn and think about our
thinking, often known as “metacognition,” the role that memory (and
forgetting) plays in doing so, and finish with why spaced practice—unlike
cramming—is literally about the time we give our brains to discover, pro-
cess, and organize new knowledge, skills, or abilities.

Metacognition
In 2016, Carpenter joined a UMBC Faculty Development Center (FDC)
book discussion about McGuire and McGuire’s (2015) Teach Students
How to Learn: Strategies You Can Incorporate into Any Course to
Improve Student Metacognition, Study Skills, and Motivation. A year
later, she attended McGuire’s on-campus keynote presentation (2017) at
UMBC’s annual “Provost’s Symposium on Teaching and Learning.” In
both her book and UMBC talk based on it, McGuire argues that faculty
need to intentionally and explicitly introduce students to metacognition,
or thinking about thinking, by showing and telling them about Bloom’s
taxonomy of learning (see Figure 11.1): “Don’t just assume they’ve seen
Bloom’s [taxonomy] or understand it,” said McGuire, who also coau-
thored with her daughter, Stephanie, a student-focused book about meta-
cognition, Teach Yourself How to Learn (2018).
“Reading [McGuire’s] book essentially changed the way I view teach-
ing and learning and changed how I view course design,” says Carpenter,
who had long seen incoming students repeat what they were conditioned
to do in high school: memorize, regurgitate (for an exam), and promptly
forget (after it). “Most of them just aren’t prepared for the rigors of college
because they simply don’t understand the difference between memoriza-
tion and learning.”
FIGURE 11.1 Bloom’s Taxonomy. Creative Commons BY 2.0 Generic, Courtesy
Vanderbilt University Center for Teaching.
https://cft​.vanderbilt​.edu​/guides​-sub​-pages​/blooms​-taxonomy

Along with her colleague Sarah Bass, who teaches CHEM 101 and who
also leveraged the same large-question-bank approach to online, “open
note” exams during the pandemic (Bass et al., 2021a, 2021b; Fritz, 2020),
Carpenter became much more intentional about introducing students to
metacognition, delivering McGuire’s recommended lecture on the topic,
and taking time in class to encourage student reflection on their own
learning, especially after exams (Carpenter et al., 2020). She even began
sending individual, personalized emails, highlighting effective strategies
for specific students who seemed to be struggling, which is remarkable
given the class size.

The Ebbinghaus forgetting curve


Despite her encouragement and guidance, Carpenter saw new students
continue to struggle and fall behind. As her own pedagogical awareness
evolved, she began to focus on why cramming doesn’t work for students
as a strategy for long-term learning and retention, based on the work of
German psychologist Hermann Ebbinghaus (1850–1909), who pioneered
the study of memory by studying his own (Ebbinghaus, 1885/2013).
Specifically, Ebbinghaus famously memorized a set of 2,300 random
three-letter combinations and then tracked, consistently, how long it took
for him to forget them, only consulting the list after he could not recall
FIGURE 11.2 The Ebbinghaus Forgetting Curve.


Image source: https://images​.app​.goo​.gl​/kW4ZZy6K2Lmx9hBi6
anything. He found that after memorizing and immediately demonstrat-
ing 100% recall, by 19 minutes he could only recall 60% of the list,
but by 31 days he could still recall just over 20% (see Figure 11.2).
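As a rough quantitative illustration, retention curves of this kind are often approximated with a slowly decaying function of time. The sketch below is only an illustration, with a decay exponent tuned to loosely echo the figures quoted above; it is not Ebbinghaus’s own model or data.

```python
# A rough sketch of the forgetting curve. Retention of rote-learned material
# is usually fit with a slowly decaying power function rather than a straight
# exponential; the exponent below (0.15) is chosen only so the output loosely
# echoes the figures quoted above (~60% after 19 minutes, ~20% after a month).
# This is an illustration, not Ebbinghaus's own model or data.
def retention(minutes: float, decay: float = 0.15) -> float:
    """Approximate fraction of rote-learned material still recallable."""
    return (1 + minutes) ** (-decay)

for label, minutes in [("immediately", 0), ("19 minutes", 19),
                       ("1 day", 60 * 24), ("31 days", 60 * 24 * 31)]:
    print(f"{label:>11}: {retention(minutes):.0%}")
```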
Ebbinghaus’ work has sometimes been criticized for lacking external
validity because he only studied himself. However, a 2015 study repli-
cated his findings (Murre & Dros, 2015). Also, since, as Ebbinghaus
found, rote-learned memories fade quickly, it is worth noting Cathy N.
Davidson’s (2012) excellent distillation of the brain science of attention,
which shows how we are literally trained from birth to pay attention to
things that matter and filter out those that do not. As humans, we need
this filtering to prevent sensory overload and to promote mastery, which
comes through trial and error, repeated exposure, and intuitive applica-
tion of key concepts in different contexts. In other words, when we forget
rote-acquired, short-term memories, it is likely a sign that our brains
have not deemed them sufficiently important to be commit-
ted to long-term retention.

The learning curve (via spaced practice)


Though he’s most widely known for his forgetting curve, Ebbinghaus’
attempts to overcome it also give us a more familiar concept, the learning
curve, which ideally occurs when we move on from relying on memory
alone to recall facts and concepts when needed. Specifically, Ebbinghaus
found that he forgot less after repeated exposures to the same material,
even when those later exposures were shorter than his initial acquisition
by rote memorization alone, provided they were separated or “spaced” out
over time, such as 1 day, 1 week, 1 month, 3 months, etc. (see Figure 11.3).
FIGURE 11.3 An "Ideal" Learning Curve.


Image source: https://images​.app​.goo​.gl​/xMnsJRM5ButPVJYt6

Though it may vary in name, the principles behind spaced prac-
tice—initially frequent and regular review of key concepts and content
via smaller study sessions vs. single, long-cramming sessions—find sup-
port in the scholarship of teaching and learning, including as “divided
practice” (Weimer, 2002), “distributed practice” (Cepeda et al., 2006),
“goal directed practice” (Ambrose et al., 2010), and “deliberate practice”
(Nilson & Zimmerman, 2013). More recently, we see “spaced practice”
(Dunlosky & Rawson, 2015; Hodges, 2015; Miller, 2014) emerge as a
consensus term, including in for-profit and nonprofit adaptive learning
environments like Duolingo (Munday, 2016) and Khan Academy (Gray &
Lindstrøm, 2019), respectively.

Methodology
Again, Carpenter recognized the effects of the Ebbinghaus forgetting
curve in many of her incoming students, especially their slavish adherence
to cramming for exams instead of following her advice to adopt better,
time-tested, and proven learning strategies. But despite the evidence and
even her own encouragement, student behaviors and approaches to their
own learning did not change.
“What if they just don’t know how,” she wondered. “What if they’re
just so overwhelmed by learning how to learn, that they can’t pull this off
on their own, partly because of immaturity or time management, etc.?”
With these questions in mind, and based in part on her Fall 2020
use of large question banks to design online, “open-note” exams dur-
ing the pandemic-pivot to remote instruction (Fritz, 2020), Carpenter
began constructing a schedule for spaced practice over Spring break of
the Spring 21 term, and allowed her students to opt-into it for the remain-
ing three exams. If students opted in, they did not need to complete the
regularly scheduled homework as this would take its place. As an incen-
tive to opt in, students were offered 10 percentage points extra credit
toward the first exam where spaced practice was offered, if needed. For
the remaining two exams, no extra credit was offered, but the homework
assignments continued to be replaced by the spaced practice for students
who opted in.
Specifically, Carpenter began writing iterations of practice questions
for individual units in her course that she wished students would see
and could answer. Having written her own exam questions for years,
she found the transition to writing practice questions herself—instead
of using publishers’ homework questions—to not be very difficult. But
her focus was not on “definition” kinds of questions (per Bloom’s low-
est “remembering” level of learning). Instead, she focused on higher-level
thinking that required students to apply concepts or solve math-related
problems that relied on conceptual understanding. To help with question
variety, she used Blackboard’s “calculated question” format to change the
numeric variables in a question stem or prompt (see https://tinyurl​.com​/
bbcalcquestion). She also randomized the order of the questions students
would see in each spaced practice lesson.
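The general idea behind such calculated questions can be sketched in a few lines of Python: draw numeric variables at random, substitute them into a fixed stem, recompute the answer key, and shuffle the resulting pool. The molarity prompt below is a made-up example, not one of Carpenter’s actual items, and the code only illustrates the concept rather than Blackboard’s internal implementation.

```python
# A small sketch of the "calculated question" idea: numeric variables are
# drawn at random and substituted into a fixed stem, and the resulting pool
# is shuffled so each practice set presents a different ordering. The prompt
# is a hypothetical example, not one of Carpenter's actual items.
import random

STEM = ("What is the molarity of a solution made by dissolving {moles:.2f} "
        "mol of NaCl in {liters:.2f} L of water?")

def make_variant(rng: random.Random) -> dict:
    moles = rng.uniform(0.10, 2.00)
    liters = rng.uniform(0.25, 4.00)
    return {
        "prompt": STEM.format(moles=moles, liters=liters),
        "answer": round(moles / liters, 3),  # key recomputed per variant
    }

def build_practice_set(n_questions: int = 10, seed: int = 42) -> list:
    rng = random.Random(seed)
    variants = [make_variant(rng) for _ in range(n_questions)]
    rng.shuffle(variants)  # randomize presentation order per lesson
    return variants

for q in build_practice_set(3):
    print(q["prompt"], "->", q["answer"])
```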
However, Carpenter had to solve another key problem: lack of time for
her students to space their practice when she had an exam every three weeks
for the rest of the term. So, she made up her own schedule based loosely on
an “N + 2” sequence, where the first iteration or exposure to unit practice
may occur on day “zero” followed by another iteration on days 2, 5, and 7,
respectively. These might also overlap with another unit’s practice schedule,
such that students were literally practicing every day, albeit not the same
material two days in a row. For an example, see Figure 11.4.
Based on the manually intensive nature of her practice question
development process, and with a desire to personalize student learning
even further, Carpenter agreed to pilot the Realizeit Learning adaptive
learning platform in Fall 21 and Spring 22, based largely on the posi-
tive experience and recommendation from colleagues at the University of
Central Florida (UCF), who use and support it, and have also published
their experiences (Dziuban et al., 2017, 2018, 2020). This required an
initial export of Carpenter’s own question pools from Blackboard and
import into Realizeit that was not entirely automatic or without cleanup.
FIGURE 11.4 CHEM102 Students’ Spaced Practice Schedule before Learning Checkpoint 5, Spring 21

Also, Carpenter now faced her own steep learning curve in mastering a new
assessment authoring platform, which she overcame through repeated practice
and support.
Finally, with the use of Realizeit in Fall 21 providing a near infinite
supply of variable questions and answers, Carpenter essentially was able
to implement her spaced practice content and schedule across an entire
term. She kept the same exam questions, given over the course of five
Learning Checkpoints (LCPs) as she’d done using six LCPs in Spring 21,
but interestingly, she did not make Spaced Practice optional. It was now
required as the homework system for CHEM 102, which was understand-
able given not only the benefits she perceived for all students (based on the
Spring 21 experiment) but also her own considerable time and effort to create
the Spaced Practice environment in Realizeit. She just didn’t have time to
have students using another, publisher-based homework system, which
she understandably abandoned for Fall 21, since she could now author her
own questions for both practice and exam environments.
Findings
In this section, we summarize findings from Carpenter’s experimental
implementation of spaced practice during the middle of the Spring 21 term
using Blackboard and also describe its full implementation in Fall 21 using
Realizeit Learning. We also share results from an interesting experiment
in which Carpenter, working with instructors in the next course requiring
hers, CHEM 351 “Organic Chemistry,” surveyed students who had been
enrolled in her Spring 21 version of CHEM 102, to see if and how they
carried the lessons learned from their use of spaced practice into CHEM
351 in Fall 21.
Note: By default, to avoid content duplication, UMBC’s Bb LMS course
shells combine multiple sections of the same course taught by the same
instructor. As such, CHEM 102H (Honors section, n = ~25) are always
combined with the larger CHEM 102 LMS course shell. However, in the
analysis that follows, only the “FA21” findings specifically excluded
Honors students for methodological reasons supporting inferential analysis.
By contrast, Honors students were included in the “SP21” and “SP21/102
to FA21/351” progression findings, which are mostly descriptive in nature.

CHEM 102 (Spring 21)


Six exams were given to 558 students in CHEM 102 in Spring 21. Exams
1–3 were given using traditional “one-and-done” homework, in that stu-
dents were not required to review questions from prior unit modules
in the assignments. For exams 4–6, students were given the opportu-
nity to participate in Carpenter’s spaced practice implementation, with
64.5% completing spaced practice for all three remaining exams, 13.9%
completing it for two of the three exams, 7% completing it for just one exam, and 14.4% not
completing any spaced practice. Overall, students who participated in
spaced practice performed better on exams than students who did not
(see Figure 11.5).
Additionally, based on prior research on the “strength of relationship”
between UMBC student engagement in the LMS, particularly duration or
time spent, and final grades (Fritz et al., 2021), we see that all students
were dramatically more engaged after Spring break, but especially those
earning higher final grades (see Figure 11.6).

CHEM 102 (Fall 21)


Again, in Fall 21, Carpenter not only implemented her spaced practice ped-
agogy during the entire term, but also used the Realizeit adaptive learning
platform for the first time. Accordingly, it is more appropriate to compare
FIGURE 11.5 CHEM 102 Final Grade by Spaced Practice Completion of Last
Three Exams, Spring 21 (bar chart titled “Spaced Practice Completed vs Course
Grade”: percentage of students earning each final course grade, A–F, by
completion group: 3/3, n = 363; 2/3, n = 78; 1/3, n = 39; none, n = 81).

her Fall 21 course outcomes with those from Fall 20. Additionally, while
CHEM 102 is offered every Fall and Spring term, the largest enrollment is
always in the Spring, since it is part of the two-semester general chemistry
sequence.
In doing so, we see that there was not a statistically significant relation-
ship between the treatment (i.e., course design) and overall DFW rates
between Fall 20 and Fall 21. However, if we disaggregate the final grade
data, we see that there is an overall statistically significant increase in As
(p < 0.01) and decrease in Cs (p < 0.05) and Ds (p < 0.05) in Fall 21 (see
Figure 11.7).
Notably, all of this gain from increasing As appears to have been from
students of color (SOC), who demonstrated a nearly 4x advantage in
attaining this grade (p < 0.001) over their peers who completed the course
prior to the redesign, while White students demonstrated no statistically
significant gain in this area. Figure 11.8 illustrates the breakdown of per-
centage of grades earned by term and White versus SOC.
There does appear to be an upward trend in Fs, although not statisti-
cally significant when comparing the past two terms. There are no statisti-
cally significant, notable grade distribution trends for female or transfer
students. Non-STEM students, however, have a statistically significant
advantage in earning As over their class peers.
There also seem to be some inflection points during students’ in-term
learning. If we consider how students’ learning checkpoint (LCP) scores rank
compared with those of their peers (i.e., as z-scores), we see that there is a statisti-
cally significant 34% reduction in DFWs if students improve from the
first to second LCP (p < 0.01), along with a 2.4x increased likelihood of
FIGURE 11.6 CHEM 102 “Waterfall” Chart in which Every Row Is a Student, Every Column a Week in the Semester, and Each
Cell’s “color” density is based on time spent (e.g., darker = more time). For a larger, color version of this image, see
the presentation slides from Fritz et al., 2022, in the references.
FIGURE 11.7 CHEM 102 Grade Distribution (Fall 2020 to Fall 2021)

earning an A for all students, and 3.4x for African American students in
particular. Considering inflection points later in the term, we see there is a
39% reduction in students’ chances of earning a DFW if there is improve-
ment from LCP 4 to LCP 5 (p < 0.01). Although these models, controlling
for race, gender, STEM major, academic status, high school GPA, and math
SAT scores, account for only 10% of the total variance, it does
appear that growth within the semester may be contributing to academic
gains in course grade distribution.
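A minimal sketch of this style of growth analysis is shown below: standardize each learning checkpoint within the class as a z-score, flag improvement from LCP 1 to LCP 2, and model DFW odds with logistic regression. The data are simulated and only one control (high school GPA) is included; the models described above controlled for several more covariates, so this is an illustration of the approach rather than the authors’ models.

```python
# Sketch of the growth analysis described above, on simulated data: z-score
# each learning checkpoint (LCP) within the class, flag improvement from
# LCP 1 to LCP 2, and estimate DFW odds with logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "lcp1": rng.normal(70, 12, n),
    "lcp2": rng.normal(72, 12, n),
    "hs_gpa": rng.normal(3.3, 0.4, n),
})
# Rank each LCP against peers as a z-score, then flag improvement.
df["z1"] = (df["lcp1"] - df["lcp1"].mean()) / df["lcp1"].std()
df["z2"] = (df["lcp2"] - df["lcp2"].mean()) / df["lcp2"].std()
df["improved"] = (df["z2"] > df["z1"]).astype(int)

# Simulated outcome: improving students are less likely to earn a DFW.
logit_p = -1.0 - 0.5 * df["improved"] - 0.3 * (df["hs_gpa"] - 3.3)
df["dfw"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p.to_numpy())))

X = sm.add_constant(df[["improved", "hs_gpa"]])
model = sm.Logit(df["dfw"], X).fit(disp=0)
print(np.exp(model.params))  # odds ratios; 'improved' < 1 means lower DFW odds
```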
Additionally, when we look at student engagement data from Realizeit,
including duration or time spent, and completion of spaced practice mod-
ules, we see a compelling insight into one of Carpenter’s key goals for the
course: students taking responsibility for their own learning. For example,
Figure 11.9 shows final grades earned by those students who did (or did
not) complete daily spaced practice modules in Realizeit (similar to Figure
11.6, every row is a student and every column is a daily spaced practice
module in Realizeit, with the color corresponding to final grade earned).
We also observe that after only 14 days into the Fall 21 term, a model
trained only on actual student usage in Realizeit was 82.6% accurate in
correctly predicting ABC and DFW final grades for all students (83%
precision).
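An early-warning model of this general kind can be prototyped quickly, as in the sketch below: predict ABC versus DFW from two weeks of usage features and report accuracy and precision on a held-out set. The feature names and data are simulated stand-ins, not the actual Realizeit usage extract, so the printed metrics will not match the figures reported here.

```python
# Sketch of an early-warning classifier trained on two weeks of simulated
# adaptive-platform usage; the real model used actual Realizeit activity.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
usage = pd.DataFrame({
    "minutes_first_14_days": rng.gamma(2.0, 60.0, n),   # hypothetical feature
    "modules_completed": rng.integers(0, 15, n),         # hypothetical feature
})
# Simulated label: more early activity -> more likely to finish with A/B/C.
p_abc = 1 / (1 + np.exp(-(0.3 * usage["modules_completed"]
                          + usage["minutes_first_14_days"] / 200 - 2.0)))
abc = rng.binomial(1, p_abc.to_numpy())

X_train, X_test, y_train, y_test = train_test_split(
    usage, abc, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"accuracy:  {accuracy_score(y_test, pred):.3f}")
print(f"precision: {precision_score(y_test, pred):.3f}")
```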
Overall, the data indicate that Carpenter’s spaced practice implemen-
tation in Realizeit appeared, at the very least, to be doing no harm, and
when the implementation is evaluated alongside grade distribution, there
seem to be certain advantages, particularly when considering students of
color and non-STEM students.
FIGURE 11.8 Chem 102 Grade Distribution by Term and Race (SOC = Students
of Color)

CHEM 102 (Spring 21) Students in CHEM 351 (Fall 21)


Finally, based on a small UMBC learning analytics “mini-grant” (Fritz,
2021), Carpenter proposed to survey students in her Spring 21 version
of CHEM 102 to see if and how they carried their spaced practice “les-
sons learned” into and through the next course requiring hers, CHEM
351 “Organic Chemistry.” She presented her findings during a UMBC
Learning Analytics Community of Practice meeting on March 10, 2022
(Carpenter, 2022), which included the following:

• overall, 305 of Carpenter’s 558 students from her Spring 21 CHEM
102 course enrolled in the Fall 21 version of CHEM 351 “Organic
Chemistry”
• of these, 266 students went “all in” using spaced practice in CHEM
102, and 91% earned a C or better in CHEM 351
• by contrast, of the 15 students who opted out of using spaced practice in
CHEM 102, 46% went on to earn a DFW in CHEM 351; admittedly,
these are low Ns to generalize any further

Carpenter also surveyed her Spring 21 CHEM 102 students before and
after their enrollment in the Fall 21 version of CHEM 351 and learned the
FIGURE 11.9 CHEM 102 Spaced Practice Modules Completed in Realizeit by
Final Grade Earned (Fall 21). For a larger, color version of this
image, see the presentation slides linked in Carpenter (2022), in
the references.

following: while 78% of pre-survey respondents indicated they would use
spaced practice in CHEM 351, only 34% of the post-survey respondents
indicated that they had actually done so. In other words, students valued
spaced practice in CHEM 102, but were unable or unwilling to use it on
their own in CHEM 351.
Carpenter analyzed students’ open-ended responses to the post-survey
(n = 216 participants) and found common themes explaining why and
how students struggled to implement spaced practice in CHEM 351 (see
Table 11.1).
Carpenter found one student’s comments to be particularly telling and
representative of all students’ comments she observed (bold emphasis
added):
TABLE 11.1 Post-survey Responses of CHEM 102 (SP21) Students in CHEM 351 (FA21)

Common Themes                        % Response (n = 216)
Planning it out, finding material    50%
Time                                 28%
Accountability                       20%
Less helpful in CHEM 351              2%

• the biggest challenge in carrying out Spaced Practice (SP) was formulat-
ing my own types of questions that integrated the many learning objec-
tives (LOs) [for CHEM 351 “Orgo”]
• translating LOs [learning objectives] into challenging questions was very
difficult
• creating a practice schedule that followed the class schedule closely was
a bit difficult to do
• any tips that could help us in creating an appropriate SP schedule for a
given section of units before an exam, would be very helpful
• many students in class chats spoke of their problems actually forming
an SP schedule despite really wanting to continue the great studying
technique

Discussion
As we reflect on this UMBC case study, a few key questions emerge from
Carpenter’s implementation and refinement of spaced practice in CHEM
102. First, why do some students strongly resist spaced practice in CHEM
102 at the start of the semester and then (surprisingly) embrace it by the
end? The Scholarship of Teaching and Learning (SoTL) literature and
practice include frequent examples of student resistance to active learn-
ing in the classroom, but spaced practice is largely a solitary activity stu-
dents do (or don’t) pursue on their own time, outside of class. Even if
shown the evidence from prior, successful cohorts, could it be that incom-
ing students are learning more, but liking it less? If so, how should faculty
respond, if at all, given student agency and responsibility for learning?
Whose problem is this to solve?
Second, what might help students be more successful in implement-
ing spaced practice in CHEM 351 after successfully doing so in CHEM
102? To be sure, Carpenter rolled up her sleeves and scaffolded an anti-
cramming approach to learning that incoming college students may not
be familiar with or even like. But at some point, they do have to learn how
to learn (in college) and not all faculty can be expected to implement and
sustain Carpenter’s approach to course design. Or could they?
Finally, what is the least amount of spaced practice “time on task”
per unit that students need in order to be successful in CHEM 102, and to
become proficient in self-regulating their own learning in CHEM 351?
This is where we want to further study the data and patterns of behav-
ior associated with student engagement in Realizeit. While the goal for
spaced practice still must be quality over quantity, Carpenter has found
that initial student resistance is based on a perception of (and concern about)
the number of unit module spaced practice sessions and the time they require.
To date, however, Carpenter has relied on the first exam’s results to
quell student protest.
“When they see that—unlike what they are used to—they aren’t cram-
ming for exams and are largely ready for them in CHEM 102, the vocal
minority quiets down pretty quickly” says Carpenter.
There is, however, a small group of students who continue to spend
inordinate amounts of time doing spaced practice without the results
they (and Carpenter) would like to achieve (see especially Figure 11.9
above, where some D and F students were just as active or more so than
their higher performing peers). This is where Carpenter comes back to
McGuire’s focus on metacognition.
“Some students, for whatever reason, really struggle to think criti-
cally and objectively about their own thinking,” says Carpenter, who
has noticed these students might be using spaced practice only to fur-
ther aid their prior approaches to rote memorization instead of learn-
ing. “They can recognize a problem similar to one they’ve seen before,
and even recall a pattern or process of steps to follow, but don’t recog-
nize new variables, or that I’ve changed the problem from focusing on
boiling point or freezing point, or from an acid to a base, for example.
They will simply try to repeat what they did in spaced practice instead
of doing what’s required by the current problem they are presented on
the exam.”

Next steps
Going forward, there are three key directions we could imagine pursuing
further: (1) fully implementing adaptive learning in Realizeit, (2) devel-
oping a way to make spaced practice more flexible through the use of
contract or specifications grading, and (3) working within the Chemistry
department’s general chemistry curriculum to phase students’ discovery
and maturation with spaced practice across 3–4 courses. We describe
each in more detail below.
Fully implement adaptive learning in Realizeit


At the time of this writing, we have finished the Spring 22 implementa-
tion of Realizeit in CHEM 102. We have mostly worked through some
growing pains from the first implementation in Fall 21, and the midterm
exam results included an average grade of 79%. It should be noted that
Carpenter switched back to a midterm and final exam structure but incor-
porated the same questions from her prior iteration of six learning check-
point (LCP) exams in SP21 and five LCPs in Fall 21. Partly this was due
to the effort she was putting into grading, which is always a factor to
consider in a high-enrollment STEM course. Still, despite this change, she
saw a similar correlation between student practice behaviors and their
exam performance (see Figure 11.10).
As stated earlier in the methodology, during the first phase of imple-
menting Carpenter’s spaced practice pedagogy using Realizeit, she mostly
relied on its algorithmic power to generate nearly infinite numbers of prac-
tice question and answer sets, over and above what she had been doing
using Blackboard’s “calculated question” type. This was a time-saver
for her, as the instructor, but as the implementation of Realizeit settles
out, we would like to more fully take advantage of its adaptive learning
capabilities to personalize and remediate the kinds of questions students

FIGURE 11.10 CHEM102 Midterm Exam Grade by Spaced Practice Completion,
Spring 22 (average exam score on attempt #1, 20–100%, plotted against the
number of units, 0–10, for which 4 out of 4 spaced practice assignments were
completed; labeled averages rise from 26 to 96).
need to be solving, as demonstrated by their weaknesses in the spaced
practice Realizeit environment itself, preferably well before they take their
midterm and final exams. In this way, individual students aren’t simply
amassing the practice all students see; rather, they are focusing on remediating spe-
cific demonstrated weaknesses and (hopefully) improving in areas neces-
sary to improve their specific exam scores.
Perhaps also, as students see these benefits of personalization for them-
selves, especially earlier in the term, their acceptance of and motivation to
use spaced practice will change from being primarily extrinsic (for points)
to intrinsic (for learning how they learn).

Increased flexibility in spaced practice


In the transition from Spring 21 to Fall 21, Carpenter not only changed
adaptive technologies (from Blackboard to Realizeit), but also changed her
incentive model for her spaced practice design. Specifically, use of spaced
practice was optional in Spring 21, and required in Fall 21. Some students
balked initially—and loudly—but Carpenter’s Fall 21 course design deci-
sion reflected a key learning goal and challenge: How could she appeal
to or even develop students’ intrinsic motivation to want to learn how to
learn if they were primarily extrinsically motivated by points needed
to get a certain course grade?
Carpenter had all the evidence she needed that spaced practice worked,
but students had to be willing to put in the time on task practice. So, not
wanting to deny an effective treatment to any student who might not ini-
tially understand or value its impact, she decided to require it. With many
students focusing on how assignments will affect their grade more than
how they will help them learn, the sheer volume of assignments can cause
resistance from students.
One way Carpenter has imagined giving students some autonomy over
their learning path is through use of “specifications grading” or simply
“specs grading,” based on the work of Linda Nilson (2015), who argues
that current approaches to grading engender “hair-splitting” and should be
simplified by structuring assessment around the effort students are willing
to expend for a desired grade that is aligned with the instructor’s learning
goals. Sometimes known as “contract grading,” there is also heavy use of
rubrics in “specs grading,” because the level of effort expected for a grade
is clearly delineated in advance of the student’s attempt.
“The issue with traditional grading is that it puts the emphasis on
points and supports the students’ extrinsically motivated approach
to learning,” says Carpenter, who has not only dealt with student
quibbles over a fraction of a point, but has seen the clear disconnect
among students between the assessments they are completing and the
goal of learning. “Additionally, not every student wants an A. If I can
clearly outline what a student needs to do to demonstrate a B or C per-
formance, and they do not need to stress about points, everyone’s stress
level can come down a bit.”

Incorporating spaced practice across the general chemistry curriculum
While few faculty would probably be willing to do what Carpenter does
in a course as large as CHEM 102, her efforts have not gone unnoticed.
In fact, based largely on a 25-minute screencast video of her approach to
using large question banks to administer online, “open note” exams dur-
ing the pandemic (Fritz, 2020), many faculty in large STEM courses have
followed suit in developing and leveraging their own exam question banks.
Additionally, four biology faculty recently leveraged their own exam ques-
tion banks—as well as student laptops and campus wifi—to turn a large
ballroom used to socially distance students during the pandemic into an
ad hoc testing center to administer online exams when Spring 22 classes
returned to campus (Fritz, 2022).
However, based on the results of her survey of CHEM 102 students who
valued spaced practice in Spring 21 but struggled to implement it on their
own in the Fall 21 instance of CHEM 351, she has imagined a longer run-
way of sorts in which students might be introduced to and become proficient
with the method. Specifically, what if students were introduced to a light
version of spaced practice in CHEM 101, immersed in it in CHEM 102
(as she is doing now with Realizeit), and then guided or transitioned into a
more self-directed or self-regulated approach in CHEM 351? To be sure, her
chemistry colleagues would need to be “on board,” and the specifics of each
approach would need to be fleshed out further, especially when only one of
the three courses currently uses Realizeit. But perhaps like Ebbinghaus’ ideal
learning curve, maybe spaced practice is not something one can learn imme-
diately but must be consciously and regularly implemented to be retained.
To help, as part of her 2022 UMBC learning analytics mini grant
renewal (Fritz, 2022), Carpenter recently recorded a brief (36 min) work-
shop demo of why and how to set up spaced practice (see https://youtu​.be​
/lLJGSsBySyA). She coordinated with the CHEM 351 instructors before-
hand, and the optional workshop was offered to any students currently
enrolled in the Fall 22 version of CHEM 351, including those she taught
in CHEM 102 in Spring 22. About 100 students attended.
Conclusion
Throughout Carpenter’s teaching career, the key pedagogical challenge
she has strived to overcome is moving new students from being primar-
ily extrinsically motivated (for points) to becoming more intrinsically
motivated (to learn how they learn). Doing so at the scale of her typical
course enrollments is challenging. And yet, while her spaced practice
innovation did not appear to significantly change CHEM 102’s Fall 20
to Fall 21 DFW rate, which is a typical metric used to gauge effectiveness
of student success interventions, it is remarkable that the distribution
of the course’s higher grades did change significantly—and dramati-
cally. Not only were there more As and fewer Cs in Fall 21 than Fall 20,
but students of color were 4× more likely to have earned those As, which
is notable in a high-enrollment gateway STEM course at a public
university.
So, what is it about the design of Carpenter’s course, exam, and practice
environments that might be working for more students? We are continuing
to explore this question, which has also been supported by a small grant
from Every Learner Everywhere to promote “Equity in Digital Learning,”
which Carpenter has participated in. Again, one group of students
Carpenter is especially interested in helping, regardless of background, are
those who actually do put in the effort and time on task in the learning
environments she designs, but still do not perform well. She fears they’ll
become discouraged and give up, but firmly believes these students need to
focus more on their metacognitive “thinking about their thinking” skills.
“I tell these students to learn—not just memorize—as if they were
going to teach the course,” says Carpenter. “It isn’t just that they need to
practice, prepare, or calculate better. They need a different mindset about
what they’re being asked to do, especially if problem variables or context
change from what they saw in practice.”
Indeed, as McGuire might suggest, improving students’ abilities—and
motivation—to honestly and accurately assess what they currently know,
understand, or can do could be the ultimate expression of improving their
metacognition.
Finally, one benefit of higher education's pandemic pivot to digital learning may
be that more faculty are becoming savvier and more capable of expressing their
pedagogy and course design at scale, regardless of the course's delivery mode.
Interestingly, despite it becoming more possible to return to campus, Carpenter and
many of her UMBC faculty colleagues who developed large exam question pools are
continuing to offer their exams online. If it is relatively easy for faculty to
develop large numbers of exam questions to assess students, could it even
become trivial for them to do so to help students practice and prepare for
them, too?
Indeed, UMBC Physics Lecturer Cody Goolsby-Cole, who also received a 2022–2023
UMBC learning analytics mini-grant (Fritz, 2022), leveraged his own LMS question
banks to create practice questions that were ungraded, in terms of points
contributing to a final grade, but that displayed correct and incorrect answers to
students. During the first six units of his Spring 22 course, PHYS 122
"Introductory Physics II," he found that students earning a 70% on the practice
questions earned an average of 92% on the exam that followed, while students who
did not use the practice questions earned an average exam grade of 77%. He plans to
build out and assess this practice environment further, including practice exams,
to see if and how more students might use and benefit from it.
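
A comparison like this is straightforward to reproduce from an exported gradebook.
The short sketch below is illustrative only; the file and column names are
hypothetical placeholders rather than the actual PHYS 122 data or analysis.

    # Illustrative sketch: compare average exam scores for students who did and
    # did not attempt the ungraded practice questions. Assumes a hypothetical
    # export "phys122.csv" with columns: student_id, practice_score, exam_score
    # (practice_score left blank for students who never attempted practice).
    import pandas as pd

    grades = pd.read_csv("phys122.csv")
    used_practice = grades["practice_score"].notna()
    print(grades.groupby(used_practice)["exam_score"].agg(["count", "mean"]))
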
If we could create learning environments in which faculty can model
and assign both exam and practice questions—with good feedback for
each, and maybe even adapted to students’ specific, demonstrated weak-
nesses—then students could get more exposure to the mindset and skills
needed to also demonstrate and improve their own self-regulated learn-
ing by creating more predictable and effective exam practice. Perhaps we
could also begin to encourage students to not only predict likely exam
questions—based on their experience in the course up to that point—but
also predict the likely, plausible answers, something we have dabbled in
for another course (Braunschweig & Fritz, 2019). In this context, students could
also be encouraged to "compare notes" with peers, which might shift some of the
exam-preparation burden away from faculty-led exam reviews. If so, this could
illustrate the highest level of Bloom's taxonomy, in which students create a
learning artifact with knowledge and skills they have applied from adaptive
practice and exam learning environments.
All we’d need next is a Holodeck!

References
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley and Sons.
Bass, S., Carpenter, T., & Fritz, J. (2021a, May 18). Promoting academic integrity in online, open-note exams without surveillance software. ELI Annual Meeting. https://events.educause.edu/eli/annual-meeting/2021/agenda/promoting-academic-integrity-in-online-opennote-exams-without-surveillance-software
Bass, S., Carpenter, T., & Fritz, J. (2021b, October 27). Promoting academic integrity in online: "Open Note" exams without surveillance software [Poster]. Educause Annual Conference. https://events.educause.edu/annual-conference/2021/agenda/promoting-academic-integrity-in-online-open-note-exams-without-surveillance-software
Braunschweig, S., & Fritz, J. (2019, March 1). Encouraging student metacognition by predicting exam Q&As [Poster]. Provost's Teaching & Learning Symposium, UMBC. https://umbc.box.com/exampredictposter
Carpenter, T. S. (2022, March 10). Do students carry lessons learned to the next course? https://doit.umbc.edu/analytics/community/events/event/101268/
Carpenter, T. S., Beall, L. C., & Hodges, L. C. (2020). Using the LMS for exam wrapper feedback to prompt metacognitive awareness in large courses. Journal of Teaching and Learning with Technology, 9(1), Article 1. https://doi.org/10.14434/jotlt.v9i1.29156
Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380. https://doi.org/10.1037/0033-2909.132.3.354
Davidson, C. N. (2012). Now you see it: How technology and brain science will transform schools and business for the 21st century. Penguin Books. https://www.penguinrandomhouse.ca/books/306330/now-you-see-it-by-cathy-n-davidson/9780143121268
Dunlosky, J., & Rawson, K. A. (2015). Practice tests, spaced practice, and successive relearning: Tips for classroom use and for guiding students' learning. Scholarship of Teaching and Learning in Psychology, 1(1), 72–78.
Dziuban, C., Howlin, C., Johnson, C., & Moskal, P. (2017, December 18). An adaptive learning partnership. Educause Review. https://er.educause.edu/articles/2017/12/an-adaptive-learning-partnership
Dziuban, C., Howlin, C., Moskal, P., Muhs, T., Johnson, C., Griffin, R., & Hamilton, C. (2020). Adaptive analytics: It's about time. Current Issues in Emerging Elearning, 7(1). https://scholarworks.umb.edu/ciee/vol7/iss1/4
Dziuban, C., Moskal, P., Parker, L., Campbell, M., Howlin, C., & Johnson, C. (2018). Adaptive learning: A stabilizing influence across disciplines and universities. Online Learning, 22(3), 7–39.
Ebbinghaus, H. (2013). Memory: A contribution to experimental psychology. Annals of Neurosciences, 20(4), 155–156. https://doi.org/10.5214/ans.0972.7531.200408
Fritz, J. (2020, October 29). Promoting academic integrity in online testing. UMBC Division of Information Technology. https://doit.umbc.edu/news/?id=97023
Fritz, J. (2021, July 22). Five faculty receive UMBC learning analytics mini grants. UMBC Division of Information Technology. https://doit.umbc.edu/analytics/news/post/111234/
Fritz, J. (2022, March 3). Four biology faculty give 1st exam in-class and online. UMBC Division of Information Technology. https://doit.umbc.edu/post/117418/
Fritz, J. (2022, October 21). 2022–23 learning analytics mini grant recipients announced. DoIT News. https://doit.umbc.edu/post/128655/
Fritz, J., Penniston, T., & Sharkey, M. (2022, February 17). How do UMBC course designs correlate to final grades [Show & Tell]. Learning Analytics Community of Practice, UMBC. https://tinyurl.com/umbclacop021722
Fritz, J., Penniston, T., Sharkey, M., & Whitmer, J. (2021). Scaling course design as a learning analytics variable. In Blended learning research perspectives (1st ed., Vol. 3). Routledge. https://doi.org/10.4324/9781003037736-7
Gray, J., & Lindstrøm, C. (2019). Five tips for integrating Khan Academy in your course. Physics Teacher, 57(6), 406–408. https://doi.org/10.1119/1.5124284
Hodges, L. C. (2015). Teaching undergraduate science: A guide to overcoming obstacles to student learning (1st ed.). Stylus Publishing (Kindle Edition).
McGuire, S. (2017, September 22). Get students to focus on learning instead of grades: Metacognition is the key! 4th Annual Provost's Teaching & Learning Symposium, UMBC. https://fdc.umbc.edu/programs/past-presentations/
McGuire, S., & McGuire, S. (2015). Teach students how to learn: Strategies you can incorporate into any course to improve student metacognition, study skills, and motivation. Stylus Publishing (Kindle Edition). https://sty.presswarehouse.com/books/BookDetail.aspx?productID=441430
McGuire, S., & McGuire, S. (2018). Teach yourself how to learn. Stylus Publishing (Kindle Edition). https://styluspub.presswarehouse.com/browse/book/9781620367568/Teach-Yourself-How-to-Learn
Miller, M. D. (2014). Minds online: Teaching effectively with technology. Harvard University Press. https://www.hup.harvard.edu/catalog.php?isbn=9780674660021
Munday, P. (2016). The case for using DUOLINGO as part of the language classroom experience. RIED: Revista Iberoamericana de Educación a Distancia, 19(1), 83–101. https://doi.org/10.5944/ried.19.1.14581
Murre, J. M. J., & Dros, J. (2015). Replication and analysis of Ebbinghaus' forgetting curve. PLOS ONE, 10(7), e0120644. https://doi.org/10.1371/journal.pone.0120644
Nilson, L. B. (2015). Specifications grading: Restoring rigor, motivating students, and saving faculty time. Stylus Publishing. https://styluspub.presswarehouse.com/browse/book/9781620362426/Specifications-Grading
Nilson, L. B., & Zimmerman, B. J. (2013). Creating self-regulated learners: Strategies to strengthen students' self-awareness and learning skills. Stylus Publishing.
Weimer, M. (2002). Learner-centered teaching: Five key changes to practice. John Wiley & Sons.
12
THREE-YEAR EXPERIENCE WITH
ADAPTIVE LEARNING
Faculty and student perspectives

Yanzhu Wu and Andrea Leonard

Introduction
As more students choose online learning, institutions of higher education
continually seek ways to support online students' success, not just by offering
flexible, marketable degrees but also by adopting pedagogical approaches and tools
that provide individualized learning. Researchers have reported that
individualized learning is promising for student performance because instruction
is adapted to meet each student's unique needs (Coffin & Pérez, 2015; Kerr, 2015).
To optimize individualized instruction in online learning, educators have made a
concerted effort to develop effective solutions. As a result, a growing number of
higher education institutions have adopted adaptive learning systems to deliver
tailored learning experiences that support student success both at a distance and
at scale (Dziuban et al., 2018; Mirata et al., 2020; Kleisch et al., 2017; Redmon
et al., 2021).
The concept of adaptive learning is not new in education; it has been around for
decades. It refers to an instructional method in which students are provided with
learning materials and activities that specifically address their personal
learning needs. What is new is the emerging adaptive technology aligned to this
concept, which opens up a whole new set of possibilities for transforming
one-size-fits-all learning into an individualized learning experience through
cutting-edge artificial intelligence systems, the collection and analysis of
individual learning data, and the provision of appropriate interventions (Howlin &
Lynch, 2014a). As students progress through the course, the adaptive technology
automatically directs them to the most appropriate content and assessment for
their current levels of understanding.
Additionally, the adaptive technology allows students with prior knowl-
edge to move more quickly through familiar content, focusing more on
the gaps in their understanding. This tailored approach has the potential
to help students manage their time more effectively by focusing on what
they have not yet mastered. As a result, students receive fundamentally
different learning pathways than in standard online courses. Adaptive technologies
also collect and display individual students' performance data that allow faculty
to provide meaningful just-in-time support. Tyton Partners (2016) provided a
comprehensive description of adaptive learning, defining it as

solutions that take a sophisticated, data-driven, and nonlinear approach to
instruction and remediation, adjusting to each learner's interactions and
demonstrated performance level and subsequently anticipating what types of content
and resources meet the learner's needs at a specific point in time.
(p. 3)

Background of the study


In 2018, the University of Louisiana at Lafayette (UL Lafayette) began to explore
viable adaptive technologies. A comprehensive review conducted by Tyton Partners
(2016) of adaptive learning platforms used in higher education shows that adaptive
algorithms and capabilities vary by product and that faculty's ability to build
and modify adaptive courses is often limited. Realizeit was selected after a
thorough review of various adaptive platforms. Common characteristics of the
competitor platforms included prepackaged content, either from a textbook
publisher or from the platform itself; the inability to edit or alter content; and
a lack of variety and variability in the assessment questions.

Adaptive learning at UL Lafayette


Realizeit is integrated into Moodle and contains all of the course learn-
ing materials, activities, and formative assessments that are required. It
provides a textbook-agnostic platform that gives instructors the greatest
freedom to customize their own courses rather than relying on prepack-
aged content. The adaptive intelligence engine in Realizeit uses a Bayesian
estimation model (Howlin & Lynch, 2014b); it continually gathers information about
which types of learning materials lead to an individual student's success and
preferentially directs the student to the most appropriate content. Realizeit
assists struggling students in focusing on the areas they have not yet mastered
through adaptive remediation, while providing real-time feedback on their
progress.
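
Realizeit's own model is described by Howlin and Lynch (2014b); the sketch below is
only a simplified illustration of how a Bayesian estimate of mastery can be revised
as assessment evidence arrives. The slip and guess probabilities are invented for
the example and are not Realizeit parameters.

    # Simplified Bayesian update of a lesson-mastery estimate (illustrative only).
    # slip  = P(incorrect answer | student has mastered the lesson)
    # guess = P(correct answer | student has not mastered the lesson)
    def update_mastery(prior, correct, slip=0.10, guess=0.25):
        like_mastered = (1 - slip) if correct else slip
        like_not_mastered = guess if correct else (1 - guess)
        evidence = like_mastered * prior + like_not_mastered * (1 - prior)
        return like_mastered * prior / evidence

    # Example: start at 0.5 and observe two correct responses, then one incorrect.
    estimate = 0.5
    for outcome in (True, True, False):
        estimate = update_mastery(estimate, outcome)
    print(round(estimate, 3))  # roughly 0.63 with these made-up parameters
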
Compared to other adaptive learning platforms, Realizeit offers the
most customizable options, enabling faculty to include multiple learning
materials, decide the sequence of topics, and determine when a student is
ready to move on to the next topic. Additionally, the graphical interface of
learning analytics clearly visualizes students’ learning progress and their
level of involvement with the learning process.
Instructional designers at UL Lafayette developed a total of six novel
adaptive courses from a range of disciplines including chemistry, psychol-
ogy, geography, business, and kinesiology. A total of 34 course sections
have been taught using Realizeit beginning in the summer of 2018 and
continuing to the present.

Adaptive course structure in Realizeit


An adaptive course in Realizeit is organized into three tiers: the course is
divided into modules, each module is split into lessons, and each lesson consists
of required and optional learning materials as well as a formative assessment.
Figure 12.1 shows a module consisting of four lessons, represented by circles and
arranged by a system of instructor-defined prerequisites to create a Learning Map.
The map allows students to choose which lesson to work on next,
whenever possible. When students have completed a lesson, that circle
will change color to reflect their level of achievement, as demonstrated
in Figure 12.2. The colors used range from deep red for beginner-level performance
to deep green for mastery of the information and skills within the lesson.

FIGURE 12.1 A Sample Learning Map

FIGURE 12.2 A Sample of Color-Coded Learning Map
Faculty and students can easily identify which lessons are completed
successfully and which are causing issues. The fundamental idea is that
practicing is the most promising way to acquire a new skill. Therefore,
students are allowed to repeat a lesson as many times as necessary to
reach mastery of that skill, and the number of repetitions needed differs
from student to student.
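
To make the three-tier structure concrete, the sketch below models a module as a
set of lessons with instructor-defined prerequisites and a simple mastery threshold
that unlocks later lessons. The class and field names are invented for illustration
and do not reflect Realizeit's internal data model.

    # Illustrative data model for the module -> lesson hierarchy described above.
    # Names and the 70% threshold are hypothetical, not Realizeit's schema.
    from dataclasses import dataclass, field

    @dataclass
    class Lesson:
        name: str
        prerequisites: list = field(default_factory=list)  # names of prior lessons
        best_score: float = 0.0  # updated as the student repeats the lesson

    @dataclass
    class Module:
        title: str
        lessons: dict  # lesson name -> Lesson
        mastery_threshold: float = 0.70

        def is_unlocked(self, lesson_name):
            """A lesson opens once every prerequisite meets the mastery threshold."""
            lesson = self.lessons[lesson_name]
            return all(self.lessons[p].best_score >= self.mastery_threshold
                       for p in lesson.prerequisites)

    module = Module("Sample Module", {
        "Lesson 1": Lesson("Lesson 1"),
        "Lesson 2": Lesson("Lesson 2", prerequisites=["Lesson 1"]),
    })
    module.lessons["Lesson 1"].best_score = 0.85
    print(module.is_unlocked("Lesson 2"))  # True once Lesson 1 is mastered
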

Adaptive course development model


Instructional designers at UL Lafayette created a three-phase adaptive
course development model that was based on the Realizeit platform struc-
ture, as illustrated in Figure 12.3.
In Phase One, a faculty member identifies the course and module level
learning objectives, defines the modules and lessons, and specifies pre-
requisites. Often, faculty members struggle with this process, so instructional
designers created a set of templates with examples to assist faculty in completing
the tasks in this phase. Once all the information is gathered, the instructional
designers begin ingesting the course into the Realizeit platform. In Phase Two, the
faculty member provides the learning content for each lesson and, finally,
determines the number of assessment questions asked per lesson and the
requirements to unlock the next lesson. In Phase Three, the faculty member defines
the due date for each module and the instructional designer links the Realizeit
content to a Moodle course. Both faculty and students are given training on how to
use the Realizeit platform once the course is launched.

FIGURE 12.3 Three-Phase Adaptive Course Development Process

Purpose of the study


Although the benefits of emerging adaptive technology are apparent, higher
education institutions have been slow to adopt adaptive learning systems, and few
empirical studies have examined student and faculty experiences with adaptive
courses. The purpose of the current study was, first, to understand undergraduate
students' learning experience with Realizeit and, second, to gain insight into
faculty perceptions of designing and teaching with Realizeit. The research
questions addressed in this study were:

• How do students perceive the learning process in adaptive courses using Realizeit?
• How do faculty perceive the designing and teaching of adaptive courses using Realizeit?

Methods

Research design
This study employed a mixed methods approach using multiple data sources
in which quantitative data collection was followed by qualitative data col-
lection. The design was chosen to meet the purpose of the study, namely to
determine the views of faculty and students regarding their experience with
adaptive courses in the Realizeit platform. Analyses of the quantitative infor-
mation obtained from the students and faculty were conducted separately,
while qualitative information was analyzed collectively.

Population and sample


During 2018–2021, six adaptive courses were developed in Realizeit and 34 sections
were offered across five disciplines: chemistry, geography, psychology, business,
and kinesiology. The sample consisted of 1049 undergraduate students who completed
these adaptive courses and ten faculty members who delivered at least one adaptive
course during this period.

Data collection procedure


In the first phase, data were collected through two surveys administered to
students and faculty separately. Both surveys included demographic information and
several closed-ended Likert-scale questionnaires; only the student survey
contained open-ended questions. The questionnaires for the student survey were designed to ascer-
tain student perceptions and satisfaction with the adaptive aspects of the
Realizeit platform, while the open-ended questions sought to elicit both
positive and negative experiences in their adaptive courses. The student
survey was administered through the Moodle LMS at the end of each
online adaptive course. Additionally, the weighted percentage of Realizeit
activities in each course, the overall average of students’ performance data
in the Realizeit platform, and the overall course average in Moodle were
collected to examine for any possible correlations.
For the faculty survey, questionnaires were developed to primarily
explore their experiences with Realizeit in the context of teaching their
adaptive courses. Faculty were contacted via email and invited to take
part in an online survey by the researchers. Both surveys were completely
anonymous, and participation was entirely optional.
In the second phase, semi-structured, open-ended interviews were conducted with
faculty who had completed the faculty survey and consented to participate in a
follow-up interview. The purpose of the interviews was to acquire a more in-depth
understanding of the survey results and to collect extensive information on the
challenges encountered in the design and delivery of adaptive courses. Faculty members were
contacted via email to set up interviews that were conducted individu-
ally through Zoom, and each participant consented to the session being
recorded. Each session lasted around an hour.

Data analysis
The quantitative data were analyzed using Excel, and descriptive statistics (means
and percentages) were produced. The analysis of the faculty interviews was based
on the recorded sessions. The qualitative data were analyzed using an inductive
approach (Creswell, 2008). After each interview, the researcher transcribed the
recording, verbatim, into a Word document. Each faculty member was assigned a
number and was referred to by that number, never by name, in the coding sheets. No
identifying information, not even the faculty member's department, the name of the
course, or their academic rank, appeared anywhere in the typed notes from the
interviews.
The initial coding of the interview transcriptions began with color coding to
identify keywords and phrases from each interview. The researchers independently
read the interview transcriptions and coded all five interviews. Once initial
coding was complete, the researchers compared and grouped codes and generated
emerging themes for further analysis. The inter-rater reliability between the two
coders was 96%. In the findings and discussion, the emerging themes were further
elucidated.
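
Percent agreement of this kind can be computed directly from the two coders'
parallel code assignments; the toy sketch below shows the calculation on
hypothetical labels rather than the study's actual coding sheets.

    # Toy example of percent agreement between two coders (labels are hypothetical).
    coder_1 = ["design", "support", "delivery", "support", "design"]
    coder_2 = ["design", "support", "delivery", "delivery", "design"]

    matches = sum(a == b for a, b in zip(coder_1, coder_2))
    print(f"{matches / len(coder_1):.0%}")  # 80% agreement in this toy example
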

Findings
A total of 173 of the 1049 undergraduate students who had enrolled in an adaptive
course during 2018–2021 responded to the student survey, and five of the ten
faculty members who had designed or taught at least one adaptive course during the
same period completed the faculty survey and volunteered to participate in the
follow-up interview. All
student participants had completed at least one online or hybrid course,
and the majority of students (84.4%) had taken between three and ten
online/hybrid courses prior to their adaptive courses. The faculty’s teach-
ing experience in higher education ranged from six to nineteen years, and
their online teaching experience at UL Lafayette ranged from two to ten
years. Table 12.1 contains a summary of participant demographics.

Quantitative analysis of student and faculty survey


Student perspectives on the adaptive features of Realizeit. The overall student
experience with adaptive courses in Realizeit was positive. The stu-
dent survey data showed that most students chose “strongly agree” or
“agree” when asked whether they enjoyed the adaptive course (M = 4.16,
SD = 1.01) and if it helped them in their learning (M = 4.12, SD = 1.05).
The same substantial favorable result was seen when 84.8% (n = 146) of
respondents indicated that they would take another adaptive course and
recommend it to others (M = 4.19, SD = 1.04) (Table 12.2).
With respect to the different aspects of the Realizeit platform, all mean scores
were at least 3.15, showing strong consensus regarding the beneficial features of
Realizeit. As shown in Table 12.3, the most agreed-upon feature was the capability
to remediate and repeat lessons for continuous improvement (M = 4.19, SD = 1.04),
which prompts students to review previously studied materials if they are
struggling with specific assessment questions. Immediate feedback on activities and assessments
TABLE 12.1 Demographics of Participants (Faculty names have been changed to F1–F5)

Students (n = 173)
  Academic level: Freshmen (16.5%), Sophomore (31.8%), Junior (28.24%), Senior (23.53%)
  Number of online/hybrid courses taken: 1 to 2 (17%), 3 to 5 (46.5%), 6 to 10 (24.7%), more than 10 (11.8%)

Faculty (n = 5)
  Experience with adaptive courses: designed and taught adaptive courses (F1, F2, F4); taught adaptive courses (F3, F5)
  Years of teaching in higher education: 6 to 19 years
  Years of teaching online/hybrid courses: 2 to 10 years

TABLE 12.2 Student Overall Experience (N = 173; Likert scale: 1 = Strongly Disagree to 5 = Strongly Agree)

Item                                                         M      SD
I enjoyed using this adaptive course platform (Realizeit)    4.16   1.01
This adaptive course platform helped me to learn             4.12   1.05
I would take another adaptive course                         4.19   1.04
I would recommend adaptive course to others                  4.19   1.04

was the next most-agreed beneficial aspect (M = 3.75, SD = 0.64), followed by the
ability to visualize learning progress with the Learning Map (M = 3.69, SD = 0.75),
a graphical user interface showing all available lessons and their connections
within a module and presenting the level of achievement for each lesson. Students
also rated highly three other aspects of the Realizeit platform: ease of access to
learning materials (M = 3.66, SD = 0.69), guided solutions for each question
(M = 3.51, SD = 0.95), and the capability to make choices in the lesson map
(M = 3.48, SD = 0.81).
Compared to face-to-face, online, and hybrid teaching formats, stu-
dents indicated four features of the adaptive teaching approach that are
TABLE 12.3 Students’ Perceptions about the Realizeit Platform

Items M SD
(Likert Scale: 1 = Not at all beneficial to 4 = Very n = 65
Beneficial)

What you should do next block in the Realizeit dashboard 3.29 0.99
Ability to visualize learning progress with Learning Map 3.69 0.75
Capability to make choices in the lesson map 3.48 0.81
Capability to remediate and repeat lessons for continuous 4.19 1.04
improvement
Ease of access to learning materials 3.66 0.69
Immediate feedback on activities and assessments 3.75 0.64
Assessment question flagging 3.36 0.98
Guided solutions of each question 3.51 0.95
Question hints 3.15 1.09

TABLE 12.4 Perceptions of Engagement in Adaptive Courses (n = 65)

Compared with different learning modalities,   Face-to-Face (%)   Hybrid (%)   Online (%)
the adaptive course was:
More engaging                                  24.62              12.31        30.77
Equally engaging                               55.38              75.39        55.39
Less engaging                                  20.00              12.30        13.85

more beneficial for meeting their needs. These are timely feedback on assessments
(40%, n = 71), opportunities for repeated practice (35%, n = 60), retention of
information (32%, n = 54), and focused remediation (35%, n = 62), which allows
students to move quickly through content they already know. Fifty-nine students
(80.7%, n = 65) indicated that they were somewhat or highly motivated to continue
improving their composite score for each lesson through repeated practice.
The survey results also reveal that most students found adaptive courses equally or
more engaging compared with other modalities, including face-to-face (80%), hybrid
(87.7%), and typical online courses (86%), whereas around 30% found adaptive
courses more engaging than typical online courses (Table 12.4).
Features of Realizeit influencing teaching effectiveness. Thirteen
Likert-like scale items were used to examine faculty perceptions of the
adaptive features of the Realizeit platform. Items were assessed on a
TABLE 12.5 Faculty Perspectives on the Realizeit Platform

Items M SD
(Likert Scale: 1 = Not at all beneficial to 4 = Very Beneficial)

n=5
Increasing teaching efficiency 3.80 0.44
Monitoring student progress 4.00 0.00
Quantifying how well students have learned each concept/skill 3.60 0.54
Identifying meaningful patterns in student learning behavior 3.40 0.54
Evaluating the effectiveness of course design 3.80 0.44
Addressing student questions 3.40 0.54
Identifying students in need of support 4.00 0.00
Allowing students to make choices in the Lesson Map 3.60 0.54
Providing students the ability to practice and revise lessons 3.80 0.44
Visualizing learning progress 3.80 0.44
Providing students with easy access to learning materials 3.40 0.54
Providing immediate feedback on activities and assessments 3.80 0.44
Engaging students with learning content 3.60 0.54

4-point Likert scale, from 1 (not at all beneficial) to 4 (very beneficial). A
total of five faculty responses were received, a response rate of 50%. Table 12.5
presents the means and standard deviations for survey items dealing with various
adaptive features of Realizeit. There was strong agreement on the benefit of the
Realizeit adaptive features, with mean scores of 3.40 or higher for all items. All
five respondents selected somewhat beneficial or very beneficial for ten of the
thirteen items, and they overwhelmingly agreed that monitoring student progress
and identifying students in need of support were the two most beneficial features.
Additionally, four faculty gave the most favorable responses to the following
features of Realizeit: increasing teaching efficiency, evaluating the effectiveness
of course design, providing students the ability to practice and revise lessons,
visualizing learning progress, and providing immediate feedback.

Overall student performance


A total of 29 sections of 4 different courses were taught using Realizeit. Five
sections of Course A and all sections of Course B were offered in a hybrid format,
while all sections of Courses C and D were delivered asynchronously. The Realizeit
portion of these courses, mainly formative assessments, was weighted differently
in each course section. It contributed anywhere from 10% to 40% of the overall
course grade, and summative assessments were heavily weighted in most courses
except Course
TABLE 12.6 Student Performance Data (Course names have been changed to Course A, B, C, and D)

Course     Number of Sections   Weighted Percentage of     Overall Course
                                Realizeit (%) (Range)      Average (%)
Course A   8                    10–17                      84.11
Course B   3                    10–15                      83.98
Course C   9                    35–40                      88.35
Course D   9                    11–21                      83.41

C (Table 12.6). A Pearson correlation coefficient was computed to assess the linear
relationship between the weighted percentage of Realizeit activities and the
overall course average. There was a positive correlation between the two
variables, r(28) = 0.52, p < 0.001.
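
Readers who want to run the same check on their own section-level data can do so
with a few lines of code; the values below are placeholders standing in for each
section's Realizeit weight and overall course average, not the study's data.

    # Minimal sketch: Pearson correlation between each section's Realizeit weight
    # and its overall course average. The two lists are placeholder values.
    from scipy.stats import pearsonr

    realizeit_weight = [10, 12, 15, 17, 21, 35, 38, 40]      # % of course grade
    course_average = [83.0, 83.4, 84.0, 84.1, 83.9, 88.0, 88.3, 88.6]

    r, p = pearsonr(realizeit_weight, course_average)
    print(f"r = {r:.2f}, p = {p:.4f}")
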

Qualitative analysis of student open-ended questions and faculty interviews
Twenty-three students responded to the open-ended questions focusing
on their positive and negative experiences with adaptive courses. Overall,
students expressed satisfaction. The five most frequently cited aspects of
Realizeit that contributed to this were (1) an intuitive user interface that made
it easy to navigate, access content, and identify the activities that needed to be
completed; (2) the Learning Map, which let them choose their own learning path,
understand how the lessons fit together, and visualize their learning progress;
(3) timely feedback provided by the automated assessments, with students noting
that Realizeit effectively redirected them to focused remediation whenever they
struggled with questions; (4) the ability to practice lessons repeatedly to improve
their performance until they reached their desired outcomes, consistent with the
student survey findings in Table 12.3; and (5) the absence of technical issues,
with most students (87%, n = 20) reporting that they did not experience technical
problems within Realizeit.
On the faculty end, three themes emerged through qualitative data
analysis of faculty interviews regarding designing and teaching adaptive
courses in the Realizeit platform.
Challenges with adaptive course design and development. Faculty
identified four challenges associated with designing an adaptive course
in Realizeit. First, they indicated that designing an effective adaptive
course is time-consuming because the adaptive course design process is
dramatically different from traditional course design. Instructors must devote
significant time to rethinking their course structure, identifying
the alignment of course and module-level learning objectives with learn-
ing materials, activities, and assessments, determining the varied order
of topics and prerequisites, and developing a granular knowledge map in
nonlinear, branching sequences that allow students to follow individual-
ized pathways to achieve the required learning objectives.
Second, the intricacy of the Realizeit platform made it hard for instructors to
build the course themselves. Instead, they had to rely heavily on the
instructional designers to program all of the course items into Realizeit. They
pointed out that the whole process of designing and building a high-quality course
takes at least six months to complete.
Third, the use of Open Educational Resources (OER) has proved to be
a challenge. Because the Realizeit platform charges a fee per student, all
adaptive courses are obliged to use OER in order to reduce the extra cost to
students. However, faculty noted that limited materials are accessible at higher
course levels, since most OER are geared to the 100–200 course level. Faculty also
reported confusion about what exactly constitutes OER, how to distinguish
publisher content from OER, and how to use OER responsibly. Finally, the faculty
agreed that adaptive learning is better suited to reaching the lower levels of
Bloom's taxonomy than the upper levels because Realizeit encourages independent
learning and practice. They believed it was difficult to develop activities that
require active communication and collaboration among students.
Challenges in teaching adaptive courses. Faculty encountered three
challenges in teaching adaptive courses. First and foremost, faculty are
aware of the high level of academic dishonesty in formative assessments in
Realizeit. Considering that the beauty of adaptive technology is that it can
assist students in mastering knowledge by constantly collecting students’
performance data from repeated practices and remediation, faculty must
devote a tremendous amount of time to developing question pools for
formative assessments to ensure continuous improvement while minimiz-
ing cheating. Due to this continuous aspect, paid proctoring is not a viable
option. Several studies have confirmed that online assessments completed
without proctoring perform significantly higher than those taken in
a proctored situation (Schultz, Schultz, & Gallogly, 2007; Carstairs &
Myors, 2009). In an unproctored setting, it is difficult to authenticate
exam taker’s identities, deter cheating, and accurately assess their actual
performance (Weiner & Morrison, 2009). F5 shared that, “some stu-
dents keep repeating the quizzes until they’ve seen all the questions in
the question pool, and I found some test questions and answers from my
course had been made available on the Internet.”
Second, three faculty members brought up the issue that the Realizeit
notification system hinders faculty from identifying and addressing stu-
dents’ issues on a timely basis because it does not generate email alerts.
This leads students to believe that they are not getting just-in-time support
from their faculty. As F4 stated, “when students sending a message in
Realizeit, I don't get any notification, I had to actively looking to Realizeit
to check students’ messages.”
Third, four faculty members raised an issue with the use of emojis, part of a state
framework (Howlin & Gubbins, 2018) integrated into the Realizeit platform to
capture students' feedback on a regular basis without interrupting the learning
process. The intention of this feature was to give students a means of expressing
and updating their emotional states throughout the course, to help faculty
visualize how students feel about their learning, and to ensure that students
receive appropriate assistance at the right time. However, faculty observed that
students chose their emotional states at the beginning of the course and seldom
updated them as the course progressed. This limits the intended effectiveness of
the state framework.
Positive aspects of teaching adaptive courses. Four faculty members believe that
Realizeit is best suited to promoting formative learning. The option of repeated
practice gives students an effective method for consolidating what they have
learned and motivates continuous improvement. They found that students typically
practice until they reach the Exemplary band, even though reaching the Emerging
band already permits them to access the next sequential lessons. Faculty commented
that repeated practice provides students a bridge over the gaps between
achievement levels.
All faculty members agreed that Realizeit’s learning analytics inte-
grated into the user dashboard provided a clear visual representation of a
summary of individual students’ performance data. This made it easy to
monitor learning progress, engagement, and achievement, as well as over-
all learning progress across different modules and learning objectives. F2
stated, “I always start by going to the Learning Map to check how many
students have completed their activities, and I go to the Need-to-Know
section to check if there are any students that had any specific problems”
(see Figure 12.4). Visual indicators, such as red-colored circles, helped them
quickly identify students in need of support or intervention. This is impossible
to do in online courses hosted in a typical learning management system. Faculty
members also indicated that they were able to identify relevant patterns of
learning and anticipate the future engagement of specific students using the
stream of granular data generated by each student. F3 stated, "I closely monitor
how much time they're spending on learning materials and formative assessment
because it is an important parameter to successfully complete the course." F4 also
noted that "at the beginning of the course, the set of data in the instructor
dashboard tells me who have been consistently not doing the modules, then I suspect
the students are dropping the course, so I reach out to them to see what I could do
to help them."

FIGURE 12.4 Faculty Dashboard
Lastly, two faculty members noted that adaptive courses previously developed by
another faculty member are relatively easy to teach with assistance from the
original instructional designers. They both felt that the instructional designers'
support is crucial for explaining the structure of the courses, the adaptive
features of Realizeit, and its functionality. Both remarked that they did not
experience difficulties in making edits and updates to course materials when
necessary.

Discussion
In this study, the researchers sought to understand the experiences that
faculty members had throughout the creation and teaching process of
adaptive courses in Realizeit, as well as students’ experiences in taking
these adaptive courses.
The findings of this study indicate that both faculty and students had overall
positive experiences using Realizeit. All participants found the platform easy to
use and the dashboard intuitive, and they expressed satisfaction with the repeated
and individualized practice, remediation, real-time feedback, and visualization of
learning progress afforded in Realizeit. It is worth noting that, compared to
traditional online courses, about 86% of students who responded to the survey
indicated that they were equally or more engaged in adaptive courses, and a strong
positive correlation was observed between the weighted percentage of Realizeit
activities and the overall scores on course summative assessments. Students
encountered no noticeable challenges or technical problems while engaging with
learning activities and assessments in the system, which contributed to their
satisfaction. The majority of students indicated that they would take another
adaptive course and would recommend it to others.
A thorough analysis of faculty perspectives elicited three dimensions of
adaptive learning in Realizeit: design, delivery, and support.

Design
Regarding adaptive course design, all the courses awarded 40% or less of the course
grade to the adaptive learning activities, and the majority valued adaptive
activities at less than 20% of the grade. The use of adaptive activities was
apparently intended to increase student engagement with the learning content and
better prepare students for summative assessments. Two faculty members used
adaptive activities as part of a flipped classroom model in their hybrid courses:
they had students complete the lower-level cognitive work through the adaptive
learning approach and then used active interactions during class to engage
students in higher-level cognitive learning with peers.
Faculty members indicated that building an adaptive course was time-
consuming, which was consistent with the findings of earlier research
(Cavanagh et al., 2020; Chen et al., 2017). For Realizeit to provide students with
a fully individualized learning experience, faculty members should supply a wide
range of learning materials and a large pool of assessment questions with varying
degrees of difficulty (Chen et al., 2017).
To avoid extra costs associated with taking an adaptive course, all
courses replaced textbook and publisher content with OER as the pri-
mary course material and the assessment questions. Some faculty mem-
bers shared that since they lacked prior understanding about OER, it was
quite a challenge to locate OER, adapt it to align with learning objectives,
fit the needs of their courses, and use it responsibly with proper permis-
sion. They also noticed that most students did not have any issues with
the OER as their primary assigned learning materials, but some students
preferred to have a textbook in addition to the OER.
Due to the complexity of importing information into Realizeit, a collaborative
approach was used to develop courses. Consequently, faculty
must rely on the instructional designer to add and change information
on their behalf. When compared to traditional online courses built in an
LMS, the initial development process of an adaptive course might take
twice as long.

Delivery
Adaptive technology has revolutionized the way faculty facilitate student
learning. All faculty members found that having access to a matrix of learning
analytics was beneficial for quickly identifying patterns of student engagement,
efficiently tracking individual participation and class performance, and spotting
potential problems. Faculty highlighted the value of the engagement data, which
included the visualization of student learning progress, time spent on each
activity, color-coded knowledge mastery levels, individual scores, and the list of
student difficulties needing faculty attention.
When teaching an adaptive course, faculty are certain that adaptive
technology promotes individualized learning by providing students with
the opportunity for continuous improvement on specific learning gaps
by going through the relearning and retesting process. However, when
faculty were asked to share challenges in teaching an adaptive course,
they mentioned two issues. First was a concern regarding the possibility
of academic dishonesty occurring when taking the formative assessments
repeatedly. Paid proctoring services are not feasible considering the volume
of formative assessments in each adaptive course. Furthermore, without
proctoring, it seems to be impossible to prevent academic dishonesty and
correctly measure student performance (Weiner & Morrison, 2009). In
an attempt to reduce the likelihood of academic dishonesty, faculty used
the following practices: the inclusion of variables in assessment questions
to prevent the repeating of identical questions, the use of a large question
pool for each lesson to diversify the questions seen, and the reduction of
the impact of the adaptive learning grade on the overall course grade to
disincentivize cheating. Some researchers also recommended the consid-
eration of timing the adaptive assessments and limiting the number of
attempts (Miller et al., 2019).
The second issue concerns dissatisfaction with two communication tools in
Realizeit. Some faculty members reported that the notification system and the
emotional expression tool were not functioning as designed. The notification
system does not send an email alert when students send messages to faculty in
Realizeit, which prevents faculty
from responding to students in a timely manner. According to faculty, the
emotional expression tool has flaws as well. The same emotional states
that students selected at the beginning of the course were presented in
every later module, and there is no alert sent to the faculty when students
change their emotional states. As a result, it is difficult for faculty to iden-
tify the problems that students are experiencing, particularly for those
who selected the frustration emoji.
Effective interaction between students and faculty is a vital component
of sustaining a supportive learning environment, improving student per-
formance, and increasing student satisfaction in an online course (Sher,
2009; Wu et al., 2010). Online students are more prone to seek prompt
interaction with faculty. Without effective communication, students may
feel alienated, which contributes to student dissatisfaction.

Support
All faculty participants overwhelmingly agreed that the assistance offered
by instructional designers is essential to the success of creating and deliv-
ering adaptive courses. They valued the assistance they received prior to,
during, and after the course design. Before course design begins, instructional
designers meet with faculty to explain the Realizeit approach, review the
three-phase process of adaptive course design shown in Figure 12.3, determine the
course components that can best be supported with
adaptive learning, and set the design timeline. The active design process is
streamlined with the use of instructional designer-provided templates that
faculty fill in with course information such as learning objectives, course
structure, learning materials, and assessment questions. When the design
is complete and the course is launched, instructional designers continue to
provide support for faculty and students with respect to technical prob-
lems and continuous course improvement.
Some faculty members also provided insight into areas of concern that
administrators should consider addressing, including increasing aware-
ness of the benefits of adaptive learning and providing the necessary sup-
port to faculty members who wish to implement this innovative approach.

Limitations of the study


There are several limitations to this study. The findings cannot be generalized:
only six adaptive courses have been developed at UL Lafayette, so the faculty
sample size used in this study is relatively small, and a limited variety of
disciplines was examined. The validity of the data also depends on the veracity of
participants' self-reflection. Another limitation is that the research was
conducted at one institution, which may constrain participants' perspectives.
Finally, as opposed to the faculty interviews, which allowed in-depth responses,
the qualitative portion of the student data was gathered using optional open-ended
questions only, with a low response rate. This limited the extent to which students
could comment on their experiences.

Conclusion
Today, the number of online course offerings has grown exponentially,
and as a result of the pandemic, the number of students learning online,
fully or partially, has increased dramatically (Lederman, 2021). Given
that students have different learning styles and different prior knowledge,
the paradigm of online education has shifted from passive to adaptive
learning, and a revolution of adaptive learning is presently underway,
powered by an explosion of robust and cost-effective adaptive technol-
ogy solutions and resources. The findings of this study contribute to the body of
knowledge about student and faculty perspectives and experiences using adaptive
technology, especially the Realizeit platform. The study also documented faculty
practices and challenges in the design and delivery of an adaptive course, as well
as the crucial relationship between faculty and instructional designers in this
process. Hopefully, the results will better assist faculty, instructional
designers, and administrators in understanding the requirements and rewards of
adaptive courses.

References
Carstairs, J., & Myors, B. (2009). Internet testing: A natural experiment reveals test score inflation on a high-stakes, unproctored cognitive test. Computers in Human Behavior, 25(3), 738–742.
Cavanagh, T., Chen, B., Lahcen, R., & Paradiso, J. (2020). Constructing a design framework and pedagogical approach for adaptive learning in higher education: A practitioner's perspective. International Review of Research in Open and Distributed Learning, 21(1), 173–197.
Chen, B., Bastedo, K., Kirkley, D., Stull, C., & Tojo, J. (2017). Designing personalized adaptive learning courses at the University of Central Florida. ELI Brief. https://library.educause.edu/resources/2017/8/designing-personalized-adaptive-learning-courses-at-the-university-of-central-florida
Coffin Murray, M., & Pérez, J. (2015). Informing and performing: A study comparing adaptive learning to traditional learning. Informing Science, 18, 111–125.
Creswell, J. W. (2008). Research design: Qualitative, quantitative, and mixed methods approaches. Sage.
Dziuban, C., Howlin, C., Moskal, P., Johnson, C., Eid, M., & Kmetz, B. (2018). Adaptive learning: Context and complexity. E-Mentor, 5(77), 13–23.
Howlin, C., & Gubbins, E. (2018). Self-reported student affect. https://lab.realizeitlearning.com/features/2018/09/19/Affective-State/
Howlin, C., & Lynch, D. (2014a, November 25–27). A framework for the delivery of personalized adaptive content [Conference presentation]. 2014 International Conference on Web and Open Access to Learning (ICWOAL), Dubai, United Arab Emirates.
Howlin, C., & Lynch, D. (2014b, October 27–30). Learning and academic analytics in the Realizeit system [Conference presentation]. E-Learn 2014, New Orleans, LA, United States.
Kerr, P. (2015). Adaptive learning. ELT Journal, 70(1), 88–93.
Kleisch, E., Sloan, A., & Melvin, E. (2017). Using a faculty training and development model to prepare faculty to facilitate an adaptive learning online classroom designed for adult learners. Journal of Higher Education Theory and Practice, 17(7). https://articlegateway.com/index.php/JHETP/article/view/1470
Lederman, D. (2021, September 16). Detailing last fall's online enrollment surge. Inside Higher Education.
Miller, L. A., Asarta, C. J., & Schmidt, J. R. (2019). Completion deadlines, adaptive learning assignments, and student performance. Journal of Education for Business, 94(3), 185–194.
Mirata, V., Hirt, F., Bergamin, P., & Westhuizen, C. (2020). Challenges and contexts in establishing adaptive learning in higher education: Findings from a Delphi study. International Journal of Educational Technology in Higher Education, 17(32). https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-020-00209-y
Redmon, M., Wyatt, S., & Stull, C. (2021). Using personalized adaptive learning to promote industry-specific language skills in support of Spanish internship students. Global Business Languages, 21, 92–112.
Schultz, M. C., Schultz, J. T., & Gallogly, J. (2007). The management of testing in distance learning environments. Journal of College Teaching and Learning, 4(9), 19–26.
Sher, A. (2009). Assessing the relationship of student-instructor and student-student interaction to student learning and satisfaction in web-based online learning environment. Journal of Interactive Online Learning, 8, 102–120.
Tyton Partners. (2016). Learning to adapt 2.0: The evolution of adaptive learning in higher education. https://d1hzkn4d3dn6lg.cloudfront.net/production/uploads/2016/04/yton-Partners-Learning-to-Adapt-2.0-FINAL.pdf
Weiner, J., & Morrison, J. (2009). Unproctored online testing: Environmental conditions and validity. Industrial and Organizational Psychology, 2(1), 27–30.
Wu, J., Tennyson, R. D., & Hsia, T. (2010). A study of student satisfaction in a blended e-learning system environment. Computers and Education, 55(1), 155–164.
13
ANALYZING QUESTION ITEMS
WITH LIMITED DATA
James Bennett, Kitty Kautzer, and Leila Casteel

Introduction
Few who are familiar with robust adaptive learning systems will dismiss
their effectiveness in both providing instruction and assessing student
achievement of specific learning objectives. While powerful teaching tools,
adaptive learning systems can also require a great deal of effort to
ensure that accurate measurement of learning is taking place. Refinement
of question items can take a number of iterations and depend upon large
sample sizes before the educator is assured of their accuracy. This process
can take a long time when courses run intermittently or with only small
cohorts.
In fact, some courses may never reach a large-enough sample size to
satisfy the recommended requirements of conventional statistics. This can
be especially concerning when dealing with learning content that is criti-
cal for student mastery and where the stakes may be high enough that the
accuracy of adaptive learning assessments must be immediately reliable.
In situations such as these, educators do not have the luxury of waiting until
enough students have passed through the course to produce a sufficiently large
sample.
For some time now we have undertaken to develop tools and processes
that allow for quick assessment of adaptive learning question items with-
out large sample sizes or numbers of iterations. The goal is to be able to
spot question items for possible refinement using conditional probability.
We have understood from the beginning that the best methods require
good sample sizes of data, but we also know that our learners' interests are better
served if we quickly identify question items that may need refinement and move
them forward in the analysis process. In other words, it is better to spend the
extra time analyzing a handful of questions that might prove to be accurate than to
wait several semesters to narrow the analysis down to fewer questions.

Conditional probability
Conditional probability is defined as a measure of the probability of an event,
given that another event or condition has already occurred (Gut, 2013). While the
primary focus of conditional probability is to identify a probable outcome based
on statistical analysis (in this case, the probability that some part of a question
item is proving problematic for learners), the approach we have been developing
allows us to quickly identify question items that may be problematic. Whether or
not they actually are is determined by further analysis.
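
In standard notation (a restatement of the textbook definition, not a formula
specific to any one tool), the conditional probability of an event A given an event
B with P(B) > 0 is

    P(A \mid B) = \frac{P(A \cap B)}{P(B)}

In the question-review setting, for example, A might be "a learner answers a given
item incorrectly" and B "the learner has already demonstrated mastery of the
supporting content"; an unusually high value of P(A | B) would flag the item
itself, rather than the learner, for further review.
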

Background
This study is a logical extension of our earlier work published by Springer
(2021). After we had arrived at a method for question item analysis, our next
challenge was to explore the use of the same method under less-than-ideal
conditions (e.g., limited number of sections, small cohorts, etc. ). We think
the details of the original method applied to question item analysis are just
as important to educators working in adaptive learning platforms as the
new findings and that they are closely tied together. In order to share those
previous findings as well as the new ones, what follows is a brief description
of each of the principles we use in our question evaluations. Also, at the end
of this chapter we include a set of tables that present the entire question analysis system in the form of a tool, with step-by-step procedures.

Question item review


The main method employed to determine whether or not a question item requires refinement is based on early psychometric research. In the science
of psychological assessment there are four principles used to judge the
quality of a given assessment (Rust, 2007). These are validity, reliability,
standardization, and freedom from bias. We have found that the same
principles serve exceptionally well when it comes to evaluating the effec-
tiveness of any assessment within an adaptive learning system. Each can serve as a standard against which to measure the accuracy of question items.
Using these four principles to create a tool for the evaluation of adaptive
learning question items was the major focus of our work for the Human–
Computer Interaction conference. What follows is an abbreviated descrip-
tion and explanation of each of the four categories as presented in the
earlier work.
Validity. Validity is the primary measure of a question item's value and should always be the first criterion for analyzing any question item. The focus is on whether an assessment captures and measures what is intended to be assessed. Unfortunately, many learning assessments suffer from misalignment on this critical point. Too often, question items are written in a way that focuses on a learner's ability to recall or associate terms rather than on measuring learning and competency levels. An example of a misaligned assessment would be a question that asks learners
to select from a list of answers that contain parts of the human anatomy,
but what needs to be measured is the learner’s understanding of organ
functions.
There are two primary checks to determine validity for an adaptive
learning question item. The first is to make certain that the adaptive learn-
ing content supports the question item. In other words, will the learner be
able to answer the question item correctly based on the information they
have consumed?
The second check is to make certain the question item aligns with a
specific course outcome, learning objective, or required competency and
then evaluate whether the assessment correctly measures a learner’s per-
formance in that area. This can be done by using Bloom's Taxonomy (Bloom, 1956) to compare the cognitive domain of what is intended to be measured with what the question item actually assesses.
An example of a valid assessment that aligns with the proper Bloom’s
cognitive domain would be a question that asks a learner to describe and
explain a concept. In this case the assessment would be measuring under-
standing. If understanding is what the assessment was intended to meas-
ure, then the assessment is likely valid.
Reliability. A question item’s reliability depends on several factors.
Does it perform consistently and do the scores demonstrate the assess-
ment is measuring student learning more accurately than mere chance?
Also, do the conditions and the context of the assessment interfere with
learning measurement?
An example of a test item that would be unreliable by its context would
be a question that gave away its own answer in its wording or gave hints
toward the solution of another question in the same exam.
The method we use to begin determining a question item's reliability is to examine the Difficulty Index of the answers provided by the learners. The Difficulty Index (sometimes referred to as the P value) is obtained by
using a simple formula of dividing the number of correct answers by the
number of all the learners that answered the item (Lord, 1952). This meas-
urement is important in the study for this chapter and its use as a condi-
tional probability tool will become evident in the Methodology section.
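As a purely illustrative sketch (not code from the chapter), the Difficulty Index calculation and the 0.30 review trigger discussed later in the Methodology section might be computed as follows; the item counts and cohort size are invented for the example:

```python
def difficulty_index(correct_count, total_respondents):
    """Difficulty Index (P value): the proportion of learners answering an item correctly."""
    return correct_count / total_respondents

# Invented example: correct-answer counts for three items in a cohort of 15 learners.
item_correct_counts = {"item_01": 4, "item_02": 11, "item_03": 9}
cohort_size = 15
flag_threshold = 0.30  # first-pass review trigger used in this chapter

for item, correct in item_correct_counts.items():
    p = difficulty_index(correct, cohort_size)
    note = " -> set aside for review" if p <= flag_threshold else ""
    print(f"{item}: Difficulty Index = {p:.2f}{note}")
```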
Standardization. As a principle of analysis, standardization seeks to
ensure that the performance of learners on any given question will be in alignment with that of other learners with similar scores. In other words, if
an above-average learner answers a specific question correctly, most of
the other above-average learners should also answer the same question
correctly.
For evaluating the standardization of an assessment item, we have
found the use of an item Discrimination Index to be simple and effective.
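The chapter does not specify how the Discrimination Index is computed; one common formulation, consistent with the point biserial value referenced later in Table 13.5, is the point-biserial correlation between success on the item and the total score. A minimal sketch with invented data:

```python
import numpy as np

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a dichotomous item result and total scores.

    Positive values indicate that higher-scoring learners tend to answer the item
    correctly; values near or below 0.0 are a common trigger for review.
    """
    item_correct = np.asarray(item_correct, dtype=bool)
    total_scores = np.asarray(total_scores, dtype=float)
    p = item_correct.mean()                  # proportion answering correctly
    m1 = total_scores[item_correct].mean()   # mean total score, correct group
    m0 = total_scores[~item_correct].mean()  # mean total score, incorrect group
    s = total_scores.std()                   # population standard deviation
    return (m1 - m0) / s * np.sqrt(p * (1 - p))

# Invented data: 10 learners' item results and quiz totals (out of 25).
item = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
totals = [22, 20, 12, 19, 10, 23, 18, 14, 21, 11]
print(round(point_biserial(item, totals), 3))
```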
Freedom from bias. Bias in assessments is often most evident when
scores differ based on group membership. Though this is often tied to the ideas of anti-discrimination and of protected groups, such as those based on gender, ethnicity, or disability, bias can be found toward any group. An
example would be when members of one group appear to perform better
than members of a different group on the same assessment. In an organized educational setting, a group could also be a cohort or class; a marked performance difference between cohorts is often a strong indicator that something is amiss (though the problem might not be the question item itself).

Methodology
For some time, we have used the Difficulty Index as a first-pass indicator when performing assessment analysis. The Difficulty Index is easy to obtain and is widely used as an indicator by psychometricians. The problem with relying on the Difficulty Index as a question item review trigger is that small sample sizes can lower the value artificially. How much lower, and how different the value would be if the same question item were presented to a larger cohort, is determined by a complex set of factors that are difficult to track. What we needed to know was whether we could still depend on low Difficulty Index scores as a trigger for further question item analysis. We determined that to accomplish this we would
need a good sample of small cohorts answering the same questions. This
would give us a series of comparative Difficulty Index scores for each item,
which in turn could be used to determine a conditional probability that an
item needed refinement.
The methodology for this research used a simple approach that was applied in actual question item analysis. We wanted to learn whether the tendency toward low Difficulty Index values in small cohorts was still within practical levels for use as a first indicator for question item review. In
essence, we were looking for an average conditional probability indicating that the Difficulty Index could still be used before we had run the course enough times, or with large-enough cohorts, to obtain larger samples of data.
The general recommendation is to use a 0.30 Difficulty Index as a
threshold, but for this study we elevated our threshold to any question
item that had a Difficulty Index below 0.40. This would ensure that we
were not missing any trends that appeared in the upper range. It should be noted that no question items in the study turned out to require the 0.40 threshold for review, so those items were removed during compilation of the results and our findings are based on the 0.30 Difficulty Index threshold.
Another thing worth noting here is that when Difficulty Index values
are used to trigger question item reviews, there is also often an upper
number recommended along with 0.30 on the lower end. Typically, if
a question item has a Difficulty Index of above 0.80, it is also marked
for further analysis. This is different in our case since we are working
with questions used to determine if a learner has attained certain knowl-
edge. While we might expect to see these sorts of statistical margins on
exams and other assessments, we would hope that knowledge check ques-
tions would generate higher Difficulty Index scores through confirmed
learning.

Sample size
Our sample consisted of 108 learners in 7 cohorts with an average of 15 students per cohort (most cohorts ranged in size from 7 to 20, with one outlier having 26 students). These cohorts were given a total of 7 knowledge check quizzes with 25 question items each (175 questions total).
We calculated the Difficulty Index for each and compared that number
across all cohorts for the same quiz. As an extra measure we also collected
data on the number of question items that had at least one cohort with a
Difficulty Index above 0.30 for any question that needed revision. While
this was information outside of what we needed for our conditional prob-
ability research, we wanted to know if there were any tendencies for small
cohorts to produce artificially high Difficulty Index values that would be
missed if there was only a single cohort available for review.

Study data
Table 13.1 presents the data collected and combined from each of the
small cohorts. Each row contains the combined data for a quiz delivered
to all of the cohorts.

TABLE 13.1 Data Collected from the Study

Quiz | Total items with any Difficulty Index of 0.30 or lower | Total items in quiz that warranted refinement after analysis | Number of question items that had at least one cohort with a Difficulty Index above 0.30 for a question that needed revision | Conditional Probability (%)
Knowledge Check A | 6 | 2 | 1 | 33
Knowledge Check B | 8 | 2 | 2 | 25
Knowledge Check C | 7 | 2 | 0 | 28
Knowledge Check D | 7 | 3 | 1 | 42
Knowledge Check E | 11 | 7 | 3 | 63
Knowledge Check F | 8 | 2 | 1 | 25
Knowledge Check G | 13 | 2 | 2 | 15
Average conditional probability = 32

Note: The column on the right displays the conditional probability that any given question item from the quiz, with a Difficulty Index of 0.30 or lower, may require refinement.

• Total items with any Difficulty Index of 0.30 or lower—is the total
number of question items that produced a Difficulty Index of 0.30 or
lower in a single quiz. Even if only a single cohort had a number that
low, the question item was set aside for review.
• Total items in quiz that warranted refinement after analysis—this
number is the total number of question items from each quiz that was
determined to need refinement after analysis using the four principles
to evaluate assessments.
• Number of question items that had at least one cohort with a Difficulty
Index above 0.30 for a question that needed revision. This is a special
category of data that were collected during the research, but not needed
to make the conditional probability calculations.
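Concretely, the right-hand column is the proportion of flagged items (those with a Difficulty Index of 0.30 or lower) that analysis showed to warrant refinement. For Knowledge Check A, for example:

P(refinement | Difficulty Index ≤ 0.30) = 2 / 6 ≈ 0.33, or 33%.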

Results
The results from the study (see Tables 13.1 and 13.2) show some trends that can be useful when analyzing question items with small cohorts in an adaptive learning system. These trends, along with a few anomalies, are noted in the following bullets.

• The average conditional probability that a given question item needed
refinement was 32%. It is important to note that this does not mean
that 1/3 of all question items with a Difficulty Index below 0.30 are
bad questions. It simply means that while a question might still be
good, there is a 1/3 chance it could use refinement to make certain it is
accurately measuring learning. It is also worth saying that it is entirely
possible for that predictive ratio to change based on the average quality
of question items.
• There seem to be some anomalies occurring in Knowledge Check E and Knowledge Check G. In both cases, the number of question items with a Difficulty Index of 0.30 or below was significantly higher, and the conditional probabilities for these assessments represent the two
extremes of all the data sets. Since the purpose of this study was to test
the use of the Difficulty Index as a predictive indicator in question item
analysis in small cohorts, we did not look for explanations (e.g., question
topics that were more difficult, overall student performance, etc.). This
study focuses on the use of simple statistics.

It should be noted that if these two outliers were removed from the study, a tighter grouping of each data point would be present, but the conditional probability does not change significantly (see Table 13.2).

TABLE 13.2 Data Collected from the Study (two outliers excluded)

Quiz | Total items with any Difficulty Index of 0.30 or lower | Total items in quiz that warranted refinement after analysis | Number of question items that had at least one cohort with a Difficulty Index above 0.30 for a question that needed revision | Conditional Probability (%)
Knowledge Check A | 6 | 2 | 1 | 33
Knowledge Check B | 8 | 2 | 2 | 25
Knowledge Check C | 7 | 2 | 0 | 28
Knowledge Check D | 7 | 3 | 1 | 42
Knowledge Check F | 8 | 2 | 1 | 25
Average conditional probability = 30.6

• As was noted in the Methodology and Study Data sections, we also
recorded the number of question items that had at least one cohort
with a Difficulty Index above 0.30 for a question that needed revision
(4th column in the tables). Again, this is outside of a study that looks to
use a Difficulty Index of 0.30 as a conditional probability predictor, but users of this method should be aware that in small-cohort situations, the 0.30 threshold may not always be met. This is evident in the tables above, where it can be seen that in nearly every quiz, at least one cohort out of seven did not have a Difficulty Index of 0.30 or below for at least one of the question items that were later determined to need some refinement.

TABLE 13.3 Checklist for Analyzing Validity of Question Items

• Does the item truly assess the knowledge or skill assigned to it? (e.g., Bloom's verb, practical performance, etc.)
  Yes: Item appears valid.
  No: Further review required.
• Does the item assess prerequisite or necessary supporting knowledge/skill?
  Yes: Item may be valid as part of the body of knowledge for the subject domain.
  No: Item does not appear valid.
• Are learners presented with content required to learn this concept?
  Yes: Further review required. The assessment may simply be beyond appropriate expectations for learners at this level.
  No: Ensure content is presented to the learner and review the assessment again with new data.

Instructor/Instructional Designer: If the item appears valid, the instructor may use this information as augmented knowledge concerning learner performance, and adjust learner activity accordingly (e.g., review material, etc.). If the item's validity is suspect, it should be corrected as a learning object so that it is valid.

TABLE 13.4 Checklist for Analyzing Reliability of Question Items

• Is the Difficulty Index less than 0.30 or over 0.80? (Note that 0.30 is simply a trigger; different assessment types may trend to other Difficulty Index values.)
  Yes: Further review required.
  No: Item appears reliable, but further review can be performed.
• Are there any distractors or other elements of the item that appear to generate an unusual number of incorrect answers (e.g., a larger number of learners selecting the same incorrect distractor, a miskeyed element, etc.)?
  Yes: Item may not be reliable. Further review required.
  No: Further review required. SME review should seek to determine why this is occurring.
• Is there something unclear or misleading in the question stem that could account for the low Difficulty Index?
  Yes: Item is not reliable and may not be valid (misleading stems can misalign an item with what is intended to be measured).
  No: Item may not be reliable. Further review required. SME review should seek to determine why this is occurring.
• Is there something in the question stem that could account for an unusually high Difficulty Index (e.g., the question gives away the answer)?
  Yes: Item is not reliable and may not be valid (misleading stems can misalign an item with what is intended to be measured).
  No: Item may not be reliable. Further review required. SME review should seek to determine why this is occurring.

Instructor/Instructional Designer: If the item appears reliable, the instructor may use this information as augmented knowledge concerning learner performance, and adjust learner activity accordingly (e.g., review material, etc.). If the item's reliability is suspect, it should be corrected as a learning object. This may include adjusting learner scores if they are influenced by an unreliable assessment.

Discussion and conclusions


The results of this study have proven to be reassuring. We found that the
Difficulty Index can serve adequately as a first indicator for question item
review, even if the analysis must rely only on results from small cohorts.
In fact, it is interesting to note that the conditional probability trend seen in the small-cohort sample did not differ much from what we would expect to see in larger cohorts. This gives us confidence that the analysis of multiple-choice question items through the tool we have developed is sound, regardless of cohort size. It also does away with the need to wait for the accumulation of large amounts of data when refining assessments in an adaptive learning system.
This study was conducted out of a need to determine if our usual meth-
ods for question item analysis were still applicable with the data from
small cohorts. The concern was over the potential for inaccurately low
Difficulty Index scores from small cohorts that would artificially increase
the number of question items earmarked for review. If that number was found to be too high, our method of using Difficulty Index numbers as a first trigger for deeper analysis might prove to be inefficient. Tables 13.3–13.6 provide a tool we use as a guide for the process of analyzing question items for validity, reliability, standardization, and freedom from bias.

TABLE 13.5 Checklist for Analyzing Standardization of Question Items

• Are there anomalies indicated in the Discrimination Index (e.g., a point biserial score below 0.0)?
  Yes: Further review required.
  No: Assessment item may be standardized.
• Are there any distractors or other elements of the item that appear to generate an unusual number of incorrect answers (e.g., a larger number of learners selecting the same incorrect distractor)?
  Yes: Item may not be standardized or reliable. Further review required. SME review should seek to determine why this is occurring.
  No: Further review required. SME review should seek to determine why this is occurring.
• Are there any other anomalies (such as stem or distractor bias) that might explain the lack of standardization?
  Yes: Further review required. SME review should seek to determine why this is occurring.
  No: Assessment item may be standardized.

Instructor/Instructional Designer: If there appears to be standardization with the item, the instructor may use this information as augmented knowledge concerning learner performance, and adjust learner activity accordingly (e.g., review material, etc.). If the item's standardization is suspect, it should be corrected as a learning object.

TABLE 13.6 Checklist for Analyzing Bias of Question Items

• Is there evidence that certain groups perform better or worse on the same question items?
  Yes: Further review required.
  No: Assessment item may be free from bias.
• Are there any distractors or other elements of the item that appear to generate an unusual number of correct or incorrect answers among certain groups (e.g., a larger number of learners selecting the same incorrect distractor)?
  Yes: Item may not be free from bias. Further review required. SME review should seek to determine why this is occurring.
  No: Further review required. SME review should seek to determine why this is occurring.

Instructor/Instructional Designer: If there appears to be bias with the item, the instructor may use this information as augmented knowledge concerning learner performance, and adjust learner activity accordingly (e.g., review material, etc.). If the item's bias is suspect, it should be corrected as a learning object.
With confirmation that our methods can be used with small cohorts,
we think it is valuable to share a version of our question analysis tool as a
part of this chapter. This guide has been developed over several years as a
way to quickly identify question items that may need review.
The tool is a simple checklist that evaluates question items based on the psychometric principles of validity, reliability, standardization, and freedom from bias. We have found this tool to be exceptionally reliable in identifying question items that might need refinement, as well as a step-by-step guide that helps pinpoint the cause of any problems.

References
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of
educational goals. Longmans, Green. Or Bloom’s revised taxonomy: Anderson,
L. W., & Krathwohl, D. R. (2001). A taxonomy for teaching, learning, and
assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.
Gut, A. (2013). Probability: A graduate course (2nd ed.). Springer.
Lord, F. M. (1952). The relationship of the reliability of multiple-choice test to the
distribution of item difficulties. Psychometrika, 18, 181–194.
Rust, J. (2007). Discussion piece: The psychometric principles of assessment.
Research Matters, Issue 3.
14
WHEN ADAPTIVITY AND
UNIVERSAL DESIGN FOR
LEARNING ARE NOT ENOUGH
Bayesian network recommendations for tutoring

Catherine A. Manly

Adaptive learning systems have become well positioned to assist learners
with wide variation in perceptual and processing ability, but their effec-
tiveness remains tied to content design. Given that colleges increasingly
enroll students with disabilities (Kimball et al., 2016),1 an approach like
Universal Design for Learning (UDL) aids making course material acces-
sible in ways beneficial for all students (Rose & Meyer, 2002; Tobin &
Behling, 2018). Higher education must go beyond traditional accommoda-
tion provisions for students with disabilities since only about 35% of such
students tell their institution about their disability (Newman & Madaus,
2015). While UDL offers design guidance for intentionally planning for
learner variability, UDL and adaptive features take time to implement,
often through iterative (re)designs over multiple semesters. As a result, a
fully universally designed approach often remains aspirational, including
in adaptive systems. Thus, educators face the challenge of helping struggling students in courses that are not (yet) fully universally designed. This leads educators to ask
how to help students when universal design efforts fall short or remain in
process.
This proof-of-concept study illustrates a learning analytics-informed
approach to this issue combining formative data traces from tutoring and
adaptive activity. It builds a prescriptive analytics model to identify when
recommending tutoring may be warranted to support students. This study
assumes practices such as tutoring, use of multiple modalities within an
adaptive system, learning activity repetition, and time on task benefit
learning. This builds on prior adaptive learning research demonstrating
positive learning impact when providing content via multiple modalities
(i.e., text, video, audio, interactive, or mixed content), a key UDL com-
ponent (Manly, 2022). Within this framing, I illustrate how analyzing
within-course data may help make recommendations to students.
Institutions increasingly use predictive analytics to inform feedback
to students, often through vendor-driven systems involving proprietary
algorithms with unknown characteristics. Prescriptive analytics extends
the predictive analytics approach to include modeling and simulating
alternative possibilities to investigate optimal decisions while accounting
for uncertainty (Frazzetto et al., 2019). Analytics have received increas-
ing attention by researchers, informing the individual-centered approach
investigated (Dawson et al., 2019). While predictive analytics has become
increasingly important within higher education, “analytics for the pur-
poses of improving student learning outcomes … remain sparsely used in
higher education due to a lack of vision, strategy, planning, and capacity”
(Gagliardi, 2018). Institutional mechanisms to use predictive capability
often remain nascent, of limited scope, or in early development, reflect-
ing the analytics field generally (Dawson et al., 2014). The novel analytics
approach illustrated here aims to let students know when tutoring might
be beneficial, augmenting assistance from universally designed course
elements.
The example features an introductory undergraduate English course
where system logs captured students’ learning actions in both adaptive and
traditional learning management systems (LMSs), along with information
about online tutoring received. While this online course includes addi-
tional collaborative components intended to foster a learning community
within the class (Picciano, 2017), such as discussion forums, this study
focuses on the course’s self-paced adaptive content and LMS assignment
grades, as well as optional, individualized, interactive one-on-one tutor-
ing, supplementing the class learning community. While UDL includes
numerous practice guidelines, the focal element here included presenting
options for perception by offering content through multiple modalities. I
posit that identifying patterns in students' use of multiple modalities and tutoring should offer insight into where students struggle to learn
course material, and thus where future students may benefit from seeking
additional support.
As a proof-of-concept, the investigation conducted illustrates the type
of analysis that could offer predictive suggestions based on past data
about modality switches and tutoring. As such, the illustration presents
preliminary results more argumentatively than analytically. Through this
example I argue that institutions should think creatively and expansively
about using the wealth of student online learning data increasingly aggre-
gated in institutional data warehouses to assist struggling students. Using
an English course taught during one academic year at a women-only insti-
tution that collects such data, the study provides a first look at the kind
of analysis that could be expanded to other circumstances where simi-
lar capacity exists to merge data across campus and vendor systems. My
approach combines the idea of “closing the loop” to students from the
learning analytics field (Clow, 2012) with the possibility of utilizing a net-
work of prior and current data in dynamic predictions about key course
intervention points aiming to improve student course success.
The following research question guided this investigation: How can
information about modality switches and tutoring be used to predict later
learning module success in one week of an introductory English course? I
posited that combining content through multiple modalities with tutoring
may have provided the additional assistance struggling students needed
for success if material presentation did not sufficiently address learning
variability. That is, I viewed tutoring as augmenting course design in ways
holding potential to address gaps in the universality of content presenta-
tion, since human tutor explanations would be highly interactive and per-
sonalized, going beyond other content presentation methods.

Theory and literature review


UDL practice builds on the educational implications of natural learning
variations as explored for several decades by scholars and practitioners
interested in universally designing educational experiences (Burgstahler,
2015; Meyer et al., 2014; Silver et al., 1998). Inspired toward supporting
people with disabilities and grounded in cognitive science, UDL challenges
faculty and staff to design learning experiences intentionally including
multiple means of engagement, representation, and action and expres-
sion (Burgstahler, 2015). UDL’s empowering frame arises from consider-
ing disability as a social construction (rather than a medical diagnosis;
Jones, 1996), viewing all individuals as capable learners given a support-
ive environment that does not disable their capacity. Often enacted through back-
ward design starting from learning objectives, UDL facilitates intentional
course design encouraging consideration of alternative means to achieve
equivalent learning ends.
From a social justice standpoint, supporting learning across the full
spectrum of ability moves toward inclusive practice equitable for all
(Levey et al., 2021). In general, improving college student learning, as
well as subsequent retention and success for traditionally underserved
student groups such as low-income, nontraditional age, and those with
disabilities, constitutes a well-acknowledged challenge (Kuh et al., 2007;
Tinto, 2006; Wessel et al., 2009). Recently, the COVID-19 pandemic
exacerbated practical systemic challenges faced by educators committed
to sharing and enacting inclusive principles (Basham et al., 2020). Refining
instructional design offers potential to improve achievement, particularly
for disadvantaged groups (Edyburn, 2010; Tobin & Behling, 2018). Given
that online students frequently come from underrepresented populations
(Barnard-Brak et al., 2012; Wladis et al., 2015), online education offers a
salient environment for investigating alternative content presentation, as
done here.
Support for multiple approaches to learning advocated by UDL ide-
ally would become integrated into all aspects of regular instruction and
would sufficiently address the learning needs of all students in a course.
However, faculty and support professionals must be prepared when exist-
ing design efforts and resources prove insufficient (Burgstahler & Cory,
2008). While demonstrably more could be done toward achieving fully
individualized support without singling out particular students, the aspi-
ration of universally designed instruction can be practically difficult to
achieve (Evans et al., 2017). Realities of existing courses and historical
development practices mean courses may need multiple revisions or rede-
velopments. Supplemental individualized support may also be provided,
such as through tutoring or accommodations, when available options
do not (yet) encompass a wide enough array to meet particular students’
needs.
Tutoring can augment UDL-based instruction when students demon-
strate individualized learning needs not sufficiently addressed through
existing course design. This is consistent with Edyburn’s concern that,
“we need to renew our commitment to equitably serving all students in
the event that our UDL efforts fall short” (Edyburn, 2010, p. 40). Tutors
individualize instruction, customizing content presentation to an even
greater degree than otherwise possible, even with adaptive learning tech-
nology. Given extensive evidence showing positive effects of tutoring prior
to college (Gordon et al., 2007), and demonstrated benefits for college
courses (Abrams & Jernigan, 1984) as well as longer-term college persis-
tence (Laskey & Hetzel, 2011), benefits of tutoring for student learning
were expected.
In recent years, institutions have welcomed students with an increas-
ing range of disabilities, diversity in neurological functioning, forms
of engagement, and cognitive approaches, with the rate of known dis-
abilities among college students rising over 12 years from 11% to over
19% in 2015–2016 (Snyder et al., 2019; Snyder & Dillow, 2013). Since
many students with disabilities choose not to identify their disability to
their postsecondary institution (Newman & Madaus, 2015), faculty will
frequently not know which students have a disability, though inclusive
educational design can benefit such students anyway. Given the widening
range of abilities, needs, and ways of knowing that aspiring students bring
to higher education, designing courses while viewing a broad range of
abilities and experiences as normally expected becomes imperative.
Issues of learning accessibility and variation have found expression
in several related theoretical frameworks, including UDL (CAST, 2014),
Universal Instructional Design (UID; Silver et al., 1998), and Universal
Design for Instruction (UDI; Scott et al., 2003), among others (McGuire,
2014). UDL has been intimately connected to educational technology sup-
port for learning because technology facilitates complying with UDL prin-
ciples (Mangiatordi & Serenelli, 2013). Despite enough interest to generate
a variety of alternatives, such universal design frameworks still struggle
to gain acceptance in academic culture (Archambault, 2016) and remain
understudied in postsecondary education (Rao et al., 2014). Given wide-
spread interest in universal design, including implementation guidelines
and many arguments calling for its adoption (Burgstahler, 2015; CAST,
2014), the lack of research is surprising.
Within UDL research, it remains unusual to study student learning
outcomes. Subjective perceptions have been heavily researched, typically
of faculty (e.g., Ben-Moshe et al., 2005; Lombardi & Murray, 2011) and
occasionally of students (Higbee et al., 2008) or employees (Parker et al.,
2003). In one study, graduate students recognized the benefits of hav-
ing content provided in multiple formats, and to a lesser extent, reported
using them (Fidalgo & Thormann, 2017). Additionally, Webb and Hoover
(2015) conducted one of very few studies of UDL principles expressly
investigating multiple content representations, aiming to improve library
tutorial instruction. However, effects in classroom settings remain
understudied.
Interestingly for this study, faculty UDL training appears to matter for
student perception regarding content presentation. Content presentation
would be covered in typical UDL training and subsequent evaluation, and
some studies break down subtopics, allowing understanding of content
presentation within the larger study’s context. One such study of over
1,000 students surveyed before and after professors received 5 hours of
UDL training indicated improvements in areas such as faculty providing
material in multiple formats (Schelly et al., 2011). A follow-up studying
treatment and control groups and more detailed questions about UDL
practices also found improvements perceived by students after 5 hours
of faculty UDL training, again including offering materials in multiple
formats (Davies et al., 2013). However, without baseline condition evalu-
ation, pre-existing differences in faculty knowledge and practice may con-
found these results.
Despite plentiful general course design guidance for faculty, including
for online education (e.g., Ko & Rossen, 2017; McKeachie & Svinicki,
2010), universal design practices are still working their way into this lit-
erature and practice (Dell et al., 2015). The development and incorporation of universal design remain beneficial areas to research empirically (Cumming & Rose, 2021). The present study extended one aspect of research inspired
by universal design work.

Data
Attending a Northeast women-only university, the undergraduates studied
were typically nontraditional age, juggling family and work with school, a
group higher education traditionally underserves. All sections of one basic
English course (called ENG1 here) from the 2018/2019 academic year
were analyzed. This course combined an LMS for discussion, assignments
and grades, with an adaptive system including content and mastery-level
learning assessments.
The six-week accelerated course contained two to nine learning activi-
ties per week designed to take about twenty minutes each. An articu-
lated knowledge map with explicit connections between content activities
made modeling structural connections between them possible. Analysis
included the full two-activity sequence of adaptive learning activities in
week one. The student had a known knowledge state score upon enter-
ing each activity, which was updated after a brief assessment upon
completion.
In an approach consistent with universal design principles around pro-
viding alternatives for perception, the adaptive system encouraged stu-
dents who showed signs of struggling to understand the material to pursue
paths along alternate modalities until a path guided them to successful
content mastery. The system log recorded each path traversed with an
identifier, timestamp, and duration, along with the modality used each
time the student went through an activity’s material. Additionally, any
student could use other modalities by choosing to repeat the activity even
if she was not struggling. Even though adaptive systems can make content
available in multiple modes when such content is designed into them, this
possibility has not been a focal point of prior research investigating pres-
entation mode (Mustafa & Sharif, 2011).
In addition to modality, information from tutor.com was available within the institutional data warehouse. The LMS linked to tutor.com,
where each student had several hours of free tutoring. The data ware-
house contained the tutoring subject, start date/time, and session dura-
tion. These were combined with LMS (weekly grade outcome), adaptive
learning system (learning activity information), and student information
system data (covariates).

Variables
The outcome was the predicted probability of achieving either an A, B, or
C, versus a D or F, on first week course assignments and quizzes.
A binary indicator representing whether the student received tutor-
ing provided the simulated intervention focus. Model training employed
actual data for this variable, whereas the simulation set this variable to
yes (1) or no (0) depending on the analysis scenario, allowing potential
outcome prediction comparisons.
The Bayesian network variables also included use of multiple con-
tent modalities during an activity, activity repetition, and other covari-
ates. Multiple modality use was operationalized as student use of at least a second content modality (e.g., text, video, audio, interactive, or mixed) when learning material within the activity. A binary indicator denoted activity repetition if the student went through part or all of the activity two or more times. Demographic and prior educational independ-
ent conditioning variables included measures of race/ethnicity, age, Pell
grant status (a low-income proxy), and number of credits transferred upon
entry to the institution (a prior educational experience proxy).
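A hypothetical sketch of how such indicators might be constructed from a warehouse extract follows; the column names and data are invented for illustration and do not reflect the institution's actual schema:

```python
import pandas as pd

# Hypothetical warehouse extract; column names are invented for illustration.
students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "week1_grade": ["A", "D", "C"],      # mean weekly assignment/quiz grade
    "tutoring_minutes": [0, 45, 20],     # total tutor.com session time, week one
    "modalities_used": [1, 2, 3],        # distinct content modalities within the activity
    "activity_runs": [1, 2, 4],          # times the student went through the activity
})

# Outcome: A/B/C (1) versus D/F (0)
students["abc_outcome"] = students["week1_grade"].isin(["A", "B", "C"]).astype(int)

# Treatment: any tutoring received (binary)
students["tutored"] = (students["tutoring_minutes"] > 0).astype(int)

# Multiple-modality use: at least a second modality within the activity
students["multi_modality"] = (students["modalities_used"] >= 2).astype(int)

# Activity repetition: went through part or all of the activity two or more times
students["repeated"] = (students["activity_runs"] >= 2).astype(int)

print(students)
```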

Methods
Adaptive log data captured sequences of usage, and patterns of content
representation use were investigated visually. Descriptive plots showed
tutoring, modality information, and general activity information, with
the first two weeks shown here. Consistent with a student-focused learn-
ing analytics approach, heatmap plots with rows for each student allowed
visualization of all students’ cases simultaneously. Variables displayed
were the same as the sequence plots. Heatmaps were clustered by similar-
ity on both rows and columns to show patterns of the combinations of
modality use and tutoring. They were created using R’s ComplexHeatmap
package with its default “complete” agglomeration method of clustering and Euclidean distance as the clustering distance metric (i.e., the straight-line distance between two points in the multidimensional space defined by the analysis variables).
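For readers who prefer Python to R, a roughly analogous clustered heatmap can be produced with seaborn's clustermap; the data below are synthetic and the column names invented, so this is only a sketch of the workflow rather than the chapter's code:

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Hypothetical per-student activity summary; values and column names are invented.
rng = np.random.default_rng(42)
data = pd.DataFrame({
    "multi_modality": rng.integers(0, 2, 60),
    "activity_repetition": rng.integers(0, 5, 60),
    "activity_minutes": rng.gamma(2.0, 10.0, 60),
    "tutoring_minutes": rng.exponential(5.0, 60),
})

# Min-max scale each column to [0, 1] so color intensities are comparable across variables.
scaled = (data - data.min()) / (data.max() - data.min())

# Cluster rows (students) and columns (variables) with complete linkage and
# Euclidean distance, mirroring the defaults described for ComplexHeatmap.
g = sns.clustermap(scaled, method="complete", metric="euclidean", cmap="viridis")
g.savefig("clustered_heatmap.png")
```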
The analytical proof-of-concept was based on a Bayesian network
using a directed acyclic graph (DAG) to model assumed causal relation-
ships including modality use and tutoring (Pearl, 1995, 2009). The analy-
sis describes how a network of probabilities could be updated on the fly for
individual students during a course. The approach simulates a presumed
tutoring intervention using a model crafted from aggregated warehouse
data. This illustrates the kind of calculations that could identify oppor-
tune moments to recommend tutoring when predictions indicate that mul-
tiple modality use may be insufficient to help a student master content.
Given the complexity of this type of analysis and its proof-of-concept
nature, missing data were handled via listwise deletion. Listwise deletion
was applied individually for the regression equation for each variable in
the network when determining the appropriate parameters for subsequent
prediction.
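The chapter's model is not reproduced here; as a deliberately simplified, hypothetical sketch of the underlying idea (fit a small discrete Bayesian network to logged data, then compare predicted outcome distributions under the two tutoring scenarios), one might write something like the following with the open-source pgmpy package. The variables, network structure, and synthetic data are invented and far simpler than the model described in this chapter:

```python
import numpy as np
import pandas as pd
from pgmpy.models import BayesianNetwork  # pgmpy assumed; newer releases may rename this class
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Synthetic stand-in for the warehouse extract (invented variables and effects).
rng = np.random.default_rng(0)
n = 500
tutoring = rng.integers(0, 2, n)
multi_modality = rng.integers(0, 2, n)
p_pass = 0.45 + 0.20 * tutoring + 0.15 * multi_modality
abc_outcome = rng.binomial(1, p_pass)
df = pd.DataFrame({"tutoring": tutoring,
                   "multi_modality": multi_modality,
                   "abc_outcome": abc_outcome})

# Toy DAG: tutoring and multiple-modality use both feed the week-one outcome.
model = BayesianNetwork([("tutoring", "abc_outcome"),
                         ("multi_modality", "abc_outcome")])
model.fit(df, estimator=MaximumLikelihoodEstimator)

# Compare the predicted outcome distribution in the two simulated worlds
# (no tutoring versus tutoring) for a student observed using multiple modalities.
infer = VariableElimination(model)
for t in (0, 1):
    posterior = infer.query(["abc_outcome"], evidence={"tutoring": t, "multi_modality": 1})
    print(f"tutoring = {t}")
    print(posterior)
```

In a full application along the lines described here, the network would also include the activity repetition and covariate nodes, and queries would be run for new students rather than the training sample.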

Limitations
Those interested in applying this approach should be aware of several
considerations. While external validity is potentially limited because only
one course at one institution was studied, the intent was to illustrate the
type of analysis approach utilizing data from multiple campus systems
that could have wide applicability across institutions. To this end, several
important features of the context studied include the integration of data
within the data warehouse and the provision of tutoring via a service that
allowed warehouse inclusion. The amount of data analyzed here suffi-
ciently illustrated the process, but this work should be expanded with
additional data and piloted to explore operationalization and communi-
cating results to students.

Results
Figure 14.1 illustrates patterns of modality use and tutoring aggregated
across all course sections. The x-axis shows activity ordering from the
course’s knowledge map as instantiated in the adaptive system. Headings
distinguish the two first-week activities from the six second-week activi-
ties. Plot rows show the amount of time spent getting tutoring, the ratio of the number of times multiple modalities were used to the number of activity repetitions, the number of activity repetitions for each student, and the amount of time the student worked on the adaptive activity.
Many students used multiple modalities upon repeating the activity
and students frequently repeated activities. The amount of time spent
varied, but density of higher values did not always correspond to either
tutoring, modality use, or repetition. Notably, few students received tutor-
ing, presenting a limitation for this sample (Morgan & Winship, 2015).
However, enough students received tutoring during the second activity
to enable illustration of the technique with appropriate extrapolation. In
actual application, evaluation of sufficient overlap between treatment and
comparison cases should occur with a larger sample. As data continue to
be gathered in the warehouse over time, model estimation could continue to be refined, improving predictive ability.

FIGURE 14.1 Patterns of Modality Use and Tutoring, First Two Weeks for Full Sample
Given that the Bayesian network simulation aims to present predic-
tions for individual students, a descriptive approach looking at individu-
als across the entire dataset holds more salience than typical descriptive
aggregated statistical summary measures. Heatmaps offer a compact yet
complete description of the full dataset through visualization consistent
with a prescriptive analytics perspective. To visualize the entire dataset,
Figure 14.2 presents a heatmap of all individual cases, clustered hori-
zontally by variable and vertically by case. Clustering variables included
whether a student used multiple modalities, whether activity repetitions
occurred, and time spent on that activity and tutoring. To facilitate dis-
tributional comparisons across these variables, ranges were converted to
[0,1] using min–max scaling.
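For reference, min–max scaling maps each variable x onto the unit interval:

x_scaled = (x − min(x)) / (max(x) − min(x)),

so that color intensity is comparable across variables with very different native ranges.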
Due to the small number of cases with tutoring, tutoring was omitted
from Figure 14.2 and featured in Figure 14.3 to make patterns involving
tutees more visible, including total time spent on tutoring. In addition to
clustering that grouped similar cases together, these plots split cases by
grade. The lower panel shows students receiving either an A/B/C on their
mean weekly assignment/quiz grade, and the top panel shows students
receiving a D/F. Two students from each group in the test data were selected to illustrate the intervention simulation.

FIGURE 14.2 Clustered Heatmap of Adaptive Activity for Full Sample, Split by Week Assignment/Quiz Grade

FIGURE 14.3 Clustered Heatmap of Adaptive Activity for Students Receiving Tutoring, Split by Week Assignment/Quiz Grade
I highlight several student groups visible from inspection of clustering
patterns in Figures 14.2 and 14.3 labeled by the numbered regions. In
both figures, Region 1 includes students who struggled the most, receiving
a D/F for their weekly assignment/quiz grade. These students typically
spent less time in the adaptive system than other students and were not the
heaviest users of multiple modalities.
Figure 14.2 Region 2 highlights the group with high activity levels who
received an A/B/C. These students frequently repeated adaptive learning
activities and used different modalities while doing so (i.e., high/light in
both multiple modalities and activity repetition). In this region, a pattern
appeared where students were higher on either modality use (and associ-
ated repetition) or time spent on adaptive activities, but not both. That
is, students who used multiple modalities most frequently (or repeated
the activity most frequently) did not take the longest time working in
the adaptive system. This pattern held across the highest activity group
(Region 2), the moderate activity group (Region 3), and as a less pro-
nounced pattern in the lower activity group (Region 4); lighter groups for the modality use and activity time variables in each region do not correspond with each other but instead correspond with darker groups in the other variable. This suggests that students use various strategies, either choosing
to go through material in different ways or going through it more slowly.
Additionally, a substantial number of students used multiple modalities
but did not repeat the activity (Region 5), suggesting numerous activities
may have been designed expecting use of multiple modalities along the
main content path.
Looking at students who received tutoring in Figure 14.3, most were
not high on the distributions of other variables. That is, tutees were not
particularly high (i.e., light) relative to others on multiple modality use,
activity repetition, or time on adaptive activities. For example, the Region
2 tutee group spent noticeably more time getting tutoring but only moder-
ate time on adaptive activity (i.e., high/light in tutoring time, but darker
for other variables). Tutees in Region 3 spent moderate time receiving
tutoring and were slightly higher (i.e., lighter) on the distributions of adap-
tive activity such as modality use than other tutees. However, tutees were
not among the highest in adaptive activity across the entire course. In gen-
eral, patterns seem similar for students who received lower week grades
(Region 1) and higher grades. However, it is striking that most tutees who
received lower grades spent relatively more time getting tutoring.
While other students also appear to need assistance, the groups high-
lighted stand out, either because of noticeable differences from others
given the skewed variable distributions, or because of similarities. From
this look across tutoring, modality use, and overall adaptive activity, pat-
terns across different types of learning support do not mirror each other.
Each provides unique information potentially useful in combination for
understanding student actions that may support success. This means
modeling intended to predict future course outcomes should include all
these indicators.
For the Bayesian network simulation showing projected predictions,
training data included 142 students in 22 sections. Analysis aimed at
predictions for individuals, with 500 predictions per student under
each simulated scenario. These predictions revealed the effect distri-
bution, facilitating comparison of predicted outcomes in the two sim-
ulated worlds. Four focal test students sufficed for a proof-of-concept
demonstration that useful predictions can integrate tutoring, adaptive,
and administrative data. These students received a range of grades in the
course (B, C, D, and F) and on their week one assignments (0.7, 0.44,
0.96, and 0).
Figure 14.4 shows kernel density plots for these four students compar-
ing predicted week grade for the simulated interventions of not receiving
and receiving tutoring. These outcome distributions in the two simulated
worlds plotted for each student represent the simulated treatment effect
estimate uncertainty. While such plots would probably not be shared with
students, they visualize the heart of the illustrated prescriptive approach:
identifying an optimal choice. In real-world applications, such results would be turned into analytics-based recommendations that could be offered to students, faculty, or other stakeholders.

FIGURE 14.4 Kernel Density Plots of Tutoring Intervention Differences for Four Students
To facilitate figure interpretation, note that such predictive modeling
differs from typical higher education applications predicting results for
certain student groups from the same data that created the model. That
typical approach facilitates understanding overall effects, providing
nuance in understanding how an effect may operate for different student
groups.
In contrast, the present prescriptive analytics application aims to
understand individual-level predictions for students not included in the
model-generating dataset. In the two different simulated worlds visual-
ized in the kernel density plots, the individual’s expected outcome in each
experimental scenario (i.e., T=0 or T=1) can be calculated in the corre-
sponding simulated world and then compared to determine the predicted
treatment effect for that individual.
In this example, the effect of tutoring for all four focal students had
similar, statistically significant effect sizes (at α = 0.10). The paired sample
effect size was Cohen’s d = 0.075 for student one (p = 0.094), d = 0.078
for student two (p = 0.082), d = 0.079 for student three (p = 0.076), and
d = 0.077 for student four (p = 0.085). These can be considered medium
effect sizes for an educational intervention (Kraft, 2020).
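The chapter does not spell out the effect size formula; one common paired-sample form (often written d_z) divides the mean of the paired differences by their standard deviation, where each difference pairs a simulated prediction under tutoring with the corresponding prediction without tutoring for the same student:

d_z = mean(D) / SD(D), with D_j = Y_j(tutoring) − Y_j(no tutoring).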

Discussion
This exploratory investigation of variation in modality use and tutoring
activity described clustered data patterns and illustrated how such infor-
mation might be used in a simulation that projects predictions within an
educational system. While some students used both multiple modalities
and tutoring, not all utilized both supports simultaneously. Most stu-
dents who received tutoring used multiple modalities, but not the most
frequently. The heaviest users of multiple modalities typically did not
spend the most time in the adaptive system, and vice versa. Most tutees
received weekly assignment grades of A, B, or C. Other groups’ outcomes
appear mixed, suggesting that additional tutoring may be beneficial for
some. Existence of noticeably different clusters of modalities and tutor-
ing warrants drilling into these patterns to identify combinations leading
to greater student learning at particular points in the course. Four such
groups stand out, including those who rely most heavily (a) on tutoring,
(b) on repeating activities using different modalities, and (c) on spending
time going through the adaptive system material slowly, as well as (d)
those who spend a moderate amount of time getting tutoring combined
with moderate use of multiple modalities.
Interestingly, use of multiple modalities and tutoring did not coincide
as frequently as anticipated. Some students may prefer repeating activities
independently using different modalities whereas other students may pre-
fer getting help by talking with someone. Descriptive patterns along with
predictive information about specific activities could be used to identify
points in the course where students more frequently sought help through
working with multiple modalities and tutoring. This in turn should facili-
tate future exploration of combinations of modality use and tutoring for
activities that may provide greater benefit for grades on certain weeks’
assignments, as well as course grade. Future qualitative research could
investigate student motivations for pursuing these different strategies. This
would help distinguish the extent of additional predictive power when
students use both strategies together, along with the extent to which each
strategy (i.e., tutoring and use of multiple modalities) separately entails
useful information about student confusion and difficulty with the mate-
rial. Initial results here suggest all these strategies (i.e., tutoring, multiple
modality use, and their combination) may provide beneficial information
for future prescriptive modeling.
The four example students each had a prediction of what would happen
in the simulated world where they received tutoring and the world where
they did not. Each predicted point estimate in this experiment on a simu-
lated educational system was drawn from a distribution of such possible
estimates, leading to treatment effect estimates reported for each student.
In the real world, time limits our ability to explore potential outcomes not
observed in real time. Here, simulating two potential future worlds for
each student allowed exploration of potential outcomes under the two sce-
narios through models developed from prior students’ data. Such results
obviously depend on how well models built on the training data apply to
the example students being tested through the simulated worlds.
Effects for the four focal students for the second adaptive activity were
reasonably sized for educational interventions (Kraft, 2020), but should
be combined with further analysis to determine what effect would be
considered actionable at this institution. As hypothesized, prescriptive
analysis found that combining modality switches and tutoring should
benefit some students. Additional research should be conducted to deter-
mine the extent of this benefit and the most opportune moments during
the course to provide tutoring recommendations to help focus guidance
to students. This initial look offers an example of the kind of analysis
that could be conducted with more activities to evaluate various course
points. The approach offers numerous avenues for future research to
refine modeling, improve targeted predictions, and identify practically
useful prescriptive analytics. Future work could extend this analysis
to model-based success predictions in subsequent activities and weeks
that could be based dynamically on data collected about each student
throughout the course. Available data for any given activity-tutoring
combination is small, so collecting additional data in future semesters
would be beneficial. This kind of initiative would be possible at an insti-
tutional level, particularly as data collection continues within the data
warehouse across time.

Implications
The COVID-19 pandemic catapulted UDL design concerns into a broader
spotlight as faculty rushed to shift their pedagogy to emergency remote
delivery (Basham et al., 2020; Levey et al., 2021). The example analysis
presented offers one avenue to extend that changing practice to further
support students through UDL-informed instructional design decisions
(Burgstahler, 2021; Izzo, 2012). Learning behavior patterns can effec-
tively inform appropriate recommendations regarding student support
and also help instructors identify where to focus additional class time.
Bayesian network-informed approaches utilized in adaptive learning
are often either still experimental or developed behind proprietary sys-
tem paywalls (Kabudi et al., 2021). I argue that institutions could develop
similar capability or work with vendors willing to make the predictive
process transparent enough to enable augmented predictions utilizing
aggregated data across multiple systems. This illustration shows such an
approach might benefit students, particularly when an institution aggre-
gates data across different vendor systems in a data warehouse.
Adaptive learning literature has used learning style information to tailor adaptivity (Khamparia & Pandey, 2020), even though the hypothesis that matching material to learning style preferences improves learning is widely considered a “neuromyth” (Betts et al., 2019). For example, prior experimental research
on adaptive learning across 5 days with 42 students suggested students
benefit from having the initially presented modality tailored to their learn-
ing style, although use of different modalities provided was not explicitly
studied (Mustafa & Sharif, 2011). For the present study, system admin-
istrators believe it likely that the adaptive system did not have sufficient
time to determine such preferences and change the default modality pre-
sented for many, if any, students studied. How to manually change that
default was not advertised, so the default initial presentation (typically
textual) was likely enabled for almost all students. Future research could
compare results before and after the adaptive system adjusted the initial
modality presented to verify the extent to which benefit comes from using
multiple modalities regardless of order.
Focusing on the example analysis structure, this simplified illustration
could be scaled up in numerous increasingly realistic and useful ways in
practice. Future extensions could construct a more thorough model by
adding possible alternative explanations, extending the model throughout
the course, and/or connecting to linked courses in the introductory English
sequence. This type of analysis could be extended across many students
and tutoring at different time points could be compared. Hopefully, this illustration provides basic prescriptive analytics guidance that inspires future work to expand this kind of simulation approach, comparing outcome predictions across different potential worlds for a given student.
Additionally, it would be worth investigating the type of material avail-
able at points where students seek tutoring. This might help faculty and
other course developers address “pinch points” (Tobin & Behling, 2018)
which offer promise for additional redesign. Perhaps places where stu-
dents seek tutoring are points offering fewer modality alternatives. If so,
these points would be strong candidates for redesign involving modali-
ties. Alternatively, perhaps points where students sought tutoring constitute
harder course segments, and perhaps other UDL-based supports could
be more effective in those moments. Perhaps in these places students find
demonstrating their knowledge through the adaptive system’s multiple-
choice questions harder. If so, this may challenge course developers to
devise alternative means of formative assessment. Such possibilities for
addressing issues faced by students deserve further investigation, particu-
larly when adopting a “plus-one” course development approach or doing
larger course revisions (Tobin & Behling, 2018). The present proof-of-
concept suggests productive directions for using the increasingly volu-
minous data being collected about student learning to positively benefit
struggling students.
UDL encourages educators to ask questions about alternate perspec-
tives on teaching based on understanding affective, recognition, and
strategic brain networks involved in learning (Meyer et al., 2014). This
research encourages extending such questioning to consider how data-
informed educational practices may utilize simulated worlds to flag and
help address current gaps in the universal design of instruction. Further investigation of UDL-connected combinations of practices that suggest effective, timely student support remains warranted.

Conclusion
This work augments prior analytics research by providing a proof-of-
concept that could be extended to investigate, predict, and present results
connecting UDL elements to student success. It illustrates the kind of
institutional prescriptive analytics that could utilize a data warehouse.
Just as curb cuts provide an easily understood symbol of universal design
in the physical, built environment, providing multiple modalities for
learning content may become a similarly easily understood symbol of edu-
cational universal design. Using multiple modalities holds face validity,
is straightforwardly implemented, and offers benefits for a wide range of
students (Manly, 2022). Combining this with information about when prior students benefited from tutoring holds potential to inform simulation-based modeling that offers insight into when feedback about seeking tutoring might be most beneficial. Such prescriptive support could aid students
in optimizing when to seek additional help. As institutions continue to
explore ways to ethically utilize student learning data to benefit students,
this approach offers promise worth pursuing.

Note
1 While acknowledging differing language choices when discussing people who
have disabilities (Association on Higher Education and Disability, 2021), I use
person-first language as a group signifier.

References
Abrams, H. G., & Jernigan, L. P. (1984). Academic support services and the
success of high-risk college students. American Educational Research Journal,
21(2), 261–274. https://doi​.org​/10​. 2307​/1162443
Archambault, M. (2016). The diffusion of universal design for instruction in
post-secondary institutions [Ed.D. dissertation, University of Hartford].
https://search ​ . proquest ​ . com ​ / pqdtglobal ​ /docview ​ / 1783127057​ /abstract​
/363278AEC9FF4642PQ​/64
Association on Higher Education and Disability. (2021). AHEAD statement on
language. https://www​.ahead​.org ​/professional​-resources ​/accommodations​/
statement​-on​-language
Barnard-Brak, L., Paton, V., & Sulak, T. (2012). The relationship of institutional
distance education goals and students’ requests for accommodations. Journal
of Postsecondary Education and Disability, 25(1), 5–19.
Basham, J. D., Blackorby, J., & Marino, M. T. (2020). Opportunity in crisis:
The role of universal design for learning in educational redesign. Learning
Disabilities: A Contemporary Journal, 18(1), 71–91.
Ben-Moshe, L., Cory, R. C., Feldbaum, M., & Sagendorf, K. (Eds.). (2005).
Building pedagogical curb cuts: Incorporating disability in the university
classroom and curriculum. Syracuse University Graduate School.
Betts, K., Miller, M., Tokuhama-Espinosa, T., Shewokis, P. A., Anderson, A., Borja,
C., Galoyan, T., Delaney, B., Eigenauer, J. D., & Dekker, S. (2019). International
report: Neuromyths and evidence-based practices in higher education. Online
Learning Consortium. https://www​.academia​.edu​/41867738​/ International​
_Report​_ Neuromyths​_ and​_ Evidence​_ Based​_ Practices​_ in​_ Higher​_ Education
Burgstahler, S. E. (2015). Universal design in higher education: From principles
to practice (2nd ed.). Harvard Education Press.
Burgstahler, S. E. (2021). A universal design framework for addressing diversity,
equity, and inclusion on postsecondary campuses. Review of Disability
Studies: An International Journal, 17(3), Article 3. https://rdsjournal​.org
Burgstahler, S. E., & Cory, R. (Eds.). (2008). Universal design in higher education:
From principles to practice. Harvard Education Press.
CAST. (2014). UDL guidelines—Version 2.0: Principle I. Provide multiple
means of representation. Universal Design for Learning Guidelines Version
2.0. http://www​.udlcenter​.org​/aboutudl​/udlguidelines​/principle1​#principle1​
_g1
Clow, D. (2012). The learning analytics cycle: Closing the loop effectively.
Proceedings of the 2nd international conference on learning analytics and
knowledge, 134–138. https://doi​.org​/10​.1145​/2330601​. 2330636
Cumming, T. M., & Rose, M. C. (2021). Exploring universal design for learning
as an accessibility tool in higher education: A review of the current literature.
Australian Educational Researcher, 1–19. https://doi​.org​/10​.1007​/s13384​- 021​
-00471-7
Davies, P. L., Schelly, C. L., & Spooner, C. L. (2013). Measuring the effectiveness
of universal design for learning intervention in postsecondary education.
Journal of Postsecondary Education and Disability, 26(3), 195–220.
Dawson, S., Gašević, D., Siemens, G., & Joksimovic, S. (2014). Current state
and future trends: A citation network analysis of the learning analytics field.
In Proceedings of the 4th international conference on learning analytics and
knowledge (pp. 231–240). https://doi​.org ​/10​.1145​/2567574​. 2567585
Dawson, S., Joksimovic, S., Poquet, O., & Siemens, G. (2019). Increasing the
impact of learning analytics. In Proceedings of the 9th international conference
on learning analytics and knowledge (pp. 446–455). https://doi​.org​/10​.1145​
/3303772​. 3303784
Dell, C. A., Dell, T. F., & Blackwell, T. L. (2015). Applying universal design for
learning in online courses: Pedagogical and practical considerations. Journal
of Educators Online, 12(2), 166–192.
Edyburn, D. L. (2010). Would you recognize universal design for learning
if you saw it? Ten propositions for new directions for the second decade of
UDL. Learning Disability Quarterly, 33(1), 33–41. https://doi​.org​/10​.1177​
/073194871003300103
Evans, N. J., Broido, E. M., Brown, K. R., & Wilke, A. K. (2017). Disability in
higher education: A social justice approach. Jossey-Bass.
Fidalgo, P., & Thormann, J. (2017). Reaching students in online courses using
alternative formats. International Review of Research in Open and Distance
Learning, 18(2). https://search​.proquest​.com ​/socscicoll ​/docview​/1931686015​/
abstract​/ B78DE55451F4459EPQ​/104
Frazzetto, D., Nielsen, T. D., Pedersen, T. B., & Šikšnys, L. (2019). Prescriptive
analytics: A survey of emerging trends and technologies. VLDB Journal,
28(4), 575–595. https://doi​.org​/10​.1007​/s00778​- 019​- 00539-y
Gagliardi, J. S. (2018). Unpacking the messiness of harnessing the analytics
revolution. In J. S. Gagliardi, A. Parnell, & J. Carpenter-Hubin (Eds.), The
analytics revolution in higher education: Big data, organizational learning,
and student success (pp. 189–200). Stylus Publishing, LLC.
Gordon, E. E., Morgan, R. R., O’Malley, C. J., & Ponticell, J. (2007). The
tutoring revolution: Applying research for best practices, policy implications,
and student achievement. Rowman & Littlefield Education.
Higbee, J. L., Bruch, P. L., & Siaka, K. (2008). Disability and diversity: Results
from the multicultural awareness project for institutional transformation. In
J. L. Higbee & E. Goff (Eds.), Pedagogy and student services for institutional
transformation: Implementing universal design in higher education (pp. 405–
417). Center for Research on Developmental Education and Urban Literacy,
University of Minnesota.
Izzo, M. V. (2012). Universal design for learning: Enhancing achievement of
students with disabilities. Procedia Computer Science, 14, 343–350. https://
doi​.org​/10​.1016​/j​.procs​. 2012​.10​.039
Jones, S. R. (1996). Toward inclusive theory: Disability as social construction.
NASPA Journal, 33(4), 347–354.
Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning
systems: A systematic mapping of the literature. Computers and Education:
Artificial Intelligence, 2, 100017. https://doi​.org​/10​.1016​/j​.caeai​. 2021​.100017
Khamparia, A., & Pandey, B. (2020). Association of learning styles with different
e-learning problems: A systematic review and classification. Education and
Information Technologies, 25(2), 1303–1331. https://doi​.org​/10​.1007​/s10639​
-019​-10028-y
Kimball, E. W., Wells, R. S., Ostiguy, B. J., Manly, C. A., & Lauterbach, A.
(2016). Students with disabilities in higher education: A review of the literature
and an agenda for future research. In M. B. Paulsen & L. W. Perna (Eds.), Higher
education: Handbook of theory and research (Vol. 31, pp. 91–156). Springer.
Ko, S., & Rossen, S. (2017). Teaching online: A practical guide. Taylor & Francis.
Kraft, M. A. (2020). Interpreting effect sizes of education interventions.
Educational Researcher, 49(4), 241–253. https://doi​.org​/10​. 3102​
/0013189X20912798
Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (Eds.).
(2007). Piecing together the student success puzzle: Research, propositions,
and recommendations. ASHE Higher Education Report, 32(5), 1–182. https://
doi​.org ​/10​.1002​/aehe​. 3205
Laskey, M. L., & Hetzel, C. J. (2011). Investigating factors related to retention of
at-risk college students. Learning Assistance Review, 16(1), 31–43.
Levey, J. A., Burgstahler, S. E., Montenegro-Montenegro, E., & Webb, A. (2021).
COVID-19 pandemic: Universal design creates equitable access. Journal of
Nursing Measurement, 29(2), 185–187. https://doi​.org​/10​.1891​/ JNM​-D​-21​
-00048
Lombardi, A. R., & Murray, C. (2011). Measuring university faculty attitudes
toward disability: Willingness to accommodate and adopt universal design
principles. Journal of Vocational Rehabilitation, 34(1), 43–56.
Mangiatordi, A., & Serenelli, F. (2013). Universal Design for Learning: A meta-
analytic review of 80 abstracts from peer reviewed journals. Research on
Education and Media, 5(1), 109–118.
Manly, C. A. (2022). Utilization and effect of multiple content modalities in
online higher education: Shifting trajectories toward success through universal
design for learning [Ph.D. dissertation, University of Massachusetts Amherst].
https://doi​.org​/10​.7275​/27252773
McGuire, J. M. (2014). Universally accessible instruction: Oxymoron or
opportunity? Journal of Postsecondary Education and Disability, 27(4),
387–398.
McKeachie, W., & Svinicki, M. (2010). McKeachie’s teaching tips: Strategies,
research, and theory for college and university teachers. Cengage Learning.
Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal design for learning:
Theory and practice. CAST Professional Publishing.
Morgan, S. L., & Winship, C. (2015). Counterfactuals and causal inference:
Methods and principles for social research (2nd ed.). Cambridge University
Press.
Mustafa, Y. E. A., & Sharif, S. M. (2011). An approach to adaptive e-learning
hypermedia system based on learning styles (AEHS-LS): Implementation and
evaluation. International Journal of Library and Information Science, 3(1),
15–28. https://doi​.org​/10​. 5897​/ IJLIS​.9000009
Newman, L. A., & Madaus, J. W. (2015). Reported accommodations and supports
provided to secondary and postsecondary students with disabilities: National
perspective. Career Development and Transition for Exceptional Individuals,
38(3), 173–181. https://doi​.org​/10​.1177​/2165143413518235
Parker, D. R., Embry, P. B., Scott, S. S., & McGuire, J. M. (2003). Disability service
providers’ perceptions about implementing universal design for instruction
(UDI) on college campuses (Tech. Rep. No. 03). Center on Postsecondary
Education and Disability, University of Connecticut. https://citeseerx​.ist​.psu​
.edu ​/viewdoc ​/download​?doi​=10​.1​.1​. 510​. 3158​&rep​=rep1​&type​=pdf
Pearl, J. (1995). Causal diagrams for empirical research. Biometrika, 82(4),
669–710.
Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3,
96–146. https://doi​.org ​/10​.1214​/09​- SS057
Picciano, A. G. (2017). Theories and frameworks for online education: Seeking an
integrated model. Online Learning, 21(3), Article 3. https://doi​.org​/10​. 24059​
/olj​.v21i3​.1225
Rao, K., Ok, M. W., & Bryant, B. R. (2014). A review of research on universal
design educational models. Remedial and Special Education, 35(3), 153–166.
https://doi​.org​/10​.1177​/0741932513518980
Rose, D. H., & Meyer, A. (2002). Teaching every student in the digital age:
Universal design for learning. Association for Supervision and Curriculum
Development.
Schelly, C. L., Davies, P. L., & Spooner, C. L. (2011). Student perceptions of faculty
implementation of universal design for learning. Journal of Postsecondary
Education and Disability, 24(1), 17–30.
Scott, S. S., McGuire, J. M., & Shaw, S. F. (2003). Universal design for instruction:
A new paradigm for adult instruction in postsecondary education. Remedial
and Special Education, 24(6), 369–379. https://doi​.org​/10​.1177​/074​1932​5030​
240060801
Silver, P., Bourke, A., & Strehorn, K. C. (1998). Universal instructional design
in higher education: An approach for inclusion. Equity and Excellence in
Education, 31(2), 47–51. https://doi​.org​/10​.1080​/1066568980310206
Snyder, T. D., de Bray, C., & Dillow, S. A. (2019). Digest of education statistics
2017 (NCES 2018-070). National Center for Education Statistics, Institute
of Education Sciences, U.S. Department of Education. https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2018070
Snyder, T. D., & Dillow, S. A. (2013). Digest of education statistics 2012 (NCES
2014-105). National Center for Education Statistics, Institute of Education
Sciences, U.S. Department of Education. https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2014015
Tinto, V. (2006). Research and practice of student retention: What next? Journal
of College Student Retention: Research, Theory and Practice, 8(1), 1–19.
https://doi​.org​/10​. 2190​/4YNU​- 4TMB​-22DJ​-AN4W
Tobin, T. J., & Behling, K. (2018). Reach everyone, teach everyone: Universal
design for learning in higher education. West Virginia University Press.
Webb, K. K., & Hoover, J. (2015). Universal design for learning (UDL) in the
academic library: A methodology for mapping multiple means of representation
in library tutorials. College and Research Libraries, 76(4), 537–553. https://
doi​.org ​/10​. 5860​/crl​.76​.4​. 537
Wessel, R. D., Jones, J. A., Markle, L., & Westfall, C. (2009). Retention and
graduation of students with disabilities: Facilitating student success. Journal
of Postsecondary Education and Disability, 21(3), 116–125.
Wladis, C., Hachey, A. C., & Conway, K. M. (2015). The representation of
minority, female, and non-traditional STEM majors in the online environment
at community colleges: A nationally representative study. Community College
Review, 43(1), 89–114. https://doi​.org​/10​.1177​/0091552114555904
SECTION IV

Organizational Transformation



15
SPRINT TO 2027
Corporate analytics in the digital age

Mark Jack Smith and Charles Dziuban

“The pace of change has never been this fast, yet it will never be this slow
again”—Justin Trudeau (Trudeau, 2018).

Analytics in the global corporate culture


Analytics data have become a driving force in the global corporate cul-
ture helping companies adapt to their changing environments. As a result,
organizations understand that their hiring practices must be data-driven
for the employee onboarding process (Carey, 2016) and that their human
resources units must more fully embrace a corporate culture based on
data (Angrave, 2016). This transformation produces significant challenges
that require an effective strategic plan, competent data scientists, and a
responsible analytics ethics policy (Vidgen, 2017). Further, this emerging
culture requires a functional relationship among those scientists, corporate administrators, and practitioners (Simon, 2017). They must embrace
notions such as business intelligence, advanced analytics, and prescription
enablers such as simulation and optimization (Delen, 2018). This requires
a strong corporate commitment that produces sustainable performance
(Singh, 2019) underpinned by a policy of evaluating management tools for
their effectiveness (Prollochs, 2020).

The PGS evolution


Corporate environments pose challenges for commercial enterprises
attempting to transform their products and services. This, in turn, requires
the enterprises’ workforce to reskill at a rapid pace and embrace the digi-
talization of work historically performed onsite. The process is predicted
to be rapid and pervasive. For instance, the World Economic Forum esti-
mates that by 2025, 85 million jobs may be displaced by machines, while 97 million new, more technology-driven roles may emerge (“The future of jobs report,” 2020). Therefore, approximately 40% of workers will require
additional training even though business leaders expect them to acquire
these upgraded skills on the job.
This is a case study of how one company is using analytics to address the challenge of reskilling its workforce over a five-year horizon, and of the strategy it is using to retrain its existing workforce by developing a new corporate paradigm. PGS (formerly Petroleum Geo-Services) is
an integrated marine geophysical company in the energy sector. Founded
in 1991, it acquires images of the marine subsurface through seismic
data collection using streamers towed behind purpose-built vessels. The
streamer spreads towed by these vessels are as long and wide as the island
of Manhattan. The data acquired are processed in onshore data centers
to produce images of the marine subsurface that may offer potential for
energy resource exploration. To accomplish this, PGS relies on a wide
variety of competencies from mechanics and maritime crew offshore to
geophysicists and data scientists onshore. However, the competencies required of the workforce are evolving rapidly as innovations appear and as PGS transitions into new business areas, including subsurface carbon storage and sectors beyond the fossil fuel industry where its digital products will be viable. Figure 15.1 depicts the rapidity of the decreasing workforce and the reduction of the seismic search vessel fleet from 2014 to 2021.

FIGURE 15.1 PGS Personnel Headcount and Vessels in Operation: 2014–2021
The reduction has been dramatic, averaging a 60 percent decrease in personnel and vessels, driven by multiple factors: volatile oil and gas prices, climate change, global politics, market conditions, the push for clean energy, and the electrification of vehicles. These findings highlight the need for PGS to reinvent itself and redefine its business model through a significant redesign of its workforce and corporate foundation, both of which hinge upon reskilling. To meet the challenge, PGS management created a digitization unit
in 2019 and a separate Learning and Development Department (L&D) in
2020 to support the application of digital technology across the company,
while ensuring that its workforce possessed the requisite digital skills to
meet the coming challenges.

Projecting to 2027
Starting with the premise that by 2027 PGS processes will be mostly digital, offshore operations will often be managed remotely, and customer interaction will be more collaborative, management set out to understand the long-term aggregate skills required in sales, data acquisition, and
data processing. The first step was to examine the technology drivers
for recent changes which are cloud-based data and increased bandwidth
offshore. The latter allows for the transfer of enormous amounts of data
from remote offshore sites into the cloud, allowing it to be immediately
utilized in a cooperative sharing of the imaging process with customers.
Immediate data access for all stakeholders affects every role in the company. Sales, acquisition, and data processing will not only become more collaborative; data analysis will also accelerate, driven by digitalization of both PGS’ and its customers’ business processes (Whitfield, 2019).
Recognizing that collaboration will be pervasive but varied across the
business, PGS undertook a top-down review of skills needed in 2027.
They included the ability to work together on innovative technology, adaptability to change within groups, an interactive, solutions-oriented approach, increased business acumen that empowers decision making lower in the organization, and, most importantly, the ability to communicate the value of data. These skills inform every role in the
company and require a mindset shift at every organizational level. This
fundamental change is seen as being primarily directional at the aggre-
gate and will require individual assessment and learning that is specific to
cohorts across organizational units.
Because of the pace of technology, required skills will change rapidly, and planning is confounded by the difficulty of predicting the changing business of PGS,
so we chose an agile approach to the employee-reskilling process. Initially
a software development process, the agile approach is being adopted for
roles not related to programming. However, at least one study showed that for an agile approach to nonprogramming activities to be successful, it is necessary to analyze, anticipate, and define the expected outcomes and goals up front (Korhonen, 2011). Therefore,
analyzing expected outcomes and defining future objectives became the
core of the PGS approach.

Establishing the components of the reframed business


To identify the skills required by 2027, each major component of the business was examined, using existing business processes and analytics to determine how it will be reconfigured in the company transformation.

Sales
Selling starts by agreeing to acquire the data or selling existing datasets
from the company’s library. During the last decade, due primarily to cloud
technology, selling seismic data has become increasingly interactive. Specifically, customers want data faster and want to collaborate on its processing, which adds to the complexity of the sales process. In the future, sales will be more relational, constantly iterating with customers. Sharing datasets is a fundamental change from selling data acquired specifically for a client, or from clients purchasing specific datasets for a specific period.
An example of this change is digitization of entitlements. Entitlements
are datasets for areas of the subsea. Previously, determining the overlapping structures, areas allowed for exploration, and several other factors could take weeks or months. Using artificial intelligence,
however, the company developed software that reduces the determination
of data availability from weeks to hours.
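The chapter does not describe PGS’ entitlement software itself, but the kind of spatial check it automates can be sketched roughly as follows; the coordinates, survey names, and use of the shapely library are illustrative assumptions.

```python
# Hedged sketch of the spatial-overlap check involved in entitlement screening;
# nothing here reflects PGS' actual software, data, or licensing rules.
from shapely.geometry import Polygon

# Hypothetical licensed survey areas already in the data library.
library_areas = {
    "Survey_A": Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
    "Survey_B": Polygon([(5, 1), (9, 1), (9, 4), (5, 4)]),
}

# Hypothetical area a customer wants to explore.
requested = Polygon([(3, 1), (6, 1), (6, 3), (3, 3)])

# Report which surveys overlap the request and by how much, replacing a
# weeks-long manual comparison of maps and agreements.
for name, area in library_areas.items():
    if area.intersects(requested):
        coverage = area.intersection(requested).area / requested.area
        print(f"{name}: {coverage:.0%} of the requested area is covered")
```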
The sales force will no longer spend time determining data availability,
but will focus on customer relationships and on decreasing data delivery time in a collaborative dialog, something that demands new skills not required of current employees. A more cooperative approach, in which multiple companies build and utilize datasets covering expansive areas and share the data in the cloud, is already well underway and will continue to grow (“TGS, CGG, and PGS Announce Versal,” 2021).
Data acquisition
Digitalization will impact the acquisition of seismic data offshore, a highly technical, high-risk, and expensive undertaking that currently fewer than 30 vessels in the world can perform. Given
the complexity, risks and costs involved, PGS focused on working reliably
at level 4 on the 10-point Sheridan-Verplank automation scale, where the
computer suggests optimal vessel operating speed (Sheridan & Verplank,
1978). By 2027, PGS hopes to reach level 7, where the computer executes
the speed control and reports to the crew. This will constitute a significant
increase in automation made possible with the introduction of specially
designed vessels like PGS’ Titan class.
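As a rough illustration of the difference between these two levels as they might apply to vessel speed control (this is not PGS code, and the speed heuristic, names, and numbers are invented), level 4 keeps the human in the decision loop, while level 7 acts and then informs:

```python
# Illustrative contrast of Sheridan-Verplank levels 4 and 7 for speed control;
# the heuristic, names, and numbers are hypothetical.
def recommend_speed(conditions: dict) -> float:
    """Toy optimal-speed suggestion from sea state and streamer drag."""
    return max(3.5, 5.0 - 0.3 * conditions["sea_state"] - 0.1 * conditions["streamer_drag"])

def level_4(conditions, crew_approves) -> str:
    # Level 4: the computer suggests an action; the human decides and executes.
    speed = recommend_speed(conditions)
    return f"Crew sets speed to {speed:.1f} kn" if crew_approves(speed) else "Crew keeps manual control"

def level_7(conditions, notify_crew) -> str:
    # Level 7: the computer executes automatically, then informs the human.
    speed = recommend_speed(conditions)
    notify_crew(f"Speed adjusted to {speed:.1f} kn")
    return f"Speed set to {speed:.1f} kn"

print(level_4({"sea_state": 3, "streamer_drag": 2}, crew_approves=lambda s: s >= 4.0))
print(level_7({"sea_state": 3, "streamer_drag": 2}, notify_crew=print))
```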
Automation and increased connectivity to onshore operations will
affect the size and skills of crew onboard seismic vessels. Currently, ves-
sels have a crew of approximately 50, with half working on seismic acqui-
sition. The size and composition of the crew have remained roughly proportional to the size of the vessel over the last 20 years. However, efficiency gains have been used to reduce the time needed to acquire seismic data without decreasing its
quality. That efficiency is depicted in Figure 15.2.

FIGURE 15.2 PGS Mean Vessel Productivity, Normalized sq km/Year, 2010–2021 (+3.3% CAGR)

In 2018, to make crews more cost efficient, a large-scale initiative started focusing on digitization of onboard systems, similar to the offshore platform industry, where functionalities are operated by land-based teams cooperatively with maintenance and repair crews onboard. This has operating advantages such as requiring fewer crew members onboard and being able to utilize maritime crew to perform seismic activity and procedures for the acquisition of data offshore in the marine environment. Figure 15.3 demonstrates this migration, where many seismic positions can be, or have been, digitalized.

FIGURE 15.3 PGS Crew Standard Configuration: Digitalization Potential and Expertise Requirements (based on typical vessel crew configuration, November 2019)
The implication is that maritime crews must be reskilled to collaborate effectively and remotely with onshore supervisors on procedures currently performed by highly skilled seismic crew. Although several onboard maritime crew members have mechanical and electronic skills, they will need retraining to perform those procedures. To accomplish this, PGS started a project in 2019
to certify maritime crew to do routine seismic processes, making crew
configuration size more flexible. The training was accomplished through
traditional teaching modalities including in-person instruction onboard
and increasingly by video. Currently, PGS is testing augmented reality
technology to both train the crew and guide them in their work from an
onshore base.
Another area where digitization is being leveraged offshore is predic-
tive maintenance and supply. PGS embarked on digitization to improve
the predictability and sustainability of seismic equipment and maritime
maintenance in 2018. Although this has reduced costs, the process will need to further address the data analysis and interpretation competencies of both office and onshore personnel. The speed and quality of
digitization between the onshore and the offshore environment will be
dramatically increased by office-to-offshore connectivity. Satellite technol-
ogy has developed to the point where video instruction, live videos, and
data exchange will allow for faster deployment of seismic equipment and
data acquisition.
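Returning to the predictive maintenance effort mentioned above, a minimal sketch of the underlying idea (with invented sensor names, values, and thresholds rather than PGS’ actual monitoring rules) is to flag equipment whose recent readings drift beyond an expected operating band:

```python
# Illustrative predictive-maintenance check; all sensor data and limits are invented.
import pandas as pd

readings = pd.DataFrame({
    "hour": range(8),
    "winch_temp_c": [61, 62, 64, 63, 66, 69, 73, 78],
})

# Compare a short rolling average against an operating limit so that a sustained
# drift, not a single noisy reading, triggers a maintenance recommendation.
limit_c = 70
readings["rolling_avg"] = readings["winch_temp_c"].rolling(3).mean()
flagged = readings[readings["rolling_avg"] > limit_c]

if not flagged.empty:
    first_hour = int(flagged["hour"].iloc[0])
    print(f"Schedule inspection: rolling average exceeded {limit_c} C at hour {first_hour}")
```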

Data processing
Newly acquired or existing seismic data in PGS’ library is processed to
client specifications agreed to in the sales agreements. Here too, cloud
technology will have a significant impact on the business configuration
and require reskilling of employees to accommodate faster processing in
collaboration with customers. PGS data were historically processed in
data centers located in Houston, Oslo, and Cairo. Previously these centers
were networked to supercomputers. However, in 2022 these supercom-
puters were decommissioned, and all processing was moved to the cloud.
Although data are currently delivered on physical tapes from offshore to
the processing centers, that tape is increasingly being replaced by more
portable high-capacity storage devices and will at some point be directly
transferred to the cloud from the vessel via satellite. This will replace the current workflow, in which some data are preprocessed on the vessel, transferred to tape, and then transported to a processing center.
Immediately available data from the cloud for processing by anyone
with access have long-term strategic implications for how geophysicists
work. With faster access and ease of collaboration they will, increasingly,
create subsurface images together with the customer and take on a more
advisory role in cross-disciplinary teams. Digitalization will require that
sales and geophysical processes merge into a more collaborative relation-
ship with the customer.

Strategy and planning


The projections created for each major business area require long-term
strategic plans to reskill the workforce because market dynamics and
unpredictability of technological developments require an agile approach.
An important consideration in the development of that strategy was cost.
Reskilling procedures originally developed internally were built upon the
realization that blended learning solutions were much less costly in the
business environment especially given the geographic distribution of PGS
employees.
With the cost issue in mind, the strategy was developed in areas built
on the 70:20:10 model. This model is common in business environments.
It holds that employees obtain 70% of their knowledge from job-related
experiences, 20% from interactions with others, and 10% from formal
education or training. Although it has come under some criticism, the proportionality still holds for job-related experiences (Clardy, 2018).
The strategy has critical areas that cover both technology management
and the changes required to meet evolving business challenges.

Technology platform
The most important educational enabler is an integrated learning man-
agement and knowledge acquisition platform (LMS and LXP). These
technologies support learning modalities by leveraging assessment
practices. Here again the advantages of cloud technology come into
play. They provide a single source of learning for employees both on
and offshore. These learning support systems, especially when in the
cloud, enable collaborative learning with short deployment and implementation timelines, utilizing assessment data to determine baseline,
growth, and final outcomes.
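As a simple illustration of how assessment exports from such a platform could be used to quantify baseline, growth, and final outcomes, the sketch below computes raw growth and a normalized gain per employee; the column names and the normalized-gain formula are assumptions rather than PGS’ reporting conventions.

```python
# Minimal sketch of baseline/growth/final reporting from LMS assessment exports;
# the records and the normalized-gain measure are illustrative only.
import pandas as pd

scores = pd.DataFrame({
    "employee_id": [1, 2, 3],
    "baseline": [40, 65, 55],   # pre-training assessment (% correct)
    "final": [75, 80, 90],      # post-training assessment (% correct)
})

# Raw growth, plus the share of possible improvement achieved, which makes
# learners who start from different baselines easier to compare.
scores["growth"] = scores["final"] - scores["baseline"]
scores["normalized_gain"] = scores["growth"] / (100 - scores["baseline"])
print(scores)
```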

Business alignment
Learning strategies in commercial enterprises must follow a business
model where alignment requires an interactive process including extensive
reevaluation of capabilities, feedback, and data analysis. Examples include
assessing the ability of a crew member to perform a procedure. From the
data analysis feedback loops observed in our adaptive learning structure,
the education and retraining become auto-analytic. Measures of employees’ performance on the skills taught, combined with onboard data, determine the quality of the learning and the new or revised instruction needed to keep pace with changes. However, the dynamics of the learning feedback will
require constant realignment consistent with the dynamics of the business
strategy.

Coownership
Like alignment, coownership of learning and skills training between
L&D and the business constitutes a cultural shift for many companies.
Traditionally, employees and subject matter experts defined learning needs and enlisted L&D to deliver linear knowledge acquisition processes. This will not
be effective in a dynamic environment that takes the agile approach to a
corporate business model.
Skills development coownership has been recognized as critically
important for some time in the corporate environment. Manville (2001)
pointed out that greater line management ownership of programs makes
learning part of the day-to-day business life, implying that integrating
skills development in employee development programs increases the
impact of programs. Therefore, learning “becomes part of the overall pro-
cess of new product development, sales and efforts to increase customer
satisfaction and retention” (Manville, 2001, p. 40).

Assessment
In the corporate environment verifying the employee’s ability and skills
requires assessment based on the position’s requirements. These can be
routine for some skills that are related to compliance rules; however,
there appears to be a general aversion among employees to tests of theoretical knowledge whose results could reflect poorly on their performance, compensation, or career advancement. The degree to which this is
true, however, varies from country to country depending on work ethics
and labor laws. PGS is headquartered in Norway; in 2021 its onshore workforce was 26% Norwegian and included over 44 nationalities across its offices worldwide. Offshore, the largest nationality group is British (32%), with 31
other nationalities. This multinational workforce is a conscious decision
motivated by cost and business considerations given that vessels work in
up to six different countries in any given year. PGS has had some suc-
cess accommodating this extreme diversity using adaptive learning. One initiative successfully upskilled mechanics working offshore to improve their hydraulics capabilities by creating specific adaptive learning modules that developed a better understanding of hydraulic theory (Smith, 2021). Adopting an adaptive mindset, which aligns with the agile approach, and using blended learning proved effective for assessing knowledge, and branched learning improved the speed and efficiency of knowledge acquisition while lowering costs.
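The branching at the heart of such adaptive modules can be pictured with a toy routing rule like the one below; the module names, thresholds, and diagnostic scores are hypothetical rather than drawn from PGS’ courseware.

```python
# Illustrative branched-learning routing from a diagnostic score; the modules
# and cut points are invented for the sketch.
def next_module(diagnostic_score: float) -> str:
    if diagnostic_score < 0.5:
        return "hydraulic-theory-fundamentals"   # rebuild prerequisite knowledge
    if diagnostic_score < 0.8:
        return "hydraulics-applied-practice"     # targeted practice on weak areas
    return "advanced-troubleshooting"            # skip ahead to advanced content

for score in (0.35, 0.62, 0.91):
    print(f"diagnostic {score:.2f} -> {next_module(score)}")
```

Routing learners past material they have already mastered is what yields the gains in speed and cost noted above.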

Learning journeys
The diverse geographical spread of PGS’ workforce required that learn-
ing be delivered in multiple modalities utilizing multiple technologies
including video conferencing, asynchronous videos, gamified learning,
etc. Groups also needed to be configured into effective learning communi-
ties depending on individuals’ position and current skill level. This was
a unified learning modalities approach housed on a single platform as
described above. However, L&D staff will need to better understand arti-
ficial intelligence as it structures multiple learning modalities into educa-
tion and training journeys based on data from assessments, performance,
and biographical information.
Learning data
Measuring the impact of learning, including how it arranges priorities, builds skills, and strengthens the well-being that supports an individual’s learning, informs the other strategic areas. Data, especially in the cloud, allow
effective and fast analysis of large datasets. Learning data will be an indis-
pensable tool to improve and make reskilling effective by measuring the
quality of the learning culture. This will be discussed below as a part of
the third phase of implementation of the strategy.

Implementing the strategy


With the areas defined and agreed upon, the newly developed strategy
needed an implementation plan that was devised by the L&D and business
management teams, and divided into three phases—Scope, Introduce and
Grow—implemented in late 2020. The first phase (Scope) assessed existing
structures and evaluated future skills that PGS needed in 2027. Using the
existing career framework that placed all positions in the company into
a three-dimensional matrix based on skills, areas of responsibility, and
authority, elements were further structured and refined to encompass the
entire organization, better identifying where current skills reside and where new skills would be needed. Additionally, this created a framework to
utilize artificial intelligence for matching skills and learning to individuals
in positions and learning journeys using the instructional platform.
After the scope was established, the introduction phase launched the
learning platform to deliver training that would eventually be linked to
operational data based on the career framework for each position. The cloud-based learning platform chosen was also a personnel management system. Personnel management platforms typically include performance, compensation, biographical, and career preference data. They allow a company to
determine skill gaps and define the required training supported by career
preferences and performance history.
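The skill-gap logic such platforms apply can be sketched in a few lines; the skill names, the 0-5 proficiency scale, and the levels shown are invented for illustration.

```python
# Hedged sketch of skill-gap detection for one employee against one role profile;
# skills and proficiency levels (0-5) are hypothetical.
role_requirements = {"data_analysis": 4, "cloud_tools": 3, "seismic_qc": 2}
employee_skills   = {"data_analysis": 2, "cloud_tools": 3, "seismic_qc": 1}

gaps = {skill: need - employee_skills.get(skill, 0)
        for skill, need in role_requirements.items()
        if need > employee_skills.get(skill, 0)}

# The resulting gaps feed the platform's training recommendations.
print(gaps)   # {'data_analysis': 2, 'seismic_qc': 1}
```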
The first accomplishment in the introduction phase was the incorpora-
tion of assessments into the new platform. The depth and specificity of these
assessments increased over time as the platform yielded analytics about learning and skill gaps relevant to the performance of individual employees and of the company collectively. To support the introduction, social contacts, connectivity, and communities of interest were launched in line with the
70:20:10 model. Expert communities allow employees to share and aug-
ment learning as well as network across the organization. The introduction
phase continues at the time of this writing and will grow as the quality and
amount of data improve and connectivity to operational data is configured.
The third component of the implementation plan is Grow. This phase is
designed to create, curate, and measure development, close skill gaps, iden-
tify new skills and individuals valuable to the company, and identify
employees requiring more in-depth development. This involves determining
learning needs as well as defining the skills needed in the near-to-medium
term currently not developed or identified among employees. Defining these
skills will be based on the competencies required to fill roles in the company
as it transitions into other industries and digitalization efforts continue.
As digitalization advances, skills will evolve from knowing how to
implement a procedure to knowing how the machine completes the pro-
cess, freeing up resources to perform more value-added functions. This
will require activities representing additional skills not identified in the
projection analysis. As the amount of available data increases, employees must use, understand, and manage new software and technology. Therefore,
the reevaluation of skill gaps both for individuals and collectively will be
a continual process.
At its core, the sprint to 2027 requires a strong learning culture that
is attractive to existing employees and one that attracts new, and better
skilled, ones. PGS defined four interconnected strategic components to determine how commercially successful, and how attractive to employees, the learning culture is:

• Support values and culture—learning is important to building a val-
ues-based culture and a sense of community. Employees’ interest in
working for a values-based sustainable enterprise that contributes to
society has increased over the last several years. This is especially true
for companies working in the energy and fossil-fuel industry (Lin &
Chiu, 2017).
• Engage—opportunities to learn are a key factor in motivating and
engaging employees. If they are motivated and engaged, they are more
likely to be satisfied with their current organization (Boushey & Glynn,
2012).
• Attract and retain talent—clear development processes are among the
top criteria for joining an organization. Conversely, data show that a lack of such processes is a key reason people leave a company. A strong learning culture
provides quality learning that is meaningful for the individual and sup-
ports the values of the company (Bethke-Langenegger et al., 2011).
• Reinforce the employee brand—a strong learning culture supports being
the employer of choice. We compete in a shrinking talent pool, espe-
cially for specialized talent related to data and geo-science. Creating a
learning environment that engages and supports values is important to
creating the brand (Adkins & Rigoni, 2016).
Conclusion
The PGS sprint to 2027 is a metaphor for analytics in the corporate
environment created by external conditions forcing a reexamination of
a longstanding business model. The process hinged on creating an agile
organization that must accommodate continual environmental churn.
The data that drove the transformation were extant and simple. No pre-
dictive models were developed here. The corporate environment shifted
rapidly and PGS was able to monitor those conditions. However, those
simple data became the basis of a new corporate ecosystem—one based on
the nine essential components described in this chapter. The operational
questions became:

1. What will we look like in five years?
2. What structural changes in the company are required?
3. How will those changes be made?
4. What current employee work skills must be upgraded or deleted and
what new ones need to be added?
5. What is it that we are doing now that will become obsolete and when?
6. How will contemporary technology disrupt current practices?
7. What staffing patterns will be required?
8. What steps must management take to lead the change and at what
level in the organization?
9. How digital must we become?
10. How virtual can we become?
11. How can we relate to our customers in more effective ways based on
their evolving needs?
12. How will we acquire, process, and disseminate data?
13. What will the corporate learning culture look like and how will we
assess it?
14. Do we have a strategic plan and if so, how will we implement it?

The list is long, but inherently necessary if PGS is to successfully evolve
and remain viable as an enterprise. The findings necessitated a far more
comprehensive analytics process. This transformation incorporated but
surpassed customer relationship management predictive models. This is the new normal of corporate analytics—a complete transformation of a cul-
ture, one that will interface with the global market and determine how
its workforce and management will be reinvented, positioning PGS for
changes that cannot be anticipated but will most certainly force con-
tinual change. The irony of our having a five-year plan is not lost on us.
Ransbotham and colleagues (2016) make the point that the marginal and
conditional utilities of analytics in the corporate sector will require peri-
odic updates because their value add diminishes over time. These days
everyone is doing analytics and getting better at it, so the competition is
demanding.
PGS may be a small company, but it is complex. Also given its extreme
diversity it is virtually impossible to predict how an intervention will rip-
ple through it. Often, outcomes will be counter-intuitive and there will be
both positive and negative side effects that will have to be accommodated
(Forrester, 1971). Although the company designed its plan around nine individual aspects, its culture and new structure are far more than the sum of their individual parts. This is the emergent property of complex systems and the basis of what is described in this case study (Johnson, 2017; Mitchell, 2011; Taleb, 2020). At its core, emergence is a function of diversity, interdependence, connectedness, and adaptiveness in proper balance. If one of them becomes too extreme, the system becomes dysfunctional. At some level PGS has accommodated all four components. However, the extreme diversity of the workforce creates known and as yet unknown challenges.
At its core, however, this case study describes how PGS accommodated
the reskilling and development of current employees and the onboard-
ing of new hires. Therefore, a new unit devoted to learning was created and framed around multiple modes of instruction, defining a new corporate learning paradigm that was well understood throughout the organization. The model that evolved corresponded to Floridi’s four
stages of learning (Floridi, 2016).

1. Information—I know that I need reskilling.
2. Incipience—but I don’t know what reskilling I need.
3. Uncertainty—I’m not sure I will be able to do it.
4. Ignorance—I am unaware that I need reskilling.

PGS is in many ways a unique case undergoing a digital transformation
in an industry in transition. But as with so many other commercial
enterprises, the digital transformation means its employees will require
a wide range of reskilling. The new skills developed and the competen-
cies retained through the multiple difficulties of the business cycle need
to be more dynamically honed and utilized. By projecting what it will be doing in five years, putting in place a data-driven reskilling strategy, and encouraging a learning culture, PGS illustrates how a unit devoted to learning and development can become indispensable to any company’s commercial success.
References
Adkins, A., & Rigoni, B. (2016, June 30). Millennials want jobs to be development
opportunities. Gallup​.com​. https://www​.gallup​.com ​/workplace ​/236438​/
millennials​-jobs​-development​-opportunities​.aspx
Angrave, D., Charlwood, A., Kirkpatrick, I., Lawrence, M., & Stuart, M. (2016).
HR and analytics: Why HR is set to fail the big data challenge. Human
Resources Management Journal, 26(1), 1–11.
Bethke-Langenegger, P., Mahler, P., & Staffelbach, B. (2011). Effectiveness of
talent management strategies. European Journal of International Management,
5(5). https://doi​.org​/10​.1504​/ejim​. 2011​.042177
Boushey, H., & Glynn, S. J. (2012, November 16). There are significant business
costs to replacing employees. Center for American Progress. https://www​
.americanprogress​.org​/wp​- content​/uploads​/2012​/11​/CostofTurnover​.pdf
Carey, D., & Smith, M. (2016). How companies are using simulations,
competitions, and analytics to hire. Harvard Business Review. 4. https://
hbr​.org ​/ 2016 ​/ 04 ​/ how​- companies ​- are ​- using ​- simulations ​- competitions ​- and​
-analytics​-to​-hire
Clardy, A. (2018). 70-20-10 and the dominance of informal learning: A fact in
search of evidence. Human Resource Development Review, 17(2), 153–178.
https://doi​.org​/10​.1177​/1534484318759399
Delen, D., & Ram, S. (2018). Research challenges and opportunities in business
analytics. Journal of Business Analytics, 1(1), 2–12.
Floridi, L. (2016). The 4th revolution: How the infosphere is reshaping human
reality. Oxford University Press.
Forrester, J. W. (1971). Counterintuitive behavior of social systems. Theory and
Decision, 2(2), 109–140. https://doi​.org​/10​.1007​/ BF00148991
Johnson, N. F. (2017). Simply complexity: A clear guide to complexity theory.
Oneworld.
Korhonen, K. (2011). Adopting agile practices in teams with no direct
programming responsibility – A case study. In D. Caivano, M. Oivo, M.
T. Baldassarre, & G. Visaggio (Eds.), Product-Focused Software Process
Improvement. PROFES 2011. Lecture notes in computer science, 6759.
Springer.
Lin, C. L., & Chiu, S. K. (2017). The impact of shared values, corporate cultural
characteristics, and implementing corporate social responsibility on innovative
behavior. International Journal of Business Research Management (IJBRM),
8(2), 31–50.
Manville, B. (2001). Learning in the new economy. Leader to Leader, 20, 1–64.
https://doi​.org​/10​.1002/(issn)
Mitchell, M. (2011). Complexity: A guided tour. Oxford University Press.
Prollochs, N., & Feuerriegel, S. (2020). Business analytics for strategic
management: Identifying and assessing corporate challenges via topic
modeling. Information & Management, 57(1). https://www​.sciencedirect​.com​
/science​/article​/pii ​/S0378720617309254
Ransbotham, S., Kiron, D., & Kirk-Prentice, P. (2016). Beyond the hype: The
hard work behind analytics success. MIT Sloan Management Review: Research
report.
Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of
undersea teleoperators. https://doi​.org​/10​. 21236​/ada057655
Simon, C., & Ferreiro, E. (2017). Workforce analytics: A case study of scholar-
practitioner collaboration. Human Resource Management, 57(3), 781–793.
Singh, S., & El-Kassar, A. N. (2019). Role of big data analytics in developing
sustainable capabilities. Journal of Cleaner Productions, 213, 1264–1273.
Smith, M. J. (2021). A blended learning case study: Geo exploration, adaptive
learning, and visual knowledge acquisition. In Blended learning: Research
perspectives, volume 3 (1st ed., Vol. 3, pp. 178–190). Routledge.
Taleb, N. N. (2020). Skin in the game: Hidden asymmetries in daily life. Random
House.
TGS, CGG and PGS announce versal, a unified ecosystem for accessing multi-
client seismic data across multiple vendors. GlobeNewswire News Room.
(2021, October 28). https://www​.globenewswire​.com ​/news​-release ​/2021​
/10 ​ / 28​ / 2322289​ / 0 ​ /en ​ / TGS ​ - CGG ​ - and​ - PGS ​ - Announce ​ -Versal​ - a​ - Unified​
-Ecosystem ​ - for ​ - Accessing ​ - Multi ​ - Client ​ - Seismic ​ - Data ​ - across ​ - Multiple​
-Vendors​.html
The future of jobs report 2020. World Economic Forum (2020, October). https://
www3​.weforum​.org ​/docs​/ WEF​_ Future ​_of​_ Jobs ​_ 2020​.pdf
Trudeau, J. (2018, January 23). Justin Trudeau’s Davos address in full. World
Economic Forum. https://www​.weforum​.org ​/agenda ​/2018​/01​/pm​-keynote​
-remarks​-for​-world​- economic​-forum​-2018/
Vidgen, R., Shaw, S., & Grant, D. (2017). Management challenges in creating
value from business analytics. European Journal of Operational Research,
261(2), 626–639.
Whitfield, S. (2019, March 12). Is the cloud mature enough for high-performance
computing? Journal of Petroleum Technology (JPT). https://jpt​.spe​.org​/cloud​
-mature​- enough​-high​-performance​- computing
16
ACADEMIC DIGITAL TRANSFORMATION
Focused on data, equity, and learning science

Karen Vignare, Megan Tesene, and Kristen Fox

Introduction
Institutions have been focused on improving student outcomes for dec-
ades (Vignare et al., 2018; Ehrmann, 2021). However, data continue to
show that student success remains inequitable within postsecondary edu-
cation institutions where Black, Latinx, Native American, Pacific Islander
students and those from low-income backgrounds are the most systemati-
cally penalized. Institutions also report that often first-generation adults,
single parents, and those with either physical or neurodiverse challenges
experience significant inequities in student success outcomes. These dif-
ferences in outcomes have not narrowed quickly enough, especially as
these student populations have grown and become the new majority of
students attending college (Fox, Khedkar et al., 2021).
The range of student success measured in graduation rates also varies
widely by type of institution. While the data show variation amongst insti-
tutional types (e.g., public, private, two-year, four-year, MSIs, HBCUs)
there are institutions, independent of type, which report higher rates of
student success regardless of student demographics (Fox, Khedkar et al.,
2021; Freeman et al., 2014; Lorenzo et al., 2006). It is clear that, overall, far too many students are failing who should not be. Recent
reporting on college completion indicates that less than half of college stu-
dents graduate on time and more than one million college students drop
out each year, with 75% of those being first-generation college students
(Katrowitz, 2021). Minoritized student populations are disproportion-
ately represented among those who drop out (Katrowitz, 2021). This lack
of success has long been blamed on students, but a better argument is that
institutions are not incorporating learning science or real-time student academic data to transform institutions, courses, and the student
experience in order to accelerate student success.
Prior to the pandemic, few institutions were leveraging any digital data
to support ongoing student success (Parnell et al., 2018). Further, the edu-
cational drop-out rate for students was persistent throughout the student
journey—into college, through first-year courses, as part of transfer, and
as returning adults and women and minorities in certain fields— this phe-
nomenon is often referred to as the Leaky Pipeline (Zimpher, 2009). This
analogy is weak at conveying the substantive impact and loss to both stu-
dents and society when students drop out or don’t succeed. As an outcome
of the pandemic, economists at Organisation for Economic Co-operation
and Development (OECD) have reminded us that the loss of a year’s
worth of education converts to 7–10% less in future earnings (Reimers
et al., 2020). College completion continues to remain critical to ensur-
ing that American workers remain competitive both at home and glob-
ally. When students drop out or fail, existing inequities are exacerbated
between those who are college educated and those who are not (Perez-
Johnson & Holzer, 2021). Those communities that have been historically
marginalized—whether due to racial, gender-based, economic, or other
categorizations of minoritization—will suffer the most (Perez-Johnson &
Holzer, 2021). The magnitude of loss to individuals and society is clear.
Through the intentional and effective use of internal data, institutions can
better monitor student success and set data-informed strategic goals that
aim to achieve equitable student success.
The types of interventions vary and there are a handful of highly suc-
cessful institutions that are improving student success and creating more
equitable outcomes. These institutional success stories include activities
such as the use of sophisticated advising technology coupled with smaller
caseloads that provide professional advisors with meaningful information
and time to identify, meet with, and support their students (Shaw et al.,
2022). With smaller caseloads, high-impact advising practices could more
often be implemented at scale (Shaw et al., 2022). Institutions that adopted
these practices had statistically significant evidence of narrowing the out-
comes gap for Black, Latinx, and Indigenous students (Shaw et al., 2022).
Other technology-based interventions include offering online or hybrid
courses to address student desires for increased flexibility (Bernard et al., 2014;
Means et al., 2013; James et al., 2016; Ortagus, 2022).
Several institutions have led well-known transformative change at the
presidential level. For example, the presidents of Arizona State University,
Georgia State University, and Southern New Hampshire University all
approached transformation work in very different ways but are similar
in that they all expanded the mission of the universities to become more
equitable. Arizona State University and Georgia State University focused
on embracing the needs of their local minoritized student populations.
They challenged the university staff, faculty, and community to hold the
institution accountable so that local minoritized student populations
were as successful as any student segment. Southern New Hampshire
University instituted a model that benefitted adults through increased
online learning and competency-based education. The growth of Southern
New Hampshire University and the success of its part-time, highly diverse
students continue to inspire other institutions. Each of these institutions
shared a collaborative change approach as well as a willingness to utilize
technology to scale, data to measure and refine, and continuous innova-
tion efforts to improve equitable student success.
In many of the aforementioned institution transformation examples,
leadership initiated the changes, but we know that in order to meaning-
fully transform institutions so that they are more equitable and better
serve their diverse student communities, efforts must occur collabora-
tively across all levels of an institution. The collaborative change models
described within this chapter provide promising frameworks for those
seeking to establish intentional and effective transformation through-
out their institution. The lessons gleaned from these models highlight
the importance of implementing a collaborative model that integrates
top-down (institutional leadership), bottom-up (faculty-driven), and
middle-out (academic departments and institutional offices) approaches
to build institutional capacity and infrastructure so that progress
remains consistent across leadership or staffing changes (Miller et al.,
2017). Institutional successes in digital transformation are also impor-
tant and provide guidance in understanding how data can be used to
continuously improve processes and practices being implemented at an
institution, especially while learning is actively occurring and real-time
interventions can be leveraged (Krumm et al., 2018).
Ultimately, there is no single recipe for transformation as every institu-
tion has unique needs, resources, capacities, and contexts. As such, the
approach each institution takes should be designed for that institution.
However, when looking at institutions that are achieving transformative
equitable student success, we find common guideposts that are aligned
to the Association of American Universities (AAU) and Equitable Digital
Learning frameworks presented within this chapter. Through collabo-
ration across units, with measurement of progress through continuous
improvement, and applications of evidence-based improvement strategies,
institutions can better ensure that equitable student success is at the center
of their academic change efforts.

Literature review
For several decades, the pursuit of academic transformation for student
success has been a critical focus by multiple associations, government
agencies, and philanthropic funders. There is clear evidence that active and
engaged teaching is effective and improves student
outcomes (Miller et al., 2017; Singer et al., 2012). The AAU represents 65
research and doctoral universities, which are often recognized as contrib-
uting significantly to science, technology, engineering, and mathematics
(STEM) education. Like others, they recognized that STEM education
was not successful for all students and there was very little dissemination
of best practices by faculty members whose students were equitably suc-
cessful (AAU, 2017). Hopefully, with this work and the rigorous expec-
tations of National Science Foundation funding, more empirical studies
related to equitable student success will be conducted for STEM fields.
A meta-analysis of 225 studies found that students are more likely to
fail in courses taught in the traditional lecture style than when taught with
an active learning approach (Freeman et al., 2014). The authors reviewed
642 studies and selected 225 to include in their meta-analysis, with the
hypothesis that active learning versus traditional lecture maximizes learn-
ing and course performance. The courses included entry-level courses
across undergraduate STEM. The effect sizes indicate that on average,
student test scores and concept inventories increased by 0.47 standard
deviations under active learning (n = 158 studies), and that the odds ratio
for failing was 1.95 under traditional lecturing (n = 67 studies). These
results indicate that average examination scores improved by about 6%
in active learning sections, and that students in classes with traditional
lecturing were 1.5 times more likely to fail than were students in classes
with active learning. The results were tested among the different STEM
disciplines and no differences in effectiveness were found. The researchers
found that the impact was greater in classes where 50 or fewer students
were enrolled (Freeman et al., 2014). The significance of these findings
brings a pressing question to the forefront: why isn't active learning being
used in more STEM courses?
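The arithmetic linking these statistics can be made explicit. The sketch below is a minimal illustration of the two conversions, not a reanalysis of Freeman et al.'s (2014) data: the exam-score standard deviation and the baseline failure rate used here are assumed values chosen only to show how a standardized mean difference and an odds ratio translate into more intuitive quantities.

```python
# Illustrative conversions only; the SD and baseline failure rate below are
# assumptions for demonstration, not values reported by Freeman et al. (2014).

def smd_to_points(d: float, sd_points: float) -> float:
    """Convert a standardized mean difference (Cohen's d) into raw score points."""
    return d * sd_points

def odds_ratio_to_risk_ratio(odds_ratio: float, baseline_risk: float) -> float:
    """Approximate the risk ratio implied by an odds ratio at a given baseline
    risk, using the standard approximation RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

if __name__ == "__main__":
    # If exam scores had an SD of ~13 percentage points (assumed), a 0.47 SD
    # gain would correspond to roughly a 6-point improvement.
    print(round(smd_to_points(0.47, 13), 1))                  # 6.1

    # At an assumed 22% failure rate under active learning, an odds ratio of
    # 1.95 for failing under lecture implies a risk ratio of roughly 1.6.
    print(round(odds_ratio_to_risk_ratio(1.95, 0.22), 2))     # 1.61
```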
Additional research documents the positive impact that active learning
has for women and other minority populations such as Black students
and first-generation student communities (Eddy & Hogan, 2014; Haak et
al., 2011; Lorenzo et al., 2006; Singer et al., 2012). These earlier studies
looked at equitable student outcomes in active learning courses in STEM
within the small, but important, lens of course- and institutional-level
analyses. Each found that active learning was most beneficial to women
and minorities. Singer et al. (2012) laid out a case for discipline-based
education research, providing research methods to effectively test instruc-
tional approaches and finding that in order to be beneficial, instruction
must be interactive. Eddy and Hogan’s (2014) biology course research
found that increasing active learning within a termed course structure was
transferable to other courses, but more importantly it increased Black and
first-generation students’ exam performance above that of other student
groups. The tools used to improve course structure included applications
of courseware for pre-class work and digital polling software to measure
in-class active learning (Eddy & Hogan, 2014). Lorenzo et al. (2006) fur-
ther found that the use of active learning reduced the gender gap within a
physics course.
As research on the positive effects of active learning was being studied,
online learning began meeting the needs of more adult learners. There has
been considerable disagreement about the effectiveness of online learning.
Much like active learning, however, research supports its effectiveness as
long as courses use high-quality design and evidence-based digital teaching
strategies (Bernard et al., 2014; James et al., 2016; Means et
al., 2013). James et al. (2016) found that online courses help with flex-
ibility and progression for campus-based students. The data came from a
set of 20 universities that included two- and four-year public institutions
and for-profit universities. In their meta-analysis of research examining
the efficacy of online versus in-person instruction, Means et al. (2013)
concluded that while students in online courses tended to perform bet-
ter than those receiving face-to-face instruction, this was only true for
blended course options that combined elements of online and face-to-face
instruction where faculty implemented additional learning time, instruc-
tional resources, and interactive learning opportunities. Bernard et al.
(2014) looked at both blended learning and whether interactions with
other students, content, or instructor differed. Similar to Means et al.
(2013), there was a moderate achievement effect size that was almost exactly
the same in both meta-analyses, even though each used different sources
and definitions. While each of these studies framed their research ques-
tions differently and sourced their studies or datasets differently, the find-
ings are pretty clear: digital and online learning can be highly beneficial
for students.
Much of the theoretical guidance on how to intentionally design online
and blended courses was influenced by the Community of Inquiry (COI)
framework, which substantiated that online and blended courses must
concurrently integrate three critical and interdependent elements: social,
cognitive, and teaching presence (Garrison et al., 2000). The COI frame-
work has lasted the test of time and spawned many studies which con-
clude that social, cognitive, and teaching presence must be interwoven
into the design of online and blended courses to improve student outcomes
(Stenbom, 2018). Social presence is a combination of student-to-student
interaction and being connected to a social learning community with fac-
ulty. The cognitive presence represents the content and mastery of activi-
ties to ensure learning objectives are met. Teaching presence consists of
“the design, facilitation, and direction of cognitive and social processes
for the purpose of realizing personally meaningful and educationally
worthwhile learning outcomes” (Anderson et al., 2001, p. 5).
Stenbom’s (2018) analysis systematically reviewed the 103 research
papers written between 2008 and 2017 that included the use of the
Community of Inquiry survey, which was designed to operationalize
Garrison et al.’s (2000) CoI framework. The review validated the use of
the CoI survey, which looks for evidence of the framework through the
lens of coordinated social presence, teaching presence, and cognitive pres-
ence (Stenbom, 2018). The COI framework, like other high-quality teach-
ing frameworks, provides structure and outlines which activities would
need to be implemented for a course to qualify as “high-quality.” As is
the case with well-designed and implemented active learning practices, if
you leverage high-quality online teaching practices, online learning is also
highly effective.
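To make "operationalize" concrete, the sketch below scores one respondent on the three presences by averaging Likert items within each subscale. The 13/9/12 item split reflects the widely used 34-item CoI survey, but the item counts, ordering, and responses here should be treated as assumptions for illustration rather than a description of any dataset in Stenbom's (2018) review.

```python
# Minimal sketch: averaging Likert responses into the three CoI presences.
# Item counts follow the commonly cited 34-item CoI survey (assumed here);
# the responses are fabricated for illustration only.
from statistics import mean

ITEM_RANGES = {            # item index ranges per presence (assumed ordering)
    "teaching_presence": range(0, 13),
    "social_presence": range(13, 22),
    "cognitive_presence": range(22, 34),
}

def score_coi(responses: list[int]) -> dict[str, float]:
    """Return the mean 1-5 Likert score for each presence."""
    if len(responses) != 34:
        raise ValueError("Expected 34 item responses")
    return {
        presence: round(mean(responses[i] for i in idx), 2)
        for presence, idx in ITEM_RANGES.items()
    }

if __name__ == "__main__":
    example = [4] * 13 + [3] * 9 + [5] * 12   # hypothetical respondent
    print(score_coi(example))
    # {'teaching_presence': 4.0, 'social_presence': 3.0, 'cognitive_presence': 5.0}
```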
Pre-COVID, there was considerable effort to bring digital tools, paired
with active learning teaching strategies, into face-to-face instruction
occurring in high-priority, high-enrollment, gateway and foundational
courses. This trend has persisted and as such, the use of digital tools and
online homework systems such as adaptive courseware has grown con-
siderably (Bryant et al., 2022). While many studies and institutions have
adopted coordinated, collaborative approaches to teach critical under-
graduate courses, it is common practice to allow each instructor to decide
on pedagogy on a class-by-class basis. From Means and Peters (2016) we
also know that attempts to scale effective evidence-based instructional
practices can be problematic and do not always result in the same level
of success. Many instructors use active learning practices, but how active
learning is implemented matters significantly (Peters & Means, in press).
The issue of instructional consistency is rarely covered in the literature,
but the need for a concerted effort to increase instructional consistency is
paramount—particularly for those instructional approaches that include
well-researched, effective pedagogy such as active learning (Peters &
Means, in press).
Underlying all these efforts at academic reform are the use of data,
learning science, and improved process. While many faculty and insti-
tutions regularly review data post-course implementation to redesign
and modify their courses, assessments, and activities, few use the data
to adjust instruction during course implementation (Fox, Khedkar et al.,
2021). Few also build collaborative department-level teams such as that
proposed within the AAU framework (Miller et al., 2017) or the Active and
Adaptive model (Vignare et al., 2018). At the institutional planning level,
of the 25% of institutions that have a defined digital learning strategy,
only half of those utilize personalized learning and integrate student data
from multiple sources. In Fox, Vignare, et al. (2021), a strategic frame-
work for digital learning relies on a systems approach that includes lead-
ership, budget, and policy; course design and delivery; student success
for digital learning; evaluation and analytics; professional learning; and
technology infrastructure.
In the following sections, additional information on the AAU and
Equitable Digital Learning frameworks can be found along with examples
of how they are being put into practice at select institutions. These institu-
tional examples provide considerable evidence on how to set up evidence-
based teaching regardless of course modality. Evidence-based teaching
can be effectively implemented and through a collaborative continuous
improvement approach, scaled in face-to-face, blended, or online courses.

Department and institutional change focused on students

Association of American Universities shifting the culture around teaching
AAU’s work was represented by a culmination of many researchers as well
as some of the organization’s member institutions. The model they devel-
oped makes it clear that improved teaching and learning is not a singu-
lar responsibility, nor the sole responsibility of faculty. While faculty are
critical in ensuring that pedagogical reform can occur within an institu-
tion, those reforms require institutional leadership, institution-wide sup-
port for students and faculty, and transformative cultural changes (AAU,
2017; Miller et al., 2017). “Increasing the implementation and widespread
adoption of instructional strategies shown to be effective in teaching and
learning requires a systemic view of educational reform within academia”
(Miller et al., 2017).
Launched in 2011, the AAU undergraduate STEM Education Initiative
sought to address two problems: high attrition rates among declared STEM
majors and the failure of STEM education reforms to improve the quality
of teaching and learning through the widespread adoption of evidence-
based teaching practices (AAU, 2017). The project further attempted to
move beyond an approach that placed faculty as the sole levers of STEM
education reform, acknowledging that in order to achieve widespread
adoption of improved pedagogies, a macro-level approach was necessary
(AAU, 2017).

FIGURE 16.1 AAU's Framework for Systemic Change to Undergraduate STEM
Teaching and Learning (modified from AAU, 2013; 2017). The framework layers
three levels: pedagogy, scaffolding, and cultural change.

AAU engaged its members in the development and refine-
ment of a framework (Figure 16.1) that could account for the complexity
of these problems. While the problems may be complex and multifaceted,
the AAU framework is simple in its framing of the solution: institutions
must prioritize quality pedagogy adopted by faculty in support of student
learning (pedagogy); the institution must provide a range of scaffolded
supports for both faculty and students in service of improved teaching and
learning (scaffolding); and the institution must enable broader cultural
changes that facilitate pedagogical reform (cultural change). Within each
of these layers, AAU (2017) identifies a set of key elements that should be
addressed to support meaningful reforms.

AAU case studies


Four of the case studies that AAU (2017) profiled include those from
Cornell University, Michigan State University, University of North
Carolina Chapel Hill, and University of Arizona. While each institution
had a different focus and approach, common themes emerged across the
set of participating institutions that helped to inform the AAU framework.
This reflected the intention of those developing the framework: to provide
foundational guidance to the field while being flexible enough to be adopted
and implemented by a diverse set of higher education institutions (AAU, 2017).
Cornell University: Cornell initially named their project, “Active
Learning Initiative” (ALI), which was aimed broadly at the university’s
undergraduate curriculum. The institution encouraged change through
an internal grant competition where the prime goal was to improve stu-
dent learning, measure impact, and scale impactful practices to additional
courses and departments.
Michigan State University: They focused on bringing together the
College of Education with STEM faculty under the Collaborative Research
in Education, Assessment and Teaching Environments (CREATE) initia-
tive. The focus was to build interdisciplinary collaboration and create a
space for tenure track faculty to focus on improving teaching through
innovation.
University of North Carolina at Chapel Hill: This university estab-
lished a mentor–apprenticeship model, where faculty teach redesigned
courses in pairs. Each pair includes one experienced faculty member and
one faculty with less experience. Large, introductory, typically lecture-
based courses are redesigned to include high-structure, active learning
(HSAL) practices.
University of Arizona: They implemented three distinct, but intercon-
nected strategies to change the faculty culture so that active learning and
student-centric instructional practices were prioritized. The institution
provided faculty professional development in evidence-based teaching
practices, developed faculty learning communities, and created collabora-
tive learning spaces (CLS) so that faculty could better implement active
learning instructional practices.
The recognition by AAU and its member organizations that improved
teaching and learning is imperative and requires coordinated, collabo-
rative change ushered in a new expectation—that all institutions must
prioritize the dissemination of evidence-based teaching practices so that
all faculty, across all sections, are providing better pedagogy to support
more equitable outcomes. Further, institutions must invest in the develop-
ment and resourcing of critical scaffolded supports that enable this work,
serving both faculty and students so that meaningful cultural change
can take root and flourish. This foundational work has proliferated and
continued beyond 2017, but it simply has not narrowed gaps enough in
outcomes for minoritized, low-income and first-generation students.

Building an infrastructure that supports change


The seminal work outlined within the AAU framework (2017) catalyzed
many efforts amongst colleges and universities to focus on pedagogi-
cal reforms (Singer et al., 2012) such as advanced technology infrastructure,
increased access to learning, and a deeper understanding of evidence-
based teaching (Gunder et al., 2021; Clark & Mayer, 2021; Peters &
Means, in press). Yet, student success outcomes continue to fall short and
faculty adoption of evidence-based instruction is sparse, regardless of aca-
demic discipline. The persistence of these problems requires higher edu-
cation leaders, institutions, and communities to identify and implement
practicable solutions that are both scalable and sustainable.
Today, it is clear that transformation needs to be driven digitally
(Brooks & McCormack, 2020). The Brooks & McCormack report
states, “Digital Transformation is a series of deep and coordinated cul-
ture, workforce, and technology shifts that enable new educational and
operating models and transform an institution’s business model, strategic
directions, and value proposition.” To drive transformation, institutions
need to create an “Equity-focused Digital Learning Infrastructure” (Fox,
Vignare et al., 2021). This framework brings together institutional units
that were previously siloed, but also recognizes that as a result of COVID
and our responsibility to measure our progress, technology is needed as
a foundation, a lever, and a repository for evidence. Yet, like the Brooks and
McCormack work, it recognizes that transformational infrastructure is
a deeply coordinated effort amongst leadership, faculty, and student sup-
port (Fox, Vignare et al., 2021; Vignare et al., 2020, 2022).

A strategic digital learning infrastructure model—equitable digital learning framework
Fox, Vignare et al. (2021) define a digital learning infrastructure as con-
sisting of cross-functional areas required for the institution to sustain equi-
table digital learning at scale. The distributed technology helps maintain
continuity throughout the institution (Vignare et al., 2020, 2022). While
technology infrastructure is essential to continuity, there are elements of
digital-learning infrastructure that are critical to ensure that student suc-
cess is equitable. These six elements are described in detail in Figure 16.2;
they are: leadership, budget, and policy; course design and delivery; student
success for digital learning; evaluation and analytics; professional learning;
and technology infrastructure. Other researchers and experts have noted
that these cross-functional areas are critical for any sustained institution-
wide efforts (Crow & Dabars, 2020; Ehrmann, 2021; Gumbel, 2020).
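One lightweight way to act on these elements is a machine-readable self-assessment keyed to the six areas. The sketch below is a hypothetical checklist structure; the 0-3 maturity scale, element keys, and example ratings are assumptions for illustration and are not part of the published framework.

```python
# Hypothetical self-assessment checklist keyed to the six infrastructure elements.
# The 0-3 maturity scale and example ratings are illustrative assumptions.
ELEMENTS = [
    "leadership_budget_policy",
    "course_design_delivery",
    "student_success_for_digital_learning",
    "evaluation_analytics",
    "professional_learning",
    "technology_infrastructure",
]

def weakest_elements(ratings: dict[str, int], threshold: int = 2) -> list[str]:
    """Return elements rated below the threshold on a 0-3 maturity scale."""
    missing = [e for e in ELEMENTS if e not in ratings]
    if missing:
        raise ValueError(f"Missing ratings for: {missing}")
    return [e for e in ELEMENTS if ratings[e] < threshold]

if __name__ == "__main__":
    example = {e: 3 for e in ELEMENTS}
    example["evaluation_analytics"] = 1          # hypothetical weak spot
    print(weakest_elements(example))             # ['evaluation_analytics']
```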

FIGURE 16.2 Equity Focused Digital Learning Infrastructure Elements

The Equitable Digital Learning framework was developed over a
period of several years and continuously iterated on through a series
of discussions with select institutions. These institutions were identi-
fied as exemplars and innovators in equitable student success and digi-
tal learning by engaging in innovative and successful equitable digital
learning initiatives on their campuses (Fox, Vignare et al., 2021). This
work was in partnership with the Every Learner Everywhere network
and supported through grant funds received by the Bill and Melinda
Gates Foundation (Fox, Vignare et al., 2021). Using a Delphi method,
we interrogated the framework through a series of interviews with
seven distinct institutions: Cuyahoga Community College, Fayetteville
State University, Georgia State University, Ivy Tech College, Tennessee
Board of Regents, University of California, Fresno (Fresno State), and
University of Texas El Paso.
The methodology consisted of a collection of data and interviews,
which were conducted by multiple team members across two organiza-
tions—the Association of Public and Land Grant Universities and Tyton
Partners. Through an iterative interview process, the interview teams col-
lected and analyzed data from the institutions to develop a set of case
studies. Additional team members reviewed and contributed to the devel-
opment and synthesis of case studies, incorporating lessons learned. The
team then collaborated, reviewed the full set of case studies, and refined
the Equitable Digital Learning framework based on key learnings and
themes across all seven institutions.
The institutions profiled are notable as they serve significant numbers of
students of color and/or students from low-income backgrounds, invested
in redesigning general education and/or high-enrollment courses using
digital tools, leveraged the digital learning to close equity gaps, presented
evidence of success, and showed evidence of scaling the initiatives (Fox,
Vignare et al., 2021). The seven institutions took diverse approaches on
how they initiated and organized their initiatives but shared the common
goal of working institution-wide, setting priorities and improving, with
continued focus on achieving equitable student outcomes (see Appendix
in Fox, Vignare et al., for detailed institutional case studies). The use of
a framework allowed the team to extract seven overall recommendations
for institutions and leaders seeking to develop and sustain digital learning
that is equitably designed and implemented:

• use an institution-wide approach to define and implement equitable
digital learning
• build a sustainable business approach that may include both internal
and external funding
• build capacities to support high-quality and equitable course design
• engage in ongoing evaluation and continuous improvement
• create a learning community to equip faculty and staff with relevant
and continued professional learning
• make sure students are well supported
• leverage a flexible approach to utilize external partners to build
capacity

Leadership at these institutions was critical in either initiating or
catapulting the work to scale digital learning, consistent with the work of highly
recognized leadership at institutions like Arizona State University and
Southern New Hampshire University (Crow & Dabars, 2020; LeBlanc,
2021). It also highlights how working from the middle and both up and
down within an institution, just as AAU’s work recognized (Miller et al.,
2017), is critical for successful collaborative change. Leadership should be
distributed, especially if an institution-wide approach is to be long-lasting.
Several organizations, such as the Tennessee Board of Regents, ended up
including this work as part of their strategic plans, which is a well-recognized
method for increasing institutional accountability.
Additionally, the presence of sustained funding to support investments
in digital learning is critically important. While strategic plans articulate
institutional priorities, these need to be supported by institutional invest-
ments and resource allocations. While increases to completion and persis-
tence rates have a net positive effect on tuition and fee revenue over time,
upfront investments are needed in order to put new initiatives in place.
A combination of an internal budget allocation, state funding, the use of
technology fees, and external philanthropic and grant funding are typical
sources that sustain digital learning initiatives across the academic enter-
prise (Fox, Vignare et al., 2021). Take for example, Fresno State, which
has used internal foundation funding to start new initiatives—building
sustainability plans for initiatives that rely on ongoing institutional budget
allocations, state funding, and philanthropy.
The research on redesigning courses is clear, and it incorporates learn-
ing science, course design, and learning theory (Ambrose et al., 2010;
Clark & Mayer, 2021; Gunder et al., 2021; Twigg, 2005). One lesson
learned from these institutions is that it is necessary to invest in and pri-
oritize courses to ensure they are explicitly addressing the needs of stu-
dents of color, students from low-income backgrounds, first-generation students, and other
historically underserved student populations. For example, one way that
the University of Texas El Paso has put students in the center of its design
process is by providing faculty with information on the experiences and
strengths their large populations of Latino students and poverty-impacted
students bring to the classroom. Using this research and asset-based
approach, more faculty can engage students successfully (Joosten et al.,
2021; Peters & Means, in press).
Many of these institutions measured progress through institutional
and course-level data disaggregated by student demographic categories.
Georgia State University makes course data available to all faculty through
its focus on making data transparent and actionable (Gumbel, 2020). The
continuous improvement process, which measures outcomes and KPIs, is
an approach that allows all stakeholders to review progress. That said,
there is more research to be conducted via randomized controlled trials and
other rigorous methods. There is a recognition that we likely need both
a commitment to continuous improvement processes that rely on institu-
tional datasets to evaluate progress and an opportunity to research
pedagogical changes.
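To illustrate what disaggregated, course-level review looks like in practice, the short pandas sketch below computes DFW (D, F, or withdrawal) rates by course and student group and flags gaps relative to each course's overall rate. The column names, records, and threshold are hypothetical placeholders and stand in for whatever categories an institution actually tracks.

```python
# Illustrative only: disaggregating course outcomes to surface equity gaps.
# Column names and records are hypothetical, not institutional data.
import pandas as pd

records = pd.DataFrame({
    "course": ["MATH101"] * 4 + ["BIOL110"] * 4,
    "student_group": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "dfw": [0, 1, 1, 1, 0, 0, 1, 0],   # 1 = grade of D, F, or withdrawal
})

# DFW rate per course and group, compared with the course-wide rate.
by_group = records.groupby(["course", "student_group"])["dfw"].mean().rename("group_rate")
overall = records.groupby("course")["dfw"].mean().rename("course_rate")
report = by_group.reset_index().merge(overall.reset_index(), on="course")
report["gap"] = report["group_rate"] - report["course_rate"]

# Flag gaps worth a closer look (the 10-point threshold is arbitrary).
print(report[report["gap"] > 0.10])
```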
The next critical recommendation is investment in building a
learning and caring community for faculty, staff, and students (AAU,
2017; Fox, Vignare et al., 2021). To create a supportive environment for
students, those delivering education or services need to also learn and
be supported. Cuyahoga Community College and Ivy Tech College have
both made investments in professional learning for faculty. At Cuyahoga,
faculty designate professional learning topics that are of interest and
urgency to them. With the support of senior leadership, faculty create
and sustain faculty learning communities, which ensure that professional
development topics are highly tailored to their classroom and pedagogical
needs. Another example is Ivy Tech, where an investment has been made
in a center for teaching and learning with a team of faculty experts who
mentor instructors and support their ongoing development in courses.
The support provided has a particular focus on fostering quality instruc-
tion and on creating engaging courses that incorporate active learning.
Lastly, the Tennessee Board of Regents uses mini grants to support fac-
ulty (both full- and part-time) in implementing new pedagogies and tools
in the classroom, compensating them for their time and efforts. Each of
the institutions makes significant investments in supporting and celebrat-
ing instructors as they work to incorporate digital pedagogies into their
courses. Centers for teaching and learning serve as key facilitators and
hubs for instructors, instructional designers, and academic leaders seek-
ing to transform and redesign courses. Faculty learning communities
enable instructors to collaboratively learn new skills and approaches, and
external partnerships with organizations help to augment capacity.
Much was learned during COVID about inequitable and complicat-
ing factors for students of color and students from low-income back-
grounds, many of whom don't have the same internet access, devices, quiet
spaces to work, and access to digital instruction (Means & Neisler, 2020).
Success in digital learning requires thoughtful and targeted student sup-
port. Ensuring support to all students in their ability to access and use
digital tools starts with acknowledging that students enter their educa-
tional experiences with varying levels of access to Wi-Fi, devices, and lit-
eracy and experience with digital tools. At Fayetteville State University,
the institution supports students through a combination of technology
(including nudges—an online platform that connects students to institu-
tion-wide tutoring and support services) and personalized support (i.e.,
peer mentorship and advisors).
These institutions, like others, also faced the challenge of determin-
ing whether to leverage external partners or build capacity internally.
Successful institutions such as Arizona State University, Georgia State
University, and Southern New Hampshire University embrace both exter-
nal innovation and partners alongside building internal capacity (Crow
& Dabars, 2020; Gumbel, 2020; LeBlanc, 2021). Three institutions
increased the size of their historically underserved student populations,
improved student success for those students and others, and incorporated
external partners. These case study institutions sought and established
partnerships with organizations with expertise in digital learning and
equity, knew where their institution's internal capacity limits resided, and
brought in expertise and support for historically under-resourced students.
Considering where to partner for capabili-
ties and services versus where to invest in them permanently is a strategy
that can offer cost savings for institutions or access to targeted expertise
in digital learning and equitable course design (Fox, Vignare et al., 2021).
The findings from these case studies (see Table 16.1) support the argu-
ment that an integrated digital learning infrastructure can make the dif-
ference in increasing equitable student success (Fox, Vignare et al., 2021;
Vignare et al., 2018). This framework can be leveraged by any institution.
Given the flexibility and needs of students (and faculty) and the need to
plan for learning continuity, it is critical that institutions see a digital
infrastructure as not only as one that assumes growing online learning
as a primary goal, but one that supports the entire institution regardless
of course modality. By building out the digital infrastructure, an institu-
tion is more capable of weathering the unexpected. Additionally, digital
infrastructure makes measurement of disaggregated student data much
more accessible and transparent (Maas et al., 2016, Vignare et al., 2016;
Wagner, 2019). When institutions use outdated data to inform interven-
tions for students, it is often too late.

Conclusion
For decades, higher education institutions have attempted to find frame-
works, methodologies, and approaches that effectively describe and guide
the work necessary to improve equitable outcomes in critical courses.
Many institutions show evidence of impact, but the national data con-
tinue to show that outcomes for racially minoritized, low-income, and
first-generation students have not progressed or improved in equitable
ways. The continued lack of progress, compounded by the 1.4 million stu-
dents who have left higher education since the onset of COVID as well as
the steady declines in enrollment across student populations, is particu-
larly alarming (NSC, 2022). Ultimately, the impact of prior work in these
areas is not sufficient.
Fox, Vignare et al.’s (2021) framework makes a clear case for an inte-
grated approach in order to drive the actions and planning of institutional
leaders to achieve more equitable outcomes. An additional focus must be
on establishing cultural changes informed by real-time data and focused on
interrogating prioritized problems such as equity gaps in critical courses.

TABLE 16.1 Institutional Case Studies

Institution (institution characteristics): student characteristics, Fall 2020*

Cuyahoga Community College (2-year public): 40% Age 25+; 34% Pell**; 36%
Students of color^; 19,000 Total undergraduate enrollment^^

Fayetteville State University (4-year public, HBCU): 40% Age 25+; 53% Pell;
75% Students of color; 6,000 Total undergraduate enrollment

Georgia State University (4-year public): 16% Age 25+; 48% Pell; 75% Students
of color; 29,000 Total undergraduate enrollment

Ivy Tech Community College (2-year public): 38% Age 25+; 40% Pell; 24%
Students of color; 64,000 Total undergraduate enrollment

Tennessee Board of Regents (2-year state system): 28% Age 25+; 41% Pell; 27%
Students of color; 102,000 Total undergraduate enrollment

University of California, Fresno (Fresno State) (4-year public, HSI,
AANAPISI): 15% Age 25+; 55% Pell; 74% Students of color; 23,000 Total
undergraduate enrollment

University of Texas at El Paso (4-year public, HSI): 21% Age 25+; 58% Pell;
90% Students of color; 21,000 Total undergraduate enrollment

Source: Fox, Vignare, et al., 2021

*All calculations reflect Fall 2020, although the Tennessee Board of Regents
characteristics are calculated for Fall 2019.
**Percent Pell is defined as the percent of total undergraduate students
receiving the Federal Pell Grant. Pell Grant receipt functions as one measure
of student income, though an imperfect one; at community colleges, Pell
eligibility (and therefore receipt) is often lower because many part-time
students at community colleges do not file for FAFSA to be able to receive
the Pell Grant.
^Percent Students of color is defined as percent of total undergraduate
students that identify as Black or African American, Asian, Hispanic/Latino,
American Indian or Alaskan Native, Native Hawaiian or other Pacific Islander,
or two or more races.
^^Total undergraduate enrollment includes transfer-in undergraduate
enrollment. However, due to data limitations, the Tennessee Board of Regents
undergraduate enrollment estimate does not include transfer-in students.

While a digital infrastructure plays an important role by making data
transparent, it relies on cultural change that must occur from the bot-
tom up, middle out, and top down (AAU, 2017; Austin, 2011). Bringing
together focused efforts using integrated and sustained approaches is
essential to withstand internal changes in personnel and leadership.
Digitization and data availability alone cannot lead to transformation
without meaningful changes in leadership and process at the institutional,
departmental, and classroom levels (Brooks & McCormack, 2020; Maas
et al., 2016). The AAU and Equitable Digital Learning frameworks and
their representative institutional use cases provide clarity about how these
reforms can support outcome improvement while instilling guidance for
institutional leaders seeking to engage in this work (Fox, Vignare et al.,
2021; Miller et al., 2017).

References
Ambrose, S., Bridges, M., DiPietro, M., Lovett, M., & Norman, M. (2010).
How learning works: Seven research-based principles for smart teaching.
Jossey-Bass.
Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching
presence in a computer conferencing context. Journal of Asynchronous
Learning Networks, 5(2). https://olj​.onl​inel​earn​ingc​onsortium​.org​/index​.php​
/olj​/article ​/view​/1875
Association of American Universities. (2017). Progress toward achieving system
change: A five-year status report on the AAU undergraduate STEM education
initiative. https://www​.aau​.edu ​/sites​/default​/files​/AAU​-Files​/STEM​-Education​
-Initiative​/STEM​- Status​-Report​.pdf
Austin, A. E. (2011). Promoting evidence-based change in undergraduate science
education (pp. 1–25). National Academies’ Board on Science Education.
https://sites​.nationalacademies​.org​/cs​/groups​/dbassesite​/documents​/webpage​/
dbasse​_ 072578​.pdf
Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami,
P. C. (2014). A meta-analysis of blended learning and technology use in
higher education: From the general to the applied. Journal of Computing
in Higher Education, 26(1), 87–122. https://doi​.org​/10​.1007​/s12528​- 013​-
9077-3
Boyer commission on educating undergraduates in the research university.
(1998). Reinventing undergraduate education: A blueprint for America’s
research universities. https://dspace​.sunyconnect​.suny​.edu ​/ bitstream ​/ handle​
/1951 ​/ 26012 ​/ Reinventing​%20Undergraduate​%20Education​%20​%28Boyer​
%20Report​%20I​%29​.pdf​?sequence​=1​&isAllowed=y
Brooks, D. C., & McCormack, M. (2020, June). Driving digital transformation
in higher education. EDUCAUSE Center for Analysis and Research. https://
library​.educause​.edu/-​/media ​/files​/ library​/ 2020​/6​/dx2020​.pdf​?la​= en​&hash=​​
28FB8​​C377B​​59AFB​​1855C​​225BB​​A8E3C​​F BB0A ​​271DA​
Bryant, G., Fox, K., Yuan, L., Dorn, H., & NeJame, L. (2022, July 11). Time for
class: The state of digital learning and courseware adoption. Tyton Partners.
https://tytonpartners​. com ​/ time​-for​- class​- the​- state​- of​- digital​- learning​- and​
-courseware​-adoption/
Clark, R., & Mayer, R. (2021). E-learning and the science of instruction: Proven
guidelines for consumers and designers of multimedia learning (4th ed.).
Wiley.
Crow, M., & Dabars, W. (2020). The fifth wave: The evolution of higher
education. Johns Hopkins Press.
Eddy, S., & Hogan, K. (2014). Getting under the hood: How and for whom does
increasing course structure work? CBE-Life Sciences Education, 13(3), 453–
468. https://doi​.org ​/10​.1187​/cbe​.14 ​- 03​- 0050
Ehrmann, S. (2021). Pursuing quality, access, and affordability: A field guide to
improving higher education. Stylus Publishing.
Fox, K., Khedkar, N., Bryant, G., NeJame, L., Dorn, H., & Nguyen, A. (2021).
Time for class – 2021: The state of digital learning and courseware adoption.
Tyton Partners; Bay View Analytics; Every Learner Everywhere. https://www​
.eve​r yle​arne​reve​r ywhere​.org ​/wp​- content ​/uploads​/ Time​-for​- Class​-2021​.pdf
Fox, K., Vignare, K., Yuan, L., Tesene, M., Beltran, K., Schweizer, H., Brokos,
M., & Seaborn, R. (2021). Strategies for implementing digital learning
infrastructure to support equitable outcomes: A case-based guidebook for
institutional leaders. Every Learner Everywhere; Association of Public and
Land-Grant Universities; Tyton Partners. https://www​.eve​r yle​arne​reve​r ywhere​
.org​ / wp​ - content​ / uploads​ / Strategies​ - for​ - Implementing​ - Digital​ - Learning​
-Infrastructure​- to ​- Support​- Equitable​- Outcomes​-ACC ​- FINAL ​-1​- Updated​
-Links​-by​- CF​-2​.pdf
Freeman, S., Eddy, S., McDonough, M., Smith, M., Okoroafor, N., Jordt, H.,
& Wenderoth, M. (2014). Active learning increases student performance in
science, engineering, and mathematics. Proceedings of the National Academy
of Sciences of the United States of America, 111(23), 8410–8415. https://doi​
.org/10.1073/pnas.1319030111
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-
based environment: Computer conferencing in higher education. Internet
and Higher Education, 2(2–3), 87–105. https://doi​.org​/10​.1016​/S1096​
-7516(00)00016-6
Gumbel, A. (2020). Won’t lose this dream: How an upstart urban university
rewrote the rules of a broken system. The New Press.
Gunder, A., Vignare, K., Adams, S., McGuire, A., & Rafferty, J. (2021).
Optimizing high-quality digital learning experiences: A playbook for faculty.
Every Learner Everywhere; Online Learning Consortium (OLC); Association
of Public and Land-Grant Universities. https://www​.eve​r yle​arne​reve​r ywhere​
.org​/wp- conte​nt/up​loads​/ele_​​facul​​t ypla​​ybook​​_ 2021​​_v3a_​​gc​- Up​​dated​​-link​​s​​
-by-​​C F​.pd​f
Haak, D., Hille Ris Lambers, J., Pitre, E., & Freeman, S. (2011). Increased
structure and active learning reduce the achievement gap in introductory
biology. Science, 332(6034), 1213–1216. https://doi​.org​/10​.1126​/science​
.1204820
James, S., Swan, K., & Datson, C. (2016). Retention, progression and the taking
of online courses. Online Learning, 20(2), 75–96. https://doi​.org​/10​. 24059​/
OLJ​.V20I2​.780
Joosten, T., Harness, L., Poulin, R., Davis, V., & Baker, M. (2021). Research
review: Educational technologies and their impact on student success for
racial and ethnic groups of interest. WICHE Cooperative for Educational
Technologies (WCET). https://wcet​.wiche​.edu​/wp​- content​/uploads​/sites​/11​
/2021 ​/ 07​/ Research​- Review​- Educational​-Technologies​- and​-Their​- Impact​- on​
-Student​- Success​-for​- Certain​-Racial​-and​-Ethnic​- Groups ​_ Acc ​_v21​.pdf
Kantrowitz, M. (2021, November 18). Shocking statistics about college graduation
rates. Forbes. https://www​.forbes​.com ​/sites​/markkantrowitz ​/2021​/11​/18​/
shocking​-statistics​-about​- college​-graduation​-rates/​?sh​=2e5bd8622b69
Krumm, A., Means, B., & Bienkowski, M. (2018). Learning analytics goes to
school: A collaborative approach to improving education. Routledge.
LeBlanc, P. (2021). Students first: Equity, access and opportunity in higher
education. Harvard Educational Press.
Lorenzo, M., Crouch, C. H., & Mazur, E. (2006). Reducing the gender gap in the
physics classroom. American Journal of Physics, 74(2), 118–122. https://doi​
.org​/10​.1119​/1​. 2162549
Maas, B., Abel, R., Suess, J., & O’Brien, J. (2016, June 8). Next-generation digital
learning environments: Closer than you think. [Conference presentation].
EUNIS 2016: Crossroads where the past meets the future, Thessalo-niki.
Greece. https://www​.eunis​.org​/eunis2016​/wp​- content​/uploads​/sites​/8​/2016​
/03​/ EUNIS2016 ​_paper​_4​.pdf
Means, B., & Neisler, J. (2020). Suddenly online: A national survey of undergraduates
during the COVID-19 pandemic. Digital Promise. https://elestage​.wpengine​.com​/
wp​-content​/uploads​/Suddenly​-Online​_ DP​_ FINAL​.pdf
Means, B., & Peters, V. (2016, April). The influences of scaling up digital
learning resources [Paper presentation]. 2016 American Educational Research
Association annual meeting. Washington, DC. https://www​ .sri​
.com​/wp​
-content ​/uploads​/2021​/12​/scaling​-digital​-means​-peters​-2016​.pdf
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online
and blended learning: A meta-analysis of the empirical literature. Teachers
College Record, 115(3), 1–47. https://doi​.org​/10​.1177​/016146811311500307
Miller, E., Fairweather, J., Slakey, L., Smith, T. & King, T. (2017). Catalyzing
institution transformation: Insights for the AAU STEM initiation. Change:
The Magazine of Higher Learning, 49(5), 36–45. http://dx​.doi​.org​/10​.1080​
/00091383​. 2017​.1366810
National Student Clearinghouse Research Center (NSC). (2022, May 26). Current
term enrollment estimates: Spring 2022. https://nscresearchcenter​.org​/current​
-term​- enrollment​- estimates/
Ortagus, J. (2022, August). Digitally divided: The varying impacts of online
enrollment on degree completion (Working Paper No. 2201). https://ihe​
.education​.ufl​.edu​/ wp​- content​/uploads​/ 2022​/ 08​/ IHE​-Working​- Paper​-2201​
_Digitally​-Divided​.pdf
Parnell, A., Jones, D., & Brooks, D. C. (2018). Institutions’ use of data and
analytics for student success: Reports from a national landscape analysis.
National Association of Student Affairs Administrators in Higher Education
(NASPA); EDUCAUSE. https://library.educause.edu/-/media/files/library
/2018/4/useofdata2018report.pdf
Perez-Johnson, I., & Holzer, H. (2021, April). The importance of workforce
development for a future-ready, resilient, and equitable American economy.
American Institutes for Research. https://www​.air​.org ​/sites​/default​/files​/
WDEMP​-Importance​-of​-Workforce​-Development​-Brief​-April​-2021​.pdf
Peters, V., & Means, B. (in press). Evidence-based teaching practices in higher
education: A systematic review. Pre-publication copy.
Reimers, F., Schleicher, A., & Ansah, G. (2020). A framework to guide an
education response to the COVID-19 pandemic of 2020. The Organisation
for Economic Co-Operation (OECD). https://globaled​.gse​.harvard​.edu​/files​/
geii​/files​/framework​_ guide​_v2​.pdf
Shaw, C., Bharadwaj, P., Smith, S., Spence, M., Nguyen, A., & Bryant, G. (2022,
July 11). Driving towards a degree: Closing outcome gaps through student
supports. Tyton Partners. https://drivetodegree​.org ​/wp​- content ​/uploads​/2022​
/07​/ TYT111​_ D2D22​_ Rd7​.pdf
Singer, S. R., Nielsen, N. R., & Schweingruber, H. A. (Eds.). (2012). Discipline-
based education research: Understanding and improving learning in
undergraduate science and engineering. The National Academies Press.
https://www​. nap​ . edu ​ /catalog ​ / 13362 ​ /discipline ​ - based​ - education​ - research​
-understanding​-and​-improvinglearning​-in​-undergraduate
Stenbom, S. (2018). A systematic review of the community of inquiry survey.
Internet and Higher Education, 39, 22–32. https://doi​.org​/10​.1016​/j​.iheduc​
.2018​.06​.001
Twigg, C. A. (2005). Increasing success for underserved students: Redesigning
introductory courses. National Center for Academic Transformation. https://
www​.thencat​.org ​/ Monographs​/ IncSuccess​.pdf
Vignare, K. (Ed.). (2020). APLU special issue on implementing adaptive learning at
scale. Current Issues in Emerging eLearning, 7(1), 1–137. https://scholarworks​
.umb​.edu ​/cgi ​/viewcontent​.cgi​?article​=1097​&context​= ciee
Vignare, K., Lammers Cole, E., Greenwood, J., Buchan, T., Tesene, M., DeGruyter,
J., Carter, D., Luke, R., O’Sullivan, P., Berg, K., Johnson, D., & Kruse, S.
(2018). A guide for implementing adaptive courseware: From planning
through scaling. Association of Public and Land Grant Universities; Every
Learner Everywhere. https://www​.aplu​.org ​/ library​/a​-guide​-for​-implementing​
-adaptive​- courseware​-from​-planning​-through​-scaling ​/file
Vignare, K., Lorenzo, G., Tesene, M., & Reed, N. (2020; 2022). Improving
critical courses using digital learning & evidence-based pedagogy. Association
of Public and Land-Grant Universities; Every Learner Everywhere. https://
www​.eve​r yle ​a rne​reve​r ywhere​.org ​/ wp ​- content ​/uploads ​/ Improving ​- Critical​
-Courses​-1​.pdf
Wagner, E. (2019). Learning engineering: A primer. The Learning Guild. https://
www​.learningguild​.com ​/insights​/238​/ learning​- engineering​-a​-primer/
Zimpher, N. (2009, May 28). The leaky pipeline: IT can help. Educause Review,
44(3), 4–5. https://er​.educause​.edu​/articles​/2009​/5​/the​-leaky​-pipeline​-it​- can​
-help
SECTION V

Closing



17
FUTURE TECHNOLOGICAL
TRENDS AND RESEARCH
Anthony G. Picciano

In September 2019, I published an article in the Online Learning Journal
entitled "Artificial intelligence and the academy's loss of purpose!" The arti-
cle speculated on the future of higher education as online technology, specifi-
cally data analytics and adaptive learning infused by artificial intelligence (AI)
software, develops and matures. Online and adaptive learning had already
advanced within the academy, but it was my position that the most significant
changes were yet to come. I proposed a model in which advanced nanotech-
nology and quantum computing would usher in new developments in man–
machine interfacing. I further speculated that these developments depended
upon AI software, super-cloud computing, robotics, and biosensing technol-
ogy all of which held possibilities for radically altering the way most organi-
zations including schools, colleges, and universities functioned. In 2019, it
was my estimate that most of this development was at least a decade or more
away. I was wrong, not on the nature of these developments but when they
would occur. In fact, we are seeing many of these developments now and they
are accelerating as a result of broader acceptance in our society of technology
and especially because of the advent of the coronavirus pandemic that at the
time of this writing has been ravaging the world for two years. COVID-19
and its recurring variants have forced all enterprises, including education, to
intensify efforts to utilize technology.

The evolving technological landscape


Any attempt at predicting the future should be based on calculated spec-
ulation. Over the next decade, digital technologies will advance in the
development of man–machine interfacing or the ability of digital technol-
ogy to interact more directly with and assist in human activities.

FIGURE 17.1 Technology Forces Shaping the Future of Man-Machine Interfacing:
nanotechnology and quantum computing form the base for man–machine interfaces
such as artificial intelligence, super-cloud computing, biosensing devices,
and robotics.

Figure
17.1 provides an overview of the major technologies presently in various
stages of development and evolution. Nanotechnology and quantum com-
puting form the base for the development of man–machine interfaces such
as AI, biosensing devices, robotics, and super-cloud computing. These
technologies are already visible, but in another ten years they will mature,
integrate, and realize their greatest impact. For the purposes of data ana-
lytics and adaptive learning, AI and the super-cloud are most important in
terms of impact on education. For a more extensive description of the ele-
ments in Figure 17.1, please see “Artificial Intelligence and the Academy’s
Loss of Purpose” (Picciano, 2019).
“Nano” refers to a billionth of a meter or the width of five carbon
atoms. The simplest definition of nanotechnology is technology that func-
tions very close to the atomic level. Governments around the world have
been investing billions of dollars to develop applications using it, focusing
on areas such as medicine, energy, materials fabrication, and consumer
products. However, companies such as Intel and IBM have been devel-
oping nanochip technology with the potential to change the scope of all
computing and communications equipment. IBM, since 2015, has been
producing a chip with transistors that are just 7 nanometers wide, or about
1/10,000th the width of a human hair. Nanochip technology is here now
and is developing into commercial production and application. During the
next decade, it will become a mature and ubiquitous technology.
By the late 2020s, the concept of the digital computer may give way
to the quantum computer operating entirely on a scale the size of atoms
and smaller. Major companies are now investing billions of dollars in
quantum computing development. IBM is presently getting ready to roll
out and sell 1,000 qubit quantum computers for commercial use (Science
Insider, 2020). Another decade or so of research and development on
quantum computers will find their speed thousands of times faster than
the speed of today’s supercomputers. The storage capacity of such equip-
ment will replace the gigabyte (10⁹) and terabyte (10¹²) world with zettabyte (10²¹) and yottabyte (10²⁴) devices. Large-scale digitization of all
the world’s data will occur with access available on mobile devices. And
all this technology and computing power will eventually be less expensive
than it is now. Nanotechnology and quantum computing will provide the
underlying base for the development of a host of new applications using AI
and super-cloud computing. The first generation of quantum computers
will likely be available via the super-cloud and geared to specific applica-
tions related to large-scale, complex research in areas such as neurosci-
ence, NASA projects, DNA, climate simulations, and machine learning.
These will be followed by applications for everyday activities in commer-
cial enterprises. They will specifically change the way people work and
interact on a daily basis. It is quite possible that many jobs will be dis-
placed by these technologies. Joseph Aoun, the president of Northeastern
University and author of Robot Proof, Higher Education in the Age of
Artificial Intelligence, commented:

If technology can replace human beings on the job, it will. Preventing
business owners from adopting a labor-saving technology would
require modifying the basic incentives built into the market economy.
(Aoun, 2017, p. 46)

Beyond the market economy, this also holds true for education, medicine,
law, and other professions.

Enter COVID-19
COVID-19 has added a new dimension to the technological landscape in
all aspects of our society including education. In Spring 2020, when the
coronavirus reached our shores, it was estimated that over 90 percent of
all courses offered in education had an online component. Faculty in all
sectors converted their courses as quickly as they could to remote learn-
ing, mainly because they had no choice. It was a clear emergency with
their own and their students’ health at risk. In the opinion of some, online
technology saved the semester for the education sector during the pan-
demic (Ubell, 2020). It forced many faculty, who had never used online
education or had used it modestly, to depend upon it for instruction.
Synchronous online communication using Zoom and other videoconfer-
encing software became especially popular. Many faculty have contin-
ued to use online facilities even as the COVID-19 pandemic has eased.
Blended learning models combining face-to-face and online instruction
have become more popular and are the new normal in education with a
good deal of instruction provided online.
In the near future, COVID-19 will continue to have serious ramifi-
cations for education. First, faculty were forced to move their courses
online with little time for planning or testing. Early feedback from
faculty is that many were able to adjust and, in fact, had decent or
good experiences with online instruction. Faculty, who taught online
for years, have come to see its effectiveness, and the same is prov-
ing true for many forced into it. Those faculty new to online learning
came to appreciate the ability to continue class discussions beyond bell
schedules, to provide students convenient access to media, simulations,
and other illustrations, and to mix a variety of instructional modali-
ties into their teaching. Faculty and commuter students who were new
to online instruction also discovered the convenience and cost savings
of not having to travel to a campus for part or all of their courses. It
should be noted that K–12 education has not embraced the fully online
models. This sector, which has great responsibility for the social and
emotional development of students, has integrated online instruction
using blended models and has preferred to function mostly in tradi-
tional, face-to-face environments.
Second, COVID-19 caused major financial strains in education in gen-
eral and especially in colleges and universities dependent upon tuition.
Many colleges were forced to refund tuition, dormitory charges, and
other fees since students were required to leave campus and attend courses
online. However, the pandemic subsequently also impacted student enroll-
ments at all levels. In Spring 2022, the National Student Clearinghouse
Research Center (2022) reported that enrollment in American
higher education was down 4.1 percent or 685,000 students from the pre-
vious Spring 2021 semester. All segments (public, private, for-profit) were
affected. Undergraduate enrollment dropped by 4.7 percent. Public higher
education, especially community colleges, was among the hardest hit,
resulting in many state and local governments diverting resources to other
budgetary areas. Several states such as Wisconsin, Georgia, Connecticut,
and Alaska have already taken draconian steps of closing and consolidat-
ing colleges. Private residential institutions likewise have concerns about
their ability to attract students if they must operate either fully or partially
online. Without a residential experience, students and their parents may
question why they should pay tuition that is much higher than that at
public institutions.
Third, prior to the pandemic, to be competitive and to attract more
students, education policymakers and administrators at colleges and uni-
versities were providing incentives to many of their faculty to teach online.
While many faculty resisted these calls, their positions and reasons for
resistance have mellowed. Indeed, many have also come to see the benefits
of online teaching. In fact, the genie may be out of the bottle and faculty
in all sectors are beginning to embrace some form of online instruction
for pedagogical reasons as well as health, safety, and the convenience of
time and commuting. Tom Standage (2021), the editor of The Economist,
in an article entitled, “The World Ahead 2022,” identified ten trends to
watch in the future, including Number Four on his list: “There is broad
consensus that the future is ‘hybrid’ and that more people will spend more
days working from home” (Standage, 2021, p. 11). He attributed this to
the effect that the pandemic had on work styles wherein people moved
online to perform their job responsibilities.

How will educators adapt to advances in technology?


The critical questions to be answered are how educators will address and then adapt to the new technologies. Teachers may have to adjust to a tutor role rather than develop their own content or pedagogical practices. There will be many more off-the-shelf courses developed at another school, college, or university, or, more likely, by a private supplier. Full-time, tenure-bear-
ing faculty at colleges and universities will likely see their ranks reduced
and replaced by contingent and part-time faculty. This will accelerate a
trend that commenced in the early years of the twenty-first century.
For decades, technology has been evolving to provide more accurate and timely data to assist all manner of education activities. However, the nature of this assistance has begun to shift, with the technology not
just providing data but making critical decisions. For example, faculty
researchers, especially those engaged in large-scale projects that involve
multiple partners in the academy and in private industry, will work
increasingly with AI algorithms. The lead researchers may not be people
but the algorithms themselves. For example, two AI programs, DeepMind's AlphaFold and RoseTTAFold, are taking the ability to predict protein structures to
new heights. In 2020, these programs first succeeded in modeling the
3D shapes of individual proteins as accurately as decades-old experi-
mental techniques (Picciano, December 4, 2020). In 2021, researchers
used these AI programs to assemble a near-complete catalog of human
protein structures, something that had never been done before. Now,
researchers have upped the ante once again, unveiling a combination of
programs that can determine which proteins are likely to interact with
one another and what the resulting complexes of the cell look like. None
of these developments would have happened without the facilities pro-
vided by AI. Mohammed AlQuraishi, a biologist at the Harvard Medical
School who has dedicated his career to protein research, commented
that he felt “a melancholy” seeing the performance of DeepMind. “I was
surprised and deflated … The smartest and most ambitious researchers
wanting to work on protein structure will look to DeepMind for oppor-
tunities” (AlQuraishi, 2018). He urged the life-sciences community to
shift their attention toward the kind of AI work practiced by DeepMind.
Another researcher, Derek Lowe, said, "It is not that machines are going
to replace chemists. It’s that the chemists who use machines will replace
those that don’t” (Metz, February 5, 2019). Lowe too was predicting that
successful research was moving into a blended environment of man and
AI-enhanced technology.
Printed books and other library holdings are moving to all-elec-
tronic access, with AI speeding searches for materials and delivering
them within minutes on mobile devices. The textbook industry, includ-
ing the production and inventory of expensive print versions, is facing
difficult times. More faculty are moving toward electronic versions or
no textbooks at all, while students are bypassing purchasing expensive
volumes altogether. In a survey conducted in 2020, 65 percent of stu-
dents reported skipping buying a textbook because of cost; 63 percent
skipped purchasing one during the same period the previous year (US
PIRG, February 24, 2021). The Alamo Colleges (San Antonio, Texas)
have taken an extra step by providing free electronic textbooks to their
65,000 students. In previous years, the Alamo Colleges estimated that
only 50–60 percent of their students were able to acquire learning
materials for their classes (Perez, 2021).
Teaching assistants, academic advisers, and counselors will see their
roles limited to offering assistance only to students with personal situ-
ations where the human side of their work is most important. All aca-
demic advisement regarding course requirements, majors, and careers will
be supplanted by AI applications. There is the well-reported story of Jill
Watson—the Teaching Assistant (TA) in an Online Master’s program in
Computer Science at Georgia Tech—who was nominated by the students
to be TA of the year. Jill is not a real person but an AI algorithm developed
by Georgia Tech faculty and IBM staff to run on IBM’s cloud computer,
Watson. Jill is now five years old, and faculty at Georgia Tech’s Design
and Intelligence Lab have enhanced “her” to be used by nontechnology
experts in any course.
Administrative functions are being consolidated and centralized, uti-
lizing cloud services for admissions, registration, financial aid, bursar-
ing, and purchasing. Online programming has resulted in the enrollment
at Southern New Hampshire University growing to almost 135,000 stu-
dents. The fully online Western Governors University enrolls approxi-
mately 150,000 students. Large public university systems are beginning
to see many services centralized, reducing the need for presidential, vice
presidential, and other administrative operations at the local campus
level. The Connecticut State Colleges & Universities now has all 12 of its
community colleges serving almost 50,000 students, administered cen-
trally with skeletal staffs at the individual campuses. The City University
of New York, with almost 500,000 regular and continuing education stu-
dents, now runs on a common computer system called CUNYFirst. At its
newest colleges, presidents have been replaced by deans and the adminis-
trative staffs are lean.
How will education adjust to and accommodate the new world order
where technology will provide foundational services? Many educators will
feel a loss of purpose as their expertise is overshadowed by AI software.
Younger and newer educators will take their places, accept the new order,
and work within it to make it successful. But the period of transition will
be tense if not painful. Educators will have to accept technology as a pri-
mary partner in the education enterprise as have their counterparts in pri-
vate industry (McAfee & Brynjolfsson, 2017, p. 15). That the technology
changes, improves, and enhances is not the issue; how people change
in response to the technology is. This will be education’s challenge over
the next decade and beyond. It would be easy to dismiss negative specula-
tion as just “crying wolf” and to assume that our schools, colleges, and
universities will weather any possible storm. I hope this is the case, but it
is not likely. Much of higher education, with the exception of the heav-
ily endowed colleges, is already facing difficult financial times. Closures,
mergers, and consolidations are happening and/or being considered. It
will not be easy to move gracefully beyond the financial exigencies already
in evidence. The federal government is the one institution that might be
able to ease this situation, but as its debt has grown considerably in the
past five years, there does not appear to be the political will or where-
withal to address it. It is unlikely that the federal government will come to
the rescue of higher education, especially since there are considerable pres-
sures from other government areas such as healthcare and the military.
Lastly, there will be severe competition between the United States and
the People’s Republic of China for dominance. Right now, there are seven
companies making the greatest investments in AI development (Lee, 2018,
p. 91). Four (Google, Amazon, Microsoft, and Facebook) are based in this
country and three (Baidu, Alibaba, and Tencent) are based in China. The
Chinese government is pouring huge amounts of capital into the devel-
opment of its AI capabilities and will very possibly take the lead in this
area in the not-too-distant future. US education will be directed, if not
forced, to respond to this AI challenge. It will be beneficial to all educa-
tion sectors to consider how they might partner with AI centers that exist
in the corporate sector to expand data analytics and adaptive technolo-
gies. Technology companies developing AI applications are proliferating
and are welcoming collaborators for their products and services.

Implications for research


The advances in technology described above will have significant ramifications for the conduct of all research in our schools, colleges, and universities. The availability of big data files, advanced analytics software, and AI-infused adaptive learning applications will provide rich environ-
ments for those researchers inquiring into the education enterprise itself.
It would be appropriate here to provide some examples of advances and
developments in these areas that will aid and facilitate education explora-
tions, investigations, and analyses.

Big data
Raj Chetty and colleagues (2017) at the National Bureau of Economic
Research reported on a project entitled, “Mobility report cards: The role
of colleges in intergenerational mobility.” The purpose of their research
was to study intergenerational income mobility at each college in the
United States using data for all 30 million college students in attendance
from 1999 to 2013. The findings were as follows:

First, access to colleges varies greatly by parent income. For example,
children whose parents are in the top 1% of the income distribution
are 77 times more likely to attend an Ivy League college than those
whose parents are in the bottom income quintile. Second, children
from low- and high-income families have similar earnings outcomes
conditional on the college they attend, indicating that low-income stu-
dents are not mismatched at selective colleges. Third, rates of upward
mobility—the fraction of students who come from families in the bot-
tom income quintile and reach the top quintile—differ substantially
across colleges because low-income access varies significantly across
colleges with similar earnings outcomes. Rates of bottom-to-top quin-
tile mobility are highest at certain mid-tier public universities, such as
the City University of New York and California State colleges. Rates of
upper-tail (bottom quintile to top 1%) mobility are highest at elite col-
leges, such as Ivy League universities. Fourth, the fraction of students
from low-income families did not change substantially between 2000-
2011 at elite private colleges but fell sharply at colleges with the highest
rates of bottom-to-top-quintile mobility.
(Chetty et al., 2017)

There are several elements of this project that are particularly noteworthy.
First, the subject matter of intergenerational income mobility is a critical area
of education research and the findings added significantly to our understand-
ing of the phenomenon. Second, a sample of this size (30 million subjects) had never been studied before. The study consolidated data from several data-
bases maintained by the US Department of Education, the US Treasury, and
several other nongovernmental organizations. Third, in addition to publish-
ing the findings and results, the authors established a public database that is
searchable for further research by others and for examining data on any indi-
vidual college or university. Fourth, much of the data analyses used to report
the findings relied on basic descriptive statistics such as quintile comparisons.
In sum, the magnitude of this effort has provided a model for “big data”
research for years to come. Not everyone will be doing research with subjects
totaling in the tens of millions; however, this project provides a roadmap for
what is possible as the technology advances in both the quantity and quality
of the data that can be collected and analyzed.
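To make the quintile-based statistics above concrete, the short Python sketch below computes a bottom-to-top-quintile mobility rate the way the chapter describes it: the share of students whose parents are in the bottom income quintile multiplied by the share of those students who reach the top quintile as adults. The data, the toy income model, and the within-sample quintile cutoffs are all invented for illustration and bear no relation to the de-identified tax and enrollment records used in the actual study.

    # Toy illustration of the quintile-based descriptive statistics discussed
    # above. The (parent income, adult child income) pairs are randomly
    # generated for a single hypothetical college; nothing here reproduces
    # the real Mobility Report Cards data.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    parent_income = rng.lognormal(mean=10.5, sigma=0.8, size=n)
    # Child income loosely related to parent income plus noise (an invented model).
    child_income = 0.3 * parent_income + rng.lognormal(mean=10.3, sigma=0.7, size=n)

    def quintile(values):
        """Assign each value to an income quintile, 1 (bottom) through 5 (top)."""
        cutoffs = np.quantile(values, [0.2, 0.4, 0.6, 0.8])
        return np.searchsorted(cutoffs, values) + 1

    parent_q = quintile(parent_income)
    child_q = quintile(child_income)

    # Access: share of students whose parents are in the bottom quintile.
    access = np.mean(parent_q == 1)
    # Success: among those students, the share who reach the top quintile.
    success = np.mean(child_q[parent_q == 1] == 5)
    # Mobility rate: the joint fraction highlighted in the findings above.
    mobility_rate = access * success

    print(f"Bottom-quintile access:      {access:.1%}")
    print(f"Bottom-to-top success rate:  {success:.1%}")
    print(f"Bottom-to-top mobility rate: {mobility_rate:.1%}")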

Artificial intelligence, data analytics, and adaptive learning


Learning analytics software is evolving and gaining traction as an impor-
tant facility for teaching and learning. This software is dependent upon
AI techniques that use statistical analysis and algorithms to understand instructional processes and, in turn, use those data to build a series of decision
processes. Significant increases in the speed and storage capabilities of
computing devices, possible through nanotechnology and quantum com-
puting, will increase the accuracy and nature of AI-driven learning ana-
lytics software as well as expand the possibilities for research. In addition,
as AI evolves and develops, the underlying algorithms driving it will
grow in sophistication.
While AI is sometimes referred to as a single generic entity, there are
different forms of AI with varying capabilities. Lee (2018) classified AI
into five distinct levels with increasing complexity as follows:

1. Internet AI—makes recommendations based on Internet activity (e.g., Amazon);
2. Business AI—uses data that companies and other organizations routinely capture for commercial and procedural activities to make predictions (e.g., bank loan approval, insurance fraud, medical prognosis);
3. Perception AI—uses data from the physical world to make predictions using sensors and smart devices (e.g., weather, traffic flow, facial recognition);
4. Autonomous AI—uses all the capabilities of the previous stages plus directs and shapes the world around it (e.g., self-driving cars, assembly-line production control);
5. Artificial General Intelligence—AI that functions similarly to the human brain and can perform any intellectual task.

Most AI applications that exist today are in the lower levels of Internet and
Business AI; however, research and development with the other three is rapidly increasing. Most AI specialists agree that we may be decades
away from developing level 5 or artificial general intelligence. Regardless,
AI presently allows researchers to take advantage of learning analytics to support adaptive and personalized learning applications in real
time. For these applications to be successful, data are collected for each
instructional transaction that occurs in an online learning environment.
Every question asked, every student response, every answer to every ques-
tion on a test or other assessment is recorded, analyzed, and stored for
future reference. A complete evaluation of students as individuals as well
as entire classes is becoming common. Alerts and recommendations can
be made as the instructional process proceeds within a lesson, from les-
son to lesson, and throughout a course. Students receive prompts to assist
in their learning and faculty can receive prompts to assist in their teach-
ing. The more data available, the more accurate will be the prompts. By
significantly increasing the speed and amount of data analyzed through
nanotechnology and quantum computing, the accuracy and speed of
adaptive or personalized programs expand and improve. In the future,
faculty will use an “electronic teaching assistant” to determine how
instruction is proceeding for individual students, a course, or an entire
academic program. They will be able to receive suggestions about improv-
ing instructional activities and customizing curricula. Most AI applica-
tions in use today, and for the near future, are narrow in their application
and focus on a specific activity. The Oxford-trained historian and best-selling author of
Sapiens, Yuval Noah Harari, comments that there is no reason to assume
that AI will ever gain consciousness because “intelligence and conscious-
ness are two different things.” Intelligence is the ability to solve problems
while consciousness is the ability to feel things such as pain, joy, love,
and anger. Humans “solve problems by intelligence and feeling things”
while computer technology is devoid of feelings (Harari, 2018, p.
69). Regardless, in the years to come, broader AI applications will evolve
and have extensive capabilities. The real-time research opportunities that
these environments will present will enhance what is possible within
today’s limited adaptive learning techniques and open up new avenues
of inquiry into instructional processes. To realize their potential, these
AI applications will also require large databases as provided in the cloud
(Davenport, 2018). Earlier in this chapter, AI programs such as DeepMind's AlphaFold and RoseTTAFold were mentioned as taking the lead in the prediction of protein structures; the same is likely to occur in the study of instructional processes, with the lead researchers being not people but AI-infused adaptive learning algorithms themselves.
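As a rough illustration of the decision loop described above (every response recorded, a mastery estimate updated, and prompts issued to the student or the instructor), the sketch below uses a simplified Bayesian knowledge-tracing update written in Python. It is a minimal, hypothetical example: the parameter values, the mastery threshold, and the alert rule are arbitrary assumptions and are not drawn from Realizeit or any other platform discussed in this book.

    # Illustrative only: a toy mastery-tracking loop in the spirit of the
    # adaptive systems described above. The update rule is a simplified
    # Bayesian knowledge-tracing step; parameters and thresholds are
    # invented for this sketch, not taken from any real product.

    def update_mastery(p_known, correct, p_learn=0.15, p_slip=0.10, p_guess=0.20):
        """Update the probability that a student knows a skill after one answer."""
        if correct:
            evidence = p_known * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_known) * p_guess)
        else:
            evidence = p_known * p_slip
            posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
        # Allow for some learning between practice opportunities.
        return posterior + (1 - posterior) * p_learn

    def run_lesson(responses, threshold=0.85, p_known=0.30):
        """Replay a sequence of right/wrong answers and emit simple prompts."""
        for i, correct in enumerate(responses, start=1):
            p_known = update_mastery(p_known, correct)
            if p_known >= threshold:
                print(f"Item {i}: mastery {p_known:.2f} -> advance to the next objective")
            elif i >= 3 and p_known < 0.40:
                print(f"Item {i}: mastery {p_known:.2f} -> alert the instructor")
            else:
                print(f"Item {i}: mastery {p_known:.2f} -> keep practicing")

    run_lesson([True, False, True, True, True])

A production system would store an estimate like this for every transaction and aggregate it across lessons, courses, and programs, which is exactly where the gains in speed and storage described earlier would matter.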

Man–machine interfacing: neuroscience in the classroom


The model presented at the beginning of this chapter portends a time when
technology will connect external experiences and circumstances to our
bodies including brain functions. In the previous section on AI and adap-
tive learning, an electronic teaching assistant that will monitor instruc-
tional activities was discussed. However, prototypes are already being
used in education research that function near dimensions at which “the
chemistry in the brain operates at less than ten nanometers” Blanding,
2021, p. 34).
Davidesco et al. (2021) in an article entitled, “Neuroscience research
in the classroom: Portable brain technologies in education research,”
described experiments presently being conducted that use wearable tech-
nology such as portable electroencephalographs (EEG) to allow research-
ers to conduct experiments in everyday classrooms. Most cognitive
neuroscience research has typically been conducted in controlled labora-
tory conditions; however, it is Davidesco and colleagues’ position that
conducting this type of research in natural settings and ecologically valid
environments better aligns the research with the questions and issues being stud-
ied. Furthermore, portable EEG technology can be used at a fraction of
the cost incurred in traditional laboratory research conditions. They go on
to describe recent advances in portable and wireless EEG technology that
allow for the measurement of the brain activity of students and teachers in
regular classrooms. They describe early research that has been conducted
on individual students as well as groups of students that examines “brain-
to-brain synchrony” that can involve students as well as the teacher. For
example, the article describes one study that demonstrated that brain syn-
chrony predicted student performance on a test that was given one week
after a related lecture (Davidesco et al., 2021).
In another study, Bevilacqua et al. (2019) found mixed results between brain synchrony and student performance on a knowledge quiz. However, Bevilacqua and colleagues concluded that higher social closeness to the
teacher did exhibit higher student-to-teacher brain synchrony. This new
and complex area of research demonstrates that man–machine interfacing
as exemplified by portable EEG technology is already upon us and will
likely grow as an area of research in the future as digital and quantum
technology advances.
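For readers unfamiliar with the construct, brain-to-brain synchrony is generally quantified by comparing two participants' EEG signals over time. The Python sketch below uses a windowed Pearson correlation on simulated signals purely as an illustrative stand-in; the studies cited above rely on more sophisticated synchrony measures and on real classroom recordings, so nothing here should be read as their method.

    # Purely illustrative: one naive way to quantify "synchrony" between two
    # EEG-like signals, using the mean Pearson correlation over short windows.
    # The signals are simulated; real studies use richer measures.

    import numpy as np

    rng = np.random.default_rng(42)
    fs = 128                       # assumed sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)   # one minute of simulated data

    shared = np.sin(2 * np.pi * 10 * t)   # a shared 10 Hz component
    student_a = shared + 0.8 * rng.normal(size=t.size)
    student_b = shared + 0.8 * rng.normal(size=t.size)

    def windowed_synchrony(x, y, fs, window_sec=5):
        """Mean Pearson correlation over non-overlapping windows."""
        w = int(window_sec * fs)
        correlations = []
        for i in range(len(x) // w):
            xs = x[i * w:(i + 1) * w]
            ys = y[i * w:(i + 1) * w]
            correlations.append(np.corrcoef(xs, ys)[0, 1])
        return float(np.mean(correlations))

    print(f"Estimated synchrony (mean windowed r): "
          f"{windowed_synchrony(student_a, student_b, fs):.2f}")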
Davidesco et al. (2021) identified three major research questions for future inquiry:

1. How engaged are seemingly disengaged students?
2. Does tailoring instruction to students’ working memory load improve
learning?
3. What are the impacts of neurofeedback on students’ attention, meta-
cognition, and self-regulation?

The authors concluded their article by raising ethical concerns and chal-
lenges regarding student privacy and the potential misuse of brain data by
teachers, administrators, and parents. As this type of research advances,
these concerns will need to be addressed carefully and fully.

Comments about research methods

Statistical significance
In considering again the Raj Chetty et al. (2017) study on mobility
report cards, I was pleased to see that Chetty and colleagues used rela-
tively simple descriptive statistics to make the most important points of
their findings. Quintiles were the basic procedures used for sharing their
results. While they used sophisticated statistical analyses to establish
the accuracy of their databases and to do cross-file investigations, they
shared their data with their readers using clear and simple descriptive
statistics. I had the pleasure of meeting Chetty in 2018 and I asked him
about this. He made the point that it was done purposefully in order to
widen the audience who would understand his work. He further com-
mented that such large sample sizes do not require complex procedures
dependent upon statistical significance. In a chapter of a book that I
wrote with colleague Charles Dziuban in 2022, Blended Learning:
Research Perspectives Volume 3, we made a complementary point and
stated: “This whole conversation [statistical significance] came to a head
in the journal Nature where Valentin Amrhein and over 800 co-author
scientists throughout the world were signatories to an article entitled
“Retire Statistical Significance” (Amrhein et al., 2019). They analyzed
the results of 791 articles in five journals, finding that, mistakenly, about
half those studies assumed that non-significance meant no effect. Their
point was that just because investigators fail to reject the null hypoth-
eses, that does not mean that important impacts are absent. They state:
“The trouble is human and cognitive more than it is statistical: bucket-
ing results into 'statistically significant and statistically nonsignificant'
makes people think that the items assigned that way are categorically
different.” (Picciano et al., 2022, p. 306)
In March 2019, The American Statistician devoted its entire Volume
73 to the issue of statistical significance with the lead article entitled,
“Moving to a world beyond p < .05.”
An important element of our position on statistical significance in our ear-
lier book was that it is used as a binary decision (yes/no) when so much of
what we do in education research is far more nuanced, subtle, and subject
to degrees of interpretation. This remains our position and we encourage
inquiry that goes beyond the dichotomy of statistical significance.
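The problem with treating p < .05 as a categorical boundary is easy to see with two invented summary results: the same estimated effect, measured with different precision, lands on opposite sides of the threshold even though the interval estimates tell essentially the same story. The numbers in the Python sketch below are synthetic and serve only to illustrate the argument.

    # Synthetic numbers illustrating why a p < .05 cutoff can mislead: two
    # hypothetical studies report the *same* effect estimate but differ in
    # precision, so one is labeled "significant" and the other is not.
    # These values are invented and come from no real study.

    import math

    def two_sided_p(z):
        """Two-sided p-value for a standard normal test statistic."""
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    studies = [
        ("Hypothetical study A", 0.30, 0.12),   # (label, effect estimate, standard error)
        ("Hypothetical study B", 0.30, 0.17),
    ]

    for name, effect, se in studies:
        z = effect / se
        p = two_sided_p(z)
        lo, hi = effect - 1.96 * se, effect + 1.96 * se
        label = "statistically significant" if p < 0.05 else "not statistically significant"
        print(f"{name}: effect = {effect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], "
              f"p = {p:.3f} -> {label}")

Both intervals cover largely the same range of plausible effects; only the binary label differs, which is the kind of "categorically different" reading the authors quoted above warn against.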

Open resources
Another important element of the Chetty et al. research was their willing-
ness to share results and data through open-source venues. Their reports
and datafiles are open to the public for use and for scrutiny. This is a very
desirable trend that has been evolving in social science research and one that
should be encouraged. In 1996, when a group of researchers were beginning
to sow the seeds of what is now the Online Learning Consortium (formerly
known as the Alfred P. Sloan Consortium), one of the first decisions they
made was to establish a free, peer-reviewed open resource where researchers
would share their work with others. The Journal of Asynchronous Learning
Networks (JALN), now the Online Learning Journal (OLJ), is arguably the
most important repository of peer-reviewed research on online learning
and has been instrumental in moving the development of online and digi-
tal learning forward. Articles clarifying both benefits and drawbacks have
graced its pages and have helped in our understanding of these developments.
We strongly encourage researchers in all disciplines to consider sharing their
work in free, open resources.

Mixed-methods research
In 2002, I published an article entitled, “Beyond student perceptions:
Issues of interaction, presence, and performance in an online course” in
the Journal of Asynchronous Learning Networks (JALN). The purpose
of the study was to examine performance in an online course in rela-
tionship to student interaction and sense of presence in class activities.
Data on multiple independent (measures of interaction and presence) and
dependent (measures of performance) variables were collected and sub-
jected to analysis. An attempt was made to go beyond typical institutional
performance measures such as grades and withdrawal rates and to exam-
ine measures specifically related to course objectives. The students (N =
23) in this study were all teachers working in public schools who were
enrolled in a graduate course, “Issues in Contemporary Education,” in a
school leadership program.
In addition to student perceptions of their learning, as collected on the
student satisfaction survey, two student performance measures were col-
lected: scores on an examination and scores on a written assignment. The
performance measures related to the course’s two main objectives:

1. to develop and add to the student’s knowledge base regarding contem-
porary issues in education and
2. to provide future administrators with an appreciation of differences in
points of view and an ability to approach issues that can be divisive in
a school or community

The examination was designed to assess knowledge of the course subject
matter and was based on the 13 issues explored during the semester. An
objective, multiple choice question and answer format was used. The writ-
ten assignment was a case study that required the students to put them-
selves in the position of a newly appointed principal who is to consider
implementing a new, controversial academic program. For purposes of
this study, this assignment was graded by an independent scorer who used
content analysis techniques to identify phrases and concepts to determine
student abilities to integrate multiple perspectives and differing points of
view in deciding whether and how to implement the academic program.
These performance measures corresponded specifically to the objectives of
the course as established by the instructor. Other performance measures
such as grades were not used because of the difficulty in adequately dis-
tinguishing between student work of differing quality where letter grades
(A, B, C) are used. In addition, student participation or interaction was
included as part of the overall grading criteria, bringing into question the
use of these grades in relationship to interaction. Withdrawal or attrition
data also were not a factor in this study since all of the students completed
the course.
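As a purely hypothetical illustration of what a phrase-and-concept content analysis can look like in computational form, the Python sketch below counts the distinct stakeholder perspectives mentioned in a written response. The marker lists and the sample response are invented; the study itself relied on an independent human scorer rather than software, so this is a stand-in, not a description of that protocol.

    # Illustrative only: a crude keyword-based content analysis in the spirit
    # of scoring how many stakeholder perspectives a written response engages.
    # The phrase lists and the sample response are invented for this sketch.

    PERSPECTIVE_MARKERS = {
        "parents": ["parents", "families", "guardians"],
        "teachers": ["teachers", "faculty", "staff"],
        "students": ["students", "learners"],
        "community": ["community", "school board", "taxpayers"],
    }

    def perspectives_engaged(text):
        """Return the set of stakeholder perspectives mentioned in a response."""
        lowered = text.lower()
        return {
            group for group, terms in PERSPECTIVE_MARKERS.items()
            if any(term in lowered for term in terms)
        }

    sample_response = (
        "Before implementing the program, I would meet with teachers and "
        "parents, and I would survey students about how the change affects them."
    )
    found = perspectives_engaged(sample_response)
    print(f"Perspectives engaged ({len(found)}): {sorted(found)}")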
In the years that followed, I gave a number of presentations on this
article in professional conferences all over the world. I was amazed at how
often I received questions about the mixed-methods research approach
and especially about the content analysis of the written assignment. In
2006 alone, this article had more than 250,000 downloads from JALN,
the most by far of any article. Even today (20 years later), it continues to
be downloaded and according to a Publish or Perish analysis, is the most-
cited article ever in JALN or its successor OLJ. I have always attributed
much of the acceptance of this article to my use of the mixed-methods
approach. Later on, when I conducted national studies on the extent of
online learning in K–12 education with my colleague, Jeff Seaman, we
provided a way for participants to express in their own words their feelings
and impressions of online technology in order to supplement responses to
Likert-scale questions. Readers may be interested in an article by Weis
et al. (2019) in which they also advocated for the use of mixed-methods
research:

Although there has been scholarly discussion of what mixed-methods
research is or should be, limited attention has been paid to the ways
in which such methods can be thoughtfully and rigorously employed
in the service of broad-based projects that attack significant education
issues, … Unfortunately, the value of multiple investigative methods in
education research has, at times, been overshadowed by a Manichean
debate pitting quantitative and qualitative research approaches against
each other.
(Weis et al., 2019)

It is my firm belief that now, and in the future, mixed-methods research
deserves a critical and rightful place in education inquiry.

Interdisciplinarity
An earlier section of this chapter described a form of neuroscience research
in the classroom using portable EEG technology. It is likely to be obvious
to the readers of this book that the expertise and experience needed to
conduct this type of inquiry is not usually found in schools of education.
To the contrary, this research requires teams of individuals with back-
grounds in biology, chemistry, computer science, and psychology as well
as education. Neuroscience is a field that does not exist in the domain of
any single academic area and exemplifies the interdisciplinary nature of
the research that is becoming prevalent not just in education but in many
fields. Education researchers are and will continue to collaborate with
biologists, biopsychologists, computer technology specialists, and philos-
ophers to collect, study, and analyze data.
Drew Faust, the former president of Harvard University, in a message
to the World Economic Forum, described three major forces that will
shape the future of higher education:

1. the influence of technology
2. the changing shape of knowledge
3. the attempt to define the value of education

She went on to extol the facilities that digital technology and communica-
tions will provide for teaching, learning, and research. She foresaw great
benefits in technology’s ability to reach masses of students around the
globe and to easily interrogate large databases for scaling up and assess-
ment purposes. On the other hand, she made it clear that “residential
education cannot be replicated online” and stressed the importance of
physical interaction and shared experiences.
On the nature of knowledge, she noted that the common organization
of universities by academic departments may disappear because “the most
significant and consequential challenges humanity faces” require investi-
gations and solutions that are flexible and not necessarily discipline spe-
cific. Doctors, chemists, social scientists, and engineers will work together
to solve humankind’s problems.
On defining value, she pointed out that quantitative metrics are now
evolving to assess and demonstrate the importance of meaningful employ-
ment. However, she believes that higher education provides something
very valuable as well: it gives people “a perspective on the meaning and
purpose of their lives,” a type of student outcome that is not able to be
quantified. She concluded that:

So much of what humanity has achieved has been sparked and sus-
tained by the research and teaching that take place every day at colleges
and universities, sites of curiosity and creativity that nurture some of
the finest aspirations of individuals and, in turn, improve their lives—
and their livelihoods. As the landscape continues to change, we must be
careful to protect the ideals at the heart of higher education, ideals that
serve us all well as we work together to improve the world.
(Faust, 2015)

Will technology drive the shape of knowledge and the definition of
value, or will it be the other way around? Techno-centrists see tech-
nology as the driver while others look at higher education holistically
and see technology as a tool that serves the needs of its other elements.
Regardless of our opinion on whether technology will be the driver or
merely a tool, it is clear that much of what we study will have to depend
upon expertise from various disciplines, as Faust suggests. Education
researchers will have to develop interdisciplinary relationships in order
for their work to be relevant. I would like to think that they are up to
that challenge.

Conclusion
As we move forward in our education research agendas, most of us will
examine issues at a granular level. Student outcomes, student perceptions,
and faculty attitudes will continue to be popular avenues of inquiry. As
new technology evolves, we need to participate in determining whether
it is beneficial for all or some or none. During the past fifty plus years of
using technology in education, we have learned through our research that
new approaches are sometimes not evenly deployed, not advantageous, or
even detrimental. We should never lose sight of the role of education in
addressing complex issues related to equity, diversity, and democratic val-
ues in the larger society. Education technology has been, is, and will continue
to be a critical area for inquiry in this regard.

References
AlQuraishi, M. (2018, December 9). AlphaFold @ CASP13: "What just happened?" Blog posting. Retrieved January 3, 2022, from https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/
Amrhein, V., Greenland, S., & McShane, B. (2019). Comment: Retire statistical significance. Nature, 567(7748), 305–307.
Aoun, J. E. (2017). Robot proof: Higher education in the age of artificial intelligence. The MIT Press.
Bevilacqua, D., Davidesco, I., Wan, L., Chaloner, K., Rowland, J., Ding, F., Poeppel, D., & Dikker, S. (2019). Brain-to-brain synchrony and learning outcomes vary by student–teacher dynamics: Evidence from a real-world classroom electroencephalography study. Journal of Cognitive Neuroscience, 31(3), 401–411. https://doi.org/10.1162/jocn_a_01274
Blanding, M. (2021). Thinking small. Fordham Magazine, (Fall/Winter), 34–37.
Chetty, R., Friedman, J. N., Saez, E., Turner, N., & Yagan, D. (2017). Mobility report cards: The role of colleges in intergenerational mobility. Stanford University White Paper. Retrieved January 16, 2022, from https://opportunityinsights.org/wp-content/uploads/2018/03/coll_mrc_paper.pdf
Davenport, T. H. (2018, November). From analytics to artificial intelligence. Journal of Business Analytics, 1(2). Retrieved June 1, 2022, from https://www.tandfonline.com/doi/full/10.1080/2573234X.2018.1543535
Davidesco, I., Matuk, C., Bevilacqua, D., Poeppel, D., & Dikker, S. (2021, December). Neuroscience research in the classroom: Portable brain technologies in education research. Educational Researcher, 50(9), 649–656.
Faust, D. (2015). Three forces shaping the university of the future. World Economic Forum. Retrieved February 1, 2022, from https://agenda.weforum.org/2015/01/three-forces-shaping-the-university-of-the-future/
Harari, Y. N. (2018). 21 lessons for the 21st century. Spiegel & Grau.
Lee, K. F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton Mifflin Harcourt.
McAfee, A., & Brynjolfsson, E. (2017). Machine, platform, crowd: Harnessing our digital future. W. W. Norton & Company.
Metz, C. (2019, February 5). Making new drugs with a dose of artificial intelligence. The New York Times. Retrieved January 3, 2022, from https://www.nytimes.com/2019/02/05/technology/artificial-intelligence-drug-research-deepmind.html
National Student Clearinghouse Research Center. (2022, May 26). Spring 2022 current term enrollment estimates. Retrieved August 2, 2022, from https://nscresearchcenter.org/current-term-enrollment-estimates/
Perez, D. (2021, December 23). San Antonio colleges to loan digital textbooks for free. Government Technology. Retrieved January 7, 2022, from https://www.govtech.com/education/higher-ed/san-antonio-colleges-to-loan-digital-textbooks-for-free
Picciano, A. G. (2019, September). Artificial intelligence and the academy's loss of purpose! Online Learning, 23(3). https://olj.onlinelearningconsortium.org/index.php/olj/article/view/2023
Picciano, A. G. (2020, December 4). DeepMind's artificial intelligence algorithm solves protein-folding problem in recent CASP competition! Blog posting, Tony's Thoughts. Retrieved January 3, 2022, from https://apicciano.commons.gc.cuny.edu/2020/12/04/deepminds-artificial-intelligence-algorithm-solves-protein-folding-problem-in-recent-casp-competition/
Picciano, A. G., Dziuban, C., Graham, C., & Moskal, P. (2022). Blended learning research perspectives: Volume 3. Routledge/Taylor & Francis.
Science Insider. (2020, September 15). IBM promises 1000-qubit quantum computer—a milestone—by 2023. Science. Retrieved January 5, 2022, from https://www.science.org/content/article/ibm-promises-1000-qubit-quantum-computer-milestone-2023
Standage, T. (2021, November 1). The world ahead 2022. The Economist. Special edition. Retrieved January 15, 2022, from https://www.economist.com/the-world-ahead-2022
Ubell, R. (2020, May 13). How online learning kept higher ed open during the coronavirus crisis. IEEE Spectrum. Retrieved January 25, 2022, from https://spectrum.ieee.org/tech-talk/at-work/education/how-online-learning-kept-higher-ed-open-during-the-coronavirus-crisis
U.S. PIRG. (2021, February 24). Fixing the broken textbook market (3rd ed.). Retrieved January 5, 2022, from https://uspirg.org/reports/usp/fixing-broken-textbook-market-third-edition
Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019, October). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121(10). Retrieved January 14, 2022, from https://www.tcrecord.org/content.asp?contentid=22741
INDEX

70:20:10 model 271–272 teacher 35–36; three-body problem


8–10; University of Louisiana at
academic advising 281; GPS Advising Lafayette (UL Lafayette) 212–213;
84–87; predictive analytics 80, see also modality switching;
84–87 Universal Design for Learning
academic mindset, and deep learning (UDL)
51–54 adaptive learning system (ALS):
accessibility: class 41; Universal applying the data 163–165;
Design for Learning (UDL) 242 equitable outcomes 182–183;
accuracy 95; machine learning insights 149; instructors’ general use
algorithm 99, 100, 104–105; versus of dashboards 147–148; instructors’
precision 60, 61; predictive 99; pedagogical use of data 148–149;
question item 230–231 instructors’ self-efficacy in using
achievement gap 110, 111, 114; lack of 149–156, 161–162; participants in
preparation 112; see also success study 152; Realizeit 1–8, 152, 173,
active learning 283–285 194–197, 204, 205, 212–215; see
adaptive learning 3, 6, 15, 78, 94, also dashboards; Realizeit
111, 124, 126, 174, 211, 303; admissions: inequitable practices 171;
big data 8–11; English courses race-based 170
177, 179; environment 8; for advising see academic advising
equitable outcomes 172–173; affordances, mixed reality simulations
Georgia State University 80; (MRS) 24
intentional integration 149–150; in agile 268
introductory mathematics 81–83; algorithm/s 7, 10, 30, 32; accuracy
mixed reality simulations (MRS) 104–105; Artificial Neural Network
24–32, 34–35; modality switches (ANN) 13–14, 95–96, 99, 101, 103,
244, 252, 254–255; models 125; 105; Classification Tree 95, 96, 99,
Pell-eligible students 176–177; 104, 105; deep learning 13–14; Jill
platforms 7; question item analysis Watson 308; machine learning 95;
230–231; self-regulated behaviors Support Vector Machine (SVM)
104; student performance 83; 95–96; see also machine learning


324 Index

AlQuraishi, M. 308 69; major findings of the study


American Statistician 315 71–73; methodology 62; model
American Women’s College, The building and results 65, 68–73;
(TAWC) 173 model precision versus accuracy
analytics 3–5, 15, 78; big data 8–11; 60–61; model prediction and results
learning 92–95, 125; models 10; 73, 75–76; modeling coefficients
optimization 129–131; predictive 70, 71; node identification 68–70;
243; prescriptive 243; proxies online activity indicators 94–95;
130–131; weigh-feed-weigh precision rate 76; receiver operator
metaphor 128–129; weighted mean characteristic (ROC) curves 70, 71;
28, 30, 35; see also data analysis; see also minority students
predictive analytics attention 192
annual assessment 182–183 automation: data analysis 31–32;
ANOVA 156 energy resource exploration 269, 270
Aoun, J., Robot Proof, Higher Autonomous AI 312
Education in the Age of Artificial avatars 32; student 24
Intelligence 305
aptitude 6, 113–114; -treatment Bandura, A. 149–151
interaction 121 Bass, S. 191
Argyris, C. 23 Bay Path University 173, 184
Arizona State University 282 Bayesian estimation model
Artificial General Intelligence 312 212–213, 256
artificial intelligence 10, 14, 78, 111, Bayesian network simulation
268, 303, 305; chatbot 80–81, 248–250, 253, 256
87–90; DeepMind 307–308; Harari beliefs 23
on 312–313; Jill Watson 308; bias 11, 14, 130; identifying 15;
RoseTTAFold 307–308; types of question item 233
311–312 big data 11, 15, 60, 84, 130; in
Artificial Neural Network (ANN) education 8–10; implications for
13–14, 95–96, 99, 101, 103, 105 research 310–311; silos 12
assessment 132, 134; adaptive Big Short, The 13
courses 222–223; annual 182–183; “black box” 105
formative 124; model 125; quality Black students 53–54, 174–176,
231–232; self- 189; skill 272, 273; 184–185; achievement gap 111–112;
summative 225; see also question adaptive learning outcomes 177,
item analysis 179; GPS Advising 85–86; mindset
Association of American Universities 55–57; spaced practice 197
(AAU) 282, 283, 286; building Blackboard 189, 194, 204
an infrastructure that supports blended learning 92, 94–96, 172,
change 289; case studies 287–288; 284–285, 306
department and institutional change Bloom, B. 110, 113, 126, 190; “The
focused on students 286, 287 2 Sigma Problem: The Search for
Association of Public and Land Grant Methods of Group Instruction as
Universities 12, 83, 152, 290–291 Effective as One-to-One Tutoring”
Atlas​.​ti 154 127; mastery learning 123–124;
at-risk students, identifying: datasets theory of learning 120–123
62–65; demographic variables Boghosian, B. M. 4
62–65; dependent variable 65, 68; brain synchrony 313–314
future possibilities 76–77; GPA branching 125
as predictor 71–72, 76–77; GPS Brooks, D. C. 289
Advising 86–87; high-precision Brown University 88–89
model 75–76; logistic regression Bruner, J. 141
Index 325

bubble chart 43 Columbia University 14


Budapest Open Access Initiative communication 139; metalogue
(BOAI) 135–136 140–142; scholarly 137–138
Business AI 312 community: of learning 132–133;
of practice (CoP) 149, 151, 156,
calculated questions 194 164–165
Carpenter, T. 191, 200, 201, 203, 207; Community of Inquiry (COI)
spaced practice 189–190, 193–195, framework 284–285
205–206 compass 134
Carroll, J. B. 6, 123 complex systems 8, 14; emergence
Castleman, B. 87 5–6, 277
catapult courses 41–42; English and conditional probability 231, 236
math 43–44 confusion matrix 73
categorization 14 constraints 114
causality 126–127 continuous improvement 292
Chardin, T. 136 coownership, skills development
chatbot 78; Georgia State University 272–273
80–81, 87–90; knowledge base 89 corequisite model 44–51, 57
cheating, Realizeit platform Cornell University 288
222–223, 226 corporate environment: assessment 273;
chemistry: Realizeit 204, 205; spaced business alignment 272; coownership
practice 196, 197, 199–202, 206 272–273; data acquisition 269–271;
Chetty, R. 310, 314 sales 268; strategy and planning
Chi, M. 121 271–272; technology platform 272;
China 309–310 see also PGS
Chorost, M., World Wide Mind: The cost, postsecondary education 4
Coming Integration of Humanity, “Course Signal” intervention 93
Machines, and the Internet 136 course/s and coursework 81–83;
City University of New York 309 academic mindset 51–54;
Classification Tree 95, 96, 99, 100, accessibility 41; adaptive 221–226;
104, 105 catapult 41–44; centralized
classroom 10; interruptions 31; management 173; consequences
optimal learning environment of failure 60; design 112–113,
121–123; relationship between 122; English for Academic
size and resources 61; traditional Purposes (EAP) 92–93, 95–97;
learning environment 115, 116, 119 fixed mindset 54–57; gateway 44,
closing the achievement gap 110, 111, 57; growth mindset 54–57; hub
114; barriers 126–127; Bloom Effect 40–41; language 183; management
121; learning velocity 116–119; model 151, 182; Math Interactive
optimized instruction 119, 120 Learning Environment (MILE)
cloud computing 12–13, 267; data labs 82; prerequisite knowledge
acquisition 269; data processing 117–119; spaced practice 196, 197,
271; learning systems 272; 199–202; usefulness to a future
super- 305 career 52–54; see also English;
coaching: teacher 26–29, 36; see also Mathematics; Realizeit; Universal
feedback; professional development Design for Learning (UDL)
(PD) COVID-19 pandemic 9, 11, 12, 83,
cognitive presence 285 244–245, 256, 281, 293, 303,
collaboration 267, 282; sharing 305–307
datasets 268–269 cramming 191, 192, 202–203
Collateralized Debt Obligations 13 customer relations management, data
Colorado Technical University 7–8 analytics 4–5
326 Index

dashboards 5–6, 93–94, 172; ALS disadvantaged students 245; adaptive


147; applying the data 163–165; learning outcomes 182–183;
drill-down 148; feedback 148; developmental education 46, 47;
insights 148, 149; instructors’ stereotype threat 184
general use of 147–148, 156–158; discussion forums 94
instructors’ pedagogical use of diversity, equity, and inclusion
148–149; instructors’ self-efficacy in (DEI) 173
using 149–156, 161–162; Realizeit domain model 125
223, 224; student learning 31 drill-down 148
data 139 drop-out rate 280–281
data analysis: ALS study 153; Dziuban, C., Blended Learning:
automated 31–32; EAP study Research Perspectives Volume 3
98–99; qualitative 154; quantitative 314–315
153–154; Realizeit study 216–217;
UDL study 247–248 early career teachers, professional
data collection 182; Realizeit study development (PD) 32, 34, 35
216 Ebbinghaus forgetting curve 191, 192
data mining 62 Economides, A. A. 94
data processing, PGS 271 EDUCAUSE Horizon Report: Data
data security 10 and Analytics Edition 5
data silos 12 Eddy, S. 284
data-driven culture 15 education: big data 8–11; cloud
datasets: entitlements 268; computing 12–13; and COVID-19
identifying at-risk students 62–65; 306–307; data silos 12;
predicting grades using machine developmental 44–51; online 11;
learning algorithms 96–99; postsecondary, cost of 4; remedial
sharing 268–269 44; teacher 23; technology
Davidesco, I. 313, 314 integration 15; three-body problem
Davidson, C. N 192 8–10, 16
Dawson, S. 94 Education Advisory Board (EAB) 84
decision making, evidence-based efficacy 149–150
approaches 12 Ellul, J. 142
decision trees 7 email 93, 94, 104
deep learning 13–14, 42; and emergence 5–6, 277
academic mindset 51–54 energy resource exploration
DeepMind 307–308 266; automation 269, 270;
Delphi method 290 entitlements 268
developmental education, corequisite engagement 226
model 44–51 Engelbart, D. 132–133, 142–143
DFW rate 82, 83; risk factors 84 English: adaptive learning 177, 179,
differentiated instruction 26, 181; as catapult course 43–44;
124–126, 149 comparing developmental education
Difficulty Index 233–234, 237, 239, approaches 44–51; growth mindset
240 57; usefulness to a future career
Digital Analytics Association 5 52–54
digital communication 137–138 English for Academic Purposes
digital learning 286, 294; (EAP) 92–93, 95, 96; multimodal
infrastructure 289–294; see also learning package (MLP) 97–98;
institutional transformation participants 98
disabilities, students with: accessibility enrollment 39
41; course design and 245–246; see entitlements 268
also accessibility environment, adaptive learning 8
Index 327

Equitable Digital Learning framework 286, 289–294
equity 14, 32, 181; and adaptive learning 172–173; in digital learning 207; gaps 46, 79, 80, 87, 182–183, 280, 281; metrics 171; social justice 244–245; “sweat” 112–113
Essa, A. 6
ethics 14, 15
Every Learner Everywhere 207, 290
evidence-based approaches 12, 286, 289; catapult courses 42, 43
exams: “open-note” 194; see also assessment
F1 ratio 95, 99, 100, 104–105
faculty 132; interviews, Realizeit study 221–224; UDL training 246; see also teachers
Faust, D. 318–319
feedback 35, 112, 243; automated data analysis 31–32; dashboard-prompted 148; high-information 26–29, 31; learning 272; mastery learning 124; valuing 30, 31
financial aid 87, 88
first-generation students 174–175; adaptive learning outcomes 177, 179
fixed mindset 54–57
formal model 114
formative assessment 124
formative learning 222–223
Forrester, J. W. 8
Foundations model 44–51
“framework” 135
Framework for Information Literacy for Higher Education 132–135, 139–142
Freshman Writing, usefulness to a future career 54
Friedman 4
function: nnet() 99; rpart.plot() 99
gateway courses 44, 57
Gateway to Completion project 41, 43
Georgia State University 78, 79, 282, 292; adaptive learning in introductory mathematics 81–83; AI-enhanced chatbot 80–81, 87–90; GPS Advising 84–87; Math Interactive Learning Environment (MILE) labs 82; predictive analytics in advising 84–87; student-support initiatives 80–81; summer melt 87
“Getting to Know You” survey 51
Glymour, C. 113, 126; Galileo in Pittsburgh 111–112
Goolsby-Cole, C. 208
GPS Advising 84–87
grade point average (GPA) 64, 68, 71, 72, 81; chatbot intervention 89–90; Georgia State University students 85; as predictor of at-risk students 76–77
grades and grading 42, 316–317; as outcome variable 183–184; predictive accuracy 99; predictors 100, 101, 103; specifications 205–206
gradient boosting 15
graduation rates 42, 58, 90, 280; Georgia State University 78, 80; minority students 80; predictors 79
grants 81, 83, 207, 208; Pell 173
growth mindset 54–57
Guédon, J. C., Open Access: Toward the Internet of the Mind 132–133, 135–139, 141
Harari, Y. N., on artificial intelligence 312–313
heatmaps 248, 250
Hess, A. J. 4
higher education 3; annual assessment 182–183; cloud computing 12–13; COVID-19 effects 306–307; curricular requirements 39; data analytics 5; drop-out rate 280–281; graduation rates 42, 58, 78–80, 90, 280; libraries 137–139; machine learning 14; online 11; predictive models 5; technology integration 15; three-body problem 16; transformation work 281–282; see also education; institutional transformation
high-information feedback 26–28, 31
Historically Black Colleges and Universities (HBCUs) 171
Hogan, K. 284
Hong Kong 92, 96, 105; see also English for Academic Purposes (EAP)
Hoover, J. 246
HOPE scholarship 81
household income, as predictor of educational achievement 79
hub classes 39–41; English and math 43–44
identifying at-risk students 76; datasets 62–65; demographic variables 62–65; dependent variable 65, 68; future possibilities 76–77; GPA as predictor 71–72, 76–77; high-precision model 75–76; logistic regression 69; major findings of the study 71–73; methodology 62; model building and results 65, 68–73; model precision versus accuracy 60, 61; model prediction and results 73, 75–76; modeling coefficients 70, 71; node identification 68–70; receiver operator characteristic (ROC) curves 70, 71; UCF GPA 71, 72
inclusivity 244–245
income, as predictor of educational achievement 79
information 133–134; literacy 136–137, 139; Wiener on 139–140
insights 15, 142, 148, 149
institutional transformation 281–282; building an infrastructure that supports change 289; case studies 287–288; change focused on students 286, 287; continuous improvement 292; digital infrastructure 296; Equitable Digital Learning framework 289–294; faculty 292–293; funding 292; leadership 291–292
instructional consistency 285
Integrated Postsecondary Education Database (IPEDs) 171
interactive learning 80
interdisciplinarity 317–319
Internet 129, 131–132, 135–136; AI 311; see also online learning
interruptions, classroom 31
interventions 126, 244, 281; AI-enhanced chatbot 87–90; “Course Signal” 93; data-driven 158–160; email 93, 94, 104; GPS Advising 84–87; just-in-time 162; tutoring 242–245, 248–249, 251, 252, 254, 255
interviews, Realizeit study 216, 221–224
introductory mathematics, adaptive learning 81–83
IT 132
Jacobs, J. 131
Jefferson, T. 134
John Gardner Institute 42, 43
Johnson, C. 128–129
Johnson, S., Dictionary 133
Journal of Asynchronous Learning Networks (JALN) 316–317
just-in-time intervention 162
Katz, M. 138–139
kernel density plots 253
Keuning, T. 149
kinematics 113–114
Knoop-van Campen, C. 148
knowledge 138, 318; prerequisite 117–119; space 125; state 125
knowledge base, chatbot 89
Lakoff, G. 128–129
language courses 183
Laurison, D. 4
Leaky Pipeline 281
learner entry characteristics 122
learner model 125
learning 6, 39, 139, 189; active 283–285; blended 92, 94–96, 172, 284–285, 306; catapult courses 41–42; check points 195, 197, 199; community of 132–133; conditions 120–123; constraints 114; cramming 202–203; curve 192, 193; deep see deep learning; digital 286; Ebbinghaus forgetting curve 191, 192; English and math 43–44; equitable 171–173; by example 13–14; feedback 272; formative 222–223; interactive 80; machine see machine learning; mastery 110–111, 123–124; metacognition 190, 191, 207; online 129, 284–285, 306; rate 117; variables 110, 114, 122; velocity 116–119; vicarious experience 150; virtual 94; see also adaptive learning; theory of learning
learning analytics (LA) 92–95, 103–104, 125; process model 163–165; reports 154–158
Learning and Development Department (L&D) 267
learning kinematics 110–111, 113–114
learning management system (LMS) 62, 63, 76, 89, 95, 96, 103–104, 129; Blackboard 189, 194, 204; blended learning 92; dashboards 93–94; system log 247; see also dashboards; Universal Design for Learning (UDL)
libraries 133, 137–139
literacy, information 136–137, 139
literature 133
Liu, C. 8
logistic regression 69, 95, 96, 99–101, 103–104
Long, P. 94, 103
Lowe, D. 308
low-income students 79–80
Macfadyen, L. P. 94
machine learning 95, 96, 98, 126; algorithm accuracy 99, 100, 104–105; Artificial Neural Network (ANN) 101, 103, 105; branching 125; Classification Tree 99, 100, 104, 105; deep learning 13–14; identifying bias 15
Mainstay 87–89
Mann-Whitney U test 156
mastery learning 110–111, 123–125, 150, 157, 171
Math Interactive Learning Environment (MILE) labs 82
Mathematics: adaptive learning 81–83; adaptive learning outcomes 181; as catapult course 43–44; comparing developmental education approaches 44–51; growth mindset 54, 55; usefulness to a future career 52–54
McCormack, M. 289
McGuire, S. 191, 207; Teach Students How to Learn: Strategies You Can Incorporate into Any Course to Improve Student Metacognition, Study Skills, and Motivation 190
Means, B. 285
measuring: professional development (PD) 36; skills 272
memory and retention 190; Ebbinghaus forgetting curve 191, 192
metacognition 190, 191, 207
metalogue 140–142
metaphor 128; anthropology 141; community of learning 132–133; compass 134; information superhighway 129
methodology: ALS study 152, 153; identifying at-risk students 62; predicting grades using machine learning algorithms 96–99; question item analysis 233–234; Realizeit study 215–217
metrics, diversity, equity, and inclusion (DEI) 171, 173
Michigan State University 288
mindset 113; academic 51–54; fixed 54–57; growth 54–57; optimization 142
minority students: adaptive learning 181; GPS Advising 85–86; graduation rate 80; stereotype threat 184; summer melt 87
Mitchell, M. 4
mixed reality simulations (MRS) 24, 35; automated data analysis 31–32; differences between interventions 26–28; differences in teacher performance 32, 34; personalized goal setting 28, 29; scoring teacher performance 25–26; value system 30, 31
mixed-methods research 315–317
modality switching 255; data analysis 247–248; implications of study 256–257; results of study 249–254; variables 248; see also tutoring; Universal Design for Learning (UDL)
model/s: 70:20:10 271–272; adaptive course development 214, 215; adaptive learning 125; analytics 10; assessment 125; Bayesian estimation 212–213; business 272; conventional instruction 123; course management 182; deep learning 14; digital learning infrastructure 289–294; domain 125; formal 114; identifying at-risk students 65, 68–73; learner 125; learning analytics process 163–165; mastery learning 123–124; pedagogical 125; precision versus accuracy 60, 61; protein structure 307–308; training 14, 248, 253
modules: adaptive learning 82; Realizeit 213
Molenaar, I. 148
Momentum Year 38–39, 57–58
Morrison, H. C. 123
Moses, R. 131–132
motion 113–114
motivation 113–114, 207
multimodal learning package (MLP) 97–98
nanotechnology 304
National Bureau of Economic Research 310–311
National Center for Education Statistics (NCES) 171
National Federation of the Blind 41
National Science Foundation 284
natural language processing 31–32
network/s 131–133, 138, 142; Bayesian 248–250, 253, 256; hubs 39; small-world 39–42
neuroscience 313–314
Newton, I. 8
Ngwako, F. A. A. 150
Nilson, L. 205
nnet() function 99
online learning 11, 129, 211, 284–285, 306
Online Learning Consortium 315
Open Educational Resources (OER) 222, 225–226
open resources 315
“open-note” exams 194
opportunity 6
optimization 129–132; mindset 142
optimized instruction 119, 120
Oxford English Dictionary 133–134
Page, L. 87, 88
Papamitsiou, Z. 94
participants: ALS study 152; EAP study 98
pedagogy 4; model 125; spaced practice 189–190; use of data from ALSs 148–149
Pell Institute 4
Pell-eligible students 173, 184–185; adaptive learning outcomes 176–177, 179
Perception AI 312
Perception of ALS survey 150, 151; distribution 151–152
perseverance 6
Peters, V. 285
PGS 272–273; agile approach 268; business alignment 272; coownership 272; data acquisition 269–271; data processing 271; Learning and Development Department (L&D) 267; maritime crews 269, 270; offshore operations 267; predictive maintenance and supply 270–271; sales 268; skills assessment 267–268, 273; sprint to 2027 274–277; strategy and planning 271–272; workforce competencies 266, 267; see also skills development/reskilling
physics, three-body problem 8–9, 16
Picciano, A. G.: “Artificial intelligence and the academy’s loss of purpose!” 303; “Beyond student perceptions: Issues of interaction, presence, and performance in an online course” 315–317; Blended Learning: Research Perspectives Volume 3 314–315
platforms 173; adaptive learning 7, 82–83; knowledge acquisition 272; see also adaptive learning system (ALS); Realizeit
population and sample, Realizeit study 215–216
postsecondary education, cost of 4
practice: community of 149, 151, 156, 164–165; spaced 189–190, 194–197, 199–202, 205–206
precision 95; versus accuracy 60, 61
predictive accuracy 99
predictive analytics 78, 80, 243; for academic advising 84–87
predictors, grade 100, 101, 103
prerequisite knowledge 117–119
prescriptive analytics 243
presence 285
preservice teachers, mixed reality simulations (MRS) 35
Princeton University 111–112
professional development (PD) 23, 83, 149; differentiated coaching 26; high-information feedback 26–28; measuring 36; mixed reality simulations (MRS) 24–29, 31–32, 300; tailoring to teachers at different career points 34–35; see also mixed reality simulations (MRS)
proxies 130–131
Pugliese, L. 7, 8
Python 31, 36
qualitative analysis, ALS study 154
quality: of assessment 231–232; of instruction 119, 120, 122
quantitative analysis, ALS study 153–154
quantum computing 304–305
question item analysis 230–231; bias 233; data collected from the study 234, 235; Difficulty Index 233–234, 237, 239, 240; reliability 232–233; results from the study 236, 237; standardization 233; validity 232
race and racism 170; developmental education 46–51; diversity 173; growth mindset 55
random forest analysis 15
randomized controlled trial (RCT) 88–89, 126; GPS Advising 84–85
Ransbotham, S. 277
Realizeit 7–8, 152, 173, 194–197, 203–205, 212, 228; adaptive course design 225–226; adaptive course structure 213, 214, 221–222; assessment 222–223; challenges associated with 226–227; cheating 222–223, 226; communication tools 226–227; dashboard 223, 224; features influencing teaching effectiveness 219, 220; Learning Map 213, 214; lessons 213; student performance outcomes 220, 221; study findings 217–224; study methodology 215–217; support 227
receiver operator characteristic (ROC) curves 70, 71
redistribution 4
reference learner 117
Reich, R., System Error: Where Big Tech Went Wrong and How We Can Reboot 130–131
reliability, question item analysis 232–233
remedial education 44; adaptive learning outcomes 181; comparison of approaches 44–51
reports, learning analytics (LA) 154–158
research 15; big data, implications of 310–311; interdisciplinarity 317–319; mixed-methods 315–317; open resources 315; statistical significance 314–315; UDL 246
reskilling see skills development/reskilling
risk, student 8
RoseTTAFold 307–308
rpart.plot() function 99
scale, Teacher Self Efficacy 150, 154
scholars 137
Scholarship of Teaching and Learning (SoTL) 202
Schön, D. A. 23
Scientific American 4
self-efficacy 149–151; scale 150; in using adaptive learning systems (ALS) 152–156, 161–162
self-reflection 26
self-regulation 92, 94, 149; as predictor of student success 104; see also spaced practice
Shami, M., System Error: Where Big Tech Went Wrong and How We Can Reboot 130–131
sharing datasets 269
Siemens, G. 94, 103, 142; “Learning Analytics: The Emergence of a Discipline” 129–130
Sigma 2 Problem 120
simulation: Bayesian network 248–250, 253, 256; learning kinematics 113–114; optimized instruction 119, 120; traditional instruction 115, 116
skills development/reskilling 272–273, 277; assessment 273; learning data 274; maritime crews 270; strategy and planning 271–272; technology platform 272; see also training
Smallwood, R., A Decision Structure for Teaching Machines 124–125
small-world networks 39–42
social justice 244–245
social network analysis 94
social presence 285
Society for Learning Analytics Research (SoLAR) 5
Southern New Hampshire University 309
spaced practice 189–190, 193–195, 203; flexibility 205–206; incorporating across the general chemistry curriculum 206; results 196, 197, 199, 200, 207; student survey responses 201, 202; see also Realizeit
specifications grading 205–206
Standage, T., “The World Ahead 2022” 307
standardization, question item 233
standardized tests 62
statistical significance 314–315
statistics 5
STEM 54, 62, 284; tutoring 86–87
Stenbom, S. 285
Stenhouse, L. 132
structure 110
student/s: academic mindset 51–54; avatars 24, 30, 32; first-generation 174–175; growth mindset 54–57; high-information feedback 26–29, 31; loans 81; Pell-eligible 174–175; pig metaphor 128; risk 8; self-monitoring 5–6; social presence 285; underprepared 113; see also Black students; disabilities, students with; disadvantaged students; first-generation students; minority students; Pell-eligible students
Suber, P. 136
success 6, 39, 113, 132, 280; academic mindset 51–54; catapult courses 41–44; constraints 114; growth mindset 54–57; hub classes 41; Momentum Year 57–58; predictive models 103–104; remedial education 45, 46; self-regulated behaviors as predictor 104; transformation work 283; variances 121; see also grades and grading; graduation rate
summative assessment 225
summer melt 87
super-cloud computing 305
supercomputers, PGS 271
Support Vector Machine (SVM) 95–96, 105
survey: Community of Inquiry (COI) 285; Perception of ALS 150, 151; Realizeit study 216; Realizeit study, quantitative analysis 217–219
teacher education 32; see also professional development (PD)
Teacher Practice Observation Tool (TPOT) 25
teachers: adapting to new technologies 307–310; adaptive learning 35–36; general user of ALS dashboards 147–148; mixed reality simulations (MRS) 25–28; professional development (PD) 23–24; self-efficacy in using ALS 149–156, 161–162; see also student/s
teaching 15, 139; evidence-based 286, 289; instructional consistency 285; machines 124–125; optimized instruction 119, 120; presence 285; traditional instruction 115, 116
technology 111, 318; adapting to new 307–310; Aoun on 305; and COVID-19 305–307; implications for research 310; nano 304; neuroscience 313–314; quantum computing 304–305; wearable 313; see also adaptive learning; artificial intelligence; cloud computing; machine learning
technology integration 15
Tennessee Board of Regents (TBR) 38, 40, 41, 45
tests 112–113
textbooks 308
theory of learning 110–111, 120–123
three-body problem 8–10, 16
timed tests 112
traditional instruction 115, 116, 119
training 8, 266, 272; deep learning model 14; maritime crews 270; model 248, 253; self-efficacy 150; UDL 246
Tufte, E. R. 11
Turing, A. 142
tutoring 242–245, 248–249, 251, 252, 254, 255; STEM 86–87; see also academic advising
Tyton Partners 212, 290–291
underprepared students 113, 119
underserved students: chatbot intervention 87–90; see also at-risk students, identifying
United States 309–310; college debt 4
Universal Design for Learning (UDL) 242, 243, 245; data analysis 247–248; faculty training 246; limitations of study 249; research 246
universities 137–138
University of Arizona 288
University of Central Florida 7–8, 194
University of Louisiana at Lafayette (UL Lafayette): adaptive course development model 214, 215; adaptive learning 212–213
University of Maryland, Baltimore County 189, 208; see also spaced practice
University of North Carolina at Chapel Hill 288
University of Technology Sydney 126–127
University System of Georgia (USG) 40, 41, 51; catapult courses 42–44
US Department of Education, “First in the World” program 84–85
validity, question item 232
value 318
van Geel, M. 149
VanLehn, K. 121
variables 5; achievement 122; grade 183–184; identifying at-risk students 62–65, 68; learning 110, 114–117; modality switching study 248, 250–252
vicarious experience 150
Virginia Tech 82
virtual classroom, mixed reality simulations (MRS) 24
virtual learning 94
visual presentations 11
visualizations 93, 213; bubble chart 42, 43; engagement data 226; heatmaps 248, 250; kernel density plots 253
Washburne, C. W. 123
wealth advantage 4, 170
wearable technology 313
Webb, R. B. 246
weighted mean 28, 30, 35
Weinstein, J., System Error: Where Big Tech Went Wrong and How We Can Reboot 130–131
Western Governors University 309
Wiener, N., Human Use of Human Beings 139–140
Wolff, A. 94–95
World Economic Forum 266, 318
Zdrahal, Z. 94–95
Zeno’s paradox 117, 118
