
How Do We Model Learning at Scale?

A Systematic Review of Research on MOOCs


Author(s): Srećko Joksimović, Oleksandra Poquet, Vitomir Kovanović, Nia Dowell, Caitlin
Mills, Dragan Gašević, Shane Dawson, Arthur C. Graesser and Christopher Brooks
Source: Review of Educational Research, Vol. 88, No. 1 (February 2018), pp. 43-86
Published by: American Educational Research Association
Stable URL: https://www.jstor.org/stable/44667693
Review of Educational Research
February 2018, Vol. 88, No. 1, pp. 43-86
DOI: 10.3102/0034654317740335
© 2017 AERA, http://rer.aera.net

How Do We Model Learning at Scale? A Systematic Review of Research on MOOCs

Srećko Joksimović, Oleksandra Poquet, and Vitomir Kovanović


University of South Australia

Nia Dowell
University of Michigan

Caitlin Mills
University of Notre Dame

Dragan Gašević
University of Edinburgh

Shane Dawson
University of South Australia

Arthur C. Graesser
University of Memphis

Christopher Brooks
University of Michigan

Despite a surge of empirical work on student participation in online learning environments, the causal links between learning-related factors and processes and the desired learning outcomes remain unexplored. This study presents a systematic literature review of approaches to model learning in Massive Open Online Courses, offering an analysis of the learning-related constructs used in the prediction and measurement of student engagement and learning outcomes. Based on our literature review, we identify current gaps in the research, including a lack of solid frameworks to explain learning in open online settings. Finally, we put forward a novel framework suitable for open online contexts based on a well-established model of student engagement. Our model is intended to guide future work studying the association between contextual factors (i.e., demographic, classroom, and individual needs), student engagement (i.e., academic, behavioral, cognitive, and affective engagement metrics), and learning outcomes (i.e., academic, social, and affective). The proposed model affords further interstudy comparisons as well as comparative studies with more traditional education models.

Keywords: nonformal education, learning environments, MOOCs, engagement

Massive Open Online Courses (MOOCs), as one of the most prominent ways for facilitating learning at scale, have now been part of the educational
landscape for almost a decade. The volume of learners enrolling in MOOCs
generated widespread interest among the public, popular press, and social and
education commentators (Reich, Stewart, Mavon, & Tingley, 2016). Some
stakeholders expressed their belief in the groundbreaking effect MOOCs may
have on higher education, possibly making traditional brick-and-mortar uni-
versities obsolete (Shirky, 2013). Alongside the potential of MOOCs, professionals in educational technology have expressed concerns about the widely applied pedagogical models based on information transmission integrated into many of the MOOCs. Despite a polarized debate (Selwyn, Bulfin, &
Pangrazio, 2015), student enrollment numbers and course offerings continued
to grow (Jordan, 2015a; Shah, 2015). This has resulted in a wave of interest
from researchers and, within a relatively short time frame, we have witnessed
a substantial number of research studies and reports on MOOCs (Jordan,
2015b), as well as the formation of two annual MOOC-related scholarly con-
ferences (Haywood, Aleven, Kay, & Roll, 2016).
Research has largely focused on students' persistence in MOOCs and the
development of models to predict dropout or academic performance. Despite the
volume of work to date, commentators have criticized such research as being
primarily observational and lacking appropriate rigor. Reich (2015), for example,
asserted that MOOC research has failed to provide causal linkages between the
observed metrics and student learning, despite the vast amount of data collected
on student activity within MOOCs. This limitation is in part due to the lack of
theoretically informed approaches employed in the analysis of MOOCs.
Institutional reports on MOOC provisions as well as special issues on MOOCs
have offered some insight into engagement during learning with MOOCs, but
have presented little (or no) evidence of the factors contributing to learning per se
(DeBoer, Ho, Stump, & Breslow, 2014; Reich, 2015).
The limited insight offered by the research thus far can be attributed to a general lack of understanding that nonformal educational settings, such as MOOCs (Walji, Deacon, Small, & Czerniewicz, 2016), differ from more traditional forms of education in many respects. Technology and economies of scale allow for designing courses for unparalleled numbers of students and in ways that were not available in more traditional forms of learning (Reich, 2015). Some reports indicate that more than 58 million students enrolled in at least one of almost 7,000 MOOCs, offered by more than 700 universities (Shah, 2015). Students' interactions in such contexts result in large volumes of learning data in various formats, stored within platforms that promote practices that are
substantially different from those in traditional face-to-face or online learning
(DeBoer et al., 2014; Evans, Baker, & Dee, 2016). The diversity of students rep-
resented in MOOCs is also unprecedented. The range in diversity is reflected in
students' cultural backgrounds, socioeconomic and employment status, educa-
tional level, and importantly, their motivations and goals for registering in a par-
ticular course (DeBoer et al., 2014; Glass, Shiokawa-Baklan, & Saltarelli, 2016;
Reich et al., 2016). Therefore, DeBoer et al. (2014) and Evans et al. (2016), among
others, have argued that MOOCs require a "re-operationalization and reconceptu-
alization" of the existing educational variables (e.g., enrollment, participation,
achievement) commonly applied to conventional courses.
This study concurs with the argument by DeBoer et al. (2014) and posits that a
more holistic approach is needed to understand and interpret learning-related con-
structs (observed during learning) and their association with learning (outcomes).
These learning-related constructs are often observed under the broader concept of
learning - a term commonly applied across a range of contexts with multiple
interpretations and definitions (Illeris, 2007). Conceptually, learning refers to
both (a) a complex multilevel process of changing cognitive, social, and affective
aspects of the self and the group as well as (b) the outcomes of this process
observed through the cognitive, social or affective change itself. Distinguishing
between the process and the outcomes of learning, along with the contextual ele-
ments, is essential when modeling the relationships between them.
The necessity to redefine existing educational variables within new contexts
originates from the concept of validity in educational assessment (Moss, Girard,
& Haniford, 2006). Validity theories in educational measurement have been primarily concerned with (a) standardized forms of assessment (e.g., tests); (b)
providing a framework for interpretations of assessment scores in a given learning
environment; and (c) making decisions and taking actions to support and enhance
students' learning (Moss et al., 2006). However, aiming to take a more pragmatic
approach to validation, Kane (1992, 2006) posited that performance assessment
should not be restricted to "test items or test-like tasks" (Kane, 2006, p. 31).
Evaluation of students' performance can include a wide variety of tasks, per-
formed in different contexts and situations (Kane, 2006). To make valid interpre-
tations of student performance in MOOCs, it is necessary to have a clear
understanding of how evaluation metrics have been defined for a given learning
environment and its students (Kane, 2006; Moss et al., 2006).
This study contributes to the development of the "next generation of MOOC
research" (Reich, 2015, p. 34) that can aid in explaining the learning process and
the factors that influence learning outcomes. The present study critically exam-
ines how learning-related constructs are measured in MOOC research, and reop-
erationalizes commonly used metrics in relation to the specific educational
variables within (a) learning contexts, (b) learning processes (i.e., engagement),
and (c) learning outcomes. The study is framed in Reschly and Christenson's
(2012) model of the association between context, engagement, and outcome.
Reschly and Christenson (2012) defined engagement as both a process and an outcome, therefore aligning the concept of engagement with a broader understanding of learning. In their work, Reschly and Christenson (2012) observed four
aspects of student engagement: academic, behavioral, affective, and social. The
authors conceptualized these as mediators between contextual factors, such as
student demographics or intentions, and learning outcomes. Thus, we first exam-
ine commonly used learning-related metrics through a systematic review of the
literature between 2012 and 2015 inclusive. We then analyze these metrics of
observed student activity in light of Reschly and Christenson's (2012) model of
associations between context, engagement, and student outcomes. Reschly and
Christenson's (2012) model stems from the work on dropout prediction and
increasing school completion, observing engagement on a continuum scale (rang-
ing from low to high). By discussing the metrics representing the outcomes and
indicators of learning within Reschly and Christenson's model, we demonstrate
limitations and strengths of current approaches to measuring learning in MOOCs.
We then highlight differences that emerge between the Reschly and Christenson
model and open online settings, to propose a modified operationalization of how
learning in MOOCs can be studied.
We refer to MOOCs as planned learning experiences within nonformal, digital
educational settings, used to facilitate learning at scale. In computer-mediated (net-
worked) settings, as is the context of our research, learning is observed as a
dynamic and complex process. Learning involves student interactions with other
students, teachers, and content (Goodyear, 2002; Halatchliyski, Moskaliuk,
Kimmerle, & Cress, 2014). By nonformal, we mean any systematic learning
activity conducted outside the formal/institutional settings (Eraut, 2000); in
MOOCs, such activity occurs within the structure prepared by the instructor but is
heavily influenced by learners' motivations, actions, and decisions. Finally, digital (education) refers to an emerging approach to learning mediated by various technological methods (Siemens, Gašević, & Dawson, 2015). Digital learning brings
online, distance, and blended learning under a single concept, and could be struc-
tured as formal/informal, self-regulated, structured/unstructured, or lifelong.

Research Questions
The present study identifies student engagement metrics and contextual factors
commonly used to model learning and predict learning outcome or course persis-
tence in nonformal, digital educational settings. First, we examine traces of stu-
dent activity operationalized as indicative of learning processes through a
systematic review of the literature. We then use findings from the review to refine
a well-established model of student engagement in the context of learning with
MOOCs. Finally, we summarize the common methods used to examine the asso-
ciation between the metrics calculated and the outcomes measured, as a means for defining and interpreting eventual associations between different elements of the model constructs. To address these aims, we posed the following research questions:

Research Question 1: What are the most common approaches to operationally defining and measuring learning outcomes in the research on MOOCs? Is there misalignment between these approaches and a common model of student engagement?


Research Question 2: What are the most common approaches to operationally defining and measuring learning context and student engagement in the research on MOOCs? Is there misalignment between these approaches and a common model of student engagement?
Research Question 3: What are the common approaches to studying the asso-
ciation between the identified metrics and measured outcome?

In contending that the majority of the current MOOC studies focus on the exam-
ination of the association between student engagement and course outcomes, Reich
(2015) argues that "[d]istinguishing between engagement and learning is particularly crucial in voluntary online learning settings" (p. 34). However, Reich's argument is limited to assessment scores rather than to the individual and group
changes that take place during and over the process of learning. According to
Reich, introducing assessment at multiple time points, relying on the assessment
methods validated in prior research, and better integrating assessment into the course design in general are important steps in understanding learning in
MOOCs (Reich, 2015). In part, we concur with Reich's (2015) premise. However,
we also acknowledge that not all MOOCs include (formal) assessment practices,
especially those MOOCs designed with connectivist pedagogies (Siemens, 2005).
Additionally, the diversity of student intentions for enrolling in voluntary online
learning requires additional considerations on how learning might be operational-
ized in the context of MOOCs in the absence of assessment models. Moreover,
Gašević, Dawson, Rogers, and Gašević (2016) stressed the importance of considering contextual factors when trying to predict learning outcomes or course persistence. Framing their research around the Winne and Hadwin (1998) model of self-regulated learning, Gašević et al. (2016) showed how instructional conditions, as a vital component of external conditions, affect the interpretation of learning-related measures. Therefore, we rely on the Reschly and Christenson (2012) model
that observes student engagement as a mediator between contextual factors (e.g.,
intents) and learning outcomes, regardless of their operationalization. The model
offers a broader view on the outcomes of learning, defining engagement as both a
process and an outcome (Reschly & Christenson, 2012).
Method

Literature Search and Inclusion Criteria

To derive the extant research literature, a computer-based search from 2012 to 2015 (inclusive) was undertaken over three phases (Figure 1). Although the first
MOOC was offered in 2008, it was only in 2012 that the major MOOC providers (i.e., Coursera, edX, and Udacity) were established and an inaugural course was launched.1 Moreover, as noted by Raffaghelli, Cucchiara, and Persico (2015), it was only after 2012 that MOOC research proliferated, demonstrating a
growing maturation of the field.
The first phase involved a search of the following databases: EdITLib, EBSCOhost (Education Source, ERIC, PsycINFO, PsycARTICLES, and Academic Search Complete), Scopus, Web of Science, Science Direct, Taylor & Francis, and Wiley. The following search criteria were used for defining inclusion in the study:

FIGURE 1. Overview of the systematic search and coding process

Title, abstract, and/or keywords contain the terms: mooc* OR "massiv* …"
Title, abstract, and/or keywords contain the terms: predict OR learn* OR …
Title, abstract, and/or keywords contain the terms: engage* OR outcom* … OR attrition OR dropout OR part…

The initial search resulted in […] studies. Two researchers coded the studies for relevance. The coding process comprised reading the title and abstract for each study and assigning a
binary category - relevant/not relevant. In cases where it was not obvious from
the title and abstract whether a given study would be relevant for answering our
research questions, the coders examined the article in detail (i.e., reading the
methods and results sections). The coding was conducted through several steps.
The first step included the joint coding of an initial set of 50 studies, in order to
refine the inclusion criteria and to define a set of rules for accepting studies for the
review. The changes between the original inclusion and exclusion criteria were
minor. Specifically, the initial version of the inclusion criteria did not consider employees (e.g., we were not aware of the significant number of studies focusing on professional medical education); these were later added to Item 6 in the list below. Also, in the initial inclusion criteria, we had not been precise about Item 8 from the list below, that is, the exclusion of studies relying on log data and surveys or
questionnaires. These were later included as a special subset because they con-
tained various learning-related metrics extracted from log data, often used to
describe the data sets of the analyzed studies. In other words, although such stud-
ies did not attempt to predict learning outcome or course persistence, they included
operationalization of learning-related constructs.
Two coders coded all the studies together, and interrater agreement (Cohen, 1960) was calculated after coding 250 and 500 studies, as well as at the end of the coding process. All conflicts were resolved at each of these steps. The two coders reached an average interrater agreement of 93.6%, with an average kappa of 0.67.
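To make the agreement statistics concrete, the following minimal sketch (not code from the review; the relevance labels are hypothetical) computes percentage agreement and Cohen's kappa for two coders:

```python
# A minimal sketch of percentage agreement and Cohen's (1960) kappa for two
# coders' binary relevance judgments; the labels below are hypothetical.
from collections import Counter

def agreement_and_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    marg_a, marg_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: sum over labels of the product of marginal proportions
    expected = sum((marg_a[lbl] / n) * (marg_b[lbl] / n)
                   for lbl in set(coder_a) | set(coder_b))
    return observed, (observed - expected) / (1 - expected)

# 1 = relevant, 0 = not relevant, for eight hypothetical studies
obs, kappa = agreement_and_kappa([1, 0, 1, 1, 0, 1, 0, 0],
                                 [1, 0, 0, 1, 0, 1, 0, 1])  # 0.75, 0.5
```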
The final set included 96 studies that satisfied the following criteria for inclusion
in this review, where the study:

1. Presents original (primary) research analyzing MOOC data
2. Addresses a problem of predicting learning and/or persistence in MOOCs
3. Analyzes higher or adult education
4. Was published in 2012 or later
5. Was published in a peer-reviewed journal or conference proceedings, available in English
6. Participants in primary studies were nondisabled undergraduate students, graduate students, and/or employees (e.g., teachers and nurses)
7. Focuses on algorithms that help to identify variables related to learning
8. Relies on log data and/or surveys/questionnaires, and applies inferential statistics rather than primarily descriptive analysis to investigate the data

Inclusion of both journal and conference papers in our systematic review was necessary. The exclusion of conference papers (and conference proceedings in computer science) would significantly limit the number of studies analyzed. In addition, the analysis targeted studies published at the onset of MOOC research, when publishing in conference proceedings represented the most prominent way of disseminating novel research in the field. Their exclusion would also mean that research published in the main outlets used by computer scientists (for whom conference publications are often more important than journals), an important constituent group in the field, would be ignored. By integrating the lit-
erature from a variety of sources, this review aimed at summarizing the broadest
possible set of learning-related metrics used to date. Such a broad overview did
not negatively affect the quality of the analysis. Rather, the extension of the
review materials offered a fuller representation of the quantitative measures used
to investigate learning at scale.
To ensure a comprehensive and accurate search, we manually searched the following journals: Journal of Learning Analytics, Journal of Educational Data Mining, British Journal of Educational Technology, The Internet and Higher Education, Journal of Computer Assisted Learning, The International Review of Research in Open and Distributed Learning, Journal of Educational Technology & Society, Educational Technology Research & Development, IEEE Transactions on Learning Technologies, Distance Education, International Journal of Computer-Supported Collaborative Learning, ACM Transactions on Computer-Human Interaction, and the International Journal of Artificial Intelligence in
Education. A manual search was also conducted for conference proceedings
including: International Conference on Learning Analytics and Knowledge,
International Conference on Educational Data Mining, International Conference
on Computer Supported Collaborative Learning, ACM Annual Conference on
Learning at Scale, ACM SIGCHI Conference on Human Factors in Computing
Systems, ACM Conference on Computer Supported Cooperative Work,
European Conference on Technology Enhanced Learning, and International
Conference on Artificial Intelligence in Education Conference. The list of rele-
vant journals and conferences was obtained from the Google Scholar Metrics list of
top publications in the educational technology research category. The manual
search resulted in an additional 23 studies, providing a total list of 119 studies
selected for further consideration.
In the final phase, we coded the selected 119 studies according to the coding
scheme (see Supplemental Table S1 in the online version of the journal).2 The
coding scheme was developed with respect to the STROBE Statement3 recom-
mendations for the observational studies, adapted and extended to account for the
specific research questions of this systematic review. Although the STROBE list
has been primarily used in medical research, these recommendations for the
observational studies are comprehensive, offering a valid basis for coding schemes
used in other domains (such as educational research). Nevertheless, given the
focus of our study, we removed items such as "Give reasons for nonparticipation
at each stage," as one of the aspects of describing study participants available in
the STROBE recommendations, as well as "Funding" (also available among the
STROBE items), as these items were not relevant for the context of the present
study. Following the final screening by four independent coders, 38 studies were
identified that met the above-defined criteria for inclusion (Figure 1).

Analysis
To address the research questions, a synthesis of the 38 systematically selected
studies was undertaken. The main focus of the systematic review was on the met-
rics used to assess learning in MOOCs and the outcome variables measured. Thus,
each of the studies was coded with respect to these parameters. Moreover, we examined how different studies defined the outcome (e.g., learning outcome or drop-
out), as well as how each of the predictors was extracted. Besides the variables
used, we also indicated the statistical methods used to examine the association
between predictors and outcome(s), and the noted results (if reported) for each of
the analyses applied in the reviewed studies. A definition for each of the coded
attributes is provided in Supplemental Table S1 (available in the online version of
the journal).
Additionally, the studies were coded with respect to (a) the theories they
adopted to analyze learning (e.g., online or distance education theories) and (b)
study objectives (e.g., predicting final course grade, or predicting dropout). We
also examined whether a study was exploratory or confirmatory, whether authors
discussed limitations and generalizability of study findings, and to what extent
pedagogical and/or contextual factors were considered. The main study findings
across the reviewed literature were summarized to identify common and signifi-
cant conclusions.
To contextualize the variables, and for further research, we coded the plat-
form where a MOOC was delivered, the educational level suggested for each of
the offered courses, course domain, and course completion rates. Due to numer-
ous interpretations of how course completions are calculated (see section
"Common Operationalization of Learning Outcomes"), here we captured the
count of registered, active students, and the number of students who obtained a
certificate, if reported. Furthermore, we were interested in the domain of the
analyzed courses. That is, whether the courses offered a certificate, and how
many xMOOCs or cMOOCs were included in the analyses. The types of
MOOCs were labeled based on the categorization commonly found in the litera-
ture distinguishing between the connectivist cMOOCs4 and Coursera-like
xMOOCs5 (Rodriguez, 2012).
We also identified the data sources used for each of the studies included in the
review as well as the study focus (e.g., all students, only students who posted to a
discussion forum, or students who successfully completed a course).

Limitations

The diversity of terms describing similar concepts and measures presented a significant challenge for this study. Researchers would frequently state that the study examined an association between "learning outcome" and various metrics of student engagement, without a clear description of what was considered an
outcome. The lack of specificity in the reviewed studies prompted the need for
added interpretations based on a review of the analyzed data. Additional chal-
lenges again related to a lack of detail surrounding the metrics used to measure
variables associated with any developed predictive model. For example, simply
stating that a measure included a "count of discussion activities" is insufficient
detail. Simply referring to a broad count of activity does not make it clear if the
metric included an aggregation of all possible discussion activities (e.g., posting,
viewing, voting) or a specific subset.
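As an illustration of this ambiguity, the minimal sketch below (with hypothetical event types) contrasts an opaque aggregate count with per-type counts that preserve the distinction:

```python
# A minimal sketch of the ambiguity noted above: an aggregate "count of
# discussion activities" conflates behaviors that per-type counts keep
# separate. The event log and its type labels are hypothetical.
from collections import Counter

events = ["post", "view", "view", "vote", "post", "view"]
aggregate_count = len(events)      # 6 "discussion activities", nature unknown
per_type_counts = Counter(events)  # Counter({'view': 3, 'post': 2, 'vote': 1})
```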
The ability to determine measures of time-on-task also presents issues for the review. As Kovanović et al. (2015) pointed out, it is important to specify how time-on-task is determined and which (if any) heuristics or approximations were applied. This was not always the case with the studies included in this review.
Therefore, the majority of the reviewed studies required detailed investigation of
the methods applied and the description of the data analyzed to determine appro-
priate categorization. The lack of consistency in terminology necessitated further
interpretations. Furthermore, we classified variables across the various dimen-
sions of student engagement in light of Reschly and Christenson's model. This
classification added a level of subjectivity, which could lead to challenges in
ensuring internal validity. Finally, to maintain a quantitative focus, this study
excluded often rich observations drawn from qualitative studies which would be
more appropriate for a separate literature review.
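To illustrate why reporting such heuristics matters, the sketch below estimates time-on-task from timestamped events under one explicit heuristic; the 30-minute cap is an illustrative assumption, not a value prescribed by Kovanović et al. (2015):

```python
# A minimal sketch of a time-on-task estimate with an explicit heuristic:
# gaps between consecutive events are summed, and long gaps are capped as
# presumably off-task (the 30-minute cap is an illustrative assumption).
from datetime import datetime, timedelta

def time_on_task(timestamps, cap=timedelta(minutes=30)):
    events = sorted(timestamps)
    gaps = (b - a for a, b in zip(events, events[1:]))
    return sum((min(gap, cap) for gap in gaps), timedelta())

session = [datetime(2015, 3, 1, 10, 0), datetime(2015, 3, 1, 10, 12),
           datetime(2015, 3, 1, 12, 40)]   # long gap capped at 30 minutes
print(time_on_task(session))               # 0:42:00 under this heuristic
```

A different cap (or none at all) would yield a different estimate from the same log, which is precisely why unreported heuristics hinder interstudy comparison.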

Quantitative Overview of the Selected Studies


The aim of this section is to present the selected dataset of MOOC research
papers. Specifically, here we reviewed 38 studies in relation to their bibliographic
information and their overall focus prior to the in-depth analysis of learning-
related metrics used in these academic papers.
Appendix Table A1 shows the author(s), titles, publication year, publication
venue types, the number of courses analyzed, data sources used, and the number
of students6 (registered, active, completed) in the studies included in this review.
We observed that a majority of studies included in the systematic review were published at conferences (Figure 2). Although we reviewed the literature published between 2012 and 2015, only one study published prior to 2014 satisfied the inclusion criteria.
Courses delivered on the Coursera platform were most commonly analyzed,
followed by the edX platform (Figure 3). We observed that only a few studies
examined courses delivered by other MOOC providers. For example, only one
study analyzed data delivered via the Desire2Learn learning management sys-
tem (Goldberg et al., 2015), Sakai (Heutte, Kaplan, Fenouillet, Caron, &
Rosselle, 2014), UNED-COMA platform (Santos, Klerkx, Duval, Gago, &
Rodríguez, 2014), or a course delivered in a distributed environment (i.e.,
Distributed), using social media (Joksimović et al., 2015). Finally, only Adamopoulos's (2013) study utilized data from MOOCs delivered across various platforms (i.e., Canvas Network, Codecademy, Coursera, edX, Udacity,
and Venture Lab). However, this study was not included in the summary pro-
vided in Figure 3, as it was not clear which of the 133 courses analyzed was
delivered within the various platforms.
Most of the evidence derived from the modeling of learning behavior in
MOOCs was collected from computer science courses (Figure 3). Physical sci-
ence and engineering, life and social sciences, and arts and humanities courses
were also well-represented. In contrast, language learning and personal develop-
ment courses were rarely examined. This observation is reflective of the sheer
volume of MOOC offerings related to the computer sciences compared with other
disciplines (Shah, 2015), as well as the technical skills that are required to process
MOOC data for analysis.
FIGURE 2. The number of studies per year, with bars showing the respective number of papers published in respective venues (i.e., journal or conference)

Only two studies within the data set analyzed data from connectivist learning environments (Figure 3). Heutte et al. (2014) and Joksimović et al. (2015) incorporated data from social media (e.g., Twitter or blogs) in order to under-
stand factors that could explain learning in cMOOCs. The remaining studies
examined MOOCs that were designed in a more structured framework (i.e.,
xMOOCs).
The systematic review further revealed that learning in MOOCs is typically studied through the analysis of trace data combined with discussion or sur-
vey data, and is generally derived from a single course (Figure 4). Very few
studies combined more than two data sources (e.g., survey, trace, and discussion
forum data). Moreover, there was only one study that relied on learner-gener-
ated data, such as blogs, Twitter, and/or Facebook posts. On the other hand,
studies that analyzed two or more courses primarily focused on trace or discus-
sion forum data.
For most of the courses analyzed, researchers reported 25,000 to 50,000 registered students (Appendix Table A1). Such cohort sizes are not surprising given
that an enrollment of 25,000 students is commonly referred to as a typical MOOC
size (Jordan, 2015b). However, the number of active students or students included
in the analyses was generally less than 10,000. As indicated in Appendix Table
A1, researchers often failed to report the number of registered and active/observed
students in their studies.

FIGURE 3. The number of studies within a given topic, delivered on a given MOOC
platform, with shapes indicating MOOC (massive open online courses) design (i.e.,
xMOOC or cMOOC)

Results and Discussion

Common Operationalization of Learning Outcomes (Research Question 1)


As a part of the first research question, our analysis aimed to identify how the
reviewed literature defined the results of the learning process, and to discuss
their alignment with a common model of student engagement. Specifically, we
analyzed how researchers operationalized and measured the outcome variables
they were predicting in their various models. Our analysis suggests that learning
outcomes have been defined as course completion (e.g., Crossley et al., 2015;
Loya, Gopal, Shukla, Jermann, & Tormey, 2015), engagement (Sharma, Jermann,
& Dillenbourg, 2015), social interactions (Vu, Pattison, & Robins, 2015), socia-
bility (Brooks, Stalburg, Dillahunt, & Robert, 2015), and learning gains
(Koedinger, Kim, Jia, McLaughlin, & Bier, 2015; X. Wang, Yang, Wen,
Koedinger, & Rosé, 2015). The majority of studies use the metrics capturing in-
course academic performance and persistence interchangeably with the notions
of failure and success within the course (e.g., Adamopoulos, 2013; Santos et al.,
2014; Sharma et al., 2015).

FIGURE 4. The number of studies using different data sources, with the number of courses included in the analyses

Academic Performance
Academic achievement in the form of a final exam or an accumulated course grade was the predominant variable or proxy for course outcome (Bergner, Kerr,
& Pritchard, 2015; Coffrin, Corrin, de Barba, & Kennedy, 2014; Crossley et al.,
2015; Gillani & Eynon, 2014; Kennedy, Coffrin, de Barba, & Corrin, 2015;
Koedinger et al., 2015; Ramesh, Goldwasser, Huang, Daumé, & Getoor, 2014b;
Sinha & Cassell, 2015; Tucker, Pursel, & Divinsky, 2014; X. Wang et al., 2015).
As an alternative to the final grade, a course outcome was defined through basic levels
of certification: for example, "no certificate," "normal certificate," and "certifi-
cate with distinction" (e.g., Brooks, Thompson, & Teasley, 2015); potentially
complemented with additional categories such as "completing some exams" and
"completing all exams without passing the course" (Engle, Mankoff, & Carbrey,
2015). In most cases, these levels were derived from the grades, with the excep-
tion of Adamopoulos (2013) who asked students to self-report their level of per-
formance from a predefined list.

Cognitive Change
Instead of using grades or categories representing performance to measure the
result of learning, several studies employed measures to capture cognitive change
of a learner. Champaign et al. (2014) defined course outcome as the improvement of students' ability to succeed on quizzes, that is, whether they were outperforming their prior grades rather than whether they were receiving high scores. Konstan,
Walker, Brooks, Brown, and Ekstrand (2015) took a somewhat similar approach
by measuring the change in knowledge through 20-item pre- and postclass knowl-
edge tests created by the instructor. Finally, Li, Kidziński, Jermann, and
Dillenbourg (2015) conducted a study predicting the difficulty of the course content, which in a way reflected whether the learning material required more effort from a learner. Their study established an association between student viewing patterns of the in-course video lectures and students' perceived video difficulty.

Persistence and Dropout


In our review, studies predicting learning persistence emerged as another mainstream approach to the analysis of learning in MOOCs. Researchers appeared willing to include course completion or course grade as a point of reference for persistent behavior. Many authors explicitly defined persistence as engagement with both content and assessment, and sometimes forum activity as
well. For instance, Ye et al. (2015) defined a dropout as a learner who accessed
fewer than 10% of the lectures and performed no further assessment activities. Vu
et al. (2015) integrated participation in activities beyond assessment by operationalizing dropout as the cessation of engagement in learning events spanning course activities, including the forums as well as quiz grades.
Alternatively, the students not earning a certificate and taking no action between
a certain point in time and the time of the issuance of the certificates were defined
as "stop-outs" in the study by Whitehill, Williams, Lopez, Coleman, and Reich
(2015). In some of the reviewed articles (e.g., Boyer & Veeramachaneni, 2015),
the authors did not explain which learner activity was included as a measure of
persistence from one week to the next, that is, a task and/or a lecture.
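For illustration, the minimal sketch below implements the first of these operationalizations (after Ye et al., 2015); the event field names are hypothetical:

```python
# A minimal sketch of one reported dropout definition (after Ye et al., 2015):
# a learner who accessed fewer than 10% of the lectures and performed no
# further assessment activity. Event field names are hypothetical.
def is_dropout(events, n_lectures_in_course):
    lectures_seen = {e["item_id"] for e in events if e["type"] == "lecture_view"}
    attempted_assessment = any(e["type"] == "assessment" for e in events)
    return (len(lectures_seen) / n_lectures_in_course < 0.10
            and not attempted_assessment)

log = [{"type": "lecture_view", "item_id": "week1-intro"}]
print(is_dropout(log, n_lectures_in_course=40))  # True under this definition
```

Even this simple definition embeds choices (unique lectures vs. total views, what counts as assessment) that, when unreported, make results hard to compare across studies.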
In sum, we observed that persistent undertaking of assessment was commonly included as a full or partial indicator of how persistence was measured. This can be interpreted as an indication of a limited understanding of MOOCs. That is, by defining persistence as a learning outcome and a predictor of interest, researchers indicate that the mind-set guiding such analysis is similar to that applied in a university setting, where learners undertake courses in which their learning is marked by assessments. However, the open-participation nature of MOOCs does not limit student learning to undertaking assessment; rather, participation varies depending on students' motivation (Eynon, 2014). In a way, using persistence as a proxy for learning ignores the nonformal nature of MOOCs, where students are not required to be assessed or to follow through the course. For some individuals, learning happens outside of continuous in-course assessment, as when they sample content or seek "just-in-time" insights relevant to a very specific question they are solving. Currently, these MOOC-specific groups with divergent intentions to learn, which reach beyond the formal assessment and prescribed course activities, are often grouped within an all-encompassing "no certificate" category, dichotomous to full course completion.
In the analyzed data set, the study by Sharma et al. (2015) was representative of academic work attempting to move beyond preexisting formal-education assumptions about measuring the outcomes of learning through grades or continuous assessment. The authors expanded course outcomes to include learners who may
not be pursuing certification. Measured outcomes were defined by either grades
or degrees of interaction with the course material. The authors analyzed the asso-
ciation of clickstream data and performance with two main learner types clearly
distinct in their desired course outcomes: an active student (submitting graded assignments, successfully or failing) and a viewer (engaging in lectures and/or
quizzes without graded assignments).

Social and Affective Aspects of Learning as a Part of Learning Outcome


A focus on social dimensions of learning outcomes was scarce as compared
with academic performance or persistence. The majority of studies in this
domain focused on the volume of posts or number of connections gained in
course forums. Importantly, where social aspects of learning captured through
the numbers of connections or posts were used as measured outcomes, they
were included as complementary to grades. The number of forum posts is the
most common measure of learning associated with the social interaction. This
measure has been typically recorded at the end of the course (Brooks, Stalburg,
et al., 2015; Goldberg et al., 2015). Alternatively, Joksimović et al. (2015) relied on the concept of social capital to explain the outcome of the learning process, using social network analysis to quantify individual positions in networks of learners. Joksimović et al. demonstrated that socially
engaged MOOC takers with higher grades and socially engaged participants
with higher social capital were not necessarily the same individuals. Such a
result supports the premise that MOOCs are used differently by learners, and
learning with others is only relevant to some individuals. In relation to students'
persistence in participating in MOOC forums, a series of studies focused on
student disengagement from posting activity (X. Wang et al., 2015; Yang, Wen,
Howley, Kraut, & Rosé, 2015). Specifically, X. Wang et al. (2015), as well as Yang et al. (2015), found a relationship between the time students joined a MOOC and students' difficulty in engaging with others in online discussion forums. This work emphasized the importance of the temporal aspect for modeling aspects of social interaction and collaboration (i.e., learning through interactions with others) as an outcome.
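As a sketch of the network side of such analyses, learners' positions can be quantified from a forum reply network; betweenness centrality is one common positional index, though the exact social capital measures used by Joksimović et al. (2015) may differ:

```python
# A minimal sketch, using networkx, of quantifying learners' positions in a
# hypothetical forum reply network; betweenness centrality is one common
# proxy for positional (social capital) measures.
import networkx as nx

replies = [("ana", "ben"), ("ben", "ana"), ("cam", "ana"), ("dia", "cam")]
g = nx.DiGraph(replies)                    # edge u -> v: u replied to v
centrality = nx.betweenness_centrality(g)  # brokerage position per learner
```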
Affective aspects were rarely incorporated into the measured learning outcomes and were limited to student satisfaction.

Multidimensional Measures
Some authors used multidimensional measures of course outcomes. For
instance, Kizilcec and Schneider (2015) predicted learner behavior that was oper-
ationalized as a multidimensional construct. The authors approached learning behavior as defined by learner progress in the course, their general performance, and social engagement. The dimension of learner progress was quantified by the proportion of watched videos and attempted assignments (more than 10%, more
than 50%, and more than 80%). General performance was operationalized as
receiving a certificate of completion. Finally, social engagement was operational-
ized through a combination of the number of posts (in relation to the most prolific
learner) and received votes. Again, although the focus on metrics typical in formal courses is evident, the authors integrated different dimensions that described the
learning outcomes.
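The minimal sketch below illustrates such a multidimensional outcome; the progress thresholds follow the description above, whereas how posts and votes are combined is an illustrative assumption rather than the exact scoring of Kizilcec and Schneider (2015):

```python
# A minimal sketch of a multidimensional course outcome in the spirit of
# Kizilcec and Schneider (2015); the combination of posts and votes is an
# illustrative assumption.
def outcome_profile(share_completed, has_certificate, n_posts, max_posts, votes):
    # Progress bands: more than 10%, 50%, or 80% of videos/assignments
    progress = max((t for t in (0.1, 0.5, 0.8) if share_completed > t),
                   default=0.0)
    # Posts relative to the most prolific learner, plus received votes
    social = n_posts / max(max_posts, 1) + votes
    return {"progress": progress, "performance": has_certificate,
            "social": social}

outcome_profile(0.62, True, n_posts=4, max_posts=20, votes=3)
```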
Overall, in analyzing the measured outcomes of learning in the selected studies, we observed a formal-education mind-set guiding researchers toward measures related to certification, assessment, and the prediction of dropout as undesired behavior. This is not surprising, as the literature stemming from formal educational contexts has validated measures that capture learning as performance, as progress toward completion, or as participation in assessment. Hence, the operationalization of learning outcomes perceived through an academic (formal education) lens is the most developed. Few authors maintained a focus on measuring cognitive change, whereas attention to the social outcomes of learning is scarce, with the emphasis on the volume of posts or the number of connections. Affective aspects of learning outcomes are currently limited to student satisfaction. Few studies employed a more holistic approach, using multidimensional constructs to measure (and predict) learning outcomes or distinguishing that not all learners in MOOCs can be described by the more common university-like profile.
In their model of engagement, Reschly and Christenson (2012) described learning outcomes of two broad types: proximal and distal. According to the authors, proximal learning outcomes can fall under academic, social, and emotional subcategories (see Supplemental Figure S1 in the online version of the journal) and indicate school-related outcomes, such as grades, relationships with peers, and self-awareness of feelings, among others. Distal learning outcomes are observed in postgraduation settings related to adult life; in the model, these are exemplified by, for instance, employment or productive citizenry. Such a distinction between what is learnt and applied at school and what is learnt beyond it is fitting in the K-12 setting for which the authors developed their model. The MOOC context, however, has some differences. For the majority of their participants, MOOC experiences do not aggregate to 10 years of relationships within a community where formal assessment is necessary at different phases. MOOC participants may be interested in timely content they need to learn as they engage for a short period of time. Alternatively, they may undertake the MOOC in its entirety and follow all the different learning goals set throughout the entire offering. Therefore, we suggest that proximal learning outcomes be redefined at the immediate and course levels, instead of the school level, otherwise preserving their academic, social, and affective aspects. For the distal learning outcomes, we suggest redefining them as postcourse outcomes. These suggested modifications are captured in Figure 5, which demonstrates the reoperationalized model, whereas the table summarizing all the studies included in the review, along with the learning outcomes measured, is provided in the supplemental material (see Supplemental Table S3 in the online version of the journal).

Providing Means for Defining Context and Engagement Types in Learning at Scale (Research Question 2)
FIGURE 5. The reoperationalized model of associations between learning context, student engagement, and learning outcomes, adapted from Reschly and Christenson (2012)

A challenge for this systematic review involved summarizing a wide variety of variables used to model learning in MOOCs. This was particularly noted in the definition of the latent constructs that various studies claim to measure. Thus, for
example, several studies measured engagement as a latent construct (Ramesh, Goldwasser, Huang, Daumé, & Getoor, 2014a; Ramesh et al., 2014b; Santos
et al., 2014; Sinha & Cassell, 2015). However, Santos et al. (2014) focused pri-
marily on metrics extracted from students' interaction within a discussion forum.
Ramesh et al. (2014a, 2014b), as well as Sinha and Cassell (2015), also considered students' interaction with other course resources (e.g., quizzes, videos, or
lectures). On the other hand, X. Wang et al. (2015) measured discussion behavior
operationalized through the cognitive activities extracted from discussion forum
messages. Nevertheless, most studies, although focusing on somewhat similar or
the same metrics, did not report the constructs measured. That is, those researchers focused on measures of student activity with the course materials or with their peers (e.g., counts of videos watched, number of messages posted), without neces-
sarily defining such measures as engagement. Although some of the studies used
the same operationalization of the measured variable, those metrics were usually
labeled in different ways (e.g., discussion behavior, behavior, or engagement).
Therefore, to provide a more coherent summary of findings, we framed our results
around the constructs introduced in Reschly and Christenson's (2012) model of
student engagement and adopted in our study (Figure 5).

Contextual Variables
A significant number of the studies (39.5%) included in the systematic review observed contextual variables to determine to what extent student demographic
data (10 studies), course characteristics (5 studies), or student motivation (8 stud-
ies) predict learning outcome and/or course persistence. Only one study (i.e.,
Konstan et al., 2015) observed all three contextual factors. On the other hand, a
majority of studies that analyzed demographic data (around 66%) also observed
either motivational factors or course-related characteristics.
Demographic variables have been commonly used in understanding factors that
influence learning in MOOCs. Age, gender, and level of education were consid-
ered in various studies in terms of predicting course persistence and/or achieve-
ment. Some 80% of studies that observed demographic data (i.e., out of 15 studies)
included the level of education of course participants. The results somewhat differ
across the studies included in the review. Goldberg et al. (2015), as well as Heutte et al. (2014), found no significant difference in the likelihood of completing a course across the observed levels of education. These studies observed rather different course settings: a health and medicine xMOOC delivered on the Desire2Learn platform (Goldberg et al., 2015) and a distributed (cMOOC) version of a humanities course (Heutte et al., 2014). Moreover, Konstan et al. (2015) found no significant
association between the level of education and knowledge gain and a final course
grade, in a data science xMOOC, delivered using the Coursera platform. However,
through the analysis of courses from various disciplines delivered on the Coursera
platform, Engle et al. (2015), Greene, Oswald, and Pomerantz (2015), Kizilcec and
Halawa (2015), and Koedinger et al. (2015) showed that more educated students
are more likely to persist in a course and achieve higher grades.
Existing research does not provide univocal conclusions with respect to the importance of students' age for predicting course persistence and achievement. Engle et al. (2015), Koedinger et al. (2015), and Konstan et al. (2015) failed to
find an association between students' age and course completion, final course
grade, or knowledge gain. On the other hand, Greene et al. (2015),
Heutte et al. (2014), and Kizilcec and Halawa (2015) showed that older students
were more likely to persist with a course. However, Kizilcec and Halawa (2015)
also showed that older students achieved lower grades compared with their
younger peers.
The prevailing finding among the five studies included in this systematic review that observed students' gender as a determinant of learning in MOOCs is that there are no differences between male and female students with respect to course persistence, course outcome, and attained knowledge gains (Adamopoulos, 2013; Heutte et al., 2014; Koedinger et al., 2015; Konstan et al., 2015). Only Kizilcec and Halawa (2015) showed that male
students were more likely to persist with lectures and assessment, as well as to
achieve a grade above 60th percentile, across a wide range of courses (i.e., 21
courses) from various subject domains.
The existing literature on student motivation and engagement in online learning argues that a lack of student affinity to complete a course leads to higher dropout rates and, consequently, failure to complete a course (Hartnett, George, & Dron, 2011). Thus, the intention to complete a course and the number of hours a student intends to devote to coursework are commonly considered in predicting course persistence and achievement (i.e., included in 40% to 50% of the studies that observed student motivation). Except for Konstan et al. (2015), who failed to confirm an association between students' intentions (i.e., to complete a course and the time to be devoted) and final course grade, findings from other studies (i.e., Engle et al., 2015; Greene et al., 2015; Heutte et al., 2014; Kizilcec & Halawa, 2015) confirmed the general understanding of the role of students' intrinsic motivation in persistence and achievement in MOOCs.
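A minimal sketch of the kind of model behind such findings, on hypothetical data, would regress completion on contextual factors; the reviewed studies' exact specifications vary:

```python
# A minimal sketch, on hypothetical data, of a logistic regression relating
# contextual factors (age, intention to complete) to course completion.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "completed":           [1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "age":                 [34, 22, 41, 29, 37, 25, 31, 45, 28, 52, 39, 23],
    "intends_to_complete": [1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0],
})
model = smf.logit("completed ~ age + intends_to_complete", data=df).fit()
print(model.summary())  # coefficients, standard errors, and p-values
```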
Generalizing the findings with respect to course (or classroom) characteristics is rather challenging, given the diverse set of metrics used in the studies included in this systematic review. For example, Adamopoulos (2013) showed a negative effect of course difficulty, planned workload, and course duration (in weeks) on student retention. It is also interesting that Adamopoulos's (2013) study revealed a negative effect of self-paced courses, compared with more structured course designs, on successful course completion. On the other hand, Adamopoulos (2013) also showed that peer assessment (compared with automated feedback) and open textbooks had positive effects on successful course completion. Likewise, Konstan et al. (2015) showed that being in a specific course track (i.e., programming vs. concepts track7) significantly predicted course grade, while also being negatively associated with normalized knowledge gains. Finally, Brooks, Thompson, et al. (2015) revealed that whether students were paying for a certificate had minimal predictive power on course grades.
Although the original Reschly and Christenson model (see Supplemental Figure S1 in the online version of the journal) argues for the importance of understanding context through four factors, namely family (e.g., support for learning, goals and expectations), peers (e.g., educational expectations, shared common values, aspiration for learning), school (e.g., instruction and curriculum, support, management), and community (e.g., service learning), contemporary MOOC research suggests a somewhat different operationalization of the contextual elements. Therefore, for research on learning at scale, we argue that contextual factors should be observed
through students' demographic data (e.g., age, gender, level of education), class-
room characteristics (e.g., peers, course characteristics, course platform), and indi-
vidual students' needs and motivation (e.g., intent to complete a course, interests in
topic), as outlined in Figure 5. It should be noted here that "classroom characteris-
tics" primarily refer to the specific attributes of the given course and not to the
notion of the traditional (i.e., face-to-face) classroom.

Student Engagement
Given the purpose of the systematic review and the specified search criteria, it is unsurprising that 89.5% of the studies went beyond contextual factors (primarily demographic data) and included engagement-related metrics in predicting retention or achievement in MOOCs. A considerably smaller number of studies (21%), however, attempted to align the extracted metrics with existing educational variables. This lack of alignment resulted in a wide diversity of variables used to quantify student engagement in nonformal, digital educational settings.
Around 20% of the studies included in the review observed the total number of messages students contributed to a discussion forum during a course. Crossley et al. (2015), Engle et al. (2015), and Goldberg et al. (2015), as well as Vu et al. (2015), showed that students who actively participated in the discussion forum (i.e., created a high number of posts) were more likely to complete a course. However, predicting knowledge gain or exam score yielded somewhat different results. Specifically, Konstan et al. (2015) showed that the number of messages posted to a discussion forum was not significantly associated with an increase in knowledge gain. Similar findings were noted by X. Wang et al. (2015), who showed that there was no association between forum participation and knowledge gain. Finally, Vu et al. (2015) also showed that the overall activity in discussion forums predicted neither the number of quiz submissions nor submission scores. As explained by Vu et al. (2015), the relationship between the number of posts and assessment grade seemed to be one-directional: higher grades predicted the number of posts, but the number of posts did not necessarily predict the grade.
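To make concrete how such participation metrics are typically derived and tested, the following Python sketch aggregates forum posts per student and fits a logistic regression predicting completion. This is a generic illustration rather than the procedure of any particular study reviewed here, and all data, column names, and variable names are hypothetical.

    # Illustrative sketch (not from any reviewed study): relating forum-post
    # counts to course completion. All data and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical event log: one row per forum post.
    posts = pd.DataFrame({"student_id": [1, 1, 1, 2, 2, 3, 3, 4]})
    # Hypothetical roster with completion labels (1 = completed).
    roster = pd.DataFrame({"student_id": [1, 2, 3, 4, 5],
                           "completed":  [1, 0, 1, 0, 0]})

    # Behavioral metric: total number of posts per student.
    counts = posts.groupby("student_id").size().rename("n_posts")
    data = roster.join(counts, on="student_id").fillna({"n_posts": 0})

    # Logistic regression of completion on post count.
    X = sm.add_constant(data[["n_posts"]])
    model = sm.Logit(data["completed"], X).fit(disp=False)
    print(model.params)  # sign of n_posts indicates direction of association

Note that, as the Vu et al. (2015) result illustrates, a positive coefficient in such a model says nothing about the direction of causality between posting and achievement.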
A substantial number of the studies that measured various forms of student engagement also observed the extent to which interaction with course assessment (17.6%; e.g., the total number of assignment submissions, the count of correct quiz attempts) predicted learning outcomes or retention. In general, studies showed a significant and positive association between assignment and/or quiz interaction and successful course completion (Brooks, Thompson, et al., 2015; Konstan et al., 2015; Sharma et al., 2015; Ye et al., 2015). Nevertheless, Kennedy et al. (2015) revealed somewhat contradictory results, failing to demonstrate an association between the number of submitted assignments and course performance (i.e., final course grade).
To evaluate the quality of student-generated discourse and examine the association between students' cognitive behavior and learning, researchers mainly relied on content analysis methods to identify underlying cognitive processes. For example, analyzing cognitively relevant behaviors in discussion forum messages using Chi's Interactive, Constructive, Active, and Passive framework (Chi, 2009), X. Wang et al. (2015) showed that active and constructive cognitive processes could predict learning gains. Yang et al. (2015), on the other hand, demonstrated the importance of resolving confusion in the discussion forum to reduce student dropout; in detecting different confusion states, they relied on psychologically meaningful categories of words, extracted from online discussions using the Linguistic Inquiry and Word Count tool (Tausczik & Pennebaker, 2010), as one of the classification features. Joksimovic et al. (2015), as well as Dowell et al. (2015), exemplified how linguistic indices of text narrativity, cohesion, and syntactic simplicity extracted from online discussion transcripts predict learning outcomes and social positioning in various contexts.
Similar to studying cognitive processes, researchers primarily relied on content analysis methods when studying affect in MOOCs and the association between affect and course persistence or outcome. Thus, Tucker et al. (2014) revealed a strong negative correlation between student sentiment expressed in the discussion forum and average assignment grade, whereas the correlation between student sentiment and quiz grades was low and positive. Tucker et al. (2014) relied on a word-sentiment lexicon (Taboada, Brooke, Tofiloski, Voll, & Stede, 2011), whereas Adamopoulos (2013) used the Alchemy API to extract student sentiment from discussion forum messages. Adamopoulos (2013) further showed that student sentiment toward the course instructor, assignments, and course materials had a positive effect on course retention. Yang et al. (2015), on the other hand, highlighted the importance of resolving confusion (expressed in student forum posts) to increase retention. To detect confusion in students' contributions to the discussion forum, Yang et al. (2015) relied on Linguistic Inquiry and Word Count features (among others) and word categories that depict student affective processes, including positive and negative emotions.
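The general mechanics behind such lexicon-based analyses can be sketched in a few lines of Python. The word lists below are illustrative stand-ins for curated resources such as the Linguistic Inquiry and Word Count categories or a word-sentiment lexicon, not the actual dictionaries those tools use.

    # Minimal sketch of lexicon-based affect scoring for forum posts.
    # The word lists are toy stand-ins for curated lexicons (e.g., LIWC).
    import re

    LEXICON = {
        "positive":  {"great", "thanks", "clear", "helpful"},
        "negative":  {"boring", "hate", "useless"},
        "confusion": {"confused", "stuck", "unclear", "lost"},
    }

    def affect_scores(post):
        """Return the proportion of tokens falling in each affect category."""
        tokens = re.findall(r"[a-z']+", post.lower())
        n = max(len(tokens), 1)
        return {category: sum(token in words for token in tokens) / n
                for category, words in LEXICON.items()}

    print(affect_scores("I am totally confused, the assignment is unclear"))
    # -> {'positive': 0.0, 'negative': 0.0, 'confusion': 0.25}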
Through the analysis of the results related to our second research question, we observed a large diversity of metrics used to understand learning and to predict student persistence and/or course outcomes. Given the large scale and the variety of data sources, it seems that the first generation of MOOC research (Reich, 2015) primarily focused on understanding "what works" in these new settings in terms of supporting learning activities and increasing retention. Another reason for such a diversity of metrics (Supplemental Table S3 in the online version of the journal), however, presumably lies in the fact that there is no single, commonly accepted analytical method or framework for studying learning in nonformal, digital educational settings. Failing to provide a common interpretation of the variables used to understand learning can lead to limited generalizability and low interpretability of results.
Supplemental Table S3 (available in the online version of the journal) provides a complete list of the metrics, extracted from the studies included in this systematic review, that have been used to model learning in nonformal learning settings. In the following text (see section "Conceptualizing Learning in MOOCs"), we also provide a rationale for conceptualizing learning in MOOCs and definitions of the constructs that comprise the adopted model of the association between context, engagement, and proximal learning outcomes.

TABLE 1
Overview of statistical approaches reported in reviewed publications

Statistical approach      Number of studies used    Proportion of studies used
Machine learning          13                        0.34
Descriptive                9                        0.24
Correlational              7                        0.18
Regression                 7                        0.18
Chi-square                 7                        0.18
MANOVA/ANOVA               6                        0.16
Survival analysis          5                        0.13
Linear-mixed models        3                        0.08
Other                      5                        0.13

Note. ANOVA = analysis of variance; MANOVA = multivariate analysis of variance. Proportions sum to more than 1 because several studies reported more than one statistical approach.

Following the results reported above, we argue that engagement in MOOCs should be observed through academic, behavioral, cognitive, and affective dimensions, as further discussed in the remainder of this article. Although individual studies propose different operationalizations, those operationalizations are tied to the characteristics of the particular courses and platforms examined. An overarching framework for research on MOOCs, and on the association between learner behavior and outcomes more generally, such as the one proposed in this paper, would allow researchers to compare and contrast different studies and bring greater scientific rigor to a body of work encompassing diverse platforms, research designs, and contexts.

Association Between Engagement and Learning Outcomes (Research Question 3)
The systematic literature review also revealed considerable differences in the statistical approaches used to relate engagement metrics and learning outcomes (Table 1), mirroring the variability in how outcomes and engagement were measured (see the preceding sections). The included papers applied machine learning techniques (e.g., random forest or related classifiers), analysis of variance or multivariate analysis of variance, survival analysis, and mixed models, among others. Additional papers used approaches that did not fit these common categories and thus were classified as other:
relational event modeling (n = 1), discrete choice model (i.e., random utility
model or latent regression model; n = 1), or a structural equation model (n = 1).
A few insights can be gleaned from Table 1. The most commonly adopted analysis methods were machine learning techniques. Of the 13 papers that used machine learning approaches, only 38% also reported another statistical method. The use of machine learning suggests that a common goal among the papers was to build predictive (rather than explanatory) models. Indeed, predicting students' success in MOOCs is highly relevant for designing interventions. It is also important to point out that correlational and regression techniques were commonly used (36% combined). This may suggest that another important goal among these papers was not only to build predictive models but also to explain variance in the dependent variable(s) of interest. Taken together, the statistical methods were quite diverse, perhaps targeting different theoretical or more applied goals.
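The predictive versus explanatory distinction can be made concrete with a short sketch that fits both kinds of model to the same data: a cross-validated random forest judged purely on predictive accuracy, and a logistic regression read through its coefficients. The data-generating process and the two engagement features below are assumptions made for illustration only.

    # Sketch contrasting predictive and explanatory modeling of completion.
    # Data are synthetic; the two features (posts, videos) are illustrative.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    X = rng.poisson(lam=[3.0, 5.0], size=(n, 2)).astype(float)  # posts, videos
    logit = -2.0 + 0.4 * X[:, 0] + 0.2 * X[:, 1]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # completion labels

    # Predictive goal: how well can completion be predicted on held-out data?
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean())

    # Explanatory goal: how is each metric associated with completion?
    res = sm.Logit(y.astype(int), sm.add_constant(X)).fit(disp=False)
    print(res.params)  # intercept, posts, videos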

Conceptualizing Learning in MOOCs


This systematic review of the MOOC research literature involved two related
aims. The first aim focused on the development of a summary of the metrics
commonly used to measure and model learning in nonformal educational set-
tings. The second aim was to extend these findings and establish a conceptual
model that would distinguish between the factors affecting students' learning in
a MOOC context. Building on Reschly and Christenson's (2012) model of the
associations between context, engagement, and student outcomes, we further
redefined and reoperationalized these constructs (i.e., context, engagement, and
outcome) for research on MOOCs. In so doing, we relied on the insights obtained
from the systematic literature review to understand how a diverse set of learn-
ing-related constructs is measured in MOOCs, and how these constructs could
be linked to an existing model of learning previously validated in educational
research. Such a model would offer the possibility of comparing the factors that shape learning in nonformal, digital educational settings with those in formal (e.g., traditional face-to-face or online) formats of learning. Specifically, such a model would enable studying whether, and to what extent, the factors that contribute to learning differ across educational settings (e.g., face-to-face, online, and MOOCs). Finally, given that the majority of studies in this review, and in MOOC research in general according to Reich (2015), observe some form of student engagement in predicting course outcome and/or persistence, it seems reasonable to reoperationalize this particular concept for the study of MOOCs.
In the context of MOOCs, our systematic review indicated the mainly exploratory nature of the existing research that investigates the association between various forms of student engagement (or behavior) and learning, defined through learning outcomes or course persistence. In so doing, researchers often failed to account adequately for existing educational frameworks that would allow for more salient interpretations of the results. Even when relying on existing learning theories, researchers generally did not account for the different learning context or for the greater diversity of students observed in open, nonformal educational settings (as compared with online or face-to-face settings).

To bring coherence to the diverse analyses of learning-related constructs in MOOCs (see section "Results and Discussion"), we framed our inquiry around Reschly and Christenson's (2012) work on dropout prevention and enhancing learning in traditional classroom settings. Similar to Reschly and Christenson (2012), we recognize engagement as a twofold construct, both a process and an outcome, that mediates the association between a context (e.g., student intent, classroom settings) and a relevant learning outcome (Figure 5). Moreover, we posit that student engagement in MOOCs has a mediating role between contextual factors and desired learning outcomes. However, our literature review highlights some shortcomings of Reschly and Christenson's original model when applied to MOOCs. For instance, the model is designed to address systems in which children acquire literacies and content while they undergo developmental processes. In that sense, Reschly and Christenson's range of contextual variables is geared toward that particular context, especially in relation to aspects such as learner agency, learner intent, and prior knowledge. In a similar manner, Reschly and Christenson's notion of outcomes is not suited to a learner-driven learning process in which a learner has the power to decide to which end to engage with the learning activities, as well as when to disengage. Learning outcomes in Reschly and Christenson's model address the role of secondary education and how it prepares students for future life, whereas in MOOCs the learning outcomes may have another level of granularity. Finally, although engagement may be defined similarly in digital and face-to-face settings, the ways of gleaning information about it in digital environments, and at scale, require reoperationalization. Therefore, following the insights obtained from the systematic literature review, we propose a novel engagement model applicable in the context of learning with MOOCs that differs considerably from Reschly and Christenson's, primarily in the way the model constructs have been operationalized.
The first and foremost difference from Reschly and Christenson's model is the conceptualization of each of the components of the model proposed in this article (see Supplemental Figure S3 in the online version of the journal). Specifically, whereas the original model observes family, peers, school, and community as the main contextual determinants, for MOOC research we defined contextual factors as being composed of (a) demographic information, such as age, gender, or level of education (Goldberg et al., 2015; Heutte et al., 2014); (b) classroom structure, for example, course platform and course characteristics (Adamopoulos, 2013); and (c) individual needs, for example, students' intentions (Brooks, Stalburg, et al., 2015; Kizilcec & Halawa, 2015).
Despite an extensive body of research on student engagement in various educational settings, and a prevailing understanding of its importance, there is no clear consensus on what comprises engagement (Christenson, Reschly, & Wylie, 2012). As noted in the Christenson et al. (2012) review, researchers most commonly refer to two subtypes (i.e., participatory and affective) or include cognitive engagement as a third subtype. However, there are notable differences in how the various subtypes of engagement have been operationalized in traditional educational contexts. Thus, the lack of agreement on how engagement has been defined and operationalized in MOOCs (see section "Providing Means for Defining Context and Engagement Types in Learning at Scale") perhaps comes as no surprise.

Nevertheless, we posit that an attempt to establish a common understanding of how engagement is measured and interpreted in the context of learning in nonformal, digital educational settings is a necessary step toward better understanding learning in this particular context.
Although Reschly and Christenson (2012) observed engagement in traditional learning settings, the theoretical and practical stances considered in conceptualizing their engagement model seem to align with the general understanding of what the important factors of learning in MOOCs are. Specifically, the multidimensional nature of the variables observed when assessing learning in nonformal educational settings (see Supplemental Table S3 in the online version of the journal) supports the necessity of multidimensional constructs that include different types of learner activity (e.g., Konstan et al., 2015; Sinha & Cassell, 2015), emotions (e.g., Crossley et al., 2015; Yang et al., 2015), or cognition (Dowell et al., 2015; X. Wang et al., 2015). Finally, similar to Kizilcec and Halawa (2015) and Brooks, Stalburg, et al. (2015), Reschly and Christenson (2012) argue for the importance of considering a specific learning context (e.g., peers or school) and student agency. In spite of these similarities, the operationalization of student agency in Reschly and Christenson's (2012) model is somewhat different from what has been considered in the MOOC research included in this study. Reschly and Christenson (2012) draw on the assumption that "students are able to report accurately on their engagement and environments" (p. 9). Although we agree that "student perspective is essential for change in student learning and behavior" (Reschly & Christenson, 2012, p. 9), we aim to extract the majority of evidence of student engagement from the data stored within the learning platforms used to deliver courses at scale.
Reschly and Christenson's model was designed to analyze formal educational settings. Thus, we further review the consistency of their model's categories in relation to the metrics observed in MOOC studies. First, we find that academic engagement in MOOCs aligns with the work of Appleton, Christenson, Kim, and Reschly (2006) and Reschly and Christenson (2012), and refers to time spent on course activities (e.g., viewing pages, engaging with quizzes and assignments), the number of days (weeks, hours) of engagement with a course, assessment (e.g., homework and quiz) completion rates and accuracy, credit toward course completion, and pre- and/or posttest results (e.g., Boyer & Veeramachaneni, 2015; Li et al., 2015).
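Time spent on course activities deserves a caveat: it is rarely logged directly and is usually estimated from clickstream timestamps, and the estimation choices matter (Kovanovič et al., 2015). A minimal sketch of one common heuristic, summing the gaps between a student's consecutive events while capping each gap at an inactivity threshold, follows; the data and the 30-minute cap are illustrative assumptions.

    # One common heuristic for estimating time on task from clickstream data:
    # sum the gaps between consecutive events, capping long gaps at a
    # threshold. The events and the 30-minute cap are illustrative.
    import pandas as pd

    events = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2],
        "timestamp": pd.to_datetime([
            "2015-03-01 10:00", "2015-03-01 10:12", "2015-03-01 13:00",
            "2015-03-02 09:00", "2015-03-02 09:05",
        ]),
    })

    CAP = pd.Timedelta(minutes=30)  # gaps above this are treated as breaks

    def time_on_task(timestamps):
        gaps = timestamps.sort_values().diff().dropna()
        return gaps.where(gaps <= CAP, CAP).sum()  # cap each long gap

    print(events.groupby("student_id")["timestamp"].apply(time_on_task))
    # student 1: 12 min + one capped 30-min gap; student 2: 5 min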
Second, our view of behavioral engagement aligns with the original model of engagement (Reschly & Christenson, 2012). A common definition of behavioral engagement

    draws on the idea of participation; it includes involvement in academic and social or extracurricular activities and is considered crucial for achieving positive academic outcomes and preventing dropping out. (Fredricks, Blumenfeld, & Paris, 2004, p. 60)

For MOOCs, this form of engagement can still be defined through participation in discussion forums, viewing lectures, following course activities, or the number of times a student accessed course wiki pages (e.g., Li et al., 2015; Santos et al., 2014; Sinha & Cassell, 2015).

Third, cognitive engagement usually refers to students' motivational goals and self-regulated learning skills (Christenson et al., 2012; Fredricks et al., 2004; Reschly & Christenson, 2012). In the context of learning with MOOCs, research has thus far primarily focused on linguistic indicators (e.g., text narrativity or cohesion) of student cognitive engagement, obtained from learner-generated artefacts (Dowell et al., 2015; Joksimovič et al., 2015; X. Wang et al., 2015). The rationale behind this subtype of engagement is grounded in the premise that learning and understanding in computer-mediated settings are primarily expressed through the artefacts students generate in the learning process (Goodyear, 2002; Jones, 2008). Thus, studying learning in MOOCs should account for the quality of discourse as a proxy for students' cognitive engagement.
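The tools used for this purpose compute large sets of validated indices. Purely to fix ideas, the sketch below derives two crude proxies of this kind, lexical diversity and mean sentence length, from a single post; real studies rely on validated multi-index instruments rather than measures this simple.

    # Crude illustrations of linguistic indices of the kind used as proxies
    # for cognitive engagement; validated tools compute far richer indices.
    import re

    def linguistic_indices(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        tokens = re.findall(r"[A-Za-z']+", text.lower())
        return {
            "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
            "mean_sentence_length": len(tokens) / max(len(sentences), 1),
        }

    post = "The gradient explains the update. The update follows the gradient."
    print(linguistic_indices(post))
    # -> {'type_token_ratio': 0.5, 'mean_sentence_length': 5.0}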
Fourth, Reschly and Christenson's (2012) model of engagement considers students' affective reactions in the classroom, school identification, valuing of learning, and sense of belonging as factors that characterize affective engagement. However, drawing on the premise that language represents the primary means of communication in computer-mediated interactions, as well as on the lack of social cues that characterizes learning in nonformal, digital educational settings, MOOC research primarily relies on linguistic indices in assessing affective engagement (e.g., positive or negative emotions) in MOOCs (e.g., Adamopoulos, 2013; Tucker et al., 2014). Nevertheless, there has been significant recent work on assessing student emotions and affect using (arguably) more advanced approaches (e.g., Baker, D'Mello, Rodrigo, & Graesser, 2010; D'Mello, Dowell, & Graesser, 2009).
Finally, failing to account for contextual determinants of learning in general (Appleton et al., 2006), or for the contextual factors of online and distance education in particular (Gaševič et al., 2016; Joksimovič et al., 2016), could lead to misinterpretations of the association between engagement and learning, producing interventions that might not result in the intended outcomes. In defining contextual variables, our understanding of the factors that frame learning in MOOCs is defined through demographic data about course participants, classroom settings (e.g., peers and course design), and students' individual needs (e.g., intent to complete and interest in the topic; Adamopoulos, 2013; Brooks, Stalburg, et al., 2015; Kizilcec & Halawa, 2015).
Course-level learning outcomes are the most commonly assessed in current MOOC research. They are also the most developed, reaching beyond the focus on academic achievement to include social and affective aspects. Thus, knowledge mastery as an outcome is measured through graded assessment; alternative metrics, such as capturing knowledge or skill change, are also employed. Within the social aspect, however, course-level learning outcomes are limited to engagement with others; they do not capture the quality of knowledge construction within dialogue, an increased sense of belonging, or identity formation. Affective course-level outcomes are limited to course satisfaction only. In contrast, Reschly and Christenson's model defined affective learning outcomes as self-awareness of feelings, emotional regulation, and conflict-resolution skills.
Neither intermediate nor postcourse outcomes are a main focus of current MOOC research. This is too constraining, as such outcomes appear to be common in nonformal and open settings. For instance, intermediate learning outcomes are of relevance to the vast numbers of just-in-time learners sampling parts of the content. Current approaches to the identification of intermediate learning outcomes in MOOC research are limited to academic performance, as the majority of metrics focus on either predicting module outcomes or detecting when a student stops engaging with the course. Reschly and Christenson's model, however, argues that engagement can be seen both as a process and as an outcome. Thus, it could be hypothesized that engagement metrics could serve as indicators of intermediate learning outcomes for those learners not interested in course completion.
When it comes to postcourse outcomes, exemplified as employability and productive citizenry in the original model, they have not been the subject of much MOOC research, with the exception of a focus on employability (E. Y. Wang & Baker, 2015). Again, the lack of focus beyond assessment is limiting, as better measures of postcourse outcomes could enrich stakeholders' understanding of the wider impact of MOOCs and, finally, help evaluate the value of producing them.
Conclusions

MOOC research has demonstrated significant advances in a relatively short time frame (Raffaghelli et al., 2015; Reich, 2015). Nevertheless, contemporary MOOC research almost unequivocally argues that existing results lack generalizability and that the field has failed to investigate the factors that contribute to learning in nonformal educational settings (DeBoer et al., 2014; Evans et al., 2016). To advance research in nonformal, digital educational settings, there is an imperative to shift the focus from observational studies and introduce more experimental research approaches across different domains and course designs (Reich, 2015). Moreover, we agree with Reich's (2015) position that future MOOC research should build on existing research frameworks, evaluated across educational contexts, in order to provide a basis for comparison between learning in MOOCs and other (more traditional) settings.
Our contribution to the development of the next generation of research in nonformal, digital educational settings is twofold. First, we conducted a systematic literature review of the existing body of MOOC research that tries to model learning in this particular setting. We identified a wide range of metrics used to predict learning and measure student engagement across various contexts (e.g., centralized within a single platform, or distributed across various social media). Nevertheless, whether referred to as discussion behavior (X. Wang et al., 2015), behavior (Ramesh et al., 2014a, 2014b), or engagement (Santos et al., 2014; Sinha & Cassell, 2015; Tucker et al., 2014), engagement-related metrics tended to be observed from a single perspective, operationalized through students' participation in different activities. Specifically, researchers tended to measure engagement as a form of participation in discussion forums (quantity of contribution; Vu et al., 2015; X. Wang et al., 2015), watching video lectures (Li et al., 2015), or participating in course assessment activities (Whitehill et al., 2015; Ye et al., 2015). It is also noticeable that the definition of a course outcome is dominated by a formal-education mind-set in the majority of studies included in this review (Appleton et al., 2006). Although various researchers have argued for the importance of aligning learning outcomes with students' intentions and interest in completing a course, only a few studies (e.g., Dowell et al., 2015; Joksimovic et al., 2015) made a considerable effort toward the operationalization of social or affective learning outcomes (Figure 5).
The second part of our contribution is framed around the redefinition of an existing educational framework to account for the specific aspects of learning in MOOCs. Specifically, following Reschly and Christenson's (2012) research, we proposed a model for studying the association between context, student engagement, and learning outcomes (Figure 5). We further suggest that engagement in MOOCs, and in learning at scale in general, should be observed as a multidimensional construct composed of academic, behavioral, cognitive, and affective engagement. Such a definition should bring coherence to MOOC research, providing the common understanding of what engagement actually is, and of how it should be measured in this complex learning context, that the existing studies seem to lack. We also provided a list of metrics used to operationalize the elements of the proposed model (see Supplemental Table S2 in the online version of the journal). However, we by no means argue that this is a complete list of the metrics used to measure learning (or engagement) in MOOCs.
We contend that to advance MOOC research and allow for comparisons with different (more traditional) forms of education, researchers should align the metrics used for assessing learning with the proposed model. A generally accepted conceptualization of engagement would allow for more comprehensive insights into the factors that influence learning with MOOCs, as well as into how these factors generalize across different platforms or compare with diverse contexts (such as traditional online or face-to-face learning; DeBoer et al., 2014). Such a conceptualization would also allow for moving beyond observing student "click data" and exploring how the quantity and quality of interactions with the course content, peers, and teaching staff predict course outcome and persistence, thus providing a more salient connection with existing learning theories and practices (Dawson, Mirriahi, & Gasevic, 2015; Gaševic et al., 2016; Wise & Shaffer, 2015). Nevertheless, we also acknowledge the lack of metrics for some aspects of the model, that is, social and affective learning outcomes, which require further conceptualization in the context of learning at scale. Recent advances in (multimodal) learning analytics research provide a promising avenue for investigating students' cognition, metacognition, emotion, and motivation using multimodal data, such as eye gaze behaviors, facial expressions of emotions, heart rate, and electrodermal activity, to name a few (Azevedo, 2015; D'Mello, Dieterle, & Duckworth, 2017; Molenaar & Chiu, 2015). Moreover, a systematic literature review of the qualitative research conducted in the field would provide complementary insights into the findings introduced here. Being designed to help understand the process of learning in rich detail, qualitative studies could potentially provide thick descriptions of the various aspects of social and affective engagement to accompany the findings obtained from quantitative research.
Our future research will examine the hypothesized associations between context, student engagement, and learning outcomes. The proposed model (Figure 5) assumes a mediating effect of student engagement between contextual variables and the desired outcome, in line with the original model proposed by Reschly and Christenson (2012). Reschly and Christenson (2012) also observed affective and cognitive engagement as mediating factors in the development of behavioral and academic engagement (as indicated by the arrows from cognitive and affective to academic and behavioral engagement). However, given the proposed operationalization, this association may not hold in our model. It seems reasonable to expect the direction of the mediating effect to run from behavioral toward cognitive and affective engagement, simply because revealing traces of cognitive and affective engagement (as currently operationalized) requires that students first engage with course material and peer learners (i.e., reveal traces of behavioral engagement). To examine these assumptions, we aim to create statistical models that would allow us to determine the validity of the hypothesized relations.
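One minimal way to probe such a mediation hypothesis is the classic regression-based decomposition of a total effect into direct and indirect components; the sketch below applies it to synthetic data. A full analysis would instead use structural equation modeling with bootstrapped confidence intervals for the indirect effect, and the variable names here are placeholders.

    # Minimal regression-based mediation check on synthetic data:
    # context -> engagement -> outcome. Variable names are placeholders.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    context = rng.normal(size=n)                     # e.g., prior education
    engagement = 0.5 * context + rng.normal(size=n)  # hypothesized mediator
    outcome = 0.6 * engagement + 0.1 * context + rng.normal(size=n)

    # Path a: context -> engagement.
    a = sm.OLS(engagement, sm.add_constant(context)).fit().params[1]
    # Paths b and c': engagement -> outcome, controlling for context.
    fit = sm.OLS(outcome, sm.add_constant(
        np.column_stack([engagement, context]))).fit()
    b, direct = fit.params[1], fit.params[2]

    print(f"indirect effect (a*b) = {a * b:.3f}, direct effect = {direct:.3f}")
    # expected: indirect near 0.5 * 0.6 = 0.30, direct near 0.10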
The original model, as proposed by Reschly and Christenson (2012), also assumes the Matthew Effect (Ceci & Papierno, 2005) between the contextual factors and engagement, "wherein as students are engaged, contexts provide feedback and support that promote ever greater engagement" (Reschly & Christenson, 2012, p. 9), as indicated by the arrows pointing from context to engagement and vice versa. We posit that in the context of learning at scale, and MOOCs in particular, this association would still hold. Such an implication can be inferred from the existing research on self-regulated learning. Specifically, Winne and Hadwin's (1998) model of self-regulated learning posits that conditions (i.e., learning experiences, domain knowledge, motivation, intents), operationalized here through the contextual variables, influence both "standards as well as the actual operations a person performs" (Greene & Azevedo, 2007, p. 336). Through cognitive evaluation, students compare products and operations (here operationalized through the four engagement types) to determine whether a learning goal has been achieved or whether further adjustments to the cognitive conditions should be applied, thus completing a recursive model of self-regulated learning (Greene & Azevedo, 2007; Winne & Hadwin, 1998).

Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful and con-
structive comments that greatly contributed to improving the final version of the article.
They would also like to thank the editor for the generous comments and support during the
review process.

Notes

1. http://news.mit.edu/2012/edx-faq-050212
2. http://bit.ly/learning-at-scale-supplement
3. http://www.strobe-statement.org/index.php?id=strobe-home
4. Decentralized MOOCs that utilize various platforms to foster interactions between learners, focused on knowledge construction, where the teachers' role is primarily confined to early instructional design and facilitation.
5. Focused on content delivery to large audiences, utilizing a single platform such as Coursera or edX, where the learning process is teacher-centered.
6. Several studies did not report precise information about the number of participants included, or did not report the number of students at all; thus, we noted "more than" a certain number of participants or noted "NR."
7. The course design in the Konstan et al. (2015) study included two tracks: (a) a programming track that included assignments and all the content and (b) a concepts track that focused on learning programming concepts, without programming assignments and with only a few video lectures related to specific programming tasks.

ORCID iD

S. Joksimovič https://orcid.org/0000-0001-6999-3547
V. Kovanovič https://orcid.org/0000-0001-9694-6033

References

Adamopoulos, P. (2013). What makes a great MOOC? An interdisciplinary analysis of student retention in online courses. In International Conference on Information Systems (ICIS 2013): Reshaping society through information systems design (Vol. 5, pp. 4720-4740). Retrieved from http://aisel.aisnet.org/icis2013/proceedings/BreakthroughIdeas/13/
Appleton, J. J., Christenson, S. L., Kim, D., & Reschly, A. L. (2006). Measuring cognitive and psychological engagement: Validation of the student engagement instrument. Journal of School Psychology, 44, 427-445. doi:10.1016/j.jsp.2006.04.002
Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist, 50, 84-94. doi:10.1080/00461520.2015.1004069
Baker, R. S. J., D'Mello, S. K., Rodrigo, M. M. T., & Graesser, A. C. (2010). Better to be frustrated than bored: The incidence, persistence, and impact of learners' cognitive-affective states during interactions with three different computer-based learning environments. International Journal of Human-Computer Studies, 68, 223-241. doi:10.1016/j.ijhcs.2009.12.003
Bergner, Y., Kerr, D., & Pritchard, D. E. (2015). Methodological challenges in the analysis of MOOC data for exploring the relationship between discussion forum views and learning outcomes. Paper presented at the 8th International Conference on Educational Data Mining, Madrid, Spain. Retrieved from http://www.educationaldatamining.org/EDM2015/uploads/papers/paper_61.pdf
Boyer, S., & Veeramachaneni, K. (2015, March). Transfer learning for predictive models in massive open online courses. In C. Conati, N. Heffernan, A. Mitrovic, & M. F. Verdejo (Eds.), Artificial intelligence in education (pp. 54-63). Retrieved from http://link.springer.com/chapter/10.1007/978-3-319-19773-9_6
Brooks, C., Stalburg, C., Dillahunt, T., & Robert, L. (2015, March). Learn with friends: The effects of student face-to-face collaborations on massive open online course activities. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (L@S 2015) (pp. 241-244). New York, NY: ACM. doi:10.1145/2724660.2728667
Brooks, C., Thompson, C., & Teasley, S. (2015, March). Who you are or what you do: Comparing the predictive power of demographics vs. activity patterns in Massive Open Online Courses (MOOCs). In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (pp. 245-248). New York, NY: ACM. doi:10.1145/2724660.2728668
Ceci, S. J., & Papierno, P. B. (2005). The rhetoric and reality of gap closing: When the "have-nots" gain but the "haves" gain even more. American Psychologist, 60, 149-160.
Champaign, J., Colvin, K. F., Liu, A., Fredericks, C., Seaton, D., & Pritchard, D. E. (2014, March). Correlating skill and improvement in 2 MOOCs with a student's time on tasks. In Proceedings of the First (2014) ACM Conference on Learning at Scale (L@S 2014) (pp. 11-20). New York, NY: ACM. doi:10.1145/2556325.2566250
Chi, M. T. H. (2009). Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73-105. doi:10.1111/j.1756-8765.2008.01005.x
Christenson, S. L., Reschly, A. L., & Wylie, C. (2012). Handbook of research on student engagement. New York, NY: Springer.
Coffrin, C., Corrin, L., de Barba, P., & Kennedy, G. (2014, March). Visualizing patterns of student engagement and performance in MOOCs. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (LAK '14) (pp. 83-92). New York, NY: ACM. doi:10.1145/2567574.2567586
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
Crossley, S., McNamara, D. S., Baker, R., Wang, Y., Paquette, L., Barnes, T., & Bergner, Y. (2015). Language to completion: Success in an educational data mining massive open online class. In Proceedings of the 8th International Conference on Educational Data Mining (EDM '15) (pp. 388-391). Retrieved from http://www.educationaldatamining.org/EDM2015/uploads/papers/paper_153.pdf
Dawson, S., Mirriahi, N., & Gasevic, D. (2015). Importance of theory in learning analytics in formal and workplace settings. Journal of Learning Analytics, 2(2), 1-4.
DeBoer, J., Ho, A. D., Stump, G. S., & Breslow, L. (2014). Changing "course": Reconceptualizing educational variables for massive open online courses. Educational Researcher, 43(2), 74-84. doi:10.3102/0013189X14523038
D'Mello, S., Dieterle, E., & Duckworth, A. (2017). Advanced, analytic, automated (AAA) measurement of engagement during learning. Educational Psychologist, 52, 104-123. doi:10.1080/00461520.2017.1281747
D'Mello, S., Dowell, N., & Graesser, A. (2009). Cohesion relationships in tutorial dialogue as predictors of affective states. In Proceedings of the 2009 Conference on Artificial Intelligence in Education: Building Learning Systems That Care: From Knowledge Representation to Affective Modelling (pp. 9-16). Amsterdam, Netherlands: IOS Press. Retrieved from http://dl.acm.org/citation.cfm?id=1659457
Dowell, N., Skrypnyk, O., Joksimovič, S., Graesser, A. C., Dawson, S., Gaševič, D., . . . Kovanovič, V. (2015, June). Modeling learners' social centrality and performance through language and discourse. Paper presented at the 8th International Conference on Educational Data Mining, Madrid, Spain. Retrieved from https://eric.ed.gov/?id=ED560532
Engle, D., Mankoff, C., & Carbrey, J. (2015). Coursera's introductory human physiology course: Factors that characterize successful completion of a MOOC. International Review of Research in Open and Distributed Learning, 16(2), 46-68. doi:10.19173/irrodl.v16i2.2010
Eraut, M. (2000). Non-formal learning and tacit knowledge in professional work. British Journal of Educational Psychology, 70(1), 113-136. doi:10.1348/000709900158001
Evans, B. J., Baker, R. B., & Dee, T. S. (2016). Persistence patterns in massive open online courses (MOOCs). Journal of Higher Education, 87, 206-242. doi:10.1353/jhe.2016.0006
Eynon, R. (2014, April). Conceptualising interaction and learning in MOOCs (MOOC Research Initiative Final Report). Retrieved from http://www.Moocresearch.com/wpcontent/uploads/2014/06/C9146_EYNONMOOC-ResearchInitiativeEynon9146Final.Pdf
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59-109. doi:10.3102/00346543074001059
Gaševič, D., Dawson, S., Rogers, T., & Gaševič, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. Internet and Higher Education, 28, 68-84. doi:10.1016/j.iheduc.2015.10.002
Gillani, N., & Eynon, R. (2014). Communication patterns in massively open online courses. Internet and Higher Education, 23, 18-26. doi:10.1016/j.iheduc.2014.05.004
Glass, C. R., Shiokawa-Baklan, M. S., & Saltarelli, A. J. (2016). Who takes MOOCs? New Directions for Institutional Research, 2015(161), 41-55. doi:10.1002/ir.20153
Goldberg, L. R., Bell, E., King, C., O'Mara, C., McInerney, F., Robinson, A., & Vickers, J. (2015). Relationship between participants' level of education and engagement in their completion of the Understanding Dementia Massive Open Online Course. BMC Medical Education, 15(1), 1-7. doi:10.1186/s12909-015-0344-z
Goodyear, P. (2002). Psychological foundations for networked learning. In C. Steeples & C. Jones (Eds.), Networked learning: Perspectives and issues (pp. 49-75). New York, NY: Springer. doi:10.1007/978-1-4471-0181-9_4
Greene, J. A., & Azevedo, R. (2007). A theoretical review of Winne and Hadwin's model of self-regulated learning: New perspectives and directions. Review of Educational Research, 77, 334-372. doi:10.3102/003465430303953
Greene, J. A., Oswald, C. A., & Pomerantz, J. (2015). Predictors of retention and achievement in a massive open online course. American Educational Research Journal, 52(5), 1-31. doi:10.3102/0002831215584621
Halatchliyski, I., Moskaliuk, J., Kimmerle, J., & Cress, U. (2014). Explaining authors' contribution to pivotal artifacts during mass collaboration in the Wikipedia's knowledge base. International Journal of Computer-Supported Collaborative Learning, 9, 97-115. doi:10.1007/s11412-013-9182-3
Hartnett, M., George, A. S., & Dron, J. (2011). Examining motivation in online distance learning environments: Complex, multifaceted and situation-dependent. International Review of Research in Open and Distributed Learning, 12(6), 20-38. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1030/1954
Haywood, J., Aleven, V., Kay, J., & Roll, I. (2016). L@S '16: Proceedings of the Third ACM Conference on Learning @ Scale. New York, NY: Association for Computing Machinery.
Heutte, J., Kaplan, J., Fenouillet, F., Caron, P.-A., & Rosselle, M. (2014). MOOC user persistence. In L. Uden, J. Sinclair, Y.-H. Tao, & D. Liberona (Eds.), Learning technology for education in cloud: MOOC and big data (pp. 13-24). New York, NY: Springer.
Illeris, K. (2007). How we learn: Learning and non-learning in school and beyond. Oxford, England: Routledge.
Jiang, S., Fitzhugh, S. M., & Warschauer, M. (2014). Social positioning and performance in MOOCs. In Proceedings of the Workshops held at Educational Data Mining 2014, co-located with the 7th International Conference on Educational Data Mining (EDM 2014) (Vol. 1183, p. 14). Retrieved from http://ceur-ws.org/Vol-1183/gedm_paper08.pdf
Jiang, S., Warschauer, M., Williams, A. E., O'Dowd, D., & Schenke, K. (2014). Predicting MOOC performance with week 1 behavior. In Proceedings of the 7th International Conference on Educational Data Mining (EDM '14) (pp. 273-275). Retrieved from http://educationaldatamining.org/EDM2014/uploads/procs2014/shortpapers/273_EDM-2014-Short.pdf
Joksimovic, S., Dowell, N., Skrypnyk, O., Kovanovic, V., Gaševič, D., Dawson, S., & Graesser, A. C. (2015, March). How do you connect? Analysis of social capital accumulation in connectivist MOOCs. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (LAK '15) (pp. 64-68). New York, NY: ACM. doi:10.1145/2723576.2723604
Joksimovic, S., Manataki, A., Gaševič, D., Dawson, S., Kovanovic, V., & de Kereki, I. F. (2016, April). Translating network position into performance: Importance of centrality in different network configurations. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK '16) (pp. 314-323). New York, NY: ACM. doi:10.1145/2883851.2883928
Jones, C. (2008). Networked learning: A social practice perspective. In Proceedings of the 6th International Conference on Networked Learning (pp. 616-623). Retrieved from http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2008/abstracts/PDFs/Jones_616-623.pdf
Jordan, K. (2015a). Massive open online course completion rates revisited: Assessment, length and attrition. International Review of Research in Open and Distributed Learning, 16, 341-358. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/2112
Jordan, K. (2015b, February 13). Synthesising MOOC completion rates [Web log post]. MoocMoocher. Retrieved from https://moocmoocher.wordpress.com/2013/02/13/synthesising-mooc-completion-rates/
Kane, M. (1992). An argument-based approach to validation. Psychological Bulletin, 112, 527-535. doi:10.1037/0033-2909.112.3.527

Kane, M. (2006). Validation. In R. Brennan (Ed.), Educational measurement (4th ed.,


pp. 17-64). Westport, CT: American Council on Education and Praeger.
Kennedy, G., Coffrin, C., de Barba, P., & Corrin, L. (2015, March). Predicting success:
How learners' prior knowledge, skills and activities predict MOOC performance. In
Proceedings of the Fifth International Conference on Learning Analytics and
Knowledge (LAK' 15) (pp. 136-140). New York, NY: ACM. doi:10.1145/
2723576.2723593
Kizilcec, R. F., & Halawa, S. (2015, March). Attrition and achievement gaps in online
learning. In Proceedings of the Second ACM Conference on Learning @ Scale
(L@S'15) (pp. 57-66). New York, NY: ACM. doi: 10. 1145/2724660.2724680
Kizilcec, R. F., & Schneider, E. (2015). Motivation as a lens to understand online learners: Toward data-driven design with the OLEI scale. ACM Transactions on Computer-Human Interaction, 22(2), 6:1-6:24. doi:10.1145/2699735
Koedinger, K. R., Kim, J., Jia, J. Z., McLaughlin, E. A., & Bier, N. L. (2015, March). Learning is not a spectator sport: Doing is better than watching for learning from a MOOC. In Proceedings of the Second ACM Conference on Learning @ Scale (pp. 111-120). New York, NY: ACM. doi:10.1145/2724660.2724681
Konstan, J. A., Walker, J. D., Brooks, D. C., Brown, K., & Ekstrand, M. D. (2015). Teaching recommender systems at large scale: Evaluation and lessons learned from a hybrid MOOC. ACM Transactions on Computer-Human Interaction, 22(2), 61-70. doi:10.1145/2728171
Kovanovic, V., Gaševič, D., Dawson, S., Joksimovič, S., Baker, R. S., & Hatala, M. (2015, March). Penetrating the black box of time-on-task estimation. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (pp. 184-193). New York, NY: ACM. doi:10.1145/2723576.2723623
Li, N., Kidziński, L., Jermann, P., & Dillenbourg, P. (2015). MOOC video interaction patterns: What do they tell us? In G. Conole, T. Klobučar, C. Rensing, J. Konert, & E. Lavoué (Eds.), Design for teaching and learning in a networked world (Lecture Notes in Computer Science Vol. 9307, pp. 197-210). Berlin, Germany: Springer. doi:10.1007/978-3-319-24258-3_15
Loya, A., Gopal, A., Shukla, I., Jermann, P., & Tormey, R. (2015). Conscientious behaviour, flexibility and learning in massive open on-line courses. Procedia - Social and Behavioral Sciences, 191, 519-525. doi:10.1016/j.sbspro.2015.04.686
Molenaar, I., & Chiu, M. M. (2015). Effects of sequences of socially regulated learning on group performance. In Proceedings of the Fifth International Conference on Learning Analytics and Knowledge (pp. 236-240). New York, NY: ACM. doi:10.1145/2723576.2723586
Moss, P. A., Girard, B. J., & Haniford, L. C. (2006). Chapter 4: Validity in educational assessment. Review of Research in Education, 30, 109-162. doi:10.3102/0091732X030001109
Raffaghelli, J. E., Cucchiara, S., & Persico, D. (2015). Methodological approaches in MOOC research: Retracing the myth of Proteus. British Journal of Educational Technology, 46, 488-509. doi:10.1111/bjet.12279
Ramesh, A., Goldwasser, D., Huang, B., Daumé, H., III, & Getoor, L. (2014a). Learning latent engagement patterns of students in online courses. In Proceedings of the National Conference on Artificial Intelligence (Vol. 2, pp. 1272-1278). Retrieved from http://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8571
Ramesh, A., Goldwasser, D., Huang, B., Daumé, H., III, & Getoor, L. (2014b, March). Uncovering hidden engagement patterns for predicting learner performance in MOOCs. In Proceedings of the First ACM Conference on Learning @ Scale Conference (L@S '14) (pp. 157-158). New York, NY: ACM. doi:10.1145/2556325.2567857
Reich, J. (2015). Rebooting MOOC research. Science, 347(6217), 34-35. doi:10.1126/science.1261627
Reich, J., Stewart, B., Mavon, K., & Tingley, D. (2016, April). The civic mission of MOOCs: Measuring engagement across political differences in forums. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale (L@S '16) (pp. 1-10). New York, NY: ACM. doi:10.1145/2876034.2876045
Reschly, A. L., & Christenson, S. L. (2012). Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement (pp. 3-19). Boston, MA: Springer. doi:10.1007/978-1-4614-2018-7_1
Rodriguez, C. O. (2012). MOOCs and the AI-Stanford like courses: Two successful and distinct course formats for massive open online courses. European Journal of Open, Distance and E-Learning. Retrieved from https://eric.ed.gov/?id=EJ982976
Santos, J. L., Klerkx, J., Duval, E., Gago, D., & Rodríguez, L. (2014, March). Success, activity and drop-outs in MOOCs: An exploratory study on the UNED COMA courses. In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge (LAK '14) (pp. 98-102). New York, NY: ACM. doi:10.1145/2567574.2567627
Selwyn, N., Bulfin, S., & Pangrazio, L. (2015). Massive open online change? Exploring the discursive construction of the "MOOC" in newspapers. Higher Education Quarterly, 69, 175-192.
Shah, D. (2015, December 28). MOOCs in 2015: Breaking down the numbers [Web log post on EdSurge News]. Retrieved from https://www.edsurge.com/news/2015-12-28-moocs-in-2015-breaking-down-the-numbers
Sharma, K., Jermann, P., & Dillenbourg, P. (2015). Identifying styles and path towards success in MOOCs. In Proceedings of the 8th International Conference on Educational Data Mining (EDM '15) (pp. 408-411). Retrieved from http://www.educationaldatamining.org/EDM2015/uploads/papers/paper_88.pdf
Shirky, C. (2013, July 8). MOOCs and economic reality. Chronicle of Higher Education, 59(42), B2. Retrieved from http://www.chronicle.com/blogs/conversation/2013/07/08/moocs-and-economic-reality/
Siemens, G. (2005). Connectivism: A learning theory for the digital age. International
Journal of Instructional Technology & Distance Learning, 2(1), 3-10.
Siemens, G., Gaševič, D., & Dawson, S. (2015). Preparing for the digital university: A review of the history and current state of distance, blended, and online learning (Research report). Retrieved from http://linkresearchlab.org/PreparingDigitalUniversity.pdf
Sinha, T., & Cassell, J. (2015, March). Connecting the dots: Predicting student grade sequences from bursty MOOC interactions over time. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (L@S '15) (pp. 249-252). New York, NY: ACM. doi:10.1145/2724660.2728669
Taboada, M., Brooke, J., Tofiloski, M., Voll, K., & Stede, M. (2011). Lexicon-based methods for sentiment analysis. Computational Linguistics, 37, 267-307. doi:10.1162/COLI_a_00049
Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29, 24-54. doi:10.1177/0261927X09351676
Tucker, C., Pursel, B. K., & Divinsky, A. (2014, June). Mining student-generated textual data in MOOCs and quantifying their effects on student performance and learning outcomes. Paper presented at the ASEE Annual Conference, Indianapolis, IN. Retrieved from https://peer.asee.org/22840
Vu, D., Pattison, P., & Robins, G. (2015). Relational event models for social learning in MOOCs. Social Networks, 43, 121-135. doi:10.1016/j.socnet.2015.05.001
Walji, S., Deacon, A., Small, J., & Czerniewicz, L. (2016). Learning through engagement: MOOCs as an emergent form of provision. Distance Education, 37, 208-223. doi:10.1080/01587919.2016.1184400
Wang, E. Y., & Baker, R. (2015). Content or platform: Why do students complete MOOCs? Journal of Online Learning and Teaching, 11(1), 17-30. Retrieved from http://jolt.merlot.org/vol11no1/Wang_0315.pdf
Wang, X., Yang, D., Wen, M., Koedinger, K., & Rosé, C. P. (2015). Investigating how student's cognitive behavior in MOOC discussion forums affect learning gains. In Proceedings of the 8th International Conference on Educational Data Mining (EDM '15) (pp. 226-233). Retrieved from http://www.educationaldatamining.org/EDM2015/uploads/papers/paper_89.pdf
Wen, M., Yang, D., & Rosé, C. P. (2014a). Linguistic reflections of student engagement in massive open online courses. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media (pp. 525-534). Retrieved from http://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8057
Wen, M., Yang, D., & Rosé, C. P. (2014b). Sentiment analysis in MOOC discussion forums: What does it tell us? In Proceedings of the 7th International Conference on Educational Data Mining (EDM '14) (pp. 130-137). Retrieved from http://educationaldatamining.org/EDM2014/uploads/procs2014/long%20papers/130_EDM-2014-Full.pdf
Whitehill, J., Williams, J. J., Lopez, G., Coleman, C. A., & Reich, J. (2015). Beyond prediction: First steps toward automatic intervention in MOOC student stopout. In Proceedings of the 8th International Conference on Educational Data Mining (EDM '15) (pp. 171-178). Retrieved from http://www.educationaldatamining.org/EDM2015/uploads/papers/paper_112.pdf
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277-304). Mahwah, NJ: Erlbaum.
Wise, A. F., & Shaffer, D. W. (2015). Why theory matters more than ever in the age of big data. Journal of Learning Analytics, 2(2), 5-13.
Yang, D., Wen, M., Howley, I., Kraut, R., & Rosé, C. (2015, March). Exploring the effect of confusion in discussion forums of Massive Open Online Courses. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (L@S '15) (pp. 121-130). New York, NY: ACM. doi:10.1145/2724660.2724677
Yang, D., Wen, M., Kumar, A., Xing, E. P., & Rosé, C. P. (2014). Towards an integration of text and graph clustering methods as a lens for studying social interaction in MOOCs. International Review of Research in Open and Distributed Learning, 15, 214-234. doi:10.19173/irrodl.v15i5.1853
Ye, C., Fisher, D. H., Kinnebrew, J. S., Narasimham, G., Biswas, G., Brady, K. A., & Evans, B. J. (2015, March). Behavior prediction in MOOCs using higher granularity temporal information. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (L@S '15) (pp. 335-338). New York, NY: ACM. doi:10.1145/2724660.2728687

Authors

SREĆKO JOKSIMOVIČ is a research fellow at the University of South Australia, David Pank Building, GPO Box 2471, Adelaide, South Australia 5001, Australia; email: [email protected]. He is also a research assistant at the Learning Innovation and Networked Knowledge Research Lab at the University of Texas, Arlington. With a background in computer science, his research interests center on the analysis of teaching and learning in networked learning environments. His efforts include developing theory-driven, data-informed analytics models for assessing the quality of learning in computer-mediated contexts, generating insights into factors that promote effective collaborative learning, and informing the design of digital environments in which collaborative learning occurs. He is also an executive committee member and Website Portfolio Chair of the Society for Learning Analytics Research (SoLAR; http://www.solaresearch.org/).

OLEKSANDRA POQUET is a postdoctoral research fellow at the Teaching Innovation Unit, University of South Australia, David Pank Building, GPO Box 2471, Adelaide, South Australia 5001, Australia; email: [email protected]. Oleksandra's interests span learning analytics, social processes in networked learning environments, and network science. Her current research focuses on the relationship between learning tasks and social learning, as well as evidence-informed teaching practices.

VITOMIR KOVANOVIČ is a research fellow at the University of South Australia, David Pank Building, GPO Box 2471, Adelaide, South Australia 5001, Australia; email: [email protected]. He is also a research assistant at the Learning Innovation and Networked Knowledge Research Lab at the University of Texas, Arlington. He obtained his PhD in computer science in 2017 from the University of Edinburgh, United Kingdom. His research in learning analytics and educational data mining focuses on the development of novel learning analytics methods based on the trace data collected by learning management systems and their use to improve inquiry-based online education. He has authored and coauthored more than 40 peer-reviewed publications in the fields of learning analytics, educational technology, and online education. He had the pleasure of serving as local chair for the LAK '16 conference and as program chair for the 2016 Learning with MOOCs (LWMOOCs) conference, and he currently serves on the executive committee of the Society for Learning Analytics Research (SoLAR). He is also the recipient of two best paper awards (at the LAK '15 and HERDSA '15 conferences) and of several scholarships from the University of Edinburgh, Simon Fraser University, and the Serbian Ministry of Education.

NIA DOWELL is a postdoctoral research fellow in the School of Information and Digital
Innovation Greenhouse at the University of Michigan, 105 S State St, Ann Arbor, MI;
email: [email protected]. Her primary interests are in cognitive psychology, dis-
course processing, and learning sciences. In general, her research focuses on using lan-
guage and discourse to uncover the dynamics of socially significant, cognitive, and
affective processes. She is currently applying computational techniques to model dis-
course and social dynamics in a variety of learning environments including intelligent
tutoring systems (ITSs), small group computer-mediated collaborative learning envi-
ronments, and massive open online courses (MOOCs).


CAITLIN MILLS is a postdoctoral fellow in the Cognitive Neuroscience of Thought Lab at the University of British Columbia, 3320 Kenny Building, 2136 West Mall,
Vancouver, British Columbia, Canada; email: [email protected]. She com-
pleted her PhD at the University of Notre Dame under the advisement of Dr. Sidney
D'Mello. Her primary research interests are in the areas of spontaneous thought, mind
wandering, and affect. Other interests include investigating engagement and mind wan-
dering in educational contexts, such as during complex learning and reading.

DRAGAN GAŠEVIČ is a professor and the chair in learning analytics and informatics in
the Moray House School of Education and the School of Informatics at the University
of Edinburgh, Informatics Forum, 10 Crichton Street, Edinburgh EH8 9AB, Scotland,
UK; Faculty of Education, Monash University, 29 Ancora Imparo Way, Clayton VIC
3800, Australia; email: [email protected]. His research interests are in learning
analytics, self-regulated and social learning, and technology-enhanced learning.

SHANE DAWSON is the director of the Teaching Innovation Unit and professor of learn-
ing analytics at the University of South Australia, David Pank Building, GPO Box
2471, Adelaide, South Australia 5001, Australia; email: [email protected].
His research centers on the analysis of digital traces derived from student engagement
with information communication technologies in education settings. He is a coeditor of
the International Journal for Learning Analytics, a founding executive member of the
Society for Learning Analytics Research, and program and conference chair of the
International Learning Analytics and Knowledge conferences. He is a codeveloper of
SNAPP, an open source social network visualization tool, designed for teaching staff to
better understand, identify, and evaluate student learning, engagement, academic per-
formance, and creative capacity.

ARTHUR C. GRAESSER is a professor in the Department of Psychology and the Institute of Intelligent Systems at the University of Memphis, Psychology Building, Room 202, FedEx Institute of Technology, Room 403C, Memphis, TN; email: [email protected]. He is also a senior research fellow in the Department of Education at
the University of Oxford. He received his PhD in psychology from the University of
California at San Diego. His primary research interests are in cognitive science, dis-
course processing, and the learning sciences. He served as editor of the journal Discourse
Processes (1996-2005) and Journal of Educational Psychology (2009-2014) and as
president of the Empirical Studies of Literature, Art, and Media (1989-1992), the
Society for Text and Discourse (2007-2010), International Society for Artificial
Intelligence in Education (2007-2009), and the FABBS Foundation (2012-2013). He
has published over 500 articles in journals, books, and conference proceedings.

CHRISTOPHER BROOKS is a research assistant professor in the School of Information and Director of Learning Analytics and Research in the Office of Digital Education &
Innovation at the University of Michigan, 105 S State St, Ann Arbor, MI 48109; email:
[email protected]. He works with colleagues to design tools to better the teaching
and learning experience in higher education. His particular research focus is on under-
standing how learning analytics can be applied to human computer interaction through
educational data mining, machine learning, and information visualization.
