
International Forum of Educational Technology & Society

Effectiveness of a Learner-Directed Model for e-Learning


Author(s): Stella Lee, Trevor Barker and Vivekanandan Suresh Kumar
Source: Journal of Educational Technology & Society , Vol. 19, No. 3 (July 2016), pp. 221-
233
Published by: International Forum of Educational Technology & Society
Stable URL: https://www.jstor.org/stable/10.2307/jeductechsoci.19.3.221

JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide
range of content in a trusted digital archive. We use information technology and tools to increase productivity and
facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected].

Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
https://about.jstor.org/terms

International Forum of Educational Technology & Society is collaborating with JSTOR to digitize,
preserve and extend access to Journal of Educational Technology & Society

This content downloaded from


212.175.35.246 on Fri, 15 Nov 2019 10:50:11 UTC
All use subject to https://about.jstor.org/terms
Lee, S., Barker, T., & Suresh Kumar, V. (2016). Effectiveness of a Learner-Directed Model for e-Learning. Educational
Technology & Society, 19 (3), 221–233.

Effectiveness of a Learner-Directed Model for e-Learning


Stella Lee¹, Trevor Barker² and Vivekanandan Suresh Kumar¹*
¹Faculty of Science & Technology, Athabasca University, Canada // ²School of Computer Science, Hertfordshire
University, United Kingdom // [email protected] // [email protected] // [email protected]
*Corresponding author

(Submitted January 14, 2015; Revised April 9, 2015; Accepted November 3, 2015)

ABSTRACT
It is a hard task to strike a balance between the extent of control a learner exercises and the amount of guidance,
active or passive, afforded by the learning environment to guide, support, and motivate the learner. Adaptive
systems strive to find the right balance in a spectrum that spans from self-control to system guidance. They
are also concerned with shifting the balance point smoothly during learning episodes in light of competing
requirements from learning goals, learner capacity, instructional affordances, and educational theories, among
others. This research investigates one extreme of this spectrum, where learners actively assume control
and take responsibility for their own learning, catering to individual preferences with little or no guidance
from the e-learning environment. In this study, one unit of material from an online Introduction to Java
Programming course was redesigned based on the proposed Learner-Directed Model for the experimental
design study. The model was developed based on the exploration of two educational theories: Experiential
Learning Theory (ELT) and Self-Regulated Learning (SRL) Theory. The study involved a total of 35
participants (N = 35) divided randomly into one Experimental Group and one Control Group, assigned
to either a Learner-Directed Model (Experimental Group) or a linear model (Control Group). Pre/post
tests, a survey, follow-up interviews, and log-file analysis were the instruments used for assessing students'
domain knowledge, meta-knowledge, and attitudes toward their overall learning experience. The results
revealed that Experimental Group students reported a statistically significantly higher level of overall learning
experience and better learning attitudes than Control Group students, who studied with e-learning components
that are linear in nature and lack explicit associations with educational theories.

Keywords
Learner-directed model, Self-regulated learning, Learning preferences, Instructional design, Learning design

Introduction
In the space of the past decade, e-learning has gone from being a supplementary form of learning to an increasingly
central part of higher education and industry training. Nowadays, learners are free to enroll in a Massive Open Online
Course (MOOC) on topics ranging from The Science of Gastronomy to Jazz Improvisation. The growth of
portable digital devices, from tablets, smart phones, and touch-screen mini-laptops to e-readers, further solidifies the
ubiquitous learn-as-you-see-fit mentality. Yet the demand for personalized learning is not adequately supported by
current technology or practices (Johnson et al., 2013). As students' tastes in online pedagogy become more
sophisticated, they will increasingly demand learning that provides learner-directed choices and control. As the e-
learning service provider sphere becomes more competitive, competition will drive a shift toward a more customized
education model that caters to each student's unique learning needs. One-size-fits-all e-learning has declined in
popularity: it was a solid base to start from, just as Web 1.0 had its place in popularizing electronic information
dissemination in a networked environment. Now that method is less effective and, in some ways, irrelevant to
the changing demographics of fragmented global learners. Just as travelers can opt to drive to their destination
instead of journeying by train, self-directed, autonomous adult learners need options to negotiate the modes that
best support their individual learning needs, be it the amount of content, the types of learning activities, or the
devices used to access the material. Thus, the level of control a learner has depends on how much he or she is willing
to trade off between making independent decisions at a granular level at one end of the spectrum (thus
having greater control) and allowing the system to take care of the bulk of the decision-making process (thus
having less control) at the other end. It is a hypothesis of this research that an optimal level of learner control
is essential in creating a positive learner-directed learning experience. In particular, the aim of this research is to
support learners in assuming active control and taking responsibility for their own learning. By extension, it
aims to augment the learning of computer programming in a way that motivates and engages learners, so as to
counter the high attrition rate in this domain, especially in asynchronous online environments.

ISSN 1436-4522 (online) and 1176-3647 (print). This article of the Journal of Educational Technology & Society is available under the Creative Commons CC-BY-NC-ND
3.0 license (https://creativecommons.org/licenses/by-nc-nd/3.0/). For further queries, please contact Journal Editors at [email protected].

To create a positive learner-directed learning experience and provide optimal levels of learner control, an alternative
approach to learning computer programming online, called a Learner-Directed Model, was developed for this study.
This model is grounded in the integration of two established, learner-centric education theories:
Experiential Learning Theory (ELT) and a modified version of Self-Regulated Learning (SRL) Theory,
which this research calls Self-Directed Regulated Learning (SDRL). ELT was used as a guideline for the domain-
knowledge content with its four learning modes. SDRL was used as a guideline for the meta-knowledge (i.e., study
skills) content that was integrated and contextualized into the model. The model caters to learner control by
considering freedom of pace, content, and media, and it includes domain-knowledge learning activities as well
as a meta-cognitive skills interface.

The hypotheses of this research are as follows:


• Online computer systems that are designed based on learning style theories and educational theories are
beneficial to learners
• Online computer systems that employ unobtrusive learner support for meta-cognitive activities, such as study
skills, are beneficial to learners
• Online computer systems developed in the above ways could improve learning for students in terms of learner
control, attitudes, and learner experience (LX)

Based on the above stated hypotheses, research questions were formulated. They are as follows:
• Which learning style theory and education theory would be useful to develop an e-learning system?
• How could an e-learning system be developed based on the learning style and education theories?
• What is the effectiveness of an e-learning system based on a model of learning style and education theories?
o What would the effect of such a system be on the performance of learners?
o What would be the attitudes and learner experience (LX) of learners who would be using such a system?

Related work
Experiential Learning Theory (ELT)

Developed by David Kolb, Experiential Learning Theory (ELT) suggests that learning requires polar opposite
abilities; this creates conflict, and learners are forced to choose which set of abilities to use in a given learning
situation in order to resolve it. For example, it is not possible to drive a car and analyze a driver's manual
at the same time; we need to choose which approach to take, and as a result, over time, we develop a preferred way of
choosing (Kolb, Boyatzis, & Mainemelis, 2000). While ELT recognizes that learners might have preferences in
how they learn, it emphasizes the importance of a well-rounded learning experience. Kolb (1984) stated that
effective learners need four types of abilities to learn: concrete experience (CE), reflective observation (RO), abstract
conceptualization (AC), and active experimentation (AE). Figure 1 below illustrates the four types of learning
abilities as well as how a learner can progress through the experiential learning cycle: experience is translated
through reflection into concepts, which are in turn used as guides for active experimentation and the choice of new
experience. Kolb (1984) stated that a learner could begin the learning cycle at any one of the four modes, but that
learning should be carried on as a continuous spiral. As a result, knowledge is constructed through the creative
tension among the four modes and learners will be exposed to all aspects of learning: experiencing, reflecting,
thinking, and acting.

Furthermore, four dominant learning styles are associated with these modes:
Converging (AC and AE focus); Diverging (CE and RO focus); Assimilating (AC and RO focus); and
Accommodating (CE and AE focus). The Converging style is oriented toward practical applications of concepts and
theories; a learner with this preference is good at problem solving, decision making, and the practical application of
ideas. The Diverging style is oriented toward observation and the collection of a wide range of information; a learner
strong in this preference is imaginative, aware of meanings and values, interested in people, and socially inclined.
The Assimilating style is oriented toward the presentation and creation of theoretical models; a learner leaning
toward this preference is more concerned with ideas and abstract concepts than with people. The Accommodating
style is oriented toward hands-on experience; a learner with this preference likes doing things, carrying out plans,
and trial and error.

Figure 1. Kolb’s experiential learning cycle

As this research is grounded in individualized instruction, it builds on Kolb's account of distinctive learning
preferences in order to provide corresponding options for learning activities. The learning orientations Kolb proposed
provide a framework for learning design and promote holistic learning in the context of a self-paced online learning
environment. The four modes in the learning cycle have been used to design task-level content in an online Java
programming course in the research design. However, it is not the researchers' intention to
prescribe and match a learner with a particular learning preference, as every learner uses a mix of learning styles,
not just one.

Self-Regulated Learning (SRL) Theory

As this research is interested in the person-environment interaction in academic learning online, another relevant
theory is Self-Regulated Learning Theory, which helps individuals monitor their learning processes and act
accordingly to gain some control over them in academic settings (Dinsmore, Alexander, & Loughlin, 2008).
Self-regulated learning (SRL) refers to setting one's learning goals and ensuring that the goals set
are met (Boekaerts & Corno, 2005; Butler & Winne, 1995; Perry, Philips, & Hutchinson, 2006; Winne & Perry,
2000; Zimmerman, 1990).

SRL theory states that learners need to regulate not only their performance but also how they learn. The literature shows
that self-regulation can positively affect learners’ achievement (Azevedo, 2005). SRL is “an active, constructive
process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their
cognition, motivation, and behaviour” (Pintrich, 2000). It is a deliberate, judgmental, and adaptive process
(Zimmerman, 1990) where feedback is an “inherent catalyst” (Butler & Winne, 1995). In addition, Hadwin, Wozney,
and Ponton (2005) revealed that “ordinary” collaboration (as found in traditional online forums and chats) is
insufficient to transform learners’ everyday “reflection” into productive “regulation.”

What sets self-regulated learners apart is their awareness of when they know a skill or fact and when they do not, at a
meta-knowledge level — i.e., based on what they know or do not know, they plan, set goals, organize, self-monitor,
and self-evaluate throughout their studies (Zimmerman, 1990). In addition, they are able to evaluate and trade off
between small-scale tactics and overall strategies, and are able to predict how each can support their learning
progress toward the goals they pre-selected (Winne, 1995).

Despite the benefits of self-regulation, there are many reasons why learners choose not to self-regulate their own
learning. For example, some learners might feel that planning, monitoring, and evaluating their learning
process takes too much time and they are not willing to invest the resources in that particular context (Boekaerts,
1999). Boekaerts (1999) noted that SRL has a bidirectional relationship with learning environments. She asserted
that learning environments could be used to promote self-regulatory skills and act as facilitators for learning new
self-regulation strategies. The authors would like to extend this line of argument to online learning environments
where a system can be designed to introduce, facilitate, scaffold, and provide options for self-regulation processes and
strategies in an unobtrusive way. Indeed, Winne (1995) suggested that self-regulation is common and that learners
should develop effective means to self-direct their learning; he recommended that "the classical topic of
instructional design could be revitalized as a scientific approach to accommodating and changing learners'
knowledge" (Winne, 1995). This foundation underpins the design of the research model: a flexible, learner-directed
system wherein students are presented with options and self-regulation strategies to select and engage with.

Learner-Directed Model

The concept behind a flexible Learner-Directed Model for this research is grounded in learner control and learner
experience. As a starting point, this research employs the stages of the ELT learning cycle in creating a learning
design framework for presenting the domain knowledge, namely, introductory computer programming in Java. As a
constructivist theory of learning, ELT fulfils many of the criteria for building a learner-directed learning model. It is a
learner-centred and multilinear model as it caters to different learning styles; it is consistent with how people learn,
grow, and develop (Kolb et al., 2000). More importantly, this is an inclusive model of adult learning that intends to
explain the complexities of and differences between adult learners within a single framework (Oxendine, Robinson,
& Willson, 2004).

Using SRL as a complementary theory for the design of a Learner-Directed Model fits well with the requirements for
the current model design. It supports learner control, promotes autonomous learning, and enhances the overall
learner experience. The type of self-regulated learning approach this study will use for the design of meta-cognitive
activity support is called Self-Directed Regulated Learning (SDRL), a term coined by the authors.

Self-Directed Regulated Learning (SDRL)

At one end of the spectrum, self-reflection could be as simple as a thinking process made self-aware, the intangible
“ah ha” moment of a conceptual breakthrough; at the other end, self-regulation could be more tangible as a system or
a tutor can observe what students do after they self-reflect. For example, in debugging a piece of programming code,
a student can self-reflect on errors identified by the compiler at the end of each compile of the program being
developed. The system can track/record the number of errors and warnings faced by the student at the end of each
compile. The system can also classify the types of errors encountered by a single student over multiple sessions of
program development. Looking at this list of errors is an act of self-reflection. However, self-regulation takes it one
step further. Students may try to identify the most common errors and warnings they faced, take notes on how they
resolved them, and refer to these notes when they encounter the same errors and warnings while writing another
program. The authors identify this "proactive self-reflection" as self-directed regulated learning (SDRL). At this
stage, the research does not plan to track the end results of using the study skill tools; rather, it is interested in the
provision of these "how to" study skill guides and tools, embedded as options, to assist students in becoming better
self-regulated, self-reflective learners.
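The error bookkeeping described above can be sketched in Java, the unit's own language. This is an illustrative sketch only; the class and method names are hypothetical and are not part of any system described in the paper:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: tallies compiler errors by type across compiles so a
// learner can self-reflect on which errors recur and plan how to resolve them.
public class CompileErrorTally {
    private final Map<String, Integer> countsByType = new HashMap<>();

    // Record one occurrence of an error type reported by the compiler.
    public void record(String errorType) {
        countsByType.merge(errorType, 1, Integer::sum);
    }

    // How many times a given error type has been seen so far.
    public int count(String errorType) {
        return countsByType.getOrDefault(errorType, 0);
    }

    // The most frequently encountered error type so far (null if none recorded).
    public String mostCommon() {
        String best = null;
        int bestCount = 0;
        for (Map.Entry<String, Integer> e : countsByType.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }
}
```

Presenting the learner with `mostCommon()` supports the self-reflection step; the self-regulation step is what the learner then does with that information.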

The difference between SDRL and SRL is that SDRL aims at “regulation” in a more implicit manner than SRL. That
is, in SRL, regulation is more explicit, whereby one can directly and independently measure the degree of regulation
being imparted by the system as well as measure the degree of regulation being absorbed or applied by a student. In
SDRL, regulation is implicit in the design and interactions; measurements related to regulation are inherently
intertwined with other components of the system, and hence it is hard to measure the degree of regulation in a direct
and independent manner.

Model design

One unit (Unit 3 – Control Structures) of an introductory programming course was redesigned to fit
the Learner-Directed Model. The content was based on COMP268 - Introduction to Java Programming, a course
currently offered by the School of Computing and Information Systems (SCIS) at Athabasca University in Canada.
This course was chosen as the domain because it has one of the highest attrition rates within the
school. Common complaints among students are that the course does not engage them and there is a lack of

motivation as a result. In addition, the complexity of learning computer programming further discourages students
from completing their online study.

After the redesign, the module used in this research study comprised the following five learning concepts:
• Overview of Intro to Programming
• If-Else Statement
• Loops
• Break, Continue, and Return Statements
• Switch and Try-Catch-Finally Statements
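As context for readers unfamiliar with the unit's subject matter, a minimal Java sketch touching each of these control structures might look as follows (illustrative only; not taken from the course material):

```java
public class ControlStructuresDemo {
    // If-else: classify the sign of a number.
    static String sign(int n) {
        if (n > 0) return "positive";
        else if (n < 0) return "negative";
        else return "zero";
    }

    // Loop with continue and break: sum odd numbers without exceeding a limit.
    static int sumOddsUpTo(int limit) {
        int sum = 0;
        for (int i = 1; ; i++) {
            if (i % 2 == 0) continue;   // skip even numbers
            if (sum + i > limit) break; // stop before exceeding the limit
            sum += i;
        }
        return sum;
    }

    // Switch: map a grade letter to a description.
    static String grade(char letter) {
        switch (letter) {
            case 'A': return "excellent";
            case 'B': return "good";
            default:  return "needs work";
        }
    }

    // Try-catch-finally: guard against division by zero.
    static int safeDivide(int a, int b) {
        try {
            return a / b;
        } catch (ArithmeticException e) {
            return 0; // fall back to zero on division by zero
        } finally {
            // runs whether or not an exception occurred
        }
    }
}
```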

The ELT learning cycle was mapped on top of a circular wheel interface with four quadrants representing the four
learning stages. Each stage was linked to a learning mode based on Kolb’s experiential learning cycle: concrete
experience (CE), reflective observation (RO), abstract conceptualization (AC), and active experimentation (AE). The
four learning modes corresponding to Kolb’s four stages are: watching, discussing, conceptualizing, and trying out
(Figure 2).

Figure 2. The four learning modes correspond to the four stages of the ELT cycle

The four main types of activities, one representing each aspect of the learning cycle, are:
• Concrete Experience – Watching (viewing tutorial videos)
• Reflective Observation – Discussing (posting to discussion forums)
• Abstract Conceptualization – Conceptualizing (concept mapping and mental model building)
• Active Experimentation – Trying Out (writing code in Java)

Since Kolb stated that learning can begin at any stage of the cycle, this learning design likewise assumes that
learners have different ways of learning and gives them the option to choose what suits them best: where the
starting point and end point are, and when they are ready to leave the learning material. The five learning concepts
all start with the same interface that presents the four modes of learning. A pre-test is given at the beginning of the
course to assess learners' prior knowledge of computer programming as well as their prior competency in study
skills. Figure 3 indicates what learners assigned to the Experimental Group will see when they enter the course home
page and select "Learning Concept 1."

Learners who are interested in learning by observation can select the "watching" mode, which provides a
series of tutorial videos on how to write and edit a basic program in Java (Figure 4). Others who prefer to engage
in the reflective process in the form of discussion forum posting can select the "discussing" mode, which takes
learners to a page with learning activities designed for social interaction, dialogue, and reflection.

Figure 3. Experimental Group home page for Learning Concept 1 – Introduction to Programming

Figure 4. Tutorial videos on how to write and edit a basic program under the “Watching” learning activity

This interface design encourages the learner to explore and access more than one learning activity, but they might or
might not choose to go through them all; the decision is theirs to make. If at the end of any chosen learning activity,
the learners feel that they have a good grasp of the material, they can opt to skip the rest of the cycle and go to the
next topic. A post-test is available at the end of the module for learners to self-assess their knowledge both at the
domain and meta-cognitive levels. In other words, learners can test how well they have learned a certain
programming concept in Java as well as whether the associated key study skills are helpful to them in regulating their
own learning and deploying them as learning strategies. The post-test for the domain-knowledge portion asked
programming-related questions and is objective in nature; the meta-cognitive skill portion of the test is in the form of
self-reporting and is subjective in nature. Altogether, it provides a high-level view of competency on both domain
and meta-cognitive levels of learning.

As supplementary learning material, a Self-Directed Regulated Learning (SDRL) theory based interface titled "Key
Study Skills" is available in the upper right-hand corner of the webpage to assist with learning meta-cognitive
knowledge. As noted in the Introduction, learning computer programming doesn't rely on a single skill; learners can
benefit from mastering a set of meta-cognitive skills to optimize learning (Azevedo & Cromley, 2004; Winne, 1995).
This research provides study skills through a "built-in" approach (Wingate, 2006). As opposed to the "bolt-on"
approach (Bennett et al., 2000), where study skills support is treated as an external element independent of the subject
matter, this approach integrates study skills in a contextually appropriate manner.
Topical study skills for studying introductory computer programming have been selected in four areas: note
taking, communications, conceptualizing, and problem solving. Twenty related study skills have been developed:
five successive skills (to scaffold) for each of the four learning modes. Each study skill is paired with a tool
for learners to try out and practice. For example, within the "Watching" activity, the key study skill to
emphasize is "note taking." Learners can choose to access this supplementary material at any time during their
study. The design consideration is that the key study skills appear as an unobtrusive interface sitting in the upper-
right-hand corner of the main content. Learners can choose to engage and learn how the "note taking" skill helps
with the "watching" activity (i.e., how to take better notes while watching a tutorial video online and what the
appropriate note-taking tools are), or they can ignore it and carry on with watching the tutorial video on
their own. Figure 5 shows the Note Taking skill presented with Evernote, a note-taking application. The
rest of the study skills are paired with the following tools: communications – Moodle discussion forum;
conceptualizing – FreeMap (mind mapping tool); and problem solving – BlueJ (an integrated development environment).

Figure 5. One of the key study skills – Note Taking

Experimental design and methodology


Method

This study involved 35 undergraduate student volunteers from Athabasca University, who studied and evaluated one
unit of adapted content material from Introduction to Java Programming (COMP 268). The volunteers were assigned to
either an Experimental Group or a Control Group during a six-month period. Participants agreeing to the consent
form were invited to log into Moodle, the learning management system (LMS) currently used at Athabasca University.
The Learner-Directed Model site was assigned to an Experimental Group of 18, while static, linear content without
study-skills support was presented to a Control Group of 17. Six participants were selected randomly from the two
groups (three from each group) for follow-up semi-structured interviews to collect qualitative feedback about the study.
Table 1 summarizes the participants' demographic information for both groups:

Table 1. Demographic information for experimental and control groups

                               Experimental group    Control group
Total number of participants   18                    17
Age range                      19-49                 20-42
Gender                         M = 11, F = 7         M = 11, F = 6

Students who were assigned to the Experimental Group were directed to the Learner-Directed Model site called
“Control Structures (Experimental)” while the Control Group students would study the linear-model site called

“Control Structures (Control).” For both the Control and the Experimental Groups, participants could only access the
material after taking the pre-test.

For the Experimental Group, when students clicked on the Learning Activities, they were taken to the next screen
that presented them with a flexible learning interface in the shape of a circle with four quadrants. At this stage,
students were presented with different options for their learning activities: watching, discussing, conceptualizing, and
trying out. Alternatively, they could skip ahead to another learning concept or go directly to the post-test if they were
confident of passing this module without going through any learning material at all.

As for the Control Group, the view of the module and the interface was rather different. The design of the Control
Group module mirrored the typical design of current online computer programming courses. Unlike the
Experimental Group, the Control Group did not have a flexible circular interface with quadrants. Instead, a linear
presentation of content was made available, covering the same five learning concepts as the Experimental Group.
Once participants finished one learning concept, it linked to the next, and this continued until all five learning
concepts were covered. After learners finished all five learning concepts, a post-test was required to assess their
competency in the module. Although learners were not required to go through the learning concepts in sequential
order, this wasn't made explicit to them. The learning activities were the same as in the Experimental Group, but
Control Group learners were not made aware of the different learning preferences; thus, there was no differentiation
of the Watching, Discussing, Conceptualizing, and Trying Out activities.

It is important to point out that this research is limited to examining the effect on adult learners' domain
knowledge, meta-cognitive knowledge, and overall learning experience (LX), based on data collected
quantitatively online and on participants' self-reports gathered through qualitative methods such as interviews and
a survey. The research is limited to the adaptation of one unit from an introduction to programming course (COMP268)
typically available to first- or second-year students at Athabasca University in Alberta, Canada. The course is part of
the regular offering of the School of Computing and Information Systems (SCIS), Faculty of Science and Technology,
one of the largest faculties at Athabasca University.

Data collection

Five data collection instruments have been used for this research study to collect quantitative and qualitative data:
• Pre-test/Post-test — pre-tests and post-tests were conducted to measure whether the two groups were initially
equivalent in their prior subject-level knowledge and meta-level knowledge.
• Log file data — log file data has been collected to track the time spent on domain area activities and the study
skills activities within the Moodle website for both groups.
• Survey — at the end of the module, a survey was provided to measure learners’ attitudes toward study skills and
tools provided in the module, their perceived ease of use and satisfaction, system usability, learners’ experience,
and perceived controllability. Questions on how the learners used the study skills tools as well as open-ended
text questions were also included in the survey.
• Follow-up interviews — semi-structured interviews with six (three from Control Group and three from
Experimental Group) randomly selected participants were conducted. The purpose of the interviews was to
verify and extend information obtained from the survey, particularly in discovering more details about the
learners’ attitudes and impression about the experiment. The interviews also gave learners a chance to elaborate
on the survey results.

Results and discussion


Pre-post test results

Quantitative and qualitative data were collected; the findings are presented and analyzed in this section. For the
quantitative data, statistical tests were performed, and descriptive statistics for each variable are displayed in
Tables 2 and 3 for the Experimental Group (N = 18) and the Control Group (N = 17). The tables present
arithmetic means and standard deviations.
The null hypothesis employed in this research was that any difference in mean performance and attitudes between
the experimental and control conditions was due to chance alone. To test this, independent-samples t-tests
were conducted comparing the domain-knowledge pre-tests and post-tests, the study-skills pre-tests and post-tests,
and the survey between the Experimental Group and the Control Group. The results of this analysis are shown in
Table 4 below.

Table 2. Descriptive statistics for the experimental group


N Mean Standard deviation
Pre-Test (domain knowledge) 18 15.7 2.7
Post-Test (domain knowledge) 18 15.8 2.7
Pre-Test (study skills) 18 33.9 12.9
Post-Test (study skills) 18 35.4 13.6
Time spent on study skills (minutes) 18 9.2 22.8
Time spent on domain (minutes) 18 428.9 907.1
Total time (minutes) 18 438.1 929.9

Table 3. Descriptive statistics for the control group


N Mean Standard deviation
Pre-Test (domain knowledge) 17 14 3.7
Post-Test (domain knowledge) 17 14.5 3.1
Pre-Test (study skills) 17 32.8 13.1
Post-Test (study skills) 17 36.8 3.4
Time spent on domain (minutes) 17 160 163.6

Table 4. Independent samples t-test


F t df Sig.
Pre-test (domain knowledge) 1.57 1.54 33 0.13 (2-tailed)
Post-test (domain knowledge) 0.17 1.30 33 0.10 (1-tailed)
Pre-test (study skills) 0.07 0.27 33 0.79 (2-tailed)
Post-test (study skills) 5.59 -0.39 33 0.35 (1-tailed)
Total time spent 4.1 1.01 33 0.15 (1-tailed)
Survey 0.40 2.32 33 0.02 (1-tailed)
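As a sanity check, the t values in Table 4 can be recomputed from the group summary statistics in Tables 2 and 3. The sketch below is a hypothetical re-analysis, assuming the standard pooled-variance (equal-variance) form of the independent-samples t-test, which is consistent with the reported df of 33; it reproduces the domain-knowledge pre-test comparison to within rounding.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Equal-variance independent-samples t statistic from summary statistics."""
    # Pooled variance weights each group's variance by its degrees of freedom
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Domain-knowledge pre-test (Tables 2 and 3):
# Experimental M = 15.7, SD = 2.7, N = 18; Control M = 14.0, SD = 3.7, N = 17
t = pooled_t(15.7, 2.7, 18, 14.0, 3.7, 17)
df = 18 + 17 - 2  # 33, matching the df column of Table 4
print(round(t, 2), df)  # t is close to the 1.54 reported in Table 4
```

The small residual discrepancy (1.56 here versus the reported 1.54) is expected, since the inputs are means and standard deviations rounded to one decimal place.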

The findings show that there was no significant difference in performance between the Experimental Group and the
Control Group over the course of this experiment (p > 0.05). The analysis also shows no difference between the
performance of the two groups on the pre-tests and post-tests for either domain knowledge or study skills (p >
0.05). One-tailed statistics were employed in the post-test comparisons, as it was predicted that the Experimental
Group would perform better than the Control Group. The comparison of the survey results between the Experimental
and Control Groups shows a significant difference in their responses (p < 0.05); the null hypothesis is therefore
rejected, and the differences in the means of the attitude measurements are attributed to the effect of the
experimental condition. There were no significant differences between the Experimental Group and the Control Group
in total time spent studying the domain material (p > 0.05), though the Experimental Group spent more time in
total, as they spent time on study skills as well as on domain knowledge. The lack of difference between the two
groups in time spent on the domain material suggests that, even given a flexible approach in which learners can
skip learning material, they choose to spend a similar amount of time studying it.

Survey results

The end-of-module survey had a total of 28 questions for the Control Group, with an additional three questions for
the Experimental Group (31 in total). The additional questions concerned the supplementary study skills and tools
that were available only to the Experimental Group. To ensure high construct validity, eight survey items were
adapted from Davis (1989) and Unger and Chandler (2009) to measure perceived ease of use and user satisfaction.
For system usability, six items were based on the work of Koohang and Du Plessis (2004), while three items for
learning experience were derived from Smulders (2003). For perceived controllability, four items were reworded
from Liu (2003). Finally, two open-ended questions, "What do you think are the best features of the module, and
why?" and "What features of this module did you think should be improved, and why?", were posed to identify ways
to improve the product, following Tullis and Albert (2008).

Table 5 summarizes the survey results for the following topics: Ease of Use and User Satisfaction; System
Usability; Learning Experience; and Perceived Controllability.

Table 5. Survey results in details


Group Mean Std. deviation Std. error mean
Ease of Use & User Satisfaction Experimental 26.89 3.45 0.81
Control 26.06 4.23 1.03
System Usability Experimental 24.33 4.12 0.97
Control 21.41 3.14 0.76
Learning Experience Experimental 14.11 2.95 0.69
Control 12.89 1.96 0.48
Perceived Controllability Experimental 17.61 2.87 0.68
Control 15.41 2.92 0.71
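The "Std. error mean" column in Table 5 follows directly from the other two columns, since the standard error of a mean is the standard deviation divided by the square root of the group size. A minimal check in Python:

```python
import math

def std_error(sd, n):
    # Standard error of the mean: SE = SD / sqrt(N)
    return sd / math.sqrt(n)

# Ease of Use & User Satisfaction row of Table 5:
print(round(std_error(3.45, 18), 2))  # Experimental (N = 18): 0.81
print(round(std_error(4.23, 17), 2))  # Control (N = 17): 1.03
```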

It is interesting to note that in all cases the mean of the Experimental Group was higher than the mean of the Control
Group. In order to test the significance of these possible differences in the means, an independent t-test was
performed. One-tailed statistics were used as it was hypothesised that the Experimental Group would have a higher
rating than the Control Group. Table 6 below shows the results of the test performed on the data related to the four
usability components.

Table 6. Independent samples t-test for survey results


F t df Sig. (one-tailed)
Ease of Use & User Satisfaction 0.41 0.64 33 0.13
System Usability 1.38 2.35 33 0.005
Learning Experience 1.19 1.44 33 0.04
Perceived Controllability 0.02 2.25 33 0.01
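The Table 6 statistics can likewise be recovered from the means and standard deviations in Table 5. The sketch below is a hypothetical re-analysis, assuming the pooled-variance independent-samples t-test with group sizes N = 18 and N = 17 (consistent with df = 33); the results agree with the reported t values to within rounding.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    # Equal-variance independent-samples t statistic from summary statistics
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# (Experimental mean, SD, Control mean, SD) from Table 5
measures = {
    "Ease of Use & User Satisfaction": (26.89, 3.45, 26.06, 4.23),
    "System Usability": (24.33, 4.12, 21.41, 3.14),
    "Learning Experience": (14.11, 2.95, 12.89, 1.96),
    "Perceived Controllability": (17.61, 2.87, 15.41, 2.92),
}
ts = {name: round(pooled_t(m1, s1, 18, m2, s2, 17), 2)
      for name, (m1, s1, m2, s2) in measures.items()}
for name, t in ts.items():
    print(name, t)  # matches the t column of Table 6 to within rounding
```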

Significant differences were found for the System Usability measure (p < 0.01), Learning Experience (p < 0.05),
and Perceived Controllability (p = 0.01). These results indicate that, for these three usability measures, the
higher ratings observed for the Experimental Group were not due to chance alone but to the influence of the
independent variable, the experimental condition. The t-test comparing the means of the Experimental and Control
Groups for Ease of Use and User Satisfaction was not significant, though the difference, although small, was in
the expected direction; only 13% of the time would this result be expected to arise by chance alone. It is likely
that the high quality of the learning materials used in both conditions produced a ceiling effect, with both
groups reporting similar levels of satisfaction.

All in all, the Experimental Group spent more time in total on the material than the Control Group, as they spent
time on study skills in addition to domain knowledge, although this difference in total time was not statistically
significant (p > 0.05). There was likewise no significant difference in performance between the Experimental Group
and the Control Group over the course of this experiment: the two groups did not differ on the pre-tests or
post-tests for either domain knowledge or study skills (p > 0.05). For the survey, the comparison between the
Experimental and Control Groups showed a significant difference in responses to the attitude questions (p < 0.05).
In all cases, the mean survey scores of the Experimental Group were higher than those of the Control Group, with
significant differences for the System Usability measure (p < 0.01), Learning Experience (p < 0.05), and Perceived
Controllability (p = 0.01).

Interview results

The aim of the interviews was to support the results of the survey and give participants a chance to elaborate on
their survey answers. They offered an opportunity to learn more about participants' attitudes toward and
impressions of the modules, and to gain further insight into the differences between the Control Group's and the
Experimental Group's learner experience. The interview questions were validated by an external user-experience
design professional in Edmonton, Canada, with over 10 years of experience conducting user interviews. The
interview results supported the survey data, which indicated a difference in perceived controllability and system
usability between the Control Group and the Experimental Group. The interviews also provided more detail on the
differences, and some of the similarities, between the learner experiences of the two modules: one for the Control
Group and one for the Experimental Group.

The overall findings of the interviews can be summarised as follows:


• The Experimental Group had to deal with an initial learning curve in adapting to the new interface; however,
once they were familiar with it, they felt it helped with the flow of the learning material and provided
flexibility.
• The interface design for the Experimental Group seemed to promote thinking about learning styles and preferences.
It also appeared to stimulate thinking about how computer programming for online learning could be
presented and designed differently. More comments about how to improve the interface design came from the
Experimental Group than from the Control Group.
• The Experimental Group clearly used more constructive and positive descriptive terms when
describing their learning experience. For example, phrases such as "feeling in control," "enjoy the flexibility,"
"freedom to pick or skip," "give me an opportunity to use different ways of learning," and "it has a flow" all
pointed to a stronger sense of learner satisfaction with their learning experience.
• The Control Group commented mainly on how to improve or present the content differently, while the
Experimental Group addressed both content and interaction design issues. This suggests that presenting an
unusual interface for e-learning prompted students to think beyond the content about the possibilities for
improving their online learning experience.

In conclusion, the interview findings indicate that the Experimental Group had a more positive and constructive
overall learning experience than the Control Group. They paid attention to both the content and the interaction
design and, in general, felt in control of and satisfied with their learning. This finding supports the results of
the end-of-module survey.

Conclusion
This study developed an alternative approach to learning computer programming online, called the Learner-Directed
Model, in order to provide a positive learning experience and optimal levels of learner control. The design is
grounded in the integration of two established learner-centric education theories: Experiential Learning Theory
(ELT) and a modified version of Self-Regulated Learning (SRL) theory, which the authors call Self-Directed
Regulated Learning (SDRL). ELT guided the domain-knowledge content with its four learning modes; SDRL guided the
meta-knowledge (i.e., study skills) content, which was integrated and contextualized into the four learning modes.
The model caters to learner control by considering freedoms of pace, content, and media, and it includes
domain-knowledge learning activities as well as a meta-cognitive skills interface. Furthermore, the model takes a
constructivist approach to learning: constructivist learning is based on students' active participation in problem
solving and critical thinking around a learning activity that they find relevant and engaging (Masuyama, 2005).
This experimental study with 35 voluntary participants (N = 35) found a statistically significant difference
between the survey results of the Experimental Group and the Control Group: the Experimental Group reported a
better overall learning experience and more positive attitudes in general. However, there was no statistically
significant difference between the two groups in domain-level or meta-level knowledge improvement.

These experimental results have important implications for designing the next generation of e-learning systems, as
the Learner-Directed Model presents an alternative to existing instructional design frameworks. It goes beyond
presenting and delivering online material to offer a theory-centric, user-centred, learner-directed model for
asynchronous learning. This study highlighted a shift in the underpinning learning design philosophy, especially
for learning about computer programming online. It also departed from the student-modeling view of adaptive
systems, distinguishing adaptive systems from adaptable systems with the aim of optimizing learner direction and
learner control.

Acknowledgements
The authors acknowledge the support of NSERC, iCORE, Xerox, and the research-related gift funding by Mr. A.
Markin.

References

Azevedo, R. (2005). Using hypermedia as a metacognitive tool for enhancing student learning? The Role of self-regulated
learning. Educational Psychologist, 40(4), 199–209.
Azevedo, R., & Cromley, J. G. (2004). Does training on self-regulated learning facilitate students’ learning with hypermedia?
Journal of Educational Psychology, 96, 523–535.
Bennett, N., Dunne, E., & Carré, B. (2000). Skills development in higher education and employment. Florence, KY: Taylor &
Francis, Inc.
Boekaerts, M. (1999). Self-regulated learning: Where we are today. International Journal of Educational Research, 31, 445–457.
Boekaerts, M., & Corno, L. (2005). Self-regulation in the classroom: A Perspective on assessment and intervention. Applied
Psychology: An International Review, 54(2), 199–231.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A Theoretical synthesis. Review of Educational
Research, 65, 245–281.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly,
13(3), 319–340.
Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and
self-regulated learning. Educational Psychology Review, 20, 391–409.
Hadwin, A. F., Wozney, L., & Pontin, O. (2005). Scaffolding the appropriation of self-regulatory activity: A Socio-cultural
analysis of changes in teacher-student discourse about a graduate research portfolio. Instructional Science, 33, 413–450.
Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., & Ludgate, H. (2013). NMC Horizon Report: 2013
Higher Education Edition. Austin, TX: New Media Consortium.
Kolb, D., Boyatzis, R., & Mainemelis, C. (2000). Experiential learning theory: Previous research and new directions. In R. J.
Sternberg & L. F. Zhang (Eds.), Perspectives on cognitive, learning, and thinking styles (pp. 227-247). Mahwah, NJ: Lawrence
Erlbaum.
Kolb, D. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-
Hall.
Koohang, A., & Du Plessis, J. (2004). Architecting usability properties in the e-learning instructional design process.
International Journal on E-Learning, 3(3), 38.
Liu, Y. (2003). Developing a scale to measure the interactivity of websites. Journal of Advertising Research, 43(2), 207–216.
Masuyama, K. (2005). Constructivism or instructivism: Which online pedagogy? Sensei Online Benkyookai. Retrieved from
http://www.csus.edu/indiv/m/masuyama/technology/sensei_online/00_intro.htm#intro
Oxendine, C., Robinson, J., & Willson, G. (2004). Experiential learning. In M. Orey (Ed.), Emerging perspectives on learning,
teaching and technology. Retrieved from http://epltt.coe.uga.edu/index.php?title=Experiential_Learning
Perry, N. E., Phillips, L., & Hutchinson, L. R. (2006). Preparing student teachers to support for self-regulated learning.
Elementary School Journal, 106, 237–254.

Pintrich, P. R. (2000). The Role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.),
Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press.
Smulders, D. (2003). Designing for learners, designing for users. eLearn, 2003(2), 4. Retrieved from
http://elearnmag.acm.org/archive.cfm?aid=2134466
Tullis, T., & Albert, B. (2008). Measuring the user experience. Burlington, MA: Morgan Kaufmann.
Unger, R., & Chandler, C. (2009). A Project guide to UX design: For user experience designers in the field or in the making.
Berkeley, CA: New Riders.
Wingate, U. (2006). Doing away with “study skills.” Teaching in Higher Education, 11(4), 457–469.
Winne, P. H. (1995). Inherent details in self-regulated learning. Educational Psychologist, 30(4), 173–187.
Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of
self-regulation (pp. 531–566). Orlando, FL: Academic Press.
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–
17.

