
Creating Spaces for Formative Feedback in Lectures
Understanding how use of educational technology can support formative
assessment in lectures in higher education

Kristine Ludvigsen
Thesis for the degree of Philosophiae Doctor (PhD)
University of Bergen, Norway
2020
Date of defense: 06.03.2020

© Copyright Kristine Ludvigsen
The material in this publication is covered by the provisions of the Copyright Act.

Year: 2020
Title: Creating Spaces for Formative Feedback in Lectures

Name: Kristine Ludvigsen


Print: Skipnes Kommunikasjon / University of Bergen
Scientific environment
The PhD education programme at the Faculty of Psychology, University of Bergen
funded this research. The research project was situated in the Department of
Education, in the Digital Learning Communities Research Group (DLC). Between
2013 and 2016, I participated in the National Graduate School in Educational Research
(NATED). I also participated in PhD courses provided by the research school Western
Norway Graduate School of Education II.

Acknowledgements

The etymological meaning of the word contingent is:


‘dependent upon circumstances, not predictable with certainty’.1

Synonyms are:
Crossroads, possibilities, occurrences, eventuality, probability, turning point,
juncture, opportunities, something that is unexpected, uncertainty.

The writing of this thesis has been a single long moment of contingency: hundreds of
small moments that gave the project its direction. In all those moments, key people
have challenged and shaped my thinking. First, I would like to thank the students who
participated in the lectures, in the interviews and in the recorded peer-discussions. I
have been listening to your conversations for hundreds of hours, and, like a fly on the
wall, while listening I felt as if I were exploring a secret: the moment that happens
when one asks students to turn to a neighbour and talk for a few minutes. Thank you
for your trust and for allowing me to enter your worlds.

I would like to express my profound appreciation to my supervisor, Professor Rune Johan Krumsvik, who has always been open to my ideas and at the same time addressed them critically. Thank you for inviting me into your writing projects, all of which have been valuable and important experiences, and into your research projects, which have undoubtedly been crucial for my carrying out this PhD project and which will also be important for future projects.

I would also like to express my thanks to my midterm evaluation committee, Professor Kariane Westrheim and Professor Anne Grethe Danielsen, who provided valuable feedback. You encouraged me to use more dialogically oriented theories as a framework for discussing my research question, which took the project in an interesting direction.

1 https://www.etymonline.com/word/contingent

Warm thanks to Andreas Lund, who read all the articles thoroughly, commented on them critically, and provided valuable feedback and suggestions for the extended abstract at the end of the writing process.

I would like to thank Professor Kari Smith, Professor Olga Dysthe and Professor
Anton Havnes for introducing me to the field of formative assessment and feedback in
connection with a research project implementing formative feedback practices in upper
secondary schools in Hordaland. I also thank all my colleagues at the Department of
Education at the University of Bergen and the members of Digital Learning
Communities Research Group (DLC). Several of you have provided valuable feedback
during the writing of this thesis by reading and commenting on article drafts: Professor
Olga Dysthe, Kjetil Egelandsdal, Dag Roness, Bjarte Furnes, Ingunn Johanne Ness
and Trude Løvskar. Big thanks to Bjarte Furnes, who was co-supervisor for the quantitative phase of this work. Special thanks to Vigdis Stokker Jensen and Robert Gray for helping me find my voice at the end of this writing process. Thanks to Kåre Helleve and Øystein Steine Larsen for good administrative support. Many times – during lunch, sharing a coffee, over a glass of wine after work – I have found myself thinking how grateful I am to be part of this environment.

Thanks to my co-authors on the three articles: Professor Rune Johan Krumsvik, Bjarte Furnes, Ingunn Johanne Ness, Sue Timmis and Jens Breivik. You have all contributed critical comments and perspectives and made the work process enjoyable, for which I am grateful.

Thanks to NATED (the National Graduate School in Educational Research), in which I was a participant in track 4 from 2013 to 2016, and to Professor Peter Maassen, Professor Monica Nerland and Professor Andreas Lund, who read and commented on article drafts. Thanks also to Helen Timperley for providing insightful comments on the first draft of Article 1. Thank you to my fellow PhD candidates for reading and commenting on drafts, and to those of you who have continued to be my critical, supportive friends. Thank you for reading and commenting on article drafts, Cecilie Enquist Jensen and Jens Breivik.

Thanks to the library at the Faculty of Psychology and Kathrine Cohen for invaluable help with literature searches; you have otherworldly patience. Thanks to Ole Johan Eikeland for advising on the development of the survey and for providing suggestions on how to approach the quantitative analysis in Study 1.

Thanks to Western Norway University of Applied Sciences (HVL), Faculty of Education, Arts and Sports, which has kindly given me the time I needed to finish this thesis. A special thanks to Solveig Kalgraf, Ketil Langørgen and Christine Hope. A big thank you to Tove Patterson, the artist who made the cartoons; you have made the work even more enjoyable!

Thanks to my family and friends for your support, for being encouraging and for helping me take vitally needed breaks. Thanks to Øyvind Berg for helping me maintain focus at the end of this work. To Andreas, for being patient and supportive, for making space so this project would be possible, and for reading drafts and providing suggestions for clarity and structure. Thank you also for keeping me up to date about what has been going on in the world, in the local community and with our daily logistics. To my children, Sverre, Rakel and Sindre: thank you for being you. My hope is that there might be something in here for you as well: insights from the student interviews, and the idea of opening, widening and deepening dialogical spaces – a metaphor you can use as you like; I use it as a way of thinking about life.

Abstract
In this thesis, I examine how the use of educational technology has the potential to
create moments of contingency and, through those moments, to transform the premises
for formative assessment in lectures. The main research question was: What
affordances are there in using participatory tools to support formative assessment in
lectures?

In my research, I set out to explore an intervention in which a student response system (Turning Point) and a shared online whiteboard (Flinga) were used to support formative assessment in the context of lectures in two university courses, one in psychology and one in teacher education. A design-based research approach (Barab & Squire, 2004) and a sequential mixed methods design (Ivankova, 2014) were used to explore these activities. The thesis is situated within a sociocultural perspective, in which knowledge processes are viewed as social processes of co-construction of knowledge through dialogue (Wertsch, 1993).

In the three articles arising from this research, I examine technology-supported formative assessment in lectures from different angles and using different methods (survey, interviews, recordings of peer discussions, analysis of material produced in lectures, and focus group interviews with students and lecturers). In the first article
(Ludvigsen, Krumsvik & Furnes, 2015), we used a sequential mixed-methods design
to examine student perceptions and the use of feedback when student response systems
were used in lectures for an undergraduate methods course. Most of the students
valued the possibility of receiving feedback on their understanding during lectures to
reflect on their learning, and they especially emphasised that explaining their thinking
while discussing questions with their peers was valuable as a feedback space, in
addition to feedback generated through the technology used and from the lecturer.
Students mostly used feedback to ‘check things up’, ‘discuss with peers’ and ‘focus
reading’.

To examine what is achieved in these discussions, in the second article (Ludvigsen,
Krumsvik, & Breivik, 2020), we explored the audio-recorded discussions in detail. We
used the framework of exploratory talk (Littleton & Mercer, 2013) as a lens through
which to examine patterns of talk in 87 peer discussions. In 68 of these discussions,
students were able to create spaces in which to exchange and elaborate on each other’s
ideas and understanding of concepts. However, in the remaining cases, students
engaged in superficial discussions, only referring to the number (the numbered
alternatives in multiple-choice questions) without any further elaboration or
justification. In the analysis of this material, we also found that in the majority of the discussions, students expressed uncertainty or were guessing. This led us to question the quality of the inferences that could be drawn from the activities, and we argued for the use of tools that would allow complexity and questions to surface.

In the third article (Ludvigsen, Ness, & Timmis, 2019), we explored the affordances of
using an online collaborative whiteboard to open, widen and deepen dialogic spaces in
lectures, using interviews with students and lecturers, audio recordings of peer
discussions and material produced in lectures as data sources. Based on two cases, we
argued that this technology has the potential to transform the lecture into a ‘dialogic space’ (Wegerif, 2013) in which students participate in activities that allow them to connect new ideas to their previous knowledge and experiences. We argued that opening dialogical spaces provides students with rich possibilities for reflecting on concepts and developing arguments, and provides feedback on students’ understanding of course content.

Across the articles, we suggest that moments of contingency can be made explicit
when using technology-supported formative assessment activities in lectures. When
students share knowledge, questions and ideas, it becomes possible for them to
become aware of each other’s thinking in ways that would not otherwise be possible in
a lecture environment. Students find this experience to be valuable in supporting their
learning in lectures and in their coursework. This thesis contributes to educational research and practice in three ways. First, it shows how the use of participatory tools supports students’ learning processes in lectures and how it influences students’ work outside of the lectures. Second, it offers insight into the micro-processes that occur between students when they engage in peer discussions and reveals how the tools used facilitate interaction: between students in the group, across groups and between the students and the lecturer. Third, the thesis offers practical guidance on how to use participatory tools to facilitate formative assessment in large lectures.

List of publications
Article 1:
Ludvigsen, K., Krumsvik, R., & Furnes, B. (2015). Creating formative feedback spaces in large lectures. Computers & Education, 88, 48–63. doi: 10.1016/j.compedu.2015.04.002

Article 2:
Ludvigsen, K., Krumsvik, R., & Breivik, J. (2020). Behind the scenes: Unpacking peer discussions and critical reflections in lectures. British Journal of Educational Technology (BJET).

Article 3:
Ludvigsen, K., Ness, I., & Timmis, S. (2019). Writings on the wall: Bringing student voices to the lecture. Thinking Skills and Creativity, 34, 1–18. doi: 10.1016/j.tsc.2019.02.00

Content

1. Introduction
1.1 Background
1.2 The current project
1.3 Research on participatory tools to support formative assessment – an overview
1.4 Aim and Research Questions
1.5 Teaching design
1.6 Research design
2. Theory
2.1 Formative assessment and feedback
2.2 A Dialogue Approach to Teaching
3. Methods
3.1 Design-Based Research and Mixed Methods
3.2 A Pragmatic Approach
3.3 Design of the Study
3.4 Study 1: Research process
3.5 Study 2: Research process
3.6 Study 3: Research process
3.7 Validity
4. Findings
4.1 Article 1
4.2 Article 2
4.3 Article 3
4.4 Findings across the articles
5. Discussion, implications and conclusion
5.1 Dialogical spaces as moments of contingency
5.2 Perception and use of feedback
5.3 The potential for activities to challenge and transform established practices
5.4 Implications

Tables

Table 1. Overview of the PhD project
Table 2. Affordances of different activities to support widening and deepening of dialogic spaces

Figures2

Figure 1. How the studies are connected.
Figure 2. The teaching design for Study 1.
Figure 3. The teaching design for Study 2 and Study 3.
Figure 4. The teaching design for Study 3.
Figure 5. Key features of high-quality feedback practices in higher education.
Figure 6. Modes of talk.
Figure 7. Design-based research cycle.
Figure 8. Coherence in the research design in the thesis.
Figure 9. DBR and mixed methods research design.
Figure 10. Procedural diagram.
Figure 11. How using a sequential mixed methods design contributed to nuanced understanding.
Figure 12. How moments of contingency and dialogical spaces shed light on each other.
Figure 13. How the activities influence the different feedback questions.
Figure 14. Shared thinking spaces.
Figure 15. The lecturer talks to one hundred students, vs. one hundred students sharing their thoughts with the lecturer and with each other.

2 All figures are made by the author. The cartoons are made by Tove Patterson.

Enclosed

Article 1: Creating formative feedback spaces in large lectures.
Article 2: Behind the scenes: Unpacking peer discussions and critical reflections in lectures.
Article 3: Writings on the wall: Bringing student voices to the lecture.

Appendices (in English and Norwegian)

Appendix A: Tools used in this project
Appendix B: Search strings used to find research literature
Appendix C: Flinga board: Is learning always a good thing?
Appendix D: Letter to the informants in Study 2 and Study 3
Appendix E: Interview guide, Study 3
Appendix F: One of the mind maps
Appendix G: NSD confirmation letter
Appendix H: NSD: Confirmation letter for change of method

1. Introduction
As I raised my hand to ask questions (...) I felt the fight/flight response just
kicked into pulse. Anxiety. I thought, ‘Lord! Now I am going to die’. It’s a
relatively unpleasant experience, of course. So, it does happen every time, even
just thinking about asking questions (Ludvigsen, 2017, p. 1).

You do have a tendency to sit and think, ‘It is just me’. It’s really embarrassing.
Sure, people have a tendency to do just that. This applies to almost everything.
(Jon) (Krumsvik & Ludvigsen, 2012, p. 48).

Have you experienced this yourself? You are in a lecture hall, and the lecturer asks: ‘Does anyone have any questions?’ The lecturer looks around. You have more questions than you would like to admit. You worry that you are the only one who has not understood. You are afraid to waste someone else’s time. You nod. Or, you are the lecturer. You look around the auditorium, looking at the students’ faces. ‘Does anyone have any questions?’ Quiet. A few students are nodding. You go on. The quotations above illustrate a common situation: students who are afraid to speak, and structural barriers that keep lecturers and students from interacting. Despite these barriers, and despite established knowledge about the value for students of being active participants in the learning process, lectures remain the most common teaching format for undergraduate students in Norway.

In this thesis, I examine how the use of educational technology has the potential to
create moments of contingency3 and, through those moments, to transform the
premises for formative assessment in lectures. The main research question was: What
affordances are there in using participatory tools to support formative assessment in
lectures?

3 Moments of contingency can be interpreted as moments in which students’ or teachers’ awareness of the students’ understanding is raised, so that teaching and learning can be adjusted.

This introduction presents the background, previous research on how participatory tools support formative assessment in lectures, and the rationale for the study. I then introduce its aim, the research questions and the research design.

1.1 Background
The debate on the value of lectures has been polarised (French & Kennedy, 2017; Harrington & Zakrajsek, 2017). On one hand, common arguments for the pedagogical value of lectures are that they allow a structured approach to the subject or discipline; that they have the ‘capacity to build a sustained and complex argument’ over the course of a semester (French & Kennedy, 2017, p. 647); that they can stimulate and challenge students; that they promote an environment in which students have to engage to process ideas and perspectives; that they can support the creation of a ‘sense of community’ among students and lecturers; and that they are ‘cost-effective’ (French & Kennedy, 2017). Lectures have also been shown to be of high value for first-year students (Harrington & Zakrajsek, 2017). On the other hand, they have been characterised as passive, ineffective and ‘obsolete’ (French & Kennedy, 2017, p. 639).
The lecture has been questioned, criticised and debated for decades, with particular criticism for being monological and a mode of transmission (Bloom, 1953; Friesen, 2011; Laurillard, 2013) that supports a superficial approach to learning (Cavanagh, 2011; Prosser & Trigwell, 2014). Lectures have also been criticised for failing to engage students and for being subject to structural constraints, such as limited opportunities for students and teachers to interact (Cavanagh, 2011). These arguments illustrate that the value of the lecture is under dispute.

However, a lecture can take a multitude of forms. The format has been shown to be dynamic; throughout history, the lecture has changed shape to adopt and include valued educational practices and tools (Friesen, 2011). Today, the lecture might represent a merging of various modalities, such as voices, text, pictures and videos, with varying degrees of interaction between students and the lecturer, underpinned by different pedagogies and shaped by the various disciplines in which the lecture plays out (Harrington & Zakrajsek, 2017). Additionally, lectures can be grouped into different genres, and regardless of genre and the activities chosen, a lecture may be performed well or badly (Harrington & Zakrajsek, 2017). The lecture as a genre is also increasingly employed outside of formal education in popular culture, e.g., in podcasts and TED Talks. Telling stories is an activity grounded in human nature. For these reasons, it is difficult to argue for or against lectures as a general phenomenon.

French and Kennedy (2017) conclude that ‘the lecture remains a valuable pedagogical
tool that with improvements could offer even greater value to students. The capacity to
improve lectures might depend upon a stronger recognition of their capacity to
integrate active and interactive techniques’ (p. 651). This view is reflected in a
growing body of literature within higher education, emphasising the inclusion of
student-centred activities to encourage students to construct their knowledge (Damşa
et al., 2015; Harrington & Zakrajsek, 2017; Nerland et al., 2018; McQueen &
McMillan, 2018). The literature on active learning shows an emphasis on activities
that foster critical reflection – through activities in which students articulate their
understanding, connect new ideas with their previous knowledge and experiences, and
construct their knowledge in collaboration with their peers (Cavanagh et al., 2016;
Lumpkin, Achen, & Dodd, 2015; Prince, 2004). In the context of lectures, the research literature uses ‘active learning’ as a broad term, often poorly conceptualised and without a clear definition (Arthurs & Kreager, 2017). However,
certain key characteristics are generally found: that such approaches allow students,
through different activities, to reflect on, apply or test out their knowledge in authentic
situations or cases, collaborating with peers in a way that makes their thinking visible
and allows various forms of feedback on the learning process. Examples include
writing one-minute papers, making mind maps, pairing up to share thinking,
reviewing and comparing notes, or collaborating with peers in activities such as peer-
and whole-class discussions, inquiry-based learning or problem-based learning, often
supported by some kind of digital tools (Arthurs & Kreager, 2017; Harrington &
Zakrajsek, 2017; Hyun, Ediger, & Lee, 2017; McMillan, Loads, & McQueen, 2018).
There is a considerable body of empirical research showing that active approaches
support students’ learning in different ways (Cavanagh et al., 2016; Harrington &
Zakrajsek, 2017; Hyun et al., 2017; Lumpkin et al., 2015; McMillan et al., 2018).

Along with this optimistic use of the term ‘active learning’, critical voices maintain that there is no such thing as ‘passive’ learning and that the divide between active and passive learning therefore represents a false dichotomy (Dall’Alba & Bengtsen, 2019). Critics point out that one cannot necessarily assume a definite relationship
between implementing active strategies and improving learning outcomes (Baeten,
Kyndt, Struyven, & Dochy, 2010) and that active learning approaches take
instructional time that could be used for other activities (Aljaloud, Gromik,
Billingsley, & Kwan, 2015). Studies also find that some students question the value of
student activities in lectures and prefer the transmission mode of learning (Clinton &
Kelly, 2017; Lobo, 2017; McMillan et al., 2018; McQueen & McMillan, 2018).
Nevertheless, this research has generated an increased focus on implementing student
active learning approaches in practice, research and educational policy.

The white paper Culture for Quality in Higher Education (Ministry of Education and
Research, 2017) advocates that when lectures are used, they should allow students to
be active in constructing their own learning and that technology should be an
integrated element if it serves a pedagogical purpose. The white paper emphasises that
higher education institutions should work systematically to establish consistency
between learning outcomes, teaching and learning activities and formative and
summative assessment methods. This concern is reflected in the national strategy for
digitalisation (2017–2021). The strategy (Ministry of Education and Research, 2018)
asserts that digital tools have the potential to change or create new possibilities for
learning and teaching in higher education. To realise this potential, teachers should be
able to use digital tools to support active learning and to follow up and assess students
as individuals and as a group. These documents display trust in the use of
educational technology to change the premises for teaching, learning and assessment
as a contribution to raising quality in higher education (HE).

Despite the promise of digitalisation, and the elevated expectations higher education
institutions have of the potential for digital tools to raise quality, reflection on how the
use of technology might change practice is less visible. A recent report, the Status
report on Norwegian Higher Education (2018), concludes that institutions of higher
education do not exploit the opportunities offered by digital tools, and that students
only experience digital tools in their study programmes to a limited extent. However, a note from Norgesuniversitetet (2018) on the use of digital tools to support active learning in higher education finds examples of institutions using digital tools in ways that go beyond and transform educational practices. Nevertheless, the report stresses the need for more research on what characterises high quality for students within active approaches to learning.

Expectations of how digital tools should raise quality in teaching and learning are also high within the institutions of higher education. Aagaard et al. (2018) found that these expectations are reflected in an analysis of consultation statements submitted by these institutions for the white paper Culture for Quality in Higher Education (Ministry of Education and Research, 2017). The consultation statements explicitly connect the use of educational technology to quality in teaching and learning, often by referring back to policy, without situating these claims in relation to pedagogy or elaborating on how the tools might actually change practices: ‘reflection on how digitalization changes premises for learning, knowledge and teaching, is nearly absent’ (Aagaard, Lund, Lanestedt, Ramberg, & Swanberg, 2018, p. 8). This situation is summarised in the report as follows:

Because there is still uncertainty about which digital practices promote educational quality and how, there is a need to further develop research-based knowledge in the field. Strategic exploration of digital practices adapted to local conditions and disciplines will contribute. Specifically, more knowledge is needed on how different types of tasks, work types, digital resources, and not least forms of assessment are best designed to ensure quality (Aagaard et al., 2018, p. 12, my translation).

This passage shows that the relationship between digital tools and how they could be
used to raise quality in teaching and learning is unclear. It also indicates that the term educational technology is understood quite broadly, which is a common view: ‘Even the most rudimentary definitions of the term “technology” indicate that its meaning extends far beyond artefacts and devices to include processes, methods, means and applied knowledge’ (Friesen, 2013). Or, as stated by Ross, Morrison and Lowther (2010): ‘educational technology is not a homogeneous “intervention” but a broad variety of modalities, tools, and strategies for learning. Its effectiveness, therefore, depends on how well it helps teachers and students achieve the desired instructional goals’ (Ross et al., 2010, p. 19). These considerations are also reflected in a large meta-analysis
addressing what has been learned from 40 years of research into the ways computer technology use affects student achievement in formal face-to-face classrooms,
compared to classrooms that do not use these technologies. Tamim, Bernard,
Borokhovski, Abrami and Schmid (2011) conclude:

Thus, it is arguable that it is aspects of the goals of instruction, pedagogy, teacher effectiveness, subject matter, age level, fidelity of technology implementation, and possibly other factors that may represent more powerful influences on effect sizes than the nature of the technology intervention. It is incumbent on future researchers and primary meta-analyses to help sort out these nuances, so that computers will be used as effectively as possible to support the aims of instruction … we feel that we are at a place where a shift from technology versus no technology studies to more nuanced studies comparing different conditions (Tamim, Bernard, Borokhovski, Abrami & Schmid, 2011, p. 17).

Reviews of how educational technology influences teaching and learning in higher education find that technology is often used to fit into conventional practices, replicating or extending them, rather than being used as an opportunity to challenge and transform them (Henderson, Selwyn, & Aston, 2017; Kirkwood & Price, 2014; Lillejord, Børte, Nesje, & Ruud, 2018; Pimmer, Mateescu, & Gröhbiel, 2016). This is summarised by Henderson and colleagues: ‘Digital technologies are clearly not
“transforming” the nature of university teaching and learning, or even substantially
disrupting the student experience’ (Henderson et al., 2017, p. 1578). These findings
align with a review by Pimmer and colleagues, which also argues that the creative
potential of educational technology is not being utilised (Pimmer et al., 2016).
Drawing on these sources, it seems likely that the potential of using digital tools to transform and challenge educational practices is not being fully utilised, and that there is a need to examine the ways in which different digital tools, in different contexts and under different conditions, influence quality in teaching and learning. This, then, is the landscape in which this thesis is situated.

1.2 The current project


In this research project, I set out to explore activities in which a student response system (Turning Point) and a shared online whiteboard (Flinga) were used to support formative assessment in the context of lectures in two university modules, one in psychology (2012–2016) and another in teacher education (2017). The tools and how they are used are further explained in each of the three articles and in Appendix A. In the following, these tools will be referred to as either ‘clickers’, ‘student response systems’ or the ‘online collaborative whiteboard’. The term ‘participatory tools’ encompasses a variety of such tools. Participatory tools are often used to assess prior thinking, provoke new thought, elicit misconceptions, stimulate whole-class or small-group discussions, review course material, apply knowledge in different contexts, support self-assessment and guide problem solving (Beatty & Gerace, 2009).

To apply participatory tools implies a movement away from an understanding of the lecture as a space in which lecturers talk and students listen, towards promoting an environment that includes activities in which students can participate as they connect new ideas to their previous knowledge and experiences, visualising knowledge in different modes. How different participatory tools can support student learning in the context of lectures has been extensively covered over the last two decades. In the following section, I provide an overview of research on the use of participatory tools to support formative assessment.

1.3 Research on participatory tools to support formative assessment – an overview
This section presents an overview of research literature on the use of participatory tools to support formative assessment in lectures. The objective is to describe what characterises this field of research, within which my work is situated; to provide an overview of its main arguments; to identify discussions; and to highlight areas for future research (Grant & Booth, 2009). For this overview, I used a triangulation approach (Onwuegbuzie & Frels, 2016) that included literature searches for peer-reviewed articles in various databases, including ERIC and Web of Science. Search strings (Appendix B) were developed to identify relevant research published in the period 2014–2019.

Additionally, I have included articles from my personal archive, collected from 2011 to 2019. This archive includes articles found in reference lists and literature reviews, as well as ‘grey literature’, theses and reports; articles provided by supervisors, colleagues and reviewers; previous publications from this field of study (Krumsvik & Ludvigsen, 2012; Ludvigsen & Egelandsdal, 2016); and a recent literature review on how student response systems support formative feedback in lectures (Egelandsdal, Ludvigsen & Ness, 2019). Articles in which the object of study was to support formative assessment, feedback and interaction in lectures are included in this overview. I excluded research articles addressing feedback and interaction in online courses and adult education. In the following, I first describe the characteristics of the field that shaped the rationale for this project. Second, I present key findings, including what is agreed upon by researchers and some questions that are under debate. Third, I show the relevance of this project’s choices regarding research questions and methodological approaches.

Characteristics of the research literature
Student response systems have been adopted within different disciplines and across various contexts; they are most often used in the natural sciences (Bruff, 2011). Typically, a particular technology is applied to overcome constraints within the context of the lecture, such as the large number of students or other barriers to interaction, and to support engagement with course content through student activity (Egelandsdal et al., 2019). Participatory tools are also used to address pedagogical or didactical challenges
connected to the discipline in which the activities are embedded. Another characteristic is that the technologies supporting such activities are moving targets: the research literature focuses on the newest developments, ranging from text messages, through handheld student response systems and social media backchannels, to applications tailored to support interaction in educational settings (Baron, Bestbier, Case, & Collier-Reed, 2016). Most of the literature draws conclusions from self-reported data, such as surveys or interviews, or uses quasi-experimental pre-post comparative designs to determine whether one activity is better than another. There is a lack of studies using
ethnographic designs (Crompton & Burke, 2018; De Gagne, 2011; Pimmer et al.,
2016). This trend reflects the general picture of research on higher education: surveys
and interviews are among the most frequently used methods, while ethnographical
approaches are underrepresented (Haggis, 2009; Tight, 2013). Such studies are
essential because there might be a gap between how activities play out and the
lecturers’ and students’ perceptions of them (Nielsen, Hansen & Stav, 2016).

In line with research on educational technology in general (Crompton & Burke, 2018; Friesen, 2013; Hew, Lan, Tang, Jia, & Lo, 2019), there is often a lack of pedagogical theory underpinning studies on the use of participatory tools in lectures (Chien, Chang, & Chang, 2016; Han, 2014; Shapiro et al., 2017). Using a pedagogical framework is important for making claims about how the use of digital tools can raise quality in education.

How participatory tools support learning: Key findings.
In this section, I present key findings from research on how the use of student response systems supports student learning in lectures, drawn from literature reviews (Aljaloud et al., 2015; Boscardin & Penuel, 2012; Castillo-Manzano, Castro-Nuño, López-Valpuesta, Sanz-Díaz, & Yñiguez, 2016; Egelandsdal, Ludvigsen, & Ness, 2019; Good, 2013; Kay & LeSage, 2009; Keough, 2012; Liu et al., 2014; MacArthur & Jones, 2008; van der Kleij & Adie, 2018). The most consistent finding across these
reviews is that students in general report positive attitudes towards the use of
participatory tools in lectures. Furthermore, they indicate that the use of such tools makes the lecture more enjoyable, increases engagement and attention, and stimulates interaction. Additionally, such use supports formative assessment and allows a contingent approach to teaching based on feedback.

Critical issues raised in these reviews are: that these activities take time that could
have been used for other activities, that less content is covered, and that it is
demanding for the lecturer to create good questions (Aljaloud et al., 2015). Students
are often critical if the technology is used for a summative purpose or to check trivial
knowledge (Good, 2013; Kay & LeSage, 2009). The literature also notes that it might be challenging for lecturers to follow up student responses in a formative way (Kay & LeSage, 2009) and that technical challenges occur (Aljaloud et al., 2015).

An ample body of studies attempts to measure how using such participatory tools may
influence student learning outcomes. This literature reports mixed findings. However,
four meta-studies (Castillo-Manzano et al., 2016; Chien et al., 2016; Hunsu, Adesope,
& Bayly, 2016; Nelson, Hartling, Campbell, & Oswald, 2012) conclude that the use of
student response systems supports student learning in terms of various measures of
learning outcomes as well as exam scores. However, the literature also acknowledges
that the reasons for these findings are less well understood. Chien et al. (2016) identify opportunities for students to explain themselves, possibilities for feedback, the testing effect and the value of answering questions as among the potential reasons that could account for the positive findings. The meta-analysis by Hunsu et al. (2016)
shows that the positive outcome of clicker interventions is moderated by how they are
used and the context in which they are used. In a review concerning the use of student
response systems and learning outcomes in health education, Nelson et al. (2012)
found that the use of student response systems improves learning outcomes; however, this improvement was greater for interventions in traditional, non-interactive lectures than when the systems were introduced in lectures already using other active learning approaches (Nelson et al., 2012). Such findings suggest that positive learning outcomes may be achieved through increased interaction, not necessarily through technology per se (Anthis, 2011; Liu et al., 2014). Even though meta-analyses find that such tools support student learning outcomes, there is an ongoing debate regarding the reasons for these findings.

How participatory tools support formative assessment and interaction


A growing body of empirical research suggests that the use of student response
systems in lectures can enhance both the quality and the quantity of peer discussions
(Chien et al., 2016; Egelandsdal & Krumsvik, 2018; Mazur, 1999; Smith et al., 2009).
When they make their thinking explicit in peer discussions, students will be exposed to
different ways of thinking, which can help them become aware of their own
understanding and make better-informed decisions about their learning process (Chien
et al., 2016; Dawson et al., 2019). Again, the question of how educational tools support interaction in lectures is seldom explored using ethnographic approaches. Chien et al. (2016) therefore argue that peer discussion should be examined from a social aspect: ‘Future studies are needed to investigate how students interact with peers within the context of clicker integrated instruction. Research on this line will also be helpful to understand how the use of student response systems mediate the process and outcome
of peer discussion’ (Chien et al., 2016, p. 15).

There is a consensus in the literature that the use of student response systems supports
formative assessment and feedback processes, thereby supporting a contingent
teaching approach in lectures (Aljaloud et al., 2015; Chien et al., 2016; Dawson et al.,
2019; Pimmer et al., 2016). These systems provide opportunities for students to
receive feedback about their understanding of course content (Chien et al., 2016;
Dunn, Richardson, Oprescu, & McDonald, 2013; Hunsu et al., 2016; Pagano &
Paucar-Caceres, 2013). Furthermore, they provide lecturers with a more sensitive
awareness of their students’ understanding of the course material, which can be used to
adjust teaching (Egelandsdal & Krumsvik, 2019; Fies & Marshall, 2006; Reimer, Nili,
Nguyen, Warschauer, & Domina, 2016). Often it is assumed a straightforward relation
between the collection of answers and the process of providing feedback, as these
quotes illustrates:

One of the key benefits of using an ARS is that instruction can be modified based on student feedback gathered throughout a class (…) If feedback from a majority of students indicates that confusion or misconceptions are evident, an experienced instructor can offer alternative explanations of the concepts in question (Kay & LeSage, 2009, p. 822).

When the students evaluate their own performance and identify areas for improvement, they take steps that improve their academic performance (Aljaloud et al., 2015, p. 319).

Overall, the findings indicate students perceive that clickers provide a high level of feedback (Keough, 2012, p. 828).

Research on peer discussions based on recordings also shows that votes do not
necessarily offer an accurate picture of student understanding (James & Willoughby,
2011; Nielsen, Hansen-Nygård, & Stav, 2012; Wood, Galloway, Hardy, & Sinclair,
2014). This can lead to misleading feedback, both for students and for lecturers.

To get a more sophisticated picture of students’ ideas, different tools for collecting qualitative data (text) are available. The process of sharing ideas in a written post supports reflective thinking, collaboration and the co-creation of knowledge (Baron et al., 2016; Gao, Luo, & Zhang, 2012; Neustifter, Kukkonen, Coulter, & Landry, 2016; Seglem & Haling, 2018; Yates, Birks, Woods, & Hitchins, 2015; Rasmussen, 2016; Sandström, Eriksson, Lonka, & Nenonen, 2016); increases understanding of course content (Kim et al., 2015); and provides opportunities for students to get feedback on their understanding (Baron et al., 2016; Cacchione, 2015; Kim et al., 2015; Yates et al., 2015). By reading other students’ questions, a student might become aware of challenges that others face and that they may share (Baron et al., 2016; Pohl, 2015). This provides a safe, process-oriented learning atmosphere (Elavsky, Mislan, & Elavsky, 2011; Yates et al., 2015), which is key to supporting a formative feedback practice.

Participatory tools offer different possibilities for students to articulate their thinking and thus enable certain types of inferences to be drawn. Applying such tools can therefore allow students – together or alone – to demonstrate their
understanding of knowledge, phenomena and ideas by presenting them in various
ways (Pachler, Daly, Mor, & Mellar, 2010). No technology is in itself formative, but
can be used in a formative way: ‘It is the learners and teachers as human actors who
ultimately determine the formative effects of engaging with technologies, but
technologies can shape the potential for this to happen’ (Pachler et al., 2009, p. 21).
The formative aspect lies in taking advantage of technological opportunities to make
the reflections of the students visible (Pachler et al., 2010) through interaction and
problem solving (Egelandsdal et al., 2019).

Despite the large body of literature on how student response systems support formative assessment, Egelandsdal and Krumsvik (2019) found that even though the majority of the students experienced clicker lectures as making them more aware of their understanding of the material, only half of the students reported using this feedback in their coursework. A study by Krumsvik and Ludvigsen (2012) found that use of the student response system increased students’ awareness of their own understanding; however, the students offered few examples of how the activities in the lectures influenced subsequent course work. Other studies show that the use of student response systems has little influence on preparation for class or work in other areas (Boyle & Nicol, 2003; MacGeorge et al., 2008).

Understanding how tools shape feedback practices, and how and to what extent students make use of the feedback such activities provide, is crucial for research into feedback in higher education in general (Johnson, 2012; Boud & Carless, 2018; Evans, 2013). How students make sense of and utilise the feedback provided in such activities is a vital issue when examining the potential for participatory tools to support formative assessment (Egelandsdal et al., 2019; Fluckiger, Vigil, Pasco, & Danielson, 2010; Krumsvik & Ludvigsen, 2012; Nicol, Thomson, & Breslin, 2014).

This overview shows that clicker interventions can play a vital role in promoting a climate for formative assessment. They can provide a space for students to reflect, which invites them to engage in self-assessment. Furthermore, the activities offer room for peers to share ideas and co-construct knowledge. Most importantly, they can serve as a catalyst for lecturer–student interaction and allow a contingent teaching approach (Egelandsdal et al., 2019). This overview of previous research has identified three areas that are critical for research on this topic: first, the need to explore how these in-lecture activities support students in their work outside of the lecture; second, the need to address how the tools support learning by exploring the micro-processes occurring during the activities; and third, the need to situate the research within a theoretical framework. The next section presents the aim and the research questions of the present study.

1.4 Aim and Research Questions


Formative assessment is a process of engaging students in activities that make
students’ understanding visible so that students and teachers can use this information
to shape learning and teaching activities (Black & Wiliam, 2009). When using
technology to support formative assessment practices, questions to address include: to what extent and how can the technology support the lecturer in creating learning tasks
that provide insight into students’ thinking? Also, in what ways can students be given
opportunities to share their thinking and understanding with each other and with the
teacher? What kind of information about students’ thinking is made visible by these
activities and for whom, and what opportunities do students and teachers have to draw
inferences based on these activities so that they can be used to shape teaching and
learning? (Furtak, Glasser, & Wolfe, 2016). Examining the micro-processes occurring during the activities, to find out what is achieved in them, is important for recognising the potential of discussion-based activities in lectures and for making informed decisions on how to use participatory tools to support the processes of teaching and learning.

The aim of this thesis is to examine how the use of educational technology has the
potential to create moments of contingency and, through those moments, transform the
premises for formative assessment in lectures.4 The overarching research question for
this thesis is: What affordances are there in using participatory tools to support
formative assessment in lectures? The question is specific to formative assessment and
addresses how the use of technology can create spaces for feedback in the context of
the lectures and how students perceive these activities in relation to their learning. The
research questions from the three articles developed as the study progressed, as
elaborated in the methods section below. The guiding questions for each of the articles
are as follows:

• How do students in large lectures experience the feedback from these learning
activities? How do the students make use of this feedback to support the
learning activities they are engaged in? (Study 1)

• What characterises peer discussions when student response systems are used to
support peer discussions in lectures? (Study 2)

• What affordances are there in using a shared collaborative whiteboard to support opening, widening and deepening dialogical spaces in lectures? (Study 3)

4 In this thesis, the term ‘lectures’ is used to refer to both medium-sized classrooms, with 40–100 students, and large classrooms, with 100–150 students (Denker, 2013).

These overarching research questions are related to the articles in a coherent way, with
three studies seeking to examine the overall research questions from different angles
and using different methods. The first study used a survey and an interview to examine
how the use of a student response system supported a formative feedback practice in
lectures. Findings from this study informed the decision to record and analyse
discussions in the second study. Again, findings from this study were the point of
departure for our decision to examine the use of shared online whiteboards to open
dialogical spaces in the third study. The figure below (Figure 1) illustrates how the
studies are connected:

Figure 1. How the studies are connected.

Study 1 – ‘Creating formative feedback spaces in large lectures’: the article explores student perceptions and the use of feedback in lectures.
Study 2 – ‘Behind the scenes: Unpacking peer discussions and critical reflections in lectures’: the article explores the quality of technology-supported peer discussions in lectures.
Study 3 – ‘Writings on the wall: How the use of technology can open dialogical spaces in lectures’: the article discusses the use of online collaborative whiteboards to provide dialogue in lectures.

Together, the data and analysis allow me to discuss affordances for using participatory
tools to create moments of contingency in lectures. Based on the findings from these three articles, this thesis contributes to educational research and practice in three ways. First, it shows how the use of student response systems can influence students’ work outside of the lecture. Second, it offers insight into the micro-processes that occur between students when they engage in peer discussions, and into how the tools used facilitate interaction between students in the group, across groups and between the students and the lecturer. Third, it offers practical guidance on how to facilitate formative assessment in large lectures. The study thereby addresses the gaps identified in the overview of previous research. In the following section, I describe the overall research design and provide an overview of the scope of the three articles and the methods employed in each.

1.5 Teaching design


In the three articles, the lecturers used different variations of a teaching design referred
to as ‘video case, discussion, voting’ (Ludvigsen, Krumsvik & Furnes, 2015, p. 51),
‘video case, discussions, voting and writing’ (Ludvigsen, Krumsvik & Breivik, 2020, p. 9), and ‘discuss and write’ (Ludvigsen, Ness & Timmis, 2019, p. 7). These different approaches are illustrated in Figures 2, 3 and 4. An example of the online collaborative whiteboard is illustrated in Appendix C.

‘Video case, discussion, voting’ (Study 1)5

‘Mini-lecture’ about core concepts → Video case and question about key concepts → Peer discussions and voting → Explorations of ideas and clarifications

Figure 2. The teaching design for Study 1.

5 This design was developed by Rune Krumsvik.

‘Video case, discussions, voting and writing’6 (Study 2 and Study 3)

‘Mini-lecture’ about core concepts → Question about key concepts → Peer discussions and voting → Explorations of ideas and clarifications → Questions for peer discussions → Exploration of ideas shared in Flinga

Figure 3. The teaching design for Study 2 and Study 3.

6 This design was developed and examined as a part of this PhD project.

‘Discuss and write’7 (Study 3)

‘Mini-lecture’ about core concepts → Questions for peer discussions → Exploration of ideas shared in Flinga

Figure 4. The teaching design for Study 3.

7 This design was developed and examined as a part of this PhD project.

1.6 Research design


This thesis is part of a larger research project entitled ‘Formative Assessment in
Higher Education’8, situated within the Digital Learning Communities (DLC)
Research Group9 in the Department of Education at the University of Bergen.

In this project, a particular focus has been placed on exploring how discussion-based activities support formative feedback in lectures and the role of technology in promoting interaction between students and lecturers (Egelandsdal, 2018; Egelandsdal et al., 2019; Egelandsdal & Krumsvik, 2017; Egelandsdal & Krumsvik, 2019; Krumsvik, 2012; Krumsvik & Ludvigsen, 2012; Krumsvik & Ludvigsen, 2013; Ludvigsen & Egelandsdal, 2016; Ludvigsen, Krumsvik, & Furnes, 2015; Ludvigsen et al., 2019; Ludvigsen et al., 2020).

8 https://app.cristin.no/projects/show.jsf?id=516274
9 https://www.uib.no/fg/dlc

To address the study’s research questions and to explore these interventions, we have
been inspired by design-based research in creating our research design (Barab & Squire,
2004). Sequential mixed methods are employed as a methodological framework
(Ivankova, 2014; Johnson, Onwuegbuzie, & Turner, 2007). This design was chosen to
suit the nature of the research questions: different approaches to supporting formative
assessment allow the different affordances of different digital tools to be discovered.
Originally, the concept of affordance was used to describe an object and how a subject
relates to it. Gibson (1977) argued that an affordance is both real and relational; the
affordance exists regardless of the need or ability to put it to use. Participatory tools
are flexible, and they can be used for an array of purposes. The idea of an affordance
‘presupposes’ a relation between a certain tool and its purpose in a specific context
(Cave, 2016, p. 72). A gap might exist between the theoretical potential made
available by using a particular technology; the extent of that potential that a teacher or
lecturer can identify and understand; the extent to which teachers can capitalise on and
try to realise that potential in their teaching; and the reality of how the technology and
the activities developed to engage with it play out among students, considering both
intended and unintended outcomes (Kirschner, Martens & Strijbos, 2004). While the
concept of affordances is broad, it is also narrow in that affordances exist in relation to
a specified purpose. Cave (2016) explains it in this way: ‘that is a thing that
adumbrated a purpose or indefinite set of purposes; only a particular use and a
particular context can select the relevant purpose’ (Cave, 2016, p. 51). In this sense,
affordances are relative to the values, purposes or rationales for using them, as well as
the lecturer’s knowledge, skills and experiences (Dohn, 2009). What is more, any
affordance is dependent on the participants in the situation: in this case, on how
dynamic learning activities play out between lecturers and students. This implies that
affordances cannot be established in advance; rather, they emerge within the contexts
in which they are embedded (Dohn, 2009; Bloomfield, 2010).

Using a sequential mixed methods approach, with data collected through surveys,
interviews, audio recordings of peer discussions and material produced in lectures,
allows for thick description and encourages complexity to surface (Barab & Squire,
2004). This allows the affordances of using participatory tools to support formative
assessment in lectures to be explored, both as a theoretical potential, as perceived by
students and lecturers, and as affordances that we can identify when analysing
interactions and the material produced.

In this introduction, I have provided the background for this project, its rationale, aim,
research questions and research design. The table below (Table 1) offers a summary of
the aim, research questions, data collection methods and theoretical framework applied
in the three articles. In the section that follows, the analytical framework relating to
formative assessment and feedback, exploratory talk and dialogic spaces will be
introduced.

Aim: The aim of the thesis is to examine how the use of educational technology has the potential to create moments of contingency and, through those moments, to transform the premises for formative assessment in lectures.

Overall research question: What affordances are there in using participatory tools to support formative assessment in lectures?

Study 1
Title: ‘Creating formative feedback spaces in large lectures’
Journal: Computers & Education
Research questions: How do students in large lectures experience feedback? How do the students make use of the feedback to support the learning activities they are engaged in?
Data: Survey (n=148); individual qualitative interviews (n=6)
Theoretical framework: Formative assessment; feedback; self-regulated learning
Findings: Findings illustrated various ways students applied feedback in their coursework. Students emphasised discussing questions with their peers as offering valuable spaces for them to get feedback on their understanding.

Study 2
Title: ‘Behind the scenes: Unpacking peer discussions and critical reflections in lectures’
Journal: British Journal of Educational Technology
Research question: What characterises peer discussions when student response systems are used to support peer discussions in lectures?
Data: Audio recordings of 87 peer discussions
Theoretical framework: Exploratory talk
Findings: In 68 of the 87 discussions, students were able to create spaces in which to exchange and elaborate on each other’s ideas. In one-third of the discussions, students’ reasoning was less visible.

Study 3
Title: ‘Writings on the wall: How the use of technology can open dialogical spaces in lectures’
Journal: Thinking Skills and Creativity
Research questions: What affordances are there in using an online collaborative whiteboard to support opening, widening and deepening dialogical spaces in lectures? How do teachers perceive learning opportunities in these spaces?
Data: Work produced in lectures; recordings of 15 peer discussions; focus group interview with students; focus group interview with teachers
Theoretical framework: Dialogical space; creative knowledge processes
Findings: Opening dialogical spaces provides students with rich opportunities to reflect on concepts and to develop arguments. Students bring a range of perspectives and experiences to the lecture, thus widening the space. For lecturers, the critical objective was to orchestrate a dialogue with students.

Table 1. Overview of the PhD project
2. Theory
The aim of this chapter is to elaborate further on the theories used in each of the three
studies. The chapter is in two parts. In the first part, the concepts of ‘moment of
contingency’, ‘formative assessment’ (Black & Wiliam, 2009) and ‘feedback’ (Hattie
& Timperley, 2007) are introduced, and the literature is drawn on to describe the
characteristics of high-quality feedback practices in higher education. In the second
part, I present theories of dialogic teaching and elaborate on the concepts of
‘exploratory talk’ (Littleton & Mercer, 2013) and ‘dialogic space’ as interpreted by
Wegerif (2007; 2010; 2013). The chapter thus provides a conceptual framework for
discussing the research question introduced above.

2.1 Formative assessment and feedback


Moment of contingency
How formative assessment can stimulate learning is a growing field of research.
Assessment practice has seen two changes. First, there has been a shift in focus from
assessment at the end of a learning process, assessment of learning, to an activity
which takes place during the course, assessment for learning (Zeng, Huang, Yu, &
Chen, 2018). Second, assessment has also changed from an activity primarily
exercised by the teacher to one in which students and peers are key actors in the
process (Boud & Molloy, 2013). Black and Wiliam (2009) defined formative
assessment as a process in which: ‘evidence about student achievement is elicited,
interpreted, and used by teachers, learners, or their peers, to make decisions about the
next steps in instruction that are likely to be better, or better founded, than the
decisions they would have taken in the absence of the evidence that was elicited’
(Black & Wiliam, 2009, p. 9). Formative assessment might be continuous and
synchronous, embedded in the learning activities within seminars or lectures, or
asynchronous, as written comments on assignments (Baird, Andrich, Hopfenbeck, &
Stobart, 2017). The basic principles of implementing formative assessment are to
create activities that make student learning visible and to ensure that the information
from these activities is used by students and lecturers to shape learning and teaching
(Black & Wiliam, 2018). Black and Wiliam (2009) refer to those activities as moments
of contingency. The etymological meaning of the word contingency is ‘dependent upon
circumstances, not predictable with certainty’.10 Synonyms include crossroads,
possibilities, occurrences, eventuality, probability, turning point, juncture,
opportunities, something that is unexpected, and uncertainty. A premise for creating
moments of contingency is uncertainty concerning what the next step in the
instruction will be: ‘a point in the instructional sequence where the instructor can
change direction in light of evidence about the students’ achievement, thus allowing
her to adapt the instruction to better meet their learning needs’ (Wiliam, 2006, p.
285). Moments of contingency can thus be interpreted as activities that raise students’
or teachers’ awareness of the students’ understanding, in order to adjust teaching and
learning. Moments of contingency might be planned or occur spontaneously. Creating,
recognising and capitalising on those moments as an integrated part of learning
activities helps teachers adjust their teaching to the needs of their students and helps
students make decisions about their learning process. The teacher and students should
also be able to draw inferences based on the formative assessment activities. Different
activities make different types of knowledge visible, both qualitative and quantitative,
and can thus extend or limit the inferences that can be drawn (Furtak et al., 2016).
Inferences are based on what we can observe, and thus they are characterised by
uncertainty (Bennett, 2011; Black & Wiliam, 2018; Furtak et al., 2016).

Feedback
Feedback forms the core of formative assessment and has been shown to be an
important factor in student learning (Black & Wiliam, 1998; Hattie & Timperley,
2007; Shute, 2008). The purpose of feedback is to bridge the gap between current and
desired performance (Sadler, 1989). For effective feedback to occur, students need to

10 https://www.etymonline.com/word/contingent
know the standard or goals for their learning, compare the goals with their own work
and take action to close the gap (Sadler, 1989). The role of feedback in ‘closing the
learning gap’ is frequently suggested in the literature. However, this has also been an
issue for critique (Egelandsdal & Riese, 2020; Ninomiya, 2016; Torrance, 2012;
Moeed, 2015) because it suggests a linear picture of the learning process and
represents a teacher-centred transmission view of learning (Egelandsdal & Riese,
2020). Hattie and Timperley (2007) define feedback as:

...information provided by an agent (e.g., teacher, peer, book, parent, self,


experience) regarding aspects of one’s performance or understanding. A
teacher or parent can provide corrective information, a peer can provide an
alternative strategy, a book can provide information to clarify ideas, a parent
can provide encouragement, and a learner can look up the answer to evaluate
the correctness of a response. Feedback thus is a “consequence” of
performance. (p. 81).

Hattie and Timperley (2007) posed the feedback-related questions, ‘Where am I


going?’, ‘How am I doing?’ and ‘Where to go next?’. They also argue that feedback
has an influence on four levels (p. 87). First, feedback addresses the task level (often
corrective of performance and addressed to individuals or to groups). Second,
feedback addresses the process level (including cues on the learning process or
strategies needed to improve performance). Third, feedback addresses the self-
regulation level (being able to monitor and regulate one’s own learning process,
including a student’s ability to ‘create internal feedback and to self-assess’ (p. 95) and
a student’s role in creating feedback and seeking help). Lastly, feedback addresses the
self, referring to evaluation of the student as a person. The purpose of formative
assessment and feedback is to support self-regulated learning (Black & Wiliam, 2009;
Clark, 2012; Hattie & Timperley, 2007). Self-regulated learning refers to ‘a process
whereby learners set goals for their learning and monitor, regulate and control the
actions, cognition and motivation needed to achieve them’ (Pintrich and Zusho, 2002,
p. 64). The relation between the theories on feedback and self-regulation are discussed

36
by Winnie and Butler (1995) and Black and Wiliam (2009) as an interplay between
external and internal feedback:

feedback is information with which a learner can confirm, add to, overwrite,
tune, or restructure information in memory, whether that information is domain
knowledge, meta-cognitive knowledge, beliefs about self and tasks, or
cognitive tactics and strategies (Winne & Butler, 1995, p. 5740).

A formative interaction is one in which an interactive situation influences


cognition, it is an interaction between external feedback and internal
production by the individual learner. This involves looking at the three aspects,
the external, the internal, and their interactions (Black and Wiliam, 2009,
p. 24).

The phases within the self-regulated learning model posed by Zimmerman and Labuhn
(2012) coincide with Hattie and Timperley’s (2007) questions mentioned above and
help to explain how formative assessment supports self-regulated learning (Andrade,
2010; Clark, 2012).

Although there is considerable evidence that feedback supports learning (Hattie &
Timperley, 2007) and that students value feedback, the literature also recognises that
feedback can influence learning in a negative way (Hattie & Timperley, 2007; Kluger
& DeNisi, 1998). Experiences with feedback are among the areas with which students
are dissatisfied (Price, Handley, Millar, & O’Donovan, 2010). Students often struggle
to make sense of the feedback they receive; this phenomenon is referred to as the
‘feedback gap’ (Evans, 2013). Among the reasons for the feedback gap are that
students do not know the value of receiving feedback, that they do not know how to
use feedback or lack the skills to do so, and that they do not understand the feedback
itself or do not trust it (Johnson, 2012). Another reason emerges when they do not
recognise the information as feedback (Havnes, Dysthe, Smith & Ludvigsen, 2012). A
further issue is that students often do not have a sophisticated notion of feedback,
connecting it to a hierarchical approach in which it is
teachers who are responsible for providing feedback to students (Carless & Boud,
2018). To be able to use feedback, students have to recognise its value, be able to
make judgments about quality in their own and other students’ work, manage emotions
and affect connected to feedback processes (Steen-Utheim & Wittek, 2017) and take
action to address feedback, as well as recognising that feedback might come from
different sources (Carless & Boud, 2018). Despite a large volume of research
emphasising the value of feedback for student learning, and how to establish good
practices, there are still unresolved questions concerning how students make sense of
and use feedback.

The related concepts of formative assessment and feedback are understood differently
according to different perspectives on learning (Baird et al., 2017; Hattie & Gan,
2011). A behaviouristic view of learning emphasises the learning aims and testing
knowledge, and the role of feedback is as a corrective. This view is related to a
transmission of knowledge, a ‘telling approach’, in which students are viewed as
receivers of information (Baird et al., 2017; Evans, 2013; Boud & Molloy, 2013).
From this perspective the task is closed, the focus is on right or wrong and the teacher
has the main role of identifying gaps in students’ learning and telling them how to
close the gaps (Ajjawi & Boud, 2017). An example of this might be the introduction of
the ‘teaching machines’11 in 1920s-1950s, (Benjamin, 1988; Skinner, 2016)12. From
the cognitive perspective, the focus moves to the student, with an emphasis on self-
regulated learning, in which the student should monitor their learning and practices
should support metacognitive awareness (Andrade, 2010; Baird et al., 2017).

Viewing formative assessment and feedback from social constructivist perspectives,


feedback should be ongoing, used to support students in their learning and

11 https://www.youtube.com/watch?v=jTH3ob1IRFo
12 The teaching machine included disks on which the content was divided into 30 small parts. Students answered
questions and received instant feedback on the correctness of their answers. After completing the 30 tasks, the
program started over, displaying only the parts that had been answered incorrectly (Skinner, 1968).
included in the steps in the learning process to support learning (Shepard, 2000; Black
& Wiliam, 2009). This perspective focuses on how formative assessment plays out
moment to moment, with practices emphasising active approaches through dialogue,
self- and peer-assessment, feedback cycles, visible learning and ‘shared experiences’
(Evans, 2013; Carless, Salter, Yang & Lam, 2011). A sociocultural perspective on
learning emphasises feedback practices as an active co-constructed process between
peers and teachers with an emphasis on collaboration, interaction, tools and identity,
and in which the goals and criteria for success are negotiated (Baird et al., 2017;
Evans, 2013). These perspectives and practices often overlap and are not mutually
exclusive, and several researchers argue that they should be connected (Andrade,
2010; Baird et al., 2017; Evans, 2013). Baird et al. (2017) argue that theories of
learning and theories of assessment practice have developed as two separate fields,
and that there should be a stronger correspondence between them.

Formative assessment and feedback in higher education


In higher education, feedback has been connected to giving oral, written or
technology-supported feedback on written work, assignments, presentations and
projects (Winstone, Nash, Parker, & Rowntree, 2017; Winstone, Nash, Rowntree, &
Parker, 2017). It is formal, planned and often asynchronous. This thesis, however,
focuses on immediate, formative assessment activities played out in a face-to-face
lecture setting, using a dialogical format (Black & Wiliams, 2009: Furtak et al. 2016;
Ruiz-Primo, 2011).

Based on the work of a number of scholars in the field (Biggs & Tang, 2007; Boud &
Molloy, 2013; Carless, 2016; Carless & Boud, 2018; Carless, Salter, Yang & Lam,
2011; Evans, 2013; Nicol & Macfarlane-Dick, 2006), a formative feedback practice
in higher education is aligned to the purpose of learning; it should be an ongoing
process used to shape teaching; it should create learning tasks that make student
learning visible; and it should encourage reflection and support students’ capacities to
judge the quality of their own work and progress (Andrade, 2010; Boud & Soler,
2015; Evans, 2013). Feedback should be viewed as a process, and the focus should be
placed on self-regulated learning (Andrade, 2010; Boud & Molloy, 2013; Carless &
Boud, 2018; Carless et al., 2011; Evans, 2013; Nicol & Macfarlane-Dick, 2006). A
summary of principles for effective feedback practices is included in Figure 5.

Make thinking visible: create moments of contingency.

Opportunities for monitoring learning: create feedback processes in which students, in interaction with each other, are encouraged to keep track of and evaluate their own learning.

Feedback from different sources: facilitate feedback from various sources; include opportunities for self- and peer-assessment; students are producers and users of feedback.

Support self-regulated learning: support students in developing the skills they need to plan their own learning.

Dialogical: challenge students, open for different perspectives and for creating their own questions; open for students to voice their ideas.

A continuous process: embedded in course activities and aligned to the learning outcomes and summative assessments.

Should be used to shape learning and teaching: the activities should be used by students and lecturers to shape teaching both in the short term and in the longer term.

Figure 5. Key features of high-quality feedback practices in higher education.

For students to be able to judge their own learning and learning processes, it is
important to let them identify themselves as active learners in a reflective process
which develops over time (Boud & Falchikov, 2007; Winstone & Boud, 2018). This
implies a dialogical approach where feedback is viewed as a conversation (Ajjawi &
Boud, 2017; Boud & Molloy, 2013). Several definitions of what constitutes dialogic
feedback are suggested within the literature. The definitions offered by Carless (2016)

40
aligns with my own position: ‘Feedback is a dialogic process in which learners make
sense of information from varied sources and use it to enhance the qualities of their
work and learning strategies’ (Carless, 2016, p. 1). This definition opens the way for
different feedback practices and emphasises that feedback comes from varied sources
(e.g., teachers, peers and others or played out as an inner dialogue) and that students
use it to make decisions on their learning process. To be able to adjust formative
feedback practices towards more dialogical approaches, the dialogical perspective and
its implications must be considered (Carless, 2016; Steen-Utheim & Wittek, 2017). In
the next section, I sketch out perspectives based in dialogical approaches to teaching
and suggest their relevance to formative assessment.

2.2 A Dialogue Approach to Teaching


To understand what dialogue is, it is common to contrast it with monologue (Linell,
2009). While the monologue perspective is single-voiced or closed and minimises the
possibilities for responsiveness, the dialogical perspective includes multiple voices and
creates possibilities for responsiveness, criticism and challenges (Bakhtin, 1984). A
dialogue always includes more than one voice or perspective, and the meaning of the
dialogue is created in the space between them (Wegerif, 2013). The term dialogue
covers a range of different social practices and is often used as equivalent
to conversations or verbal exchanges between individuals. However, in the field of
education, the concept of dialogue is conceptualised as ‘a certain sub-type of
conversation’ (Howe, 2017, p. 326). A dialogical account of teaching emphasises the
active role taken by students in the process of constructing their own learning
(Alexander, 2006), and involves activities in which students and teachers are ‘building
on, questioning and exploring each other’s ideas in a way that allows for co-creating of
knowledge’ (Mercer, Hennessy, & Warwick, 2010). This calls for a teaching design
that supports a safe climate for students to share their ideas and to explore those ideas
through open-ended and higher-order questions (Mercer, Hennessy, et al., 2010). In the
context of education, dialogue refers to conversations among students and between

students and teachers that have certain qualities or characteristics that distinguish such
approaches from other ‘interaction’ or ‘talk’. Drawing on Hennessy et al. (2016),
examples of such qualities are discussions where participants invite elaboration and
reasoning as they express their ideas. Also, they may make their reasoning explicit,
give reasons for arguments, ask for justification and provide examples and
elaborations.

In different traditions, this type of dialogical practice has been conceptualised in


various ways. Examples include ‘accountable talk’ (Michaels, O’Connor, Hall &
Resnick, 2010), ‘exploratory talk’ (Littleton & Mercer, 2013), ‘dialogic teaching’
(Alexander, 2006), ‘dialogic inquiry’ (Wells, 1999) and ‘dialogic spaces’ (Wegerif, 2013).
Although these frameworks are conceptualised differently, they have something in
common: ‘opening up a shared space so different perspectives can interact and new
learning can occur’ (Wegerif & Major, 2019, p. 113). In this thesis I have used the
frameworks of exploratory talk (Study 2) and dialogic spaces (Study 3) as theoretical
lenses to explore the discussion transcripts, as discussed both in the methods section
and within each of the articles. In the following section, I elaborate on the idea of
exploratory talk, the idea of a dialogic space and how they are connected.

Exploratory talk
Mercer (2004) refers to three modes of talk. Disputational talk is characterised by
disagreements, interruption and individual decision-making; students do not follow up
on each other’s questions, and their contributions and utterances are short, without
justification and often confrontational (Mercer, 2004). Cumulative talk is
characterised by ‘repetitions, confirmations and elaborations’, with participants
building uncritically on each other’s contributions (Mercer, 2004, p. 46). Exploratory
talk is characterised by students engaging critically with each other’s ideas, by
arguments and reasoning that are explicit and accountable within the discussion, by
students offering alternative views or hypotheses, and by participation with the
purpose of ‘joint consideration’ (Mercer, 2004, p. 46). In exploratory talk, participants
pool ideas, opinions and information and think aloud together to create new meanings,
knowledge and understanding (Mercer, Hennessy, & Warwick, 2019). The three
modes of talk are illustrated in Figure 6.

Disputational talk – Cumulative talk – Exploratory talk

Figure 6. Modes of talk.

Littleton and Mercer (2013) argue that when people participate in ‘exploratory talk’,
three different processes take place. The first is appropriation, the process of sharing
information in the group, which allows for increased sensitivity to different possible
ways of thinking. Students can articulate and present their thoughts, and they can
assess their thinking considering the ideas of others. Participants share their
knowledge, and the group can come up with ideas and solve problems that exceed the
capacity of each individual (Littleton & Mercer, 2013). The second process is co-
construction, in which co-regulation is used to share, comment on, justify and
challenge ideas with the purpose of building knowledge together; it refers to the
processes that occur when ideas are critically reflected upon in a group to create a
joint understanding. By engaging in ‘interthinking’, the process in which ideas are
reflected on within the group towards a shared goal, new knowledge is created among
the participants (Littleton & Mercer, 2013). The third process, transformation, involves
engaging in exploratory talk, through which participants learn how to discuss and
obtain a metacognitive awareness of how to reason (Littleton & Mercer, 2013).

Dialogical spaces
The concept of ‘dialogical spaces’ is used in different disciplines and varied contexts.
Examples include collaborative knowledge building supported by technology (Cook,
Warwick, Vrikki, Major, & Wegerif, 2019; Dysthe, 2015; Pifarré, 2019; Pifarre &
Kleine Staarman, 2011) and the use of technology to support classroom dialogue
(Mercer, Hennessy, et al., 2010; Mercer, Warwick, Kershner, & Staarman, 2010). It is
used in different subjects such as
mathematics (Langer-Osuna & Avalos, 2015) and writing instruction (Jesson, Fontich,
& Myhill, 2016). Furthermore, it is applied in a physical, material way in subjects such
as art (Moate, Hulse, Jahnke, & Owens, 2019; Vass, 2019) and sports (Knijnik, Spaaij,
& Jeanes, 2019). Another area is for collaboration in professional learning
(Anagnostopoulos, Smith, & Nystrand, 2008; Wood & Su, 2014). The concept of
‘dialogic space’ is defined in different ways. Broadly understood, it is the ‘quality of
thinking together’ (Moate et al., 2019, p. 168), or it has been elaborated by Jesson and
Myhill (2016) as:

opportunities for meaning making within a conversation; the site in which a


range of voices interact (…) dialogic space is both influenced by and
influential for the knowledge of the participants. Within a dialogic space,
knowledge generation can be seen as emerging from the process of
participating, which is in turn influenced by the knowledge of the various
participants (Jesson et al., 2016, p. 156).

Jesson et al. (2016) also suggest that the dialogical space can be seen as a ‘constantly
shifting shape’ (p. 156), wherein the participants shape the space at the same time as
the space shapes the participants: ‘a space might be created for the interaction of
participants’ voices, within which individual cognition or perspectives of the
participants may change, and those changes in cognition or perspective might in turn
influence the shape of the space’ (Jesson et al., 2016, p. 156). The dialogical space
arises from and depends on the ideas contributed by the participants.

Teo (2016, p. 48) conceptualises a ‘dialogic space’ as a space ‘for students to actively
participate in and critically engage with discussion and thereby take ownership of their
learning’. The idea of opening dialogical spaces in teaching aligns with, and is often
conceptualised through, different approaches to dialogic teaching, such as ‘dialogic
teaching’ (Alexander, 2006), ‘interthinking’ (Mercer & Littleton, 2007) and ‘dialogic
inquiry’ (Wells, 1999). The idea of a dialogical space also identifies with broader philosophical

ideas such as ‘intersubjectivity’, (Pifarre & Kleine Staarman, 2011) as well as Buber’s
(1878-1965) notion of the ‘space of the in between’ (Vass, 2019; Wegerif, 2019). Even
though ideas of a dialogical space are conceptualised in different ways in different
contexts, their common core is an intent to open up spaces to share ideas and also to
question, scrutinise and critique those ideas. This is formulated by Teo (2016):

Equally, it has to do with whether the true value of learning lies in the
acquisition of knowledge structures or the cultivation of a dialogic stance,
which acknowledges the multiplicity of perspectives and multifaceted nature of
knowledge and values an open-mindedness to not only accept and
accommodate, but embrace and celebrate, ‘otherness’ and ‘outsideness’,
difference and ambivalence (Teo, 2016, p. 60).

The concept appears to represent flexibility and therefore offers an interesting


metaphor to use in exploring interactions in lectures. Wegerif (2013) refers to the
appearance of different perspectives as the ‘dialogic gap’: ‘the moment there are at
least two perspectives, then the gap between them opens up the possibility of an
infinite number of possible new perspectives and new insights’ (Wegerif, 2013, p. 21).
Drawing on Wegerif (2010; 2013), a dialogical space is both a philosophical idea and
a practical idea about how to facilitate dialogue in an educational setting. When
describing dialogical spaces, ‘opening’ refers to designing teaching to allow students
to exchange ideas in the first place. ‘Widening’ refers to how many possible voices
and perspectives are available. ‘Deepening’ refers to the degree of reflection on the
perspectives and on the dialogue process itself (Wegerif, 2010). By providing a variety
of perspectives, one can increase the degree of reflection. When deeper reflection
occurs, the number of perspectives can be increased (Wegerif, 2010). In the widening
of the dialogical space, differences might become visible, and one can question
assumptions and ideas. In this way, the space of dialogue can deepen (Wegerif, 2013).
This is also described by Teo (2019):

This grappling and wrestling, manifest through earnest probing, questioning


and challenging, may or may not lead to agreement or even conciliation, but

should broaden and deepen one's views and lead to an honest revaluation of
one's idea or position in relation to those of others. In this way, knowledge is
co-constructed, understandings recalibrated, and learning deepened.
(Teo, 2019, p. 172)

Littleton and Mercer (2013) describe the relationship between interaction and
individual thinking by referring to Vygotsky’s (1896-1934) idea that language is both
a psychological and a cultural tool. Individual understanding is dependent on social
interaction (Mercer, Hennessy & Warwick, 2019, p. 3). The dialogical space is both
collective and individual at the same time, or, as formulated by Wegerif (2013): ‘this
space is not just a property of the group and they are sitting together but it is also
something that each individual can take away with them’ (p. 151). The idea of
‘interthinking’ is associated with the concept of ‘dialogical space’ (Wegerif, 2013). In
relation to dialogical space, cumulative talk is characterised by harmony, repetitions
and uncritical confirmations, providing a widening of the space, while exploratory talk
provides a deepening of the space (Wegerif, 2013).

An ongoing debate addresses the question whether dialogue should be seen as an end
in itself and whether it can be fruitful to use dialogical approaches in a teaching design
guided by more or less fixed learning outcome descriptions (Dysthe, 2011; Matusov &
Wegerif, 2014). Within a dialogical approach to teaching, tension always exists
between the infinite possibilities for multiple voices to appear and the reified closure
that accompanies structured learning outcomes, formative and summative assessments
(Strickland, 2019). In the context of lectures, the intersection between formative
assessment and a dialogical approach to teaching lies in the emphasis on creating a
‘moment of contingency’. The relevance of opening, widening and deepening
dialogical spaces for the concept of the moment of contingency is discussed in Chapter
5. Taking the dialogue perspective into account when defining moments of
contingency, I would describe them as activities that open for students to voice their
opinions and to articulate different ways of knowing and different viewpoints, as well
as stimulating them to create their own questions in an ongoing conversation.

In the next section, the overall research design is presented and discussed.

3. Methods
In this chapter, the overall research design is presented and discussed, as are the
decisions taken for data collection and analysis within each of the three articles.
Research ethics and validity of the study are discussed throughout the chapter and
again in its final section.

3.1 Design-Based Research and Mixed Methods


The overall methodology of the study is inspired by a Design-Based Research
approach (DBR). DBR seeks to unpack ‘the messiness of real-world practice’ (Barab
& Squire, 2004, p. 3). DBR is characterised by being conducted within a single setting
and includes cycles of testing and improvements of practice; as an approach, it aims to
increase theoretical insight as well as insight into improving course design. To address
the complexity of an authentic learning environment, DBR often uses mixed methods
research (MMR) (Anderson & Shattuck, 2012; Barab & Squire, 2004). Figure 7
illustrates the characteristics of a DBR process as described by Anderson and Shattuck
(2012), Barab and Squire (2004) and Wang and Hannafin (2005), and how this process
was executed in this project.

1. Identify challenges: the challenge is identified in terms of practice, research and policy.
2. Develop teaching design: the intervention is informed by literature on formative assessment and feedback and tested in an authentic setting.
3. Explore the intervention in an authentic context: MMR (survey, interviews, audio recordings and focus group interviews) is used to explore the intervention.
4. Adjust and refine practice and develop new insights: practice is adjusted by refining the teaching design.

Figure 7. Design-based research cycle.

The point of departure for the project is to address a challenge identified from practice
as well as from research: to change the lecture from a transmission mode of teaching
to include more active approaches to learning and to provide opportunities for student
feedback. The project was initiated and developed by Krumsvik in 2008–09 as a result
of didactical innovation, followed up by a successful grant application13 for funding in
2010. The aim of the project was to explore techniques to evaluate students’
understanding in lectures as mediating tools to support formative assessment and
feedback. From this funding base, he initiated the DBR project on ‘formative
assessment in higher education’, the purpose of which was to examine and develop
these activities; a teaching design based on authentic assignments, discussions and
student response systems plays a key role in transforming lectures towards a more
student-centred way of organising learning and teaching. The pedagogical
design consisted of cases, videos, discussions and voting options developed by

13 Seed funding.
Krumsvik. In 2011, I started my PhD-project to explore this design and to develop it
further, as a part of the ongoing design-based research project.

To capture the complexity of an authentic learning environment, this project uses a


sequential MMR approach in which quantitative and qualitative elements (survey,
interview and observation) are integrated into the design for the purpose of obtaining a
better understanding of the phenomenon being studied. This thesis is based on a
sequential mixed methods design, which means that one phase of the study (the
qualitative) is informed by another phase (the quantitative) (Fetters, Curry, &
Creswell, 2013; Ivankova, 2014). The mixing of methods was done for weakness
minimisation (the strength of one method could compensate for the weaknesses of
another) and for ‘complementation’ (the different methods should shed light on the
activities from different angles) (Johnson & Onwuegbuzie, 2004). The literature on
MMR explains it as a ‘type of research in which a researcher or team of researchers
combine elements of qualitative and quantitative research approaches (e.g., use of
qualitative and quantitative viewpoints, data collection, analysis, inference techniques)
for the broad purposes of breadth and depth of understanding and corroboration’
(Johnson, Onwuegbuzie, & Turner, 2007, p. 123). Following this, the value of using
DBR and MMR is that this approach allows the activities implemented for supporting
formative assessment to be examined from different angles (Krumsvik & Ludvigsen,
2013).

3.2 A Pragmatic Approach


DBR, with its emphasis on problems in authentic contexts and openness to using
different methods to explore interventions, is situated within a pragmatic approach to
research (Juuti, Lavonen, & Meisalo, 2016; Wang & Hannafin, 2005). The research
process is driven by the research question (Morgan, 2007), which addresses practical
problems in an authentic context (Feilzer, 2010) and emphasises cycles of abductive
reasoning. It is the research question that gives direction to the data collection methods

and analyses used (Feilzer, 2010). In a pragmatic account of research, the researcher is
concerned with temporary, authentic problems in a specific social context
(Schoonenboom, 2017). A pragmatic approach to research aims to solve problems, and
choices of methods and analysis are guided by the problem (the research question) it is
supposed to solve, rather than a certain ontology or epistemology (Schoonenboom,
2017). Morgan (2007) argues that a pragmatic approach to research is abductive in its
nature: research moves between induction and deduction, ‘first converting
observations into theories and then asserting those theories through action’ (Morgan,
2007, p. 71). If what the researcher finds does not fit with what they knew before, the
new insight is used as a point of departure for new questions (Schoonenboom, 2017).

By taking a pragmatic account of research, researchers are free to use the best methods
available to address the research questions they face, instead of being restricted to one
particular way of conducting research (Feilzer, 2010; Morgan, 2007). Pragmatism is
not connected to a particular research method (Feilzer, 2010). Several scholars have
suggested that MMR can be founded in pragmatism (Biesta, 2010; Morgan, 2007).
This fits a design-based approach, which is characterised by the identification of a
problem by researchers or practitioners in context, then addressing the problem by
implementing new practices and exploring the opportunities and constraints of these
practices, using different methods, in an iterative, open-ended process. The insights
gained can be used to refine and develop educational practices and theories (Juuti et
al., 2016; Pool & Laubscher, 2016) that align to a pragmatic approach to
research (Feilzer, 2010). Figure 8 illustrates the coherence between the research
approach, data collection methods, data and analysis, and Figure 9 shows how the
different phases in the project informed each other.

Socio-constructivist perspectives (active learning, formative feedback, technology-supported learning) → pragmatic approach (DBR, MMR) → data (survey, interviews, recordings of discussions, focus group interviews) → analysis (quantitative and qualitative).

Figure 8. Coherence in the research design in the thesis.

Figure 9. DBR and mixed methods research design.

In educational research, ethical questions arise at each step in the research process
(Tangen, 2014). On a macro level, ethical issues are concerned with the design of the
study and its value for society (Stutchbury & Fox, 2009). At the micro level, ethical
questions are concerned with how individuals who participate in research are affected
by it (Stutchbury & Fox, 2006). Other issues include the balance between protecting
participants and the integrity of the research and the researcher (Tangen, 2014; Sikes,

2006). Tangen (2014) proposes three domains based on guidelines for research ethics.
These are based on The Norwegian National Committee for Research Ethics in the
Social Sciences and the Humanities (NESH, 2006). The three domains are (a) the
internal quality of the research, including the validity of the conclusion and
suggestions for practice, (b) protection of the participants, and (c) the value, risk and
relevance of the research for society, including individuals, groups and institutions, as
well as for policy-making. Discussion of ethical considerations relating to these
domains is included throughout.

3.3 Design of the Study


This study is based on a sequential mixed methods design (Ivankova, 2014). In
sequential design it is common to collect data in distinct phases. The purpose of the
qualitative data is to explain and elaborate quantitative findings. This purpose can be
identified as both a ‘complementary’ and an ‘explanatory purpose’ (Greene, Caracelli,
& Graham, 1989, p. 259). In this thesis, the sequential nature of the studies should be
thought of as a metaphorical conversation, in which one phase of the study informs the
next phase, and a metaphorical dialogue, going back and forth between the data sets to
investigate the research questions (Bazeley & Kemp, 2012). Therefore, the integration
occurs sequentially when the results of one phase inform teaching design, research
questions and data collection methods in subsequent phases (Ivankova, 2014). The
integration of different types of data occurred in different stages throughout the
project. The first instance was in the planning of the project, when qualitative and
quantitative research questions were introduced. Second, the results of the survey
informed the development of the interview guide in article 1. Third, the findings from
article 1 informed the choice to observe how discussions played out amongst the
students and to include the online collaborative whiteboard in Study 2. The results
from Study 1 and Study 2 informed the research question and decisions taken in Study
3. Integration also occurs on the method level (Fetters et al., 2013), as the sample
from the qualitative phase was selected from the sample in the quantitative phase.

Another aspect of integration concerns how the data are reported: in Study 1, findings
from different points in the study are reported sequentially, in the order in which they
were collected, to address each of the research questions, in line with a contiguous
approach (Fetters et al., 2013).

To bring clarity to the sequence of data collection (Onwuegbuzie & Johnson, 2006),
the interface, which in this context ‘refers to any point in a study where two or more
data sets are mixed or connected’ (Guest, 2013, p. 5), is illustrated in the procedural
diagram (Ivankova, Creswell, & Stick, 2006) in Figure 10.

Study 1: ‘Creating Formative Feedback Spaces in Large Lectures’
Phase: formulating the purpose of the study. Product: quantitative and qualitative questions are introduced.
Phase: collect and analyse quantitative data. Procedure: develop survey questions; collect data during lectures using student response systems. Product: analysis of the survey (n=148) using SPSS 22 (factor analysis, mean difference analysis (t-test), correlation analysis).
Phase: use findings from the survey to develop an interview guide.
Phase: collect and analyse qualitative data. Procedure: qualitative interviews with the same sample (n=6). Product: thematic content analysis; conclusions drawn based on qualitative and quantitative data.
Teaching design: Study 1 (‘video case, discussion, voting’, Figure 2).

Findings from Study 1 were used to inform the research questions and the research design for Study 2.

Study 2: ‘Behind the scenes: Unpacking peer discussions and critical reflections in lectures’
Phase: collect and analyse qualitative data. Procedure: audio-record and transcribe 87 peer discussions. Product: analysis of confidence, use of subject-specific language and patterns of talk.
Teaching design: Studies 2 and 3 (‘discuss, click and write’).

Findings from Study 1 and Study 2 informed the research questions and research design for Study 3.

Study 3: ‘Writings on the wall: How can technology open dialogic spaces in lectures’
Phase: collect and analyse qualitative data. Procedure: mind maps; focus group interview (n=1); Flinga boards. Product: analysis of students creating mind maps of their experiences; thematic analysis of phenomena described in the interviews.

Extended abstract: write a conclusion based on the three studies.

Figure 10. Procedural diagram.

In sequential mixed methods design, the most important dimensions to address are the
research question, timing and the purpose of the mixing, and how the phases inform
each other (Guest, 2013; Leech, 2012; Sandelowski, 2003). In the following section,
the different methods of data collection and analysis are described, focussing on the
timing of the data collection and the purpose for using the different methods. This is
followed by a description of how the phases are connected to each other, focussing on
how one phase informed the questions asked and methods used in subsequent phases.

3.4 Study 1: Research process


The purpose of this article was to address whether and how a student response system
could create opportunities for a formative feedback practice in large lectures and
thereby support students’ ability to monitor their learning, as well as to provide insight
into how students engage with the feedback in their coursework. In the first phase,
quantitative data were collected using a survey. In the second phase, qualitative data
were collected. The purpose of using interviews was to explain and elaborate findings
from the survey, which can be referred to as both a complementary and an explanatory
purpose (Greene et al., 1989, p. 259). In this way, we were able to secure validity for
our results and to draw more robust conclusions than if we had used one method in
isolation.

Survey
The survey was developed from a survey used in a previous study to assess students’
perceptions of feedback and learning outcomes in large lectures (Krumsvik &
Ludvigsen, 2012). One of the findings from this study was that students used clicker
questions to identify concepts they did not understand. However, the earlier questions
did not address how students might use this information in their subsequent
coursework. This informed our choice to develop items for the current study,
addressing students’ use of feedback. Informed by the literature, a survey was
developed addressing three broad themes: perception of the lecture as a space for

feedback, students’ use of feedback in coursework, and dialogue with peers and
lecturer. The items were developed to target core characteristics of formative feedback
practices as described in literature and how they were conceptualised in our teaching
design. The scales were reviewed by an international expert in the field of formative
assessment and feedback, which strengthens the construct validity. Transparency is
further supported by providing the reader with the items included in the two scales
reported in the article. The wording of the questions was developed with the guidance
of an expert in quantitative methods.

An ethical question can arise in the interface between the protection of the participants
and the integrity of the research (Tangen, 2014). In the planning stage of the project, a
survey was developed that captured various items to give a nuanced picture of student
perception and use of feedback. However, the survey turned out to contain too many
questions and would have taken approximately 20 minutes to complete. The research
team concluded that this time should be used for teaching and learning activities, so
several of the items were removed. To protect the students’ time, the survey could
have been given to the students after the lecture; however, opportunities would then
have been missed to obtain student reflections in the moment, during the lecture. It is
also likely that there would have been a lower response rate. This choice might have
reduced the quality of the survey and the amount of data to be used to draw a valid
conclusion; on the other hand, it protected the students, the factor that we decided must
be given the most weight. We contextualised the data collection methods we used so
that the students could gain hands-on experience of the methods in use, and we also
showed them a possible way of collecting data for their own bachelor’s projects. In
this way, we balanced the integrity of the research with the wellbeing of the
participants.

The sample
The sample was purposive and voluntary (Creswell, 2012). All the students who
attended the last of the five lectures were asked to answer the survey at the end of the
lecture. The students were asked to rank their degree of agreement with different
statements/claims using a seven-point Likert scale. We used the student response

system to collect data in real time immediately after the lecture, which provided
anonymity. Prior to the data collection, the students were given a verbal orientation
about the purpose of the survey and how the data would be stored. In this project,
clickers were handed out to the students randomly (students picked them up when
entering the room), and the clickers were not associated with any particular IP
addresses or anything else that could possibly identify the students. This approach also
allowed a high response rate, as nearly all of the students attending the lecture
participated in the study.

Analysis
The data collected from the student response system were exported to Excel files and
imported to SPSS. To analyse the data from the survey, we used factor analysis,
descriptive data analysis, mean difference analysis (t-tests), and correlation analysis.
These analyses were conducted using SPSS Statistics 22 software. Details about the
analysis are presented in the article. Scores were high on the general questions about
the use of feedback (e.g., whether the clicker questions indicated what students needed
to work on), while the answers to questions targeting more specific uses of feedback
(such as whether it influenced their reading) were distributed across the scale. An
explanation for this finding is that there might be other ways of using feedback that
the specific survey questions were unable to capture. This created a point of departure
for the qualitative phase: to explore how students utilised feedback and to obtain a
nuanced picture of the students’ uptake of feedback.
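To make the analysis pipeline concrete, the sketch below illustrates, in Python, the kinds of analyses listed above: descriptive statistics, a mean difference analysis (independent-samples t-test) and a correlation analysis. It is a minimal sketch only; the actual analyses were conducted in SPSS Statistics 22, and the file name, column names and grouping variable used here are hypothetical placeholders rather than the actual survey items.

# A minimal sketch of the kinds of analyses described above, assuming
# hypothetical Likert-scale data exported from the student response system.
# The actual analyses were run in SPSS Statistics 22; all names are invented.
import pandas as pd
from scipy import stats

# One row per student; 7-point Likert items (path and columns are hypothetical).
df = pd.read_csv("survey_responses.csv")

# Descriptive statistics for two illustrative scales.
print(df[["feedback_space", "use_of_feedback"]].describe())

# Mean difference analysis (independent-samples t-test), e.g. comparing two
# hypothetical groups of respondents.
group_a = df[df["group"] == "a"]["use_of_feedback"]
group_b = df[df["group"] == "b"]["use_of_feedback"]
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")

# Correlation between perceiving the lecture as a feedback space and
# reported use of feedback in coursework.
r, p = stats.pearsonr(df["feedback_space"], df["use_of_feedback"])
print(f"r = {r:.2f}, p = {p:.3f}")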

Interviews
At the end of the last lecture, we invited the students to participate in a semi-structured
interview (Kvale & Brinkmann, 2009). The interview guide had three broad themes of
investigation: How clicker questions and peer discussions supported the learning of
concepts during the lecture, how students used feedback in their coursework and what
role the technology played in these two processes. The invitation was provided
verbally, at the end of the last lecture, and by e-mail (Appendix D). Six students
volunteered and, as such, the sample was characterised as a convenience sample,
‘based on a specific purpose, rather than randomly’ (Teddlie & Tashakkori, 2003, p.
713), and can be referred to as a member-checking and sample integration activity
(Onwuegbuzie & Johnson, 2006). The interviews lasted from 40 minutes to
approximately one hour and were conducted the week after the last lecture. The
students were asked to identify the most crucial aspect of how the activities supported
their learning, with the interviews held in a semi-structured format (Kvale &
Brinkmann, 2009). To secure the validity and check my interpretations of the data, the
discussion was summarised at the end of each interview. Again, one ethical question is
the protection of the students’ time (Tangen, 2007). In this case, participating in an
authentic qualitative interview provided an opportunity for the students to get ‘hands-
on’ experience with qualitative methods (Creswell, 2012). The opportunity to discuss
their own learning strategies might also be useful, so we considered the interview to
be a potentially interesting experience for the students.

The interviews were transcribed and then analysed using thematic analysis (Creswell,
2012). We used NVivo 11 to organise and code the transcribed interviews. The
transcripts were coded by looking for different themes and sub-themes (Braun &
Clarke, 2006). This was done in the following steps: first, the transcripts were read
carefully and coded for different themes (nodes); second, the nodes were organised so
that similar nodes were merged; third, we created a node structure for the different
categories, with main themes and sub-themes under each category. For transparency,
we chose to include this list of categories and themes in the article.
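As an illustration of the kind of node structure this procedure produces, the sketch below represents categories, main themes and sub-themes as a nested Python dictionary. All names are hypothetical placeholders for the purpose of illustration; the actual categories and themes are the ones listed in the article.

# A minimal sketch of a hierarchical node structure of the kind built in NVivo:
# categories contain main themes, which in turn contain sub-themes.
# All names are hypothetical placeholders, not the actual codes in the article.
node_structure = {
    "perception_of_feedback": {
        "lecture_as_feedback_space": ["peer_discussions", "clicker_questions"],
        "sources_of_feedback": ["lecturer", "peers"],
    },
    "use_of_feedback": {
        "during_the_lecture": ["identifying_gaps", "asking_questions"],
        "in_coursework": ["reading_strategies", "exam_preparation"],
    },
}

# Walk the structure to list categories, main themes and sub-themes.
for category, themes in node_structure.items():
    print(category)
    for theme, subthemes in themes.items():
        print("  " + theme)
        for subtheme in subthemes:
            print("    " + subtheme)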

The article included all the items in the questionnaire and described how the items
were situated within theory and informed by literature. The mean scores for each of
the items were also included. Readers were also provided with the interview guide for
the qualitative phase. Therefore, it would be possible for others to replicate the study
in another context or to judge whether a similar pedagogical intervention would be
suitable in another context.

3.5 Study 2: Research process
Article 1 indicates that use of the student response system can promote increased
quality and quantity of peer interaction. Students found that participating in peer
discussions provided them with an opportunity to argue for their views, listen to
others’ arguments, discuss each other’s understanding of concepts, identify alternative
ways of thinking about concepts, get feedback on their understanding, and ask
questions for clarification. However, there might be a gap between how a student
perceives an activity and how the activity unfolds (Nielsen et al, 2016). This
possibility informed our research questions and our decision to use observation of peer
discussions as a source of data for article 2. In article 2 (Ludvigsen et al., 2020), our
purpose was to examine how students share their thinking and what characterises peer
discussions when student response systems are used for formative assessment in
lectures.

To address this question, we recorded the peer discussions as they occurred. Recording
peer discussions allows an in-depth exploration of the micro-processes occurring when peers discuss and contributes to the field by providing insights that are closer to the phenomena under investigation (Creswell, 2012). The purpose of conducting this
observation was ‘complementary’, to gain insight that cannot be captured using
surveys and interviews, as well as ‘triangulation’, to be able to draw valid conclusions
across the three studies (Greene et al., 1989, p. 259). In retrospect, I think this choice
was important because it allowed us to draw more nuanced conclusions on what was
achieved in the discussions.

The recorded discussions were collected during six lectures in two subsequent
semesters. The sample was based on voluntary participation, and the students received
written and verbal information before the lecture. Audio recorders were distributed at
the beginning of each lecture. Some groups decided to record their discussions, while
other groups chose not to record their discussions. The use of audio recorders may
have affected the quality of the discussions (e.g., students may have tried their best to
engage in a productive discussion or were afraid to talk if they were unsure).
Therefore, recording the discussions raises issues of validity. In all research in which participants know that they are being observed, this knowledge might change their behaviour (Creswell, 2012) and thus influence the conclusions drawn. This consideration must also be taken into account in this project. An ethical question was that the students might find the questions stressful to discuss or might not speak as freely as they would have if the discussions had not been recorded.
Again, the students were learning about qualitative research methods; therefore,
volunteering to have their discussions recorded would give them valuable experience in understanding the challenges and opportunities of audio-recording discussions as a method (Cooper, Fleischer, & Cotton, 2012; Cooper, Chenail & Fleming, 2012).

Analysis of peer discussions


Our central interest was to examine how students share their thinking and build
knowledge together and how the technologically-mediated activities support the
discussions. Therefore, we used sociocultural discourse analysis, a framework
developed by Mercer and his colleagues (Littleton & Mercer, 2013). The concept of exploratory talk was operationalised in this study through the coding scheme
presented in Table 3. This coding scheme was inspired by the ‘Cam–UNAM Scheme
for Educational Dialogue Analysis (SEDA: ©2015), developed by Sara Hennessy and
Sylvia Rojas-Drummond’ (Hennessy et al., 2016, p. 42). The rationale for this, and a
description of the analysis procedure, are provided in the article (Study 2).

To secure construct validity, I presented excerpts and analysis of the peer discussions
in several seminars and workshops within the National Graduate School in
Educational Research community, involving experts in the field of higher education
and in interaction analysis.

The data analysis revealed a considerable number of occurrences of uncertainty in a majority of the discussions, even though the students voted for the right answer (in questions with only one correct answer) or a reasonable answer (in questions allowing multiple correct answers). Thus, the aggregated answers could give teachers an
incomplete picture of the students’ understanding. To establish formative assessment
practices that embrace this uncertainty, we argued that opportunities to share thinking
are vital.

3.6 Study 3: Research process


The purpose of this article was to discuss the potential of the online collaborative
whiteboard to support opening, widening and deepening dialogical spaces in lectures.
The need to collect qualitative responses was identified in Studies 1 and 2. Across two cases, we used the opening, widening and deepening of dialogical spaces as analytical concepts.

Recording of discussions
The procedure for recording the discussions was the same as described in article 2. As
suggested by Wegerif and Yang (2011), an analysis of a dialogical space should
examine: (a) the extent to which the activities facilitate opening, and (b) the extent to
which perspectives and voices can be presented, confronted and challenged. It is along
these dimensions that the concept of dialogical space is conceptualised in the article
and thus was used as an analytical framework. The procedure for analysing the
discussion transcript is explained in the article.

One ethical question arising in this phase was ensuring that the students were
portrayed in the best possible way (Stutchbury & Fox, 2009). Learning new things is
hard, and some of this struggle to learn is displayed in the recorded discussions. When
this struggle is transcribed, episodes can seem funny and may not give an accurate picture of the situation as it appears in the recordings, which capture tone and context more effectively. If students were to read the transcripts, or the transcripts were taken
out of their context, the group of students could potentially feel intimidated, or perhaps
stigmatised, even though the transcripts are anonymised. In this project we have taken
these questions into consideration when deciding which discussions to include in the
articles.

Focus group interviews


In case 1, the students who participated in the audio-recorded discussions (article 2) were invited to the focus group interview (Creswell, 2012) in article 3. This
can be referred to as a member-checking and sample integration activity
(Onwuegbuzie & Johnson, 2006). The first and the second authors of the article both
acted as interviewers. In the focus group interview (interview guide in Appendix E),
the students were first asked to draw two mind maps, one that covered experiences from the
discussions supported by a student response system and another that covered
experiences from the discussions supported by Flinga. They were given 10 minutes to
draw the mind maps (see Appendix F). The mind maps were used as a point of
departure for the next phase of the interview, in which each of the students presented
the points made by their maps. Next, we focussed the discussion on how the two
activities were different in the ways they supported learning inside and outside of the
lecture. At the end of the interview, to support interaction among the participants, the
students were asked to discuss and agree on some suggestions for practice, which they
wrote down. Students volunteered to participate in the interview. As in the other
studies, the interview provided students with ‘hands on’ experience in qualitative
methods and an opportunity to reflect on their learning strategies. For this reason, I
argue that the students’ time was well spent. However, participants who volunteered
might be more favourable towards this pedagogical approach than those who did not
volunteer. This must be taken into consideration when interpreting the interview
results.

In case 2, the teacher education course, the primary sources of data were the lecturers’ course evaluation and material produced in the lecture. In the evaluation, the material produced (the Flinga board) was used as a point of departure for the
discussion on the lecturers’ experiences, focussing on how they perceived the
technology to support activities for students to share their thinking (the widening
dimension), and how they approached the perspectives presented (the deepening
dimension). The opening, widening and deepening of dialogical spaces was used as a
lens through which to analyse their experiences and identify challenges. An ethical
question to be addressed concerned the situation of my colleagues (Sikes, 2006), who
might feel vulnerable in this context or feel that the interview took time away from other activities. Nevertheless, their experiences with using and reflecting on affordances of
the online collaborative whiteboard to open, widen and deepen dialogical spaces might
be of interest for their teaching practice in other courses.

Before analysing the focus group discussion, we analysed the mind maps by grouping similar themes into broader themes concerning how the activities were described by the students participating in the interview. The focus group discussion was transcribed verbatim and provided 20 pages of material. The transcript was analysed by thematic analysis (Braun & Clarke, 2006; Creswell, 2016), supported by NVivo. Each utterance or series of utterances illustrating one quality was coded; if an utterance described more than one quality, it was coded to several nodes. Following this, we grouped similar themes and gave them names that condensed the content of each node into a single phrase, for example, ‘you get different views’. Similar but also distinct themes emerged for the clicker-supported and Flinga-supported discussions; these are displayed in the article (p. 7). Even though the concepts of opening, widening and deepening dialogical
spaces made up our theoretical framework, it was not straightforward to place the
different themes into the headings of ‘widening’ and ‘deepening’, because the content
coded under the different themes most often included ideas that concerned both
widening and deepening. However, quotations from the students were displayed in the
article to illustrate how the activities supported the opening, widening and deepening
of dialogical spaces.
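
Because a single utterance could be coded to several nodes, the coding is in effect a many-to-many mapping between utterances and nodes. A minimal, hypothetical sketch of that structure is given below; apart from the theme ‘you get different views’ mentioned above, the utterances and node names are invented for illustration.

```python
# Illustrative sketch only: one utterance may be coded to several nodes.
from collections import defaultdict

# Hypothetical coded utterances: (utterance, nodes it was coded to).
codings = [
    ("Seeing the posts, you get other people's perspectives",
     ["you get different views", "awareness of others"]),
    ("We could take the discussion where we wanted",
     ["students steer the discussion"]),
]

# Invert to a node -> utterances index, the structure used when grouping
# similar nodes into broader themes.
by_node = defaultdict(list)
for utterance, nodes in codings:
    for node in nodes:
        by_node[node].append(utterance)

for node, utterances in sorted(by_node.items()):
    print(f"{node}: {len(utterances)} utterance(s)")
```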

To draw (even) more robust conclusions, the material should have contained more discussions supported by the online collaborative whiteboard. One reason for the limited number of discussions was the acoustics of the lecture hall, which resulted in poor recordings. However, the discussions we were able to capture were interesting and showed some shared characteristics.

3.7 Validity
In both quantitative and qualitative research, researchers must draw inferences
concerning phenomena that are not visible and tangible, but are dependent on our
interpretations (Kleven, 2008). Construct validity ‘is about the quality of
correspondence between something observed and something which cannot be directly
observed’ (Kleven, 2008, p. 224). Threats to construct validity are referred to as
construct underrepresentation and construct irrelevance: when research fails to capture
dimensions in the construct or captures phenomena which are not relevant for the
construct – ‘At the same time, the measurement is too broad and too narrow’ (Kleven,
2008, p. 224). To attend to construct validity, the central concepts used, such as feedback, exploratory talk and dialogical spaces, are operationalised in each of the articles.

Judging validity in studies that use MMR is complex. Questions of validity must be assessed for each of the methods applied within a study, and one must also judge the quality of the design (Ivankova, 2014), how the methods are integrated, and the extent to which the studies allow integrated conclusions to be drawn (Teddlie & Tashakkori, 2003). In
MMR, questions on validity should address both the study design, the distinct phases
and the extent to which the mixing of methods minimises weaknesses and allows the
drawing of integrated conclusions (Onwuegbuzie & Johnson, 2006). In sequential
mixed methods design, in which one phase is built on another, validity is especially
important because the quality of one phase affects the conclusions that can be drawn
and consequently the quality of the research questions and design for the next phase
(Ivankova, 2014). It is also important to address the extent to which and how the
qualitative phase can explain and add depth to the quantitative phase (Ivankova, 2014)
and to what degree it is useful to compare or to triangulate the quantitative and
qualitative samples. This is referred to as ‘the problem of integration’
(Onwuegbuzie & Johnson, 2006, p. 54), which refers to the degree to which it is useful
to compare or triangulate quantitative samples with small qualitative samples, what
kind of data sets or findings one should emphasise and how conflicting findings should
be dealt with.

In DBR, findings should be shared within the field of practice, both in a local context
and in other contexts (Anderson & Shattuck, 2012; Wang & Hannafin, 2005); as such,
the value of the research is limited to the extent to which the results of the research can
‘inform and improve practice’. Questions about external validity concern the extent to which others can make use of the research: not only its findings but also the pedagogical design. Securing validity is a core concern in research in general and plays a
particular role in DBR because the purpose of interventional research is to improve
and refine educational practices (Spencer, Ritchie, Lewis & Dillon, 2003; Stutchbury
& Fox, 2009). If our suggestions for practice are based on incorrect assumptions, they
might mislead practice and potentially affect large groups
of students and teachers in a negative way. Since the purpose of DBR is to improve
practice, the connections between an intervention, methods, conclusions and
suggestions for practice and the data itself should be examined carefully. In the
articles, we have provided rich descriptions of the context and the teaching design, as
well as the participants, the research design and the procedures for data collection and
analysis of data. Providing this information makes it possible for the reader to judge
whether the intervention will fit into a particular context and what modification(s)
must be done to apply the intervention in different contexts (Anderson & Shattuck,
2012). However, a balance has been attempted in reporting the ‘complexity, fragility,
and messiness’ (Barab & Squire, 2004, p. 4) of the local context in a way that could
make it valuable and relevant for practice in another context. This research has
demonstrated its relevance for local contexts. Findings from the project have been
used to support and encourage other lecturers to include some of these activities in
different forums at the University of Bergen. Approaches based on Study 1 are used in
courses on mixed methods, which is also a validation of the work from the standpoint
of practice as well as from the research community (Kvale & Brinkmann, 2009).

Validity in qualitative research is often referred to as ‘trustworthiness’, and is fundamental to the credibility of the researcher (Kvale & Brinkmann, 2009). A key
question to discuss is the credibility of the findings. In this project I, together with my
co-authors, have investigated the research questions by collecting and analysing
different types of data. This allowed us to draw more robust conclusions than by using
one source of data only. This is clearly illustrated by the recorded discussions, because
they showed pitfalls that were not presented in the interview data, and as such allowed
us to nuance the optimistic conclusions drawn in Article 1. Another question
concerning validity is what is gained, and potentially lost, by using different sources of
data. The argument for using these different methods was that it allowed complexity to
surface, which is aligned with the arguments in favour of MMR in the first place.
Using different methods has contributed to knowledge concerning how these
technologically-mediated activities support student learning, providing a window into
the micro-processes of how the activities play out, and these insights are essential to
get a more nuanced understanding of the discussions (Figure 11). Because of this
nuanced understanding, I am better able to develop and refine the teaching design and
draw robust conclusions.

Figure 11: How using a sequential mixed methods design contributed to nuanced understanding. The figure brings together:
• Study 1: Factor 1 (survey), ‘How feedback impacts learning during lectures’ (p. 59)
• Studies 1 and 3: the quality of the discussions as described in interviews (they had to come to an answer; opportunities for students to explain their thinking, argue for their views, listen to others’ arguments and identify alternative ways of thinking; students distinguish these discussions from ‘talk’; a space for feedback)
• Study 2: condensed models of the discussions (pp. 16-17)
• Study 3: multiple layers of interaction (p. 10)

On the other hand, using fewer or only one source of data would have allowed me to
go deeper into the data, the data collection methods, and the methods for analysing and interpreting the data. Clearly, using multiple methods comes with a price: it demands skills in each method, is time consuming and produces a complex dataset that can be challenging to handle.

In retrospect, a timely question to ask is whether the project would have been easier to manage using fewer methods and fewer data sources. Clearly, using different data sources to address the research question adds complexity and depth; at the same time, there is a danger that, by covering so much in a single project, I did not have time to go deeply into each of the methodological approaches, which might decrease the quality of the work. Following the nature of a sequential design, my choice was to explore the initial research questions by zooming in on the paths found to be most interesting as the study evolved. With these limitations in mind, I would argue that the insights into the micro-processes occurring when the activities unfold make an important contribution to understanding how applying such tools potentially supports students’ knowledge-building process in lectures. As previously mentioned, this project is part of a larger one, in which the MMR research design for Study 1 was piloted in Krumsvik (2012) and in Krumsvik and Ludvigsen (2012). For that reason, the main research design had already been used and piloted within the research group.
To put the research into a MMR framework, I also received guidance from Professor
Burke Johnson, who was also an external member of the Digital Learning
Communities Research Group (DLC) while I was doing this work. Professor Johnson
has been included in different steps in this research project as a part of a pragmatic
validation process (Kvale & Brinkmann, 2009). The support that I have received from
the research community has been valuable, and also increases the credibility of the
study.

Based on an analysis of the discourse concerning the value of educational technology in support of learning, Selwyn (2013) argues that the work of educators engaging with educational technology is often underpinned by and situated in ‘progressive education ideals and/or social constructivist and socio-cultural models of learning’ (Selwyn, 2013, p. 10). Furthermore, he argues that digital tools are seen as ‘fitting neatly with a number of values and interests relating to the nature and organization of learning’ (Selwyn, 2013, p. 10). I
can identify with this. In recent years, I have frequently used online collaborative
whiteboards and other tools (see Ludvigsen & Egelandsdal (2017) for an overview) in
lectures and seminars. Insights gained from these activities are difficult to ignore and
can, in the worst-case scenario, contribute to drawing biased conclusions (Sikes,
2006). Nevertheless, the use of these tools has provided me with insights that are
valuable, that can helpfully identify important phenomena in the data and also guide
interpretations. The credibility of this study is fundamentally strengthened by my
interest and expertise in the domain as well as my digital competence and long
research experience in this field. I am not neutral, nor can I bracket these experiences. Instead, they can help in recognising and raising relevant questions.

4. Findings
This chapter provides an overview of the main findings in each of the three articles.

4.1 Article 1
Ludvigsen, K., Krumsvik, R. & Furnes, B. (2015). Creating formative feedback spaces
in large lectures. Computers & Education, 88, 48–63.

The purpose of this article was to examine to what extent and how students experience
the practice of technology-enhanced feedback and to what extent and how they use
feedback to support the learning activities in which they are engaged. Findings from
the survey (n=148) showed a positive correlation between the extent to which students
reported that they used clickers to reflect on their learning and the extent to which they
reported that they used the feedback in their coursework. The majority of the students
reported that they like to get feedback on their understanding during lectures but that
they do not apply the feedback in their coursework to the same extent.

Students in the qualitative sample emphasised discussions of questions with their peers
as offering a valuable space for them to get feedback on their understanding of course
content. Having to choose an answer for each question made the peer-to-peer conversations more focussed and productive. The students using feedback in their coursework employed it as a guide to how they were progressing, as a
way to focus their reading, as a way to identify concepts they did not understand or as
a signal to change their learning strategies. The technology (the clicker and the
questions) played an essential role in changing the dynamics of the lecture on different
levels; these activities transformed it into a space for reflection and self-assessment, a
room for peers to exchange perspectives and elaborate on each other’s ideas and a
catalyst for teacher–student interactions. This research project contributes to the
available knowledge on how students use feedback from such interventions, which is
vital to improve practice when using participatory tools to support formative
assessment.

4.2 Article 2
Ludvigsen, K., Krumsvik, R. & Breivik, J. (2020). Behind the scenes: Unpacking peer
discussions and critical reflections in lectures. British Journal of Educational
Technology.

The purpose of this article was to examine the characteristics of peer discussions when
a student response system was used to support peer discussion. We used the concept of
exploratory talk (Littleton & Mercer, 2013) as a lens through which to examine the
patterns of talk in 87 peer discussions, and made these observations: Almost all of the
discussions focused on the assignment. The students expressed uncertainty in a
majority of the discussions (68 of 87), and insecurity was evident with all types of
questions. The students structured their discussions around the answer options. The
alternatives were used both to open (clarify and explain concepts) and to shut down
discussions (by referring to numbers only). Characteristics of exploratory talk were
identified in 62 discussions; most of the exploratory discussions were generated by
questions allowing for more than one correct answer. With the incorporation of student
response systems into formative assessments, the students’ use of such systems to
facilitate their understanding of the course material should be carefully examined. It is
vital that the lecturer include opportunities to gain insights into the reasoning behind
the students’ answers.

4.3 Article 3
Ludvigsen, K., Ness, I. & Timmis, S. (2019). Writings on the wall: Bringing student
voices to the lecture. Thinking Skills and Creativity.

In this article, we explored the affordances of using a shared online collaborative
whiteboard to open dialogic spaces within lectures. Across the two cases examined, we
found that students used the shared whiteboard in lecture classes to bring a wide range
of perspectives and experiences to the discussion, thus opening the space. Engaging with the collaborative whiteboard allowed student voices to
become visible and provided a nuanced picture of students’ understanding. For the
widening dimension, students emphasised that they brought in content and experiences
not covered in books or lectures and that these activities made them aware of nuances
and different ways of thinking. For deepening the dialogical space, students
emphasised that they valued the ability to take the discussion in a direction they found
to be interesting. We observed that students discussed other students’ posts and began
sorting posts on the whiteboard as soon as they arrived on-screen.

4.4 Findings across the articles


The majority of the students value the possibility of receiving feedback on their
understanding during lectures to monitor their learning, and they emphasise that
explaining their thinking while discussing questions with their peers is valuable as a
feedback space, in addition to feedback generated through the technology used and the
lecturer (Studies 1 and 3). Across the studies, we argue that moments of contingency
can be made explicit when using technology-supported formative assessment activities
in lectures; students find them to be valuable in supporting their learning in lectures
and in their coursework both inside and outside the lectures (Studies 1 and 3). These
activities support feedback on the task level and the self-regulation level by providing
feedback on what is important to learn and the student’s progress towards these goals.
Opening dialogical spaces provides students with rich opportunities to reflect on
concepts and to develop their arguments, and thus to get feedback on their
understanding of course content during the lecture (studies 2 and 3). Students
frequently articulated their uncertainty in the discussions. This implies that the discussions offer students a worthwhile space to test their knowledge, and that drawing conclusions based on the students’ responses involves uncertainty, especially for multiple-choice questions (Studies 2 and 3).

5. Discussion, implications and conclusion
In this thesis, I examine how the use of educational technology has the potential to
create moments of contingency and, through those moments, to transform the premises
for formative assessment in lectures. The main research question was: What
affordances are there in using participatory tools to support formative assessment in
lectures?

In the section that follows, affordances will be discussed according to the findings
across the three studies. As acknowledged in the introduction, there is no such thing as
a ‘traditional lecture’. My references to ‘traditional lectures’ in the next section refer to
the way they have been described here by students, in their own context.

‘you are not expected to be asked’ ‘you are just sitting get the knowledge to
you’ ‘it is only talk talk talk’ ‘there is no time to stop and think’ ‘it is often a
monologue’ ‘it is just feeding of information’ (…) ‘it is a little break’ ‘easy to
talk about other things’ (Ludvigsen et al., 2015, p. 58).

These quotations depict a lecture format that can be characterised as a forum where lecturers talk and students listen and take notes; there are few questions, little interaction, and any discussions among peers are unstructured.

The discussion is organised into four parts. In the first part, I focus on the affordances of the activities to create moments of contingency, discussed in terms of widening and deepening ‘dialogic spaces’, following Wegerif (2013). In the
second part, I discuss how activities in the lectures influence students’ work in the
lecture and outside of the lectures, using the notions of feedback developed by Hattie
and Timperley (2007) as a lens to discuss the empirical findings across the three
studies. In the third part, I discuss the potential for these activities to transform
formative assessment in lectures. Based on this discussion, in the fourth part, I will
suggest implications for theory, practice, research and policy that result from these
observations, before offering my conclusion.

5.1 Dialogical spaces as moments of contingency
A vital question when examining formative assessment activities is, to what extent do
they provide students with opportunities to share their thinking, and what opportunities
become available to students and teachers to draw inferences through these activities
to shape teaching and learning, referred to as moments of contingency. I will use the
metaphor of ‘dialogical space’ to discuss affordances, as perceived by students and
lecturers, and through observed interactions and material products. The quality of
those moments will be determined by focussing on the affordances of the tools to
open, widen and deepen dialogical spaces. During the discussion that follows, I will
highlight some of the empirical data from the articles to bring the discussions to life
and to enter into dialogue with the voices of the participants in the studies. I continue
the discussion by focussing on the potential for these activities and tools to transform
and challenge established practices.

Widening and deepening


The alternatives in a multiple-choice question might be powerful means to make
differing viewpoints visible and open up a discussion space, by providing a framework
to scaffold the discussion. By making their thinking explicit, to themselves, their peers
and their lecturers, students were exposed to differing viewpoints. In lectures, this
provides opportunities for students (as well as for lecturers) to compare experiences,
ideas and understandings shared by other participants, and therefore to reconsider,
nuance or question their own assumptions. Displaying contributions on the online
collaborative whiteboard supports plenary discussions and allows students to analyse
their own contributions in light of the contributions of others. This process opens up
the potential to unpack differences between alternate ways of understanding a
phenomenon and thus provides a widening of the dialogical space. Study 3 (p. 9)
shows that students are trying to add things that are not yet written, thus contributing
to a widening of the dialogic space:

S2: I feel that everything is said.

S1: Yes. I am sure there is something important that is not said yet
(sounds of writing, i.e., tapping on the computer for two minutes).

Also, by seeing the contributions of others, students become more aware of each other. From the recorded discussions in Study 3, we found that students assessed contributions on the collaborative whiteboard and used them to bring new perspectives into the group,
contributing to a widening of the dialogical space. Students are both open to and
critical of other groups’ contributions, as they argue for or against them (Ludvigsen et
al., 2019, p. 10):

S2: What is written on the green there? ‘That people have to know that they are being
observed’

S1: Yeah, regarding your method, it may be important, how people respond. If there is
a camera, people can be affected. If there is someone filming, someone can be
affected.

Ideas from other groups thus serve as voices in the discussions and support students in
joint knowledge building and developing a nuanced understanding. Using the online
collaborative whiteboard extends the possibilities for interactions among students and
between students and lecturers by allowing different layers of interaction between
students in peer groups, between contributors on the shared whiteboard, and between
the contributors on the whiteboard, individual students and the lecturer. Making
alternative perspectives or viewpoints visible allows students to use each other’s
contributions as resources for their own thinking, as also identified by Major et al.,
(2018). This promotes the co-creation of knowledge, self-assessment, and opening up
dialogical spaces where ideas can be ‘scrutinised and challenged’ (Major et al., 2018,
p. 2007). The option to store contributions allows teachers to review diverse ideas and
offers the possibility of extending class discussions as a continuous process. When
students post their ideas, the ideas from their discussions become concrete and subject
to analysis, which contributes to deepening the dialogical space.

The lecturers in Study 3 found it both enjoyable and challenging to approach the
deepening dimension; they both sorted and elaborated on student posts. Structuring
content, or asking students to structure it, and positioning one contribution in relation to another with a similar or different perspective, may be particularly productive for stimulating dialogue in the lecture. It also enhances student engagement with posts on the wall and thus with each other’s contributions, as previously also observed by Cook et al. (2019). Another way of deepening the space is to ask new questions that push thinking along and invite deepening (Herman & Nilson, 2018). Examples of such questions are: What would an exception be? What are the
assumptions behind your claim? What is an implication of your post? How confident
are you in your claim? When would the claim not be supported? What is an example of
your claim? Can you re-formulate your claim into a question? How do you know?
(Brookfield & Preskill, 2012). Asking students to justify, explain and elaborate on
their (or one of their peers’) perspectives opens possibilities for more sophisticated
ways to unpack differences in perspectives and thus to make complexity visible,
providing a deepening of the dialogical space.

Both the widening and deepening of dialogical space make differences visible and
have the potential to encourage students to engage in reflection. The posts on the
online whiteboard can be viewed as ‘seeds’ (Scott et al., 2006) or ‘thinking devices’
(Wells, 2000) for dialogue between students and lecturers, and they can be re-viewed,
improved and explored further, in the moment or over time.

Moving from a teaching approach based on closed questions towards one that includes different perspectives, all of which could potentially be valid, practice moves ‘from identifying with a closed image towards identifying with infinite openness and the potential of the process of dialogue itself’ (Wegerif, 2013, p. 57). Seeing one’s own contributions (posts) in interaction with other perspectives allows one to examine them from a distance, literally, as from a ‘third-person perspective’ or ‘observer’s perspective’, as also illustrated in Study 3 (p. 8), in which students compare a post on the wall with ideas they had discussed, as a kind of validation of their own ideas:

S1: There, it is the one we had, not the one we wrote, but what we talked about
(reads) “how closely can you observe a person?”

Sitting in a lecture, seeing one’s own words from a physical distance, the position of a
witness (Wegerif, 2013) is materialised. What participants witness is how their own
ideas change meaning when put in dialogue with other ideas. This materialisation is
particularly prone to occur when others move, comment on and modify a person’s
contributions or link their own ideas. This highlights the deepening dimension: the non-verbal activities in which students place posts in a certain position according to the content of contributions, and in which teachers help coordinate these contributions without explicitly commenting on their movement. Seeing this happening from a distance changes the focus from oneself to the issue under investigation. Based on Studies 2 and 3, we found that the different tools have different potential for widening and deepening dialogical spaces (Table 2).

Table 2. Affordances of different activities to support widening and deepening of dialogic spaces

Widening dialogical spaces
Discussions supported by multiple choice:
• Different perspectives between students in peer groups
• Alternatives represent different perspectives
• Different perspectives in whole-class discussions
Discussions supported by Flinga:
• Awareness of nuances in contributions
• Bringing in ideas not covered in books
• A multitude of different perspectives

Deepening dialogical spaces
Discussions supported by multiple choice:
• Discussing alternatives
• Explaining and justifying one’s own or other students’ contributions
• Alternatives structure the discussions
• Follow-up by the lecturer
• Focused discussion
Discussions supported by Flinga:
• Discussing other students’ posts
• Contributions are connected to research and theory
• Contributions are sorted and connected
• Contributions are elaborated by the lecturer and students
• Students focus the discussions towards what they find interesting

While it is important to discuss how dialogical space can open, the cases where the
dialogue was not opened or where it collapsed are equally important. Study 2 shows
that the alternatives offered in multiple-choice questions have the power to trigger and
open discussions but may also discourage students from articulating their knowledge
and sharing their thinking – for example, when they were only referring to numbers:
‘sure, it has to be b’. By analysing the discussions in Study 2 we found that in 25
cases, the qualities identified as ‘exploratory talk’ were not evident. When the students
knew the answer, there might be no reason for dialogue and the discussion was shut
down before it started. One consequence, then, is a superficial approach to the content rather than opening up a space for further reflection, a concern also raised by Shapiro et al. (2017). This was also the case when students simply guessed at an answer without elaborating, or only told each other what to vote for without offering an explanation.

When students started arranging posts in the shape of a heart, posted funny pictures or moved each other’s posts, the technology was not transparent but became an object of discussion and a distraction. Consequently, the tool itself (the
online collaborative whiteboard) in some cases became a threat to the opening of
dialogical space in the student groups. The students also found it difficult to open
discussions in this format, because they did not have a starting point for their group as
they did in the multiple-choice format. As this is part of a DBR project, a further direction in which the design could be developed would be to scaffold the discussions by structuring the activities in ways that prompt a more productive opening and a more effective structure for the discussion.

Quality of feedback
Drawing on the work of Nicol and Macfarlane-Dick (2006), among others, a formative
assessment practice should deliver high-quality information to students on their
performance. This feedback might come from various sources: from a teacher, a book
or a peer or through inner dialogue (Hattie & Timperley, 2007). In Studies 2 and 3, we found a high occurrence of expressed uncertainty during discussions with peers, as well as guessing in the moments immediately before voting. In response to this, I argue, in line with previous literature, that feedback provided by technology, based on aggregated responses, might be fragile. Forms of practice in which we draw conclusions about students’ learning from aggregated responses are characterised by uncertainty. To ensure that valid inferences can be drawn from the activities as a basis for feedback, the questions used and the threats to the validity of possible inferences must be carefully examined. To use such tools as a means to
improve quality, the quality of the inferences that can be drawn should be examined at
the outset. In addition, learning is often not visible and tangible, but rather tends to be unfocused and messy and to feature moments of struggle and uncertainty on the part of learners (Dall’Alba & Bengtsen, 2019), as also illustrated in Studies 2 and 3. To
establish formative assessment practices that embrace this uncertainty, opportunities to
share thinking are vital. In any activities created for the purpose of formative
assessment, engaging in dialogue with students allows more sensitivity towards
students’ ideas, which helps in drawing inferences.

Embracing activities that allow both aggregated and qualitative answers, or including different ways of displaying knowledge, such as drawings or pictures, might allow a more sophisticated picture of students’ thinking and ways of knowing to emerge, thereby allowing other inferences to be drawn. Dall’Alba and Bengtsen (2019) argue that underneath what is visible or apparent, ‘holding everything together, we might become aware (…) disconnected thoughts, broken arguments and doubt’ (Dall’Alba & Bengtsen, 2019, p. 1486), maintaining that these should be encouraged and are important for learning. They further argue that, in addition to attending to aspects of learning that are visible, tangible and measurable, attention should also be paid to learning that is emerging and ‘not-yet-formed’ (p. 1483), as also shown in Studies 2 and 3. Allowing different forms of knowledge representation
(pictures, drawings, words, sentences, and arguments, for example) would make
visible things that are not yet clear, or even not yet articulated. This encourages a
‘playful sensibility (…) where the intent isn't to interrogate students about their ideas
and goals but to play together with possibilities and different ideas and see where they
might take us’ (Renshaw, 2017, p. 90). Drawing on Study 2 and Study 3, I argue that
when students’ responses are collected through such systems without the lecturer intending to unpack the reasoning behind the numbers, valuable opportunities for learning are neglected. However, drawing on the three studies, the discussion activities occurring in the groups are valuable, as perceived by students (Studies 1 and 3) and as seen in the recorded discussions (Studies 2 and 3), regardless of how they are picked up by the lecturer.

The relevance of opening dialogical spaces for formative assessment
Black and Wiliam (2009) point out that dialogue might be played out between
‘strongly steered’ and ‘relaxed freedom’ and assert that ‘formative assessment cannot
flourish at either end of this spectrum. The optimum balance … may lie at different
points along it’ (Black & Wiliam, 2009, p. 24). However, affordances – using a
technology and activities to support the creation of moments of contingency – depend
on their intended purposes and what learning outcomes the instructor aims to address
by facilitating discussions (Furtak et al, 2016; Ruiz-Primo, 2011). By connecting the
concept ‘moment of contingency’, to the idea of a ‘dialogical space’, this thesis draws
attention to the quality of such spaces, which is important to be able to develop
practice (Nerland et al, 2018; Wegerif, 2013). To include formative assessment
activities in lectures demands that students are encouraged to share their thinking, in
which the possibilities to interact are critical (Furtak et al, 2016). Also, understanding
is complex (Furtak et al, 2016). By opening dialogical spaces, and acknowledging both
the widening and deepening dimensions, this complexity is allowed to surface. That
would yield real opportunities for a reciprocal connection between lecturers and
students and strengthen the possibilities for contingent teaching.

The value of student response systems is the potential for moments of contingency to
open up between the students when they discuss questions in peer groups, as well as
the potential to open a dialogue involving the whole class. For a continuous dialogue,
in which the lecturer is included, the online collaborative whiteboard is promising.
Using online collaborative whiteboards, it is possible to keep moments of contingency open, to expand them in various directions and to reflect on the ideas to different degrees.

The ideas of dialogical space and moments of contingency arise from different
theoretical fields and use different terms to describe similar processes. Even though
formative assessment and a dialogue approach to teaching have different aims, one
thing is similar: the acknowledgement of the importance of making thinking visible. Opening a dialogical space has its own value, however, and can support students in their progress towards their learning objectives by creating rich opportunities for feedback and self-reflection.

This thesis contributes insight to the field by bringing these two ideas together.

One objection, or at least a reasonable question, at the end of this project might therefore be: is it superfluous to talk about dialogical spaces and moments of contingency at the same time? I argue that the answer is no. Opening dialogical spaces has its own value and its own aims. The concept of moments of contingency emphasises that these activities should be used to support and adjust learning and teaching activities. In other words, seeing the idea of dialogical spaces through the lens of moments of contingency draws attention to how these activities could support student learning and provide opportunities for contingent teaching while aligning practices to learning outcomes and to formative and summative assessment. Seeing the idea of moments of contingency through the lens of dialogical spaces offers a framework to examine the moments critically and brings a more nuanced understanding of what they can be. How these ideas shed light on each other is illustrated in Figure 12.

Figure 12: How moments of contingency and dialogical spaces shed light on each other. The figure juxtaposes two perspectives:
• Seeing the idea of a dialogical space through the lens of moments of contingency helps us draw attention to how these activities could support students’ learning, provide opportunities for contingent teaching and help align activities to learning outcomes, formative and summative assessment.
• Seeing the idea of moments of contingency through the lens of dialogical spaces provides a framework to examine critically the moments we create and helps to bring a more nuanced understanding of what they can be. It moves the discourse on how the use of technology can support formative assessment from short immediate feedback cycles towards the establishment of dialogue that evolves over a longer span of time.

By seeing the two concepts each through the lens of the other, I argue that these
concepts are interrelated and that they can be used simultaneously in a lecture setting
to realise each other’s potential. This also helps identify affordances:

‘you may say in particular cases, “I knew that already”, or “there are other theories for that”; I would reply: “yes but you haven’t seen them from this angle before”. In some cases, the shift of angle may be relatively slight but it is nevertheless critical it reconfigure the whole field (...) the moments, or, more modestly, an instrument that makes a difference, a vehicle that takes you further’ (Cave, 2016, p. 72). As Cave (2016) notes, the etymological meaning of the term ‘affordance’ is ‘further’.

More specifically, viewing moments of contingency from the point of view of dialogical space moves the discourse on how the use of technology can support formative assessment from short immediate feedback cycles towards the establishment of meaningful dialogue that evolves over a longer span of time, which Carless (2019) emphasises as important in formative feedback practices. These concepts
therefore signify the interface between theories of dialogue and those of formative
assessment and feedback.

5.2 Perception and use of feedback


The following section is in two parts: first, I discuss how students experience these
activities as spaces for feedback; second, I discuss how students utilised feedback in their work. Both parts refer to the different levels at
which feedback operated: on the task level, the self-regulation level and the process
level (Hattie & Timperley, 2007).

Feedback on the task level


Discussions among peers were highlighted as valuable for reflecting on one’s own
understanding. In Study 1, students in the qualitative sample emphasised the role of
discussions as a space for feedback, where they could explain their thinking to their
peers and elicit feedback on their understanding. One student explained: ‘I notice that I
cannot answer the questions until I discuss them out loud […] You argue with
someone about why [your ideas] are right, and then suddenly you find arguments for
why it is right and why it is wrong’ (Ane) (Ludvigsen et al., 2015, p. 47). The act of explaining to a peer is a way of explaining their perspectives to themselves: ‘Even though you remember the words, then you should explain it to others, then they ask what it means, and then you realise that you did not know, then you notice’ (S3) (Ludvigsen et al., 2019, p. 11). Articulating their thinking through explanations to their
peers helps students to reflect on their learning, as stated by another student: ‘I can sit
and read or hear and believe that I understand these things. But, if you are to formulate
yourself, with no help in front of you, then I realise if I understand’ (S2) (Ludvigsen et al., 2019, p. 11). The quotations illustrate how students are drawn into reflection on
their own thinking and understanding.

When their activities were supported by the online collaborative whiteboard, students
experienced the feedback as directly connected to their ideas and therefore personal.
Many of the contributions were similar, though with nuances, so feedback on one post
was feedback to many. The possibilities for students to voice their thoughts, ideas and
questions allowed both lecturers and students to gain insight into each other’s thinking,
which would not otherwise be possible. When the perspectives were connected in
various ways, new insights might appear, which again created new possible spaces for
feedback and co-creation of knowledge. Seeing their own contributions among those
of other students constructs spaces in which students can receive feedback, as also
recognised by Yates, Birks, Woods and Hitchins (2015), Baron et al. (2016), Kim et al.
(2015) and Cacchione (2015). By reading other students’ contributions, a student
might become aware of challenges identified by others, which they may also share, as
recognised by Baron et al. (2016) and Pohl (2015). Therefore, these activities have the
potential to draw students into questioning their own assumptions and those of others.
Examples of this frequently arose in Study 3, where students were reading and
commenting on other students’ posts. Students found that seeing each other’s posts
created an enjoyable atmosphere, as also recognised by others (Baron et al., 2016;
Pohl, 2015). This helps to create a safe environment for students to voice their
opinions, which is important for establishing formative assessment practices.

Feedback on the self-regulation level


Feedback also addresses the self-regulation level, supporting students in regulating their learning process, enhancing their ability to ‘create internal feedback and to self-assess’ (Hattie & Timperley, 2007, p. 95) and strengthening their role in creating feedback and seeking help. Different aspects of the activities encourage students to
engage in self-assessment and to reflect on these concepts. In the activities examined,
students perceive feedback from different elements of the intervention, such as being
asked questions, explaining their own understanding and listening to peers. They can
compare understandings with each other and assess their own thinking while
considering their peers’ perspectives. It is especially the act of listening to themselves
when they speak that is highlighted as a way to assess their own understanding. Self-
explaining is recognised within the body of cognitive-oriented literature, referred to as self-explanation (Bielaczyc, Pirolli, & Brown, 1995), and is also identified in
sociocultural theories as being important for learning (Littleton & Mercer, 2013).

The act of writing also offered a space for students to reflect on their ideas and assess
themselves. Things they thought they knew did not appear equally clear when they had to formulate their thinking in writing. Students found this activity to be more
challenging than simply being asked to discuss the content of lectures. Because many
contributions would be similar in content, offering feedback to one of the groups might
provide feedback for several, and also be relevant for students with divergent answers.
Writing is an essential part of reflecting on one’s own thinking (Stead, 2005). Thus,
when the goal is to gather divergent perspectives, collecting written answers might be
of high value. The dynamics of writing in combination with discussions could
potentially create rich opportunities to articulate students’ thinking and understanding,
as was shown empirically in Study 3: Such spaces are important in terms of co-
creating of knowledge and self-assessment.

Mainly by helping students identify and become aware of key concepts, these activities provide both feed-up (Hattie & Timperley, 2007) and feedback on what students do not understand and need to pay closer attention to. Most of them address this by ‘checking things up’ with their peers or the lecturer, or by looking up things on the Internet. This occurs in the lecture, during breaks or immediately after the lecture. In these cases, feedback from the intervention (questions, discussions, follow-up) allows them to monitor their progress and encourages them to address misunderstandings; as such, this feedback operates on a self-regulative level (Hattie & Timperley, 2007). Implementing these activities is thus valuable in helping students to reflect on their own learning during lectures, as these activities influence feedback on all three levels identified.

Feedback on the process level


Feedback can identify misunderstandings or areas that are difficult; however, this does not always help students to identify how to approach these issues. We also have fewer, but still interesting, examples of students using feedback in a more strategic way, for example to change the way they learn. Some students said that they had become aware of the value of asking themselves questions while working, or of working with the questions at the end of book chapters; both are examples of how these activities could influence how students work. Others reported that the activities changed the way they read before the lecture: they read more carefully or in more detail, or formulated questions for themselves while reading. One of the students reconsidered her overall
learning strategies and began formulating questions to herself while reading: ‘I'll do
that for the rest of my life, in my work life as well; if I should learn something, I can
ask questions’ (Ludvigsen et al., 2015, p. 50). Another student suggested that
explaining something might be used as a strategy for assessing one’s own
understanding: ‘You can use it as a method to find out that you did not understand as
much as you thought you did’ (Ludvigsen et al., 2019, p. 11). As such, she discovered
explaining, or thinking aloud, as strategies for learning. This shows that these activities have an influence on the process level. However, this outcome is not explicitly addressed in the teaching design; rather, it is a consequence of students’ awareness of how these activities support their learning, which might influence their strategies, as shown in Studies 1 and 3. Through these activities, students learn strategies they can use beyond the learning required in a single course, which is important in establishing formative feedback practices (Evans, 2013).

In Study 1, we found that the majority of the students reported that they like to receive
feedback on their own understanding during lectures but that they do not apply the
feedback to their own coursework to the same extent, which is in line with the findings
of previous literature (Johnson, 2012; Evans, 2013). We also found a positive
correlation between factor 1 and factor 2, indicating that students who reported using
clicker questions to monitor their learning during the lectures also reported using
feedback to support their learning outside of the lectures to a greater extent than
students who did not report using clicker questions to monitor their learning during the
lectures. This finding is not surprising: some students found that they were on track,
and so they saw no reason to elaborate further on these issues. This finding was
reflected in the qualitative sample. In addition, students who provided examples of how they used feedback in their coursework used the information from the activities to check whether they were on track, to focus their reading, and to address concepts with peers or the lecturer or look things up on the internet when their understanding was unclear. This might also indicate that students who were aware of these activities as opportunities for feedback were also more likely to use feedback in general, as emphasised by Carless and Boud (2018). It is thus important for lecturers to help students to identify the potential value of these activities as crucial spaces for feedback. Students used information received from engaging with the questions to plan strategically which topics they needed to address in assignments, and one student changed her learning strategies. These findings were reflected in Study 3. Students
used questions in the lectures to identify what they needed to work on. They described their own learning process: 'I read, look on the Internet, ask my friends if they have some ideas or something, and then I get to sort things out' (S4) (Ludvigsen et al., 2019, p. 21), and 'Then you notice that you did not understand as much as you thought you did, then you go and read more, or ask' (S2) (Ludvigsen et al., 2019, p. 21). An interesting finding from
Study 1 and Study 3 was that students in the sample were able to articulate how these activities support them in connecting what happens in the lectures to their out-of-lecture activities. Three quotations demonstrate this:

It is like the connection between the PowerPoint and lectures, and the theme
(…) essentially, the line between me and the curriculum, and between me and
the lecturer. (Ludvigsen et al., 2015, p. 57).

(...) I noted the post that I found to be most relevant and maybe those that were commented on by the lecturer. I noted them and used some of them in my exam. (S4) (Ludvigsen et al., 2019, p. 21).

I remember this is what I wrote, then I was given an opportunity to connect everything. When I think about other subjects, I think: this is the lecture. This is the book. This is the exam. But now, I get a real thread between everything. (S2) (Ludvigsen et al., 2019, p. 21).

The large body of literature on technology in the classroom focusses on the immediate value of using participatory tools to support learning in lectures (Egelandsdal et al., 2019). These quotations show the potential for these activities to be aligned with other course activities, and thereby to support students in their study process. An important question then follows: how could we provide better opportunities for students to exploit the epiphanies they experience in these activities in contexts outside the lectures?

This thesis has shown how the use of participatory tools can influence students' regulation of the learning process (Zimmerman & Labuhn, 2012). It supports the forethought phase by helping students to become aware of course goals or to create their own goals. It also influences the performance control phase by opening moments of contingency (MOC): discussions with peers and teachers allow students to assess their own understanding. Finally, it encourages students to engage in self-reflection, for example by reconsidering their learning strategies. These findings are summarised in Figure 13, which illustrates how the activities supported feedback (Hattie & Timperley, 2007) that potentially might influence different phases in the self-regulated learning model posed by Zimmerman and Labuhn (2012).

Figure 13. How the activities influence the different feedback questions. [The figure maps the activities onto the phases of self-regulated learning. Forethought: awareness of course goals, awareness of what is important to learn, deciding on their own goals. Performance control: monitoring their own understanding, using the activities as feedback spaces (from peer discussions, technology, the lecturer and whole-class discussions) and 'checking things up' (asking a peer, asking the lecturer, consulting the book, checking the internet). Self-reflection: adjusting reading strategies (reading more carefully, working more on topics that are difficult, improving the alignment between activities), discovering new strategies (articulating ideas, asking and answering questions, and writing as ways to assess one's own understanding) and focusing coursework.]

In the survey (Study 1), students reported using feedback in their coursework to a lesser extent than they reported using feedback to monitor their understanding during the lecture. When students reported that they used feedback, scores were high on the general questions (whether the clicker feedback indicated what they needed to work on), while the answers to questions targeting more specific uses of feedback (such as whether it influenced their reading) were distributed across the scale. In the qualitative interviews (Studies 1 and 3), students gave examples of how the activities supported their learning outside of the lecture halls. These examples show that the use of the student response systems supports both short feedback loops – checking things out immediately – and longer feedback loops that include reading and writing. They also show students' awareness of strategies for assessing their own understanding.

5.3 The potential for activities to challenge and transform established practices
What affordances do participatory tools have to transform formative assessment practices in lectures? It depends. Painted with a broad brush: in traditional practices, the lecturer talks to one hundred students. However, including activities to collect students' ideas alters this situation. In this scenario, one hundred students share their thoughts with the lecturer and with each other. Interactions are externalised and move the discourse away from where we sit – the time and space of the lecture – into a new space that exceeds the limitations of the room. Such a space can be referred to as a dialogical space (Wegerif, 2013) or, drawing on the work of Pifarre and Kleine Staarman (2011), an intersubjective space. Another metaphor would be Buber's 'space of the in-between' (Vass, 2019; Wegerif, 2019). The discourse is moved from inside each individual's head to a shared space that includes multiple thoughts and perspectives, which each of the students uses as a reference to develop their own thinking, as in Figure 14.

Figure 14. Shared thinking spaces.

For a technology to be disruptive, it must have the potential to improve or alter practice in an unexpected way (Wegerif, 2013, p. 97). I would argue that tools for
students to share their thinking carry the potential to challenge, transform and disrupt
the experience of both students and teachers. Figure 15 below illustrates this change:

Figure 15. The lecturer talks to one hundred students vs. one hundred students sharing their thoughts with the lecturer and with each other.19

The tools one uses change what is visible for the individual student, the peer group and the class, and thus change the premises for how to create moments of contingency. With this change, another change follows: instead of viewing collaborative whiteboards as a means to overcome barriers, teachers can use them to open shared thinking spaces. Such spaces capitalise on the experiences and knowledge of every student participating in the lecture, not only the lecturer's talk. This is radically different from how students described traditional lectures in Study 1. However, finding a balance between student activity and the introduction of new ideas is essential.

By examining activities as they are occurring, Studies 2 and 3 were able to capture affordances that were unanticipated. For students, one affordance of using an online collaborative whiteboard can be to joke with each other or with the lecturer, which was not the intent. The way students shaped a posting into a heart made us aware of the multimodal affordances these tools exhibit. When students posted pictures, it allowed us to become aware of affordances we had not yet considered and to raise new questions.

19 Figure 15 shows a lecturer talking to one hundred students vs. one hundred students sharing their thoughts with the lecturer and with each other. Using the formula I(n) = n(n-1)/2, the number of possible interactions increases from 100 in the first picture to 4950 in the second picture.
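A worked check of the arithmetic in this footnote (a sketch; the 100 interactions in the first picture are read as the lecturer's one-way channel to each of the one hundred students, while the second picture counts all pairwise connections among the students):

\[
I(n) = \frac{n(n-1)}{2}, \qquad I(100) = \frac{100 \cdot 99}{2} = 4950
\]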

What if we had asked students to post pictures instead of text? How would that support dialogue
in the peer groups and in whole class discussions? Would other ways of knowing
become visible? These episodes challenge our perceptions of potential affordances.
This is summarised in a passage by Wertsch (1998):

[A] change in cultural tools may often be a more powerful force of development than the enhancement of individuals' skills. The irreducible
tension between cultural tool and agent that defines mediated action means
that, when considering how to enhance or change a course of development, the
key may often be to change the cultural tool rather than the skills for using that
tool (Wertsch, 1998, p. 103).

However, these tools can also be used in a more monological framework of teaching,
for example where ideas are only evaluated without exploration of alternative
perspectives or where multiple-choice questions have only one correct answer without
any elaboration. In such cases, the tools are not given the potential to transform how
teaching and learning are organised. Nevertheless, their use must be assessed in
relation to its purpose (Cave, 2016). This might call for a qualitative or a quantitative approach, depending on the purpose of creating a moment of contingency.

5.4 Implications
In this section, I outline the implications of this research project for practice, theory, research and policy.

Implications for practice


Based on theory, the literature (Black & Wiliam, 2018; Furtak et al., 2016; Hattie & Timperley, 2007; Littleton & Mercer, 2013; Wegerif, 2013) and empirical findings from the three studies, I outline some suggestions for how to use participatory tools to support formative assessment in lectures in ways that strengthen the likelihood that these in-lecture activities support student learning outside of the lecture. The activities should be aligned with the learning objectives. The key questions to consider are thus to what extent and how the activities allow students to share thinking, and what kind of thinking is being shared. To secure the quality of the inferences that can be drawn from the activities, it could be a strength to include questions on confidence and/or to design an environment where uncertainty and questions can surface, and to take time to clarify and elaborate on these questions. To support high-quality discussion, students should be provided with guidelines on how to participate in the discussion, open-ended tasks and a supportive climate for sharing ideas. Students should also be aware of their role as producers and users of feedback, and the purpose of the activities should be articulated. To encourage students to use the in-lecture activities as resources for their activities outside of lectures, the activities could be connected to assignments or other course elements so that an elaboration of the activities is included in the teaching design. To support feedback on the process level, information on possible approaches to certain topics could be included in the teaching design. Exploring the activities by examining what is achieved in them, so the teaching design can be improved, should be a part of the teaching design itself.

One aim of design-based research is to provide suggestions for and refine educational
practices. These suggestions are based on theory as well as research and are informed
by lessons learned within the three studies and the discussion within this synopsis.
Throughout this project, I have endeavoured to make the context and research
procedure transparent; readers can therefore judge whether the principles can be
modified to fit their context.

Implications for theory and research


I have contributed to scholarship by showing the link between the moment of
contingency and the concept of a dialogic space, as discussed in section 5.1.

This study has placed its focus on the students, their perceptions and how the activities
unfold between them. Further studies should explore the role of the lecturer, and how
lecturers use quantitative or qualitative information received through these systems to
adjust their teaching to the needs of the students. This research was conducted using a design-based approach, employing different methods to explore the intervention. As outlined
in the overview of previous research, a large body of research supports the possible
benefits of including active approaches to student learning in lectures; however, only a
few of these studies use an ethnographic design, as is the case for research in higher
education in general. Recording students’ discussions is important for understanding
what is achieved, to arrive at a nuanced understanding and identify affordances as well
as constraints to be assessed in relation to the purpose of using the technology.

Further research could compare clicker-supported discussions with discussions alone to discern different patterns of talk. In order to assess how the discussion is affected by clicker use, researchers could include a control group, comparing peer discussions with and without a student response system but otherwise following
the same teaching design. To be able to assess how points made earlier in the lecture,
in verbal or written form, are picked up in the peer discussion, these activities should
be observed for a more extended period.

To examine how the use of the online collaborative whiteboard supports student discussions, both in groups and in the whole class, the research design could include video recordings of each group discussion, for example by using head cameras. By using video, it would also be possible to connect the discussion to students' contributions on the online collaborative whiteboard and to gain insight into how ideas from the student discussions are reflected in their posts. We could identify how actions played out on the screen feed into the peer discussions and into the whole-group dialogue.

Using a student response system to collect quantitative data can also be an effective
means of capturing the students’ experiences during their learning processes. For
qualitative data, the online collaborative whiteboard would also be a potential way to
collect qualitative data about students’ experiences of an activity when the activity
takes place.

The relationship between the dialogue and the writing, and the dynamics between the
two modes, should be examined more closely, for example how the written text shapes
the spoken dialogue within the lecture. Exploring the dynamics from moment to
moment and over time would also be of interest, for example how ideas develop individually and how the multitude of ideas creates new ideas or new understanding, e.g., in the course of a semester. The boards are temporally situated, and contributions can be moved, removed, edited, linked together and developed over time. To explore how a board or an argument develops, further research should trace editing on the board, to examine how arguments are developed in the moment or over time, or how other students' posts influence the content of posts. This would
give insight into how such tools support learning over time. Another suggestion is to
examine assignments, to see how or whether arguments posted on the board are
reflected in students’ writing. By examining this, we could unpack how the processes
of co-creating knowledge are shaped by using such tools. There is a considerable body
of research showing the affordances of using digital tools to support dialogue in the
school setting (Mercer et al., 2019; Mercer, Hennessy, & Warwick, 2019), however,
more research should be conducted to examine affordances of digital tools to support
dialogue in lectures in higher education. For further research, a design-based approach
in collaboration with teachers teaching different subjects would be of interest, to
explore how the potential affordances of these tools play out in different disciplines
and contexts.

In this thesis I have focused mainly on textual evidence; however, the online
collaborative whiteboard also allows participants to post pictures and to draw and
connect the different modes (picture, drawing etc.). The synergy between the different
modes, which expands the possibilities to co-create knowledge in the lecture setting,
should be explored further and not be restricted to text. The students value the
questions, the discussions and the follow-up by the lecturer; however, we see that the
use of feedback in students’ courses varies outside the context of the lecture setting.

For further research, I also suggest a focus on how the activities in the lectures are
embedded in other course activities or used as resources to push thinking forward.

Implications for policy


This study indicates that it would be more fruitful for policy to employ a more sophisticated understanding of the term 'technology', which is frequently used as a comprehensive term capturing all types of technology and a multitude of uses. A more
nuanced use of the term would provide a more useful framework for discussing the
opportunities technology offers to raise the quality of teaching and learning in higher
education in general and in lectures in particular. In addition, it could guide leaders,
faculties and educators to make decisions about which technologies they could
potentially use and why they would use them. Policy makers should pay attention to
the qualities of the new spaces these participatory tools might provide, by encouraging
lecturers to examine what is achieved within them, so they can refine and develop their
practices as a part of their professional development. The use of participatory tools
necessitates digital competence among lecturers, as well as insight into how
these tools can be situated within a pedagogical framework.

Conclusion
This thesis examined the affordances of using educational tools to open moments of contingency in lectures, taking advantage of opportunities to move the lecture towards shared spaces of reflection and thus supporting formative assessment in lectures. This opens the possibility for students to voice their opinions and to articulate different ways of knowing and different viewpoints, as well as stimulating students to
create their own questions. Using student response systems changed the lecture from
mere monologue into a form that supports a formative feedback practice by providing
the opportunity for students to reflect on the content and their understanding of the
content during lectures. Voicing their ideas and explaining and listening to others
through the discussions might provide opportunities for students to engage in critical
reflection and self-assessment and to receive feedback on their learning and learning
process. Opening dialogical spaces provides students with rich opportunities to co-create knowledge and to reflect on the learning and learning process in lectures. In
showing how students incorporate each other’s written contributions (from the online
collaborative whiteboard) into their discussions, the thesis contributes to unpacking
how participatory tools support collaborative thinking in the context of lectures.

I have opened a space: a space that can further be widened, deepened and expanded.
So, where do we go from here?

References
Aagaard, T., Lund, A., Lanestedt, J., Ramberg, K. R., & Swanberg, A. B. (2018).
Sammenhenger mellom digitalisering og utdanningskvalitet–innspill og utspill.
Uniped, 41(03), 289-303.
Alexander, R. J. (2006). Towards dialogic teaching (3rd ed.). Cambridge: Cambridge University Press/Dialogos.
Ajjawi, R., & Boud, D. (2017). Researching feedback dialogue: an interactional analysis
approach. Assessment & Evaluation in Higher Education, 42(2), 252-265.
Aljaloud, A., Gromik, N., Billingsley, W., & Kwan, P. (2015). Research trends in student
response systems: a literature review. International Journal of Learning Technology,
10(4), 313-325.
Anagnostopoulos, D., Smith, E. R., & Nystrand, M. (2008). Creating dialogic spaces to
support teachers' discussion practices: An introduction. English Education, 41(1), 4-
12.
Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in
education research? Educational researcher, 41(1), 16-25.
Andrade, H. (2010). Students as the Definitive Source of Formative Assessment: Academic
Self-Assessment and the Self-Regulation of Learning. In H.J. Andrade, G.J. Cizek
(Eds.), In Handbook of formative assessment (pp. 102-117). New York: Routledge.
Anthis, K. (2011). Is it the clicker, or is it the question? Untangling the effects of student
response system use. Teaching of Psychology, 38(3), 189-193.
Arthurs, L. A., & Kreager, B. Z. (2017). An integrative review of in-class activities that
enable active learning in college science classroom settings. International Journal of
Science Education, 39(15), 2073-2091.
Baeten, M., Kyndt, E., Struyven, K., & Dochy, F. (2010). Using student-centred learning
environments to stimulate deep approaches to learning: Factors encouraging or
discouraging their effectiveness. Educational Research Review, 5(3), 243-260.
Baird, J. A., Andrich, D., Hopfenbeck, T. N., & Stobart, G. (2017). Assessment and learning:
fields apart? Assessment in Education: Principles, Policy & Practice, 24(3), 317-350.
Bakhtin, M. M. (1984). Problems of Dostoevsky’s Poetics (C. Emerson, editor and translator).
Minneapolis, MN: University of Minnesota Press.
Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The
journal of the learning sciences, 13(1), 1-14.
Baron, D., Bestbier, A., Case, J. M., & Collier-Reed, B. I. (2016). Investigating the effects of
a backchannel on university classroom interactions: A mixed-method case study.
Computers & Education, 94, 61-76.
Bazeley, P., & Kemp, L. (2012). Mosaics, triangles, and DNA: Metaphors for integrated
analysis in mixed methods research. Journal of Mixed Methods Research, 6(1), 55-72.
Beatty, I. D., & Gerace, W. J. (2009). Technology-enhanced formative assessment: A
research-based pedagogy for teaching science with classroom response technology.
Journal of Science Education and Technology, 18(2), 146-162.
Benjamin, L. T. (1988). A history of teaching machines. American psychologist, 43(9), 703-
712.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education:
Principles, Policy & Practice, 18(1), 5-25.
Bielaczyc, K., Pirolli, P. L., & Brown, A. L. (1995). Training in self-explanation and self-
regulation strategies: Investigating the effects of knowledge acquisition activities on
problem solving. Cognition and instruction, 13(2), 221-252.
Biesta, G. (2010). Pragmatism and the philosophical foundations of mixed methods research.
Sage handbook of mixed methods in social and behavioral research, 2, 95-118.
Biggs, J., & Tang, C. (2007). Teaching for quality learning at university Maidenhead.
Berkshire, UK: McGraw-Hill Education.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational
Assessment, Evaluation and Accountability, 21(1), 5-31.
Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in
Education: Principles, Policy & Practice, 25(6), 551-575.
Bloom, B. S. (1953). Thought-processes in lectures and discussions. The Journal of General
Education, 7(3), 160-169.
Bloomfield, B. P., Latham, Y., & Vurdubakis, T. (2010). Bodies, technologies and action
possibilities: When is an affordance? Sociology, 44(3), 415-433.
Boscardin, C., & Penuel, W. (2012). Exploring benefits of audience-response systems on
learning: a review of the literature. Academic psychiatry, 36(5), 401-407.
Boud, D., & Falchikov, N. (2007). Rethinking assessment in higher education: Learning for
the longer term. New York, NY, US: Routledge/Taylor & Francis Group.
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of
design. Assessment & Evaluation in Higher Education, 38(6), 698-712.
Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in
Higher Education, 41(3), 400-413.
Boyle, J. T., & Nicol, D. J. (2003). Using classroom communication systems to support
interaction and discussion in large class settings. ALT-J, 11(3), 43-57.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research
in psychology, 3(2), 77-101.
Brookfield, S. D., & Preskill, S. (2012). Discussion as a way of teaching: Tools and
techniques for democratic classrooms. Hoboken, NJ: John Wiley & Sons.
Bruff, D. (2011). Classroom response system (‘clickers’) bibliography. Center for Teaching, Vanderbilt University. Retrieved 21.10.2019 from: https://cft.vanderbilt.edu/docs/classroom-response-system-clickers-bibliography/
Cacchione, A. (2015). Creative use of Twitter for Dynamic Assessment in Language Learning
classroom at the university. IxD&A, 24, 145-161.
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback
practices. Studies in higher education, 36(4), 395-407.
Carless, D. (2016). Feedback as dialogue. Encyclopedia of educational philosophy and
theory, 1-6.
Carless, D. (2019). Feedback loops and the longer-term: towards feedback spirals. Assessment
& Evaluation in Higher Education, 44(5), 705-714.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling
uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325.
Castillo-Manzano, J. I., Castro-Nuño, M., López-Valpuesta, L., Sanz-Díaz, M. T., & Yñiguez,
R. (2016). Measuring the effect of ARS on academic performance: A global meta-
analysis. Computers & Education, 96, 109-121.
Cavanagh, A. J., Aragón, O. R., Chen, X., Couch, B. A., Durham, M. F., Bobrownicki, A., . . .
Graham, M. J. (2016). Student buy-in to active learning in a college science course.
CBE—Life Sciences Education, 15(4), ar76.
Cavanagh, M. (2011). Students’ experiences of active engagement through cooperative
learning activities in lectures. Active Learning in Higher Education, 12(1), 23-33.
Cave, T. (2016). Thinking with Literature: Towards a Cognitive Criticism. Oxford: Oxford
University Press.
Chien, Y.T., Chang, Y.H., & Chang, C.Y. (2016). Do we click in the right way? A meta-
analytic review of clicker-integrated instruction. Educational Research Review, 17, 1-
18.
Clark, I. (2012). Formative assessment: Assessment is for self-regulated learning. Educational
Psychology Review, 24(2), 205-249.
Clinton, V., & Kelly, A. E. (2017). Student attitudes toward group discussions. Active
Learning in Higher Education, 1469787417740277, 1-1.
Cook, V., Warwick, P., Vrikki, M., Major, L., & Wegerif, R. (2019). Developing material-
dialogic space in geography learning and teaching: Combining a dialogic pedagogy
with the use of a microblogging tool. Thinking Skills and Creativity, 31, 217-231.
Cooper, R., Fleischer, A., & Cotton, F. A. (2012). Building connections: An interpretative
phenomenological analysis of qualitative research students’ learning experiences. The
Qualitative Report, 17(17), 1-16
Cooper, R., Chenail, R. J., & Fleming, S. (2012). A grounded theory of inductive qualitative
research education: Results of a meta-data-analysis. The Qualitative Report, 17(52), 1-
26.
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating
quantitative: and qualitative research. Prentice Hall Upper Saddle River, NJ: Pearson
Education
Crompton, H., & Burke, D. (2018). The use of mobile learning in higher education: A
systematic review. Computers & Education, 123, 53-64.
Dall’Alba, G., & Bengtsen, S. (2019). Re-imagining active learning: Delving into darkness.
Educational Philosophy and Theory, 1-13.
Damşa, C., de Lange, T., Elken, M., Esterhazy, R., Fossland, T., Frølich, N., . . . Nordkvelle, Y. T. (2015). Quality in Norwegian Higher Education: A review of research on aspects affecting student learning. NIFU-rapport (24). Retrieved 21.10.2019 from: https://www.nifu.no/publications/1288405/
Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E.
(2019). What makes for effective feedback: Staff and student perspectives. Assessment
& Evaluation in Higher Education, 44(1), 25-36.
De Gagne, J. C. (2011). The impact of clickers in nursing education: A review of literature.
Nurse education today, 31(8), e34-e40.
Denker, K. J. (2013). Student response systems and facilitating the large lecture basic
communication course: Assessing engagement and learning. Communication Teacher,
27(1), 50-69.
Dohn, N. B. (2009). Affordances revisited: articulating a Merleau-Pontian view. International
Journal of Computer-Supported Collaborative Learning, 4(2), 151-170.
Dunn, P. K., Richardson, A., Oprescu, F., & McDonald, C. (2013). Mobile-phone-based
classroom response systems: Students’ perceptions of engagement and learning in a
large undergraduate course. International Journal of Mathematical Education in
Science and Technology, 44(8), 1160-1174.
Dysthe, O. (2011). Opportunity spaces for dialogic pedagogy in test-oriented schools: A case
study of teaching and learning in high school. In E. J. White & M. Peters (Eds.),
Bakhtinian pedagogy: Opportunities and challenges for research, policy and practice
in education across the globe (pp. 69–90). New York: Peter Lang.
Dysthe, O. (2015). Writing pedagogy in online settings—A widening of dialogic space? In Learning and Teaching Writing Online (pp. 186–193). BRILL.
Egelandsdal, K. (2018). Clickers and formative feedback at university lectures. Exploring students' and teachers' reception and use of feedback from clicker interventions (Doctoral dissertation). University of Bergen, Bergen.
Egelandsdal, K., & Krumsvik, R. J. (2017). Clickers and formative feedback at university
lectures. Education and Information Technologies, 22(1), 55-74.
Egelandsdal, K., & Krumsvik, R. J. (2017). Peer discussions and response technology: short
interventions, considerable gains. Nordic Journal of Digital Literacy, 12(01-02), 19-
30.
Egelandsdal, K., & Krumsvik, R. J. (2019). Clicker interventions at university lectures and the feedback gap. Nordic Journal of Digital Literacy, 14(1–2), 69–86.
Egelandsdal, K., Ludvigsen, K., & Ness, I. J. (2019). Clicker Interventions in Large Lectures
in Higher Education. Learning, Design, and Technology. Doi: 10.1007/978-3-319-
17727-4_147-1
Egelandsdal, K., & Riese, H. (forthcoming). Never Mind the Gap: Formative Assessment Confronted with Dewey's and Gadamer's Concept of Experience. European Journal of Education.
Elavsky, C. M., Mislan, C., & Elavsky, S. (2011). When talking less is more: exploring
outcomes of Twitter usage in the large‐lecture hall. Learning, Media and Technology,
36(3), 215-233.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of
educational research, 83(1), 70-120.
Feilzer, M. (2010). Doing mixed methods research pragmatically: Implications for the
rediscovery of pragmatism as a research paradigm. Journal of Mixed Methods
Research, 4(1), 6-16.
Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed
methods designs—principles and practices. Health services research, 48(6pt2), 2134-
2156.
Fies, C., & Marshall, J. (2006). Classroom response systems: A review of the literature.
Journal of Science Education and Technology, 15(1), 101-109.
Fluckiger, J., Vigil, Y. T. Y., Pasco, R., & Danielson, K. (2010). Formative feedback:
Involving students as partners in assessment to enhance learning. College teaching,
58(4), 136-140.
French, S., & Kennedy, G. (2017). Reassessing the value of university lectures. Teaching in
Higher Education, 22(6), 639-654.
Friesen, N. (2011). The lecture as a transmedial pedagogical form: A historical analysis.
Educational researcher, 40(3), 95-102.
Friesen, N. (2013). Educational technology and the “New language of learning”: Lineage and
limitations. In The politics of education and technology (pp. 21-38). Palgrave
Macmillan, New York.
Furtak, E. M., Glasser, H.M, & Wolfe, Z.M. (2016). The feedback loop: Using formative
assessment data for science teaching and learning. Arlington, Virginia: NSTA Press
Gao, F., Luo, T., & Zhang, K. (2012). Tweeting for learning: A critical analysis of research on
microblogging in education published in 2008–2011. British Journal of Educational
Technology, 43(5), 783-801.
Gibson, J. J. (1977). The theory of affordances. In R. E. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing (pp. 67–82). Hillsdale, NJ: Lawrence Erlbaum Associates.
Good, K. C. (2013). Audience Response Systems in higher education courses: A critical
review of the literature. International Journal of Instructional Technology and
Distance Learning, 10(5), 19-34.
Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and
associated methodologies. Health Information & Libraries Journal, 26(2), 91-108.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for
mixed-method evaluation designs. Educational evaluation and policy analysis, 11(3),
255-274.
Guest, G. (2013). Describing mixed methods research: An alternative to typologies. Journal
of Mixed Methods Research, 7(2), 141-151.
Haggis, T. (2009). What have we been thinking of? A critical overview of 40 years of student
learning research in higher education. Studies in Higher Education, 34(4), 377–390.
Han, J. H. (2014). Closing the missing links and opening the relationships among the factors:
A literature review on the use of clicker technology using the 3P model. Journal of
Educational Technology & Society, 17(4), 150-168.
Harrington, C., & Zakrajsek, T. D. (2017). Dynamic lecturing: Research-based strategies to
enhance lecture effectiveness. Virginia: Stylus Publishing, LLC.
Hattie, J., & Gan, M. (2011). Instruction based on feedback. In P. Alexander & R. E. Mayer
(Eds.), Handbook of research on learning and instruction (pp. 249–271). New York,
NY: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of educational research,
77(1), 81-112.
Havnes, A., Smith, K., Dysthe, O., & Ludvigsen, K. (2012). Formative assessment and
feedback: Making learning visible. Studies in Educational Evaluation, 38(1), 21-27.
Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education, 42(8), 1567-1579.
Hennessy, S., Rojas-Drummond, S., Higham, R., Márquez, A. M., Maine, F., Ríos, R. M., . . .
Barrera, M. J. (2016). Developing a coding scheme for analysing classroom dialogue
across educational contexts. Learning, Culture and Social Interaction, 9, 16-44.
Herman, J.H., & Nilson, B.H. (2018). Creating Engaging Discussions: Strategies for
Avoiding Crickets in Any Size Classroom and Online. Virginia: Stylus Publishing,
LLC.
Hew, K. F., Lan, M., Tang, Y., Jia, C., & Lo, C. K. (2019). Where is the “theory” within the
field of educational technology research? British Journal of Educational Technology,
50(3), 956-971
Howe, C. (2017). Advances in research on classroom dialogue: Commentary on the articles.
Learning and Instruction, 48, 61-65.
Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience
response systems (clicker-based technologies) on cognition and affect. Computers &
Education, 94, 102-119.
Hyun, J., Ediger, R., & Lee, D. (2017). Students' Satisfaction on Their Learning Process in
Active Learning and Traditional Classrooms. International Journal of Teaching and
Learning in Higher Education, 29(1), 108-118.
Ivankova, N. V. (2014). Implementing quality criteria in designing and conducting a
sequential QUAN→ QUAL mixed methods study of student engagement with
learning applied research methods online. Journal of Mixed Methods Research, 8(1),
25-51.
Ivankova, N. V., Creswell, J. W., & Stick, S. L. (2006). Using mixed-methods sequential
explanatory design: From theory to practice. Field methods, 18(1), 3-20.
James, M. C., & Willoughby, S. (2011). Listening to student conversations during clicker
questions: What you have not heard might surprise you! American Journal of Physics,
79(1), 123-132.
Jesson, R., Fontich, X., & Myhill, D. (2016). Creating dialogic spaces: Talk as a mediational
tool in becoming a writer. International Journal of Educational Research, 80, 155-
163.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm
whose time has come. Educational researcher, 33(7), 14-26.
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed
methods research. Journal of Mixed Methods Research, 1(2), 112-133.
Juuti, K., Lavonen, J., & Meisalo, V. (2016). Pragmatic Design-Based Research – Designing
as a Shared Activity of Teachers and Researches. In D. Psillos & P. Kariotoglou
(Eds.), Iterative Design of Teaching-Learning Sequences: Introducing the Science of
Materials in European Schools (pp. 35-46). Dordrecht: Springer Netherlands.
Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience
response systems: A review of the literature. Computers & Education, 53(3), 819-827.
Keough, S. M. (2012). Clickers in the Classroom: A Review and a Replication. Journal of
Management Education, 36(6), 822-847.
Kirkwood, A., & Price, L. (2014). Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review. Learning, Media and Technology, 39(1), 6-36.
Kirschner, P. A., Martens, R. L., & Strijbos, J.W. (2004). CSCL in higher education? A
framework for designing multiple collaborative environments. In P. Dillenbourg
(Series Ed.) & J.-W. Strijbos, P. A. Kirschner, & R. L. Martens (Vol. Eds.),
Computer-supported collaborative learning: Vol. 3. What we know about CSCL and
implementing it in higher education (pp. 3–30). Boston, MA: Kluwer Academic.
Kleven, T. A. (2008). Validity and validation in qualitative and quantitative research. Nordic
Studies in Education, 28(03), 219-233.
Kluger, A. N., & DeNisi, A. (1998). Feedback interventions: Toward the understanding of a
double-edged sword. Current directions in psychological science, 7(3), 67-72.
Knijnik, J., Spaaij, R., & Jeanes, R. (2019). Reading and writing the game: Creative and
dialogic pedagogies in sports education. Thinking Skills and Creativity, 32, 42-50.
Krumsvik, R. (2012). Feedback Clickers in Plenary Lectures: A New Tool for Formative
Assessment? In Transformative approaches to new technologies and student diversity
in futures oriented classrooms (pp. 191-216): Dordrecht: Springer.
Krumsvik, R. J., & Ludvigsen, K. (2012). Formative E-assessment in plenary lectures. Nordic
Journal of Digital Literacy, 7(01), 36-54.
Krumsvik, R., & Ludvigsen, K. (2013). Theoretical and methodological issues of formative e-
assessment in plenary lectures. International Journal of Pedagogies and Learning,
8(2), 78-92.
Kunnskapsdepartementet (2017). Meld. St. 16: Kultur for kvalitet i høyere utdanning. Oslo: Kunnskapsdepartementet.
Kunnskapsdepartementet (2018). Tilstandsrapport for høyere utdanning 2018. Oslo: Kunnskapsdepartementet.
Kvale, S., & Brinkmann, S. (2009). Det kvalitative forskningsintervju. Oslo: Gyldendal
akademisk.
Langer-Osuna, J. M., & Avalos, M. A. (2015). "I'm trying to figure this out. Why don't you
come up here?": heterogeneous talk and dialogic space in a mathematics discussion.
Zdm-Mathematics Education, 47(7), 1313-1322. doi:10.1007/s11858-015-0735-y
Laurillard, D. (2013). Rethinking university teaching: A conversational framework for the
effective use of learning technologies: Routledge.
Leech, N. L. (2012). Writing mixed research reports. American Behavioral Scientist, 56(6),
866-881.
Lillejord, S., Børte, K., Nesje, K., & Ruud, E. (2018). Learning and teaching with technology
in higher education–a systematic review. Oslo: Knowledge Center for Education.
Retrieved from:
https://www.researchgate.net/publication/327057633_Learning_and_Teaching_With_
Technology_in_Higher_Education_-_a_systematic_review
Linell, P. (2009). Rethinking language, mind, and world dialogically. Greenwich, CT:
Information Age Publishing
Littleton, K., & Mercer, N. (2013). Interthinking: Putting talk to work London: Routledge.
Lobo, G. (2017). Active learning interventions and student perceptions. Journal of Applied
Research in Higher Education, 9(3), 465-473.
Ludvigsen, K. (2017). Slik får vi studentene til å delta aktivt i forelesningen. Forskning.no
Retrieved from: https://forskning.no/pedagogiske-fag-skole-og-utdanning-
kronikk/kronikk-slik-far-vi-studentene-til-a-delta-aktivt-i-forelesningen/1164670
Ludvigsen, K., & Egelandsdal, K. (2016). Formativ e-vurdering i høyere utdanning. In R. J. Krumsvik (Ed.), Digital læring i skole og lærerutdanning (pp. 256–273). Oslo: Universitetsforlaget.
Ludvigsen, K., Krumsvik, R., & Furnes, B. (2015). Creating formative feedback spaces in
large lectures. Computers & Education, 88, 48-63.
Ludvigsen, K., Krumsvik, R. J., & Breivik, J. (2020). Behind the scenes: Unpacking peer discussions and critical reflections in lectures. British Journal of Educational Technology.
Ludvigsen, K., Ness, I. J., & Timmis, S. (2019). Writing on the wall: How the use of technology can open dialogical spaces in lectures. Thinking Skills and Creativity. doi: 10.1016/j.tsc.2019.02.007
Lumpkin, A., Achen, R. M., & Dodd, R. K. (2015). Student perceptions of active learning.
College Student Journal, 49(1), 121-133.
MacArthur, J. R., & Jones, L. L. (2008). A review of literature reports of clickers applicable
to college chemistry classrooms. Chemistry Education Research and Practice, 9(3),
187-195.
MacGeorge, E. L., Homan, S. R., Dunning, J. B., Elmore, D., Bodie, G. D., Evans, E., . . .
Geddes, B. (2008). Student evaluation of audience response technology in large
lecture classes. Educational Technology Research and Development, 56(2), 125-145.
Matusov, E., & Wegerif, R. (2014). Dialogue on ‘dialogic education’: Has Rupert gone over to ‘the dark side’? Dialogic Pedagogy: An International Online Journal. Retrieved 21.10.2019 from: https://dpj.pitt.edu/ojs/index.php/dpj1/article/view/78
Mazur, E. (1999). Peer Instruction: A User’s Manual. Englewood Cliffs, NJ: Prentice Hall.
McMillan, C., Loads, D., & McQueen, H. A. (2018). From students to scientists: The impact
of interactive engagement in lectures. New Directions in the Teaching of Physical
Sciences (13).
McQueen, H. A., & McMillan, C. (2018). Quectures: Personalised constructive learning in
lectures. Active Learning in Higher Education, 1469787418760325. 1-15.
Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social
mode of thinking. Journal of Applied Linguistics, 1(2), 137-168.
Mercer, N., Hennessy, S., & Warwick, P. (2010). Using interactive whiteboards to orchestrate
classroom dialogue. Technology, Pedagogy and Education, 19(2), 195-209.
Mercer, N., Hennessy, S., & Warwick, P. (2019). Dialogue, thinking together and digital
technology in the classroom: Some educational implications of a continuing line of
inquiry. International Journal of Educational Research. vol. 97, 187-199.
Mercer, N., Hennessy, S., & Warwick, P. (2019). Using interactive whiteboards to orchestrate classroom dialogue (reprinted from Technology, Pedagogy and Education, 19(2), 195–209). In N. Mercer (Ed.), Language and the Joint Creation of Knowledge: The selected works of Neil Mercer. New York: Routledge.
Mercer, N., Warwick, P., Kershner, R., & Staarman, J. K. (2010). Can the interactive
whiteboard help to provide ‘dialogic space’ for children's collaborative activity?
Language and Education, 24(5), 367-384.
Michaels, S., O’Connor, C., & Resnick, L. B. (2008). Deliberative discourse idealized and
realized: Accountable talk in the classroom and in civic life. Studies in philosophy and
education, 27(4), 283-297.
Ministry of Education and Research (2016-2017). Quality Culture in Higher Education. Oslo: Ministry of Education and Research.
Ministry of Education and Research (2018). Digitalisation strategy for the higher education sector 2017-2021. Oslo: Ministry of Education and Research.
Moate, J., Hulse, B., Jahnke, H., & Owens, A. (2019). Exploring the material mediation of
dialogic space—A qualitative analysis of professional learning in initial teacher
education based on reflective sketchbooks. Thinking Skills and Creativity, 31, 167-
178.
Moeed, A. (2015). Theorizing formative assessment: Time for a change in thinking. The
Educational Forum, 79, 180-189.
Morgan, D. L. (2007). Paradigms Lost and Pragmatism Regained: Methodological
Implications of Combining Qualitative and Quantitative Methods. Journal of Mixed
Methods Research, 1(1), 48-76.
NESH (National Committee for Research Ethics in the Social Sciences and the Humanities,
Norway). (2006). Guidelines for research ethics in the social science law and the
humanities. NESH: Oslo
Nelson, C., Hartling, L., Campbell, S., & Oswald, A. E. (2012). The effects of audience
response systems on learning outcomes in health professions education. A BEME
systematic review: BEME Guide No. 21. Medical Teacher, 34(6), e386-e405.
Nerland, M.& Prøitz, T.S. (2018). Pathways to quality in higher education: Case studies of
educational practices in eight courses. NIFU-rapport 3/2018. Oslo: NIFU
Neustifter, R., Kukkonen, T., Coulter, C., & Landry, S. (2016). Introducing Backchannel
Technology into a Large Undergraduate Course. Canadian Journal of Learning and
Technology, 42(1), 1-22.
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher
education: a peer review perspective. Assessment & Evaluation in Higher Education,
39(1), 102-122.
Nicol, D. J., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated
learning: A model and seven principles of good feedback practice. Studies in Higher
Education, 31(2), 199-218.
Nielsen, K. L., Hansen-Nygård, G., & Stav, J. B. (2012). Investigating peer instruction: how
the initial voting session affects students' experiences of group discussion. ISRN
education, vol. 2012, 1-9.
Nielsen, K. L., Hansen, G., & Stav, J. B. (2016). How the initial thinking period affects
student argumentation during peer instruction: students’ experiences versus
observations. Studies in Higher Education, 41(1), 124-138.
Ninomiya, S. (2016). The Possibilities and Limitations of Assessment for Learning: Exploring
the Theory of Formative Assessment and the Notion of “Closing the Learning Gap”.
Educational Studies in Japan, 10(0) 79-91.
Norgesuniversitetet (2017). Digitalisering for utdanningskvalitet – Status i norsk høyere
utdanning. Norgesuniversitetets skriftserie 3/2017. Tromsø: Norgesuniversitetet
Norgesuniversitetet (2018) Digitalisering for utdanningskvalitet og aktiv læring i høyere
utdanning. Digital tilstand 1/2018. Tromsø: Norgesuniversitetet
Onwuegbuzie, A. J., & Frels, R. (2016). Seven steps to a comprehensive literature review: A
multimodal and cultural approach. London: Sage.
Pachler, N., Daly, C., Mor, Y., & Mellar, H. (2010). Formative e-assessment: Practitioner
cases. Computers & Education, 54(3), 715-721.
Pagano, R., & Paucar-Caceres, A. (2013). Using systems thinking to evaluate formative
feedback in UK higher education: the case of classroom response technology.
Innovations in education and teaching international, 50(1), 94-103.
Pifarré, M. (2019). Using interactive technologies to promote a dialogic space for creating
collaboratively: A study in secondary education. Thinking Skills and Creativity, 32, 1-
16.
Pifarré, M., & Kleine Staarman, J. (2011). Wiki-Supported Collaborative Learning in Primary
Education: How a Dialogic Space Is Created for Thinking Together. International
Journal of Computer-Supported Collaborative Learning, 6(2), 187-205.
Pimmer, C., Mateescu, M., & Gröhbiel, U. (2016). Mobile and ubiquitous learning in higher
education settings. A systematic review of empirical studies. Computers in Human
Behavior, 63, 490-501.
Pintrich, P. R., & Zusho, A. (2002). Student motivation and self-regulated learning in the
college classroom. In In J. C. Smart & W. G. Tierney (Eds.), Higher education:
Handbook of theory and research (pp. 55-128). New York: Agathon Press
Pohl, A. (2015). Fostering awareness and collaboration in large-class lectures. Dissertation,
Ludwig-Maximilians-Universität. München
Pool, J., & Laubscher, D. (2016). Design-based research: is this a suitable methodology for
short-term projects? Educational Media International, 53(1), 42-52.
Price, M., Handley, K., Millar, J., & O'donovan, B. (2010). Feedback: all that effort, but what
is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289.
Prince, M. (2004). Does active learning work? A review of the research. Journal of
engineering education, 93(3), 223-231.
Prosser, M., & Trigwell, K. (2014). Qualitative variation in approaches to university teaching
and learning in large first-year classes. Higher Education, 67(6), 783-795.
Reimer, L., Nili, A., Nguyen, T., Warschauer, M., & Domina, T. (2016). Clickers in the wild:
A campus-wide study of student response systems. In: Purdue University Press West
Lafayette, IN.
Renshaw, P. D. (2017). Positionality in researching the dialogic self: A commentary on the
possibilities for dialogic theory and pedagogy. Learning, Culture and Social
Interaction, 20, 90-94.
Ross, S. M., Morrison, G. R., Lowther, D. L. (2010). Educational technology research past
and present: Balancing rigor and relevance to impact school learning. Contemporary
Educational Technology, 1, 17–35.
Ruiz-Primo, M. A. (2011). Informal formative assessment: The role of instructional dialogues
in assessing students’ learning. Studies in Educational Evaluation, 37(1), 15-24.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems.
Instructional Science, 18(2), 119-144.
Sandelowski, M. (2003). Tables or tableaux? The challenges of writing and reading mixed
methods studies. Handbook of mixed methods in social and behavioral research, 321-
350.
Sandström, N., Eriksson, R., Lonka, K., & Nenonen, S. (2016). Usability and affordances for
inquiry-based learning in a blended learning environment. Facilities, 34(7/8), 433-449.
Schoonenboom, J. (2017). A performative paradigm for mixed methods research. Journal of
Mixed Methods Research, 13(3), 284-300.
Scott, P. H., Mortimer, E. F., & Aguiar, O. G. (2006). The tension between authoritative and
dialogic discourse: A fundamental characteristic of meaning making interactions in
high school science lessons. Science education, 90(4), 605-631.
Seglem, R., & Haling, L. (2018). “I Got Confused Reading It”: Using Backchannels to
Collaboratively Build Meaning with Texts. Journal of Teaching and Learning with
Technology, 7(1), 43-58.
Selwyn, N. (2013). Discourses of digital ‘disruption’ in education: A critical analysis. Fifth International Roundtable on Discourse Analysis, City University, Hong Kong, 23-25.
Shapiro, A. M., Sims-Knight, J., O'Rielly, G. V., Capaldo, P., Pedlow, T., Gordon, L., &
Monteiro, K. (2017). Clickers can promote fact retention but impede conceptual
understanding: The effect of the interaction between clicker use and pedagogy on
learning. Computers & Education, 111, 44-59.
Sikes, P. (2006). On dodgy ground? Problematics and ethics in educational research.
International Journal of Research & Method in Education, 29(1), 105-117.
Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T.
(2009). Why peer discussion improves student performance on in-class concept
questions. Science, 323(5910), 122-124.
Spencer, L., Ritchie, J., Lewis, J., Dillon, L.(2004). Quality in Qualitative Evaluation: A
framework for assessing research evidence., London: Cabinet Office. Retrieved at
http://www.cebma.org/wp-content/uploads/Spencer-Quality-in-qualitative-
evaluation.pdf
Skinner, B. F. (1968). The technology of teaching. New York: Appleton-Century-Crofts.
Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education,
6(2), 118-131.
Steen-Utheim, A., & Wittek, A. L. (2017). Dialogic feedback and potentialities for student
learning. Learning, Culture and Social Interaction, 15, 18-30.
Strickland, T. H. (2019). Engaged Dialogic Pedagogy and the Tensions Teachers
Face. Dialogic Pedagogy: An International Online Journal, 7. Retrieved at:
http://dpj.pitt.edu/ojs/index.php/dpj1/article/view/224
Stutchbury, K., & Fox, A. (2009). Ethics in educational research: introducing a
methodological tool for effective ethical analysis. Cambridge journal of education,
39(4), 489-504.
Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011).
What forty years of research says about the impact of technology on learning: A
second-order meta-analysis and validation study. Review of Educational Research, 81,
4–28.
Tangen, R. (2014). Balancing ethics and quality in educational research—the ethical matrix
method. Scandinavian Journal of Educational Research, 58(6), 678-694.
Teddlie, C., & Tashakkori, A. (2003). Major issues and controversies in the use of mixed
methods in the social and behavioral sciences. In A. Tashakkori & C. Teddlie (Eds.),
Handbook of mixed methods in social and behavioral research (pp. 3-50). Thousand
Oaks, CA: Sage.
Tight, M. (2013). Discipline and methodology in higher education research. Higher Education
Research & Development, 32(1), 136–151.
Teo, P. (2016). Exploring the dialogic space in teaching: A study of teacher talk in the pre-
university classroom in Singapore. Teaching and teacher education, 56, 47-60.
Teo, P. (2019). Teaching for the 21st century: A case for dialogic pedagogy. Learning,
Culture and Social Interaction, 21, 170-178.
Torrance, H. (2012). Formative assessment at the crossroads: conformative, deformative and
transformative assessment. Oxford Review of Education, 38, 323-342.
van der Kleij F., Adie L. (2018) Formative Assessment and Feedback Using Information
Technology. In: Voogt J., Knezek G., Christensen R., Lai KW. (Eds) Second
Handbook of Information Technology in Primary and Secondary Education, 601-
615. Cham: Springer
Vass, E. (2019). Musical co-creativity and learning in the Kokas pedagogy: Polyphony of
movement and imagination. Thinking Skills and Creativity, 31, 179-197.
Wang, F., & Hannafin, M. J. (2005). Design-based research and technology-enhanced
learning environments. Educational Technology Research and Development, 53(4), 5-
23.
Wegerif, R. (2007). Dialogic Education and Technology: Expanding the Space of Learning.
London: Springer.
Wegerif, R. (2013). Dialogic: Education for the internet age: London: Routledge.
Wegerif, R. (2010). Dialogue and teaching thinking with technology: Opening, expanding and deepening the ‘inter-face’. In K. Littleton & C. Howe (Eds.), Educational Dialogues: Understanding and Promoting Productive Interaction (pp. 304–322). Abingdon: Routledge.
Wegerif, R. (2019). Dialogic education. In Oxford Research Encyclopedia of Education.
Wegerif, R., & Major, L. (2019). Buber, educational technology, and the expansion of
dialogic space. AI & SOCIETY, 34(1), 109-119.
Wegerif, R., & Yang, Y. (2011). Technology and dialogic space: Lessons from history and from the ‘Argunaut’ and ‘Metafora’ projects. In Connecting Computer-Supported Collaborative Learning to Policy and Practice: CSCL 2011 Conference Proceedings (long papers, pp. 312–318).
Wells, G. (1999). Dialogic inquiry: Towards a socio-cultural practice and theory of
education. Cambridge: Cambridge University Press.
Wertsch, J. V., (1993). Voices of the mind: Sociocultural approach to mediated action.
Cambridge: Harvard University Press.
Wertsch, J. V. (1998). Mind as action. Oxford: Oxford University Press.
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners' agentic
engagement with feedback: A systematic review and a taxonomy of recipience
processes. Educational Psychologist, 52(1), 17-37.
Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2017). ‘It'd be useful, but I
wouldn't use it’: barriers to university students’ feedback seeking and recipience.
Studies in Higher Education, 42(11), 2026-2041.
Wood, A. K., Galloway, R. K., Hardy, J., & Sinclair, C. M. (2014). Analyzing learning during
Peer Instruction dialogues: A resource activation framework. Physical Review Special
Topics-Physics Education Research, 10(2), 1-15.
Wood, M., & Su, F. (2014). A Mission Possible: Towards a Shared Dialogic Space for
Professional Learning in UK Higher Education. European Journal of Higher
Education, 4(4), 363-372.
Yates, K., Birks, M., Woods, C., & Hitchins, M. (2015). #Learning: The use of back channel technology in multi-campus nursing education. Nurse Education Today, 35(9), e65-e69.
Zeng, W., Huang, F., Yu, L., & Chen, S. (2018). Towards a learning-oriented assessment to
improve students’ learning—a critical review of literature. Educational Assessment,
Evaluation and Accountability, 30(3), 211-250.
Zimmerman, B. J., & Labuhn, A. S. (2012). Self-regulation of learning: Process approaches to
personal development. In K. R. Harris, S. Graham, T. Urdan, C. B. McCormick, G. M.
Sinatra, & J. Sweller (Eds.), APA educational psychology handbook, Vol. 1. Theories,
constructs, and critical issues (pp. 399-425). Washington, DC, US: American
Psychological Association.
I
Computers & Education 88 (2015) 48–63
Creating formative feedback spaces in large lectures


Kristine Ludvigsen*, Rune Krumsvik 1, Bjarte Furnes 2
University of Bergen, Department of Education, Christiesgt 13, Postboks 7802, 5020 Bergen, Norway

Article history: Received 28 December 2014; received in revised form 3 April 2015; accepted 6 April 2015; available online 23 April 2015.

Keywords: Feedback; Plenary lectures; Higher education; Formative assessment; Interactive learning environments.

Abstract: Large lectures are the predominant way of teaching first-year students at universities in Norway. However, this forum for education is seldom discussed as a context for a formative feedback practice. The purpose of this sequential mixed methods study was to address whether and how a student-response system can open for a formative feedback practice in lectures and thereby support students' ability to monitor their own learning, as well as supply insight into how students engage with the feedback in their course work. The context for the study was large lectures (150–200 students) in a qualitative method course for first-year psychology students. Findings from the survey (n = 149) showed a positive correlation between the extent to which students report that they use clickers to monitor their own learning and the extent to which they report that they used the feedback in their own course work. However, findings indicate that students valued the process of monitoring their own learning during the lectures to a greater extent than they actually used the feedback in their course work. Findings from interviews (n = 6) illustrated various ways students applied feedback in their course work.

© 2015 Elsevier Ltd. All rights reserved.

1. Introduction

The Bologna process, new standards for national curricula, increasing diversity among university students, more focus on formative
assessment, and the digital revolution are all factors that have changed some of the underlying conditions for teaching, learning, and
assessment in today's universities. The Quality Reform in Norway (Ministry of Education and Research, 2000–2001) focuses on frequent
feedback to all students on how they are doing and on what they need to do to improve. It is also emphasized that feedback should be integrated
throughout the courses. Even though there has been a decrease in plenary lectures after the Quality Reform, large lectures dominate the way
teaching is organized for first-year students at universities in Norway (Kvernbekk, 2011). Students in higher education value feedback and
engaging in dialogue about course work, but lecture theatres and seminars are seldom discussed as a context for a formative feedback practice
(Black & McCormick, 2010). This is also reflected in the literature on formative assessment in the area of higher education. Focus has been more
on written work and peer interaction, and less on classroom dialogue (Black & McCormick, 2010).
Tools designed to enhance interaction in lectures are often referred to as “student response systems” or “polling technologies”. These
technologies allow the lecturer to collect and analyse student responses to various types of questions posted during lecturing. Results
provide feedback about student progress that allows the lecturer to modify explanations and to adjust the focus, methods, progression, and
teaching strategies (Schell, Lukoff, & Mazur, 2013). The formative aspect lies in taking advantage of technological opportunities to make the
reflections of the students more visible (Pachler, Daly, Mor, & Mellar, 2010) and to create an arena for collaboration between students,
teachers and peers.
The intent of this study is to address whether and how a student-response system can open for a formative feedback practice in large
lectures, and thereby support students' ability to monitor their own learning, as well as supply insight into how students engage with the
feedback in their course work. The context for the study was large lectures (150–200 students) in a qualitative method course for first-year
psychology students. An explanatory sequential mixed design was used, where survey and interviews were integrated in the design. In the

* Corresponding author. Tel.: +47 55 01 21 29. E-mail addresses: [email protected] (K. Ludvigsen), [email protected] (R. Krumsvik), [email protected] (B. Furnes).
1 Tel.: +47 55 58 48 07.
2 Tel.: +47 55 58 68 74.

http://dx.doi.org/10.1016/j.compedu.2015.04.002
0360-1315/© 2015 Elsevier Ltd. All rights reserved.

first phase, quantitative data was collected using a survey. The purpose of the survey was to address the extent to which the students
perceived that use of student response systems in lectures supported the process of monitoring their own learning in the lecture, as well as the
extent to which the students made use of the feedback to support their course work. Second, qualitative data was collected using interviews.
The purpose of the interviews was to explore how students perceived the lecture as a space for feedback and how they worked with the
feedback in their own course work. The purpose of the mixing of methods was to explain and add depth to findings in the survey and allow the
students to bring in new perspectives that were not captured in the questionnaire. The research questions that have guided the research are:

1. To what extent and how do students experience the process of monitoring their own learning using clicker questions in lectures?
2. To what extent and how do they apply feedback in their own course work?
3. Is there a relationship between the extent to which students report that use of student response systems supports their learning in
lectures, and the extent to which they report using feedback in their own course work?
4. How does technology change the conditions for the two processes?

1.1. What characterizes a formative feedback practice?

Feedback is the core of formative assessment and has been shown to be an important factor in students' learning (Hattie & Timperley,
2007). The purpose of feedback is to bridge the gap between current and desired performance (Sadler, 1989). For effective feedback to
occur, students need to know the standard or goals for their learning, compare the goals to their own work, and take action to close the gap
(Sadler, 1989). There are different traditions of feedback practices. Therefore, the definition of feedback practice is dependent on the view of
learning and knowledge building (Hattie & Gan, 2011). For example, in a behaviourist-influenced model of feedback, feedback is understood
as a monologue or a "one-way transmission of information from teacher to students" (Boud & Molloy, 2013, p. 702). The assumption in this
approach is that the teacher or another external agent drives the student's learning. Feedback through dialogue, on the other hand, is
frequently associated with a social-constructivist approach (Evans, 2013), where feedback is viewed as a process in which students, in
dialogue with their peers and teachers, are encouraged to monitor and evaluate their own learning process (Boud & Molloy, 2013). In the latter
tradition, the purpose of feedback is to develop self-regulated learning (Clark, 2012; Evans, 2013; Nicol & Macfarlane-Dick, 2006). Self-
regulated learning refers to a process whereby the learner sets goals for his/her learning, monitors, regulates and controls the actions,
cognition and motivation needed to achieve the goal (Zimmerman & Labuhn, 2012). One central aspect of self-regulation is learners' ability to
regulate learning through metacognitive processes. Metacognition refers to “cognition about cognition”, and is involved in monitoring and
control of various cognitive activities (Koriat, 2007; Metcalfe, 2000). Theories of self-regulated learning also include feedback, self and peer
assessment, and “social forms of learning” (Zimmerman & Labuhn, 2012, p. 399). A theoretical lens by which to explore student engagement
with feedback is Zimmerman's model (Zimmerman & Labuhn, 2012) of self-regulated learning. This framework consists of three cyclic phases.
The first phase, the forethought phase, relates to the objectives and the planning work needed to achieve the goals. The second phase, the
performance and control phase, includes metacognitive monitoring and learning strategies. The third phase, the reflection phase, is related to
the evaluation of outcomes in relation to the effort put in, or adjusting strategies (Zimmerman & Labuhn, 2012). These phases can coincide with
Hattie and Timperley's (2007) questions "where am I going?", "how am I doing?", and "where to go next?" (Andrade, 2010).
In recent years, there has been an increased focus on feedback and how to give feedback, and a large body of literature has emerged
providing evidence for effective feedback practice (Evans, 2013; Hattie & Timperley, 2007; Shute, 2008). Based on the works of Boud and
Molloy (2013), Nicol and Macfarlane-Dick (2006), Carless, Salter, Yang, and Lam (2011), Orsmond, Maw, Park, Gomez, and Crook (2013),
and Evans (2013), a formative feedback practice in higher education is characterized, among other things, by the following:

 Feedback clarifies good performance and is aligned to the purpose of learning and the learning objectives;
 Learning tasks are created that make student learning visible and encourage reflection around evidence of learning;
 Students, together with their peers, are encouraged to monitor and evaluate their own learning;
 Feedback from various sources is used to support student learning;
 Students are encouraged to engage with feedback in dialogue with their peers and teachers;
 Students are supported in skills for planning their learning; and
 Feedback is an ongoing process and is used to shape teaching.

Despite agreement that feedback practices should be characterized by dialogue, students often experience limited opportunities to
actually engage in real feedback dialogues, for example to explain their thoughts, or to pose questions to their tutor (Blair & McGinty, 2013).
Students often experience that the feedback they receive is insufficient when it comes to applying it in their own learning process, and they
also find the interpretation and use of feedback to be problematic. This is referred to as the feedback gap, the “gap between receiving and
acting on feedback” (Evans, 2013, p. 94). There are various reasons why students do not use feedback: they do not find it helpful in
addressing what they need to work on; they do not understand the feedback or how to apply it. It is also common that students use feedback
only as an indicator of how they are doing, rather than engaging with feedback in their own course work (Jonsson, 2012). Therefore, when
studying formative feedback practices, it is vital to examine how students engage with feedback in their own learning process (Fluckiger
et al., 2010; Nicol, Thomson, & Breslin, 2014). How students engage with feedback is a core area of research on formative feedback practices.

1.2. Can feedback clickers support a formative feedback practice in lectures?

Digital technologies represent new opportunities and approaches to teaching, learning and assessment in higher education. E-assess-
ment refers to any type of digital technology that is used for the purpose of formative and summative assessment (Evans, 2013; Stödberg,
2012). Research on e-assessment concludes that what is essential for its success is the way the technology is integrated in the course design and
how it is used (Evans, 2013). Questions to be asked when using technology to support a formative assessment practice are to what extent and how

the technology can support different feedback processes that involve students and peers in dialogues on qualities of learning, provide
dialogue on how to close the gap between current and desired performance and support the process of creating learning tasks that make
student learning visible and encourage reflections around evidence of learning (Black & Wiliam, 2009).
Previous studies have found that using clickers in lectures can increase student engagement and participation (Han & Finkelstein, 2013;
Oigara & Keengwe, 2013), that students read more (Lantz, 2010), and that they are more focused during class (Cain, Black, & Rohr, 2009).
Even though most of the literature in this field reports an increase in self-reported learning outcomes when clickers are used (Nelson, Hartling,
Campbell, & Oswald, 2012), studies show mixed results when it comes to objective learning outcomes (e.g., test or exam scores). A review on
the effect of clicker use on learning outcome (Nelson et al., 2012) found evidence to suggest that use of feedback clickers may improve both
short term and long term learning, but stated that the findings were more apparent in studies that compared lectures using feedback
clickers with traditional (non-interactive) lectures. This result indicates that the positive results might be related to the active learning in
general rather than the use of clickers.
Several authors have suggested that there is an improvement in the quality of peer discussions, along with an increase in the quantity of
these discussions (Caldwell, 2007; Kay & LeSage, 2009; Mollborn & Hoekstra, 2010; Nielsen, 2012). Again, some authors have indicated that
it is the process of asking questions (Anthis, 2011; Morse, Ruggieri, & Whelan-Berry, 2010) or an interactive lecture style that promotes student
engagement, not necessarily the technology per se. Other findings indicate that the use of technology added a dimension beyond asking
questions in lectures or engaging students in peer discussions (Krumsvik & Ludvigsen, 2012). Studies that analysed peer-discussion in
clicker questions revealed that even when the majority of the students responded correctly, some had misunderstood the concept they were
discussing (James & Willoughby, 2011; Nielsen, 2012). This shows that there is an uncertainty involved in interpreting students’ responses to
clicker questions and that the results of a clicker session can be misleading for the students as well as for the lecturer (Nielsen, 2012). A key
to the formative feedback practice is to allow students to monitor their own learning. If, as in these cases, what is visible to the students and
the teachers is based on the wrong assumptions, it would result in misleading feedback. When studying these technologies, one must ensure
that certain vital questions are asked: “Do clickers provide good enough evidence of students' understanding of different concepts? What is
actually visible, and to whom? How can results be used to shape teaching and learning?”
Despite the large body of literature on student response systems focussing on how students like or experience the feedback during
lectures (Voelkel & Bennett, 2014), there is less literature on how students apply the feedback to their own course work or how student
response technology supports students in changing their learning strategies. Some studies show that the use of clickers has little influence
on preparation for class or work outside class (Boyle & Nicol, 2003; MacGeorge et al., 2008). Regarding how student response systems can
support formative feedback practices in lectures, there is a need to investigate how students engage and work with feedback, both in
lectures and in their course work.

2. Context of the study

The context of the study is a course in qualitative method for first-year psychology students (bachelor's degree), organized in the spring
semester of 2013. The learning objectives of this course were to equip students with knowledge on how to design and conduct qualitative
research and analysis. Furthermore, students should have knowledge of the strengths, weaknesses and ethical dilemmas of various qualitative
approaches, as well as the scientific basis of qualitative methods and how they differ from quantitative research design and data collection
methods. They should also know how to design different phases within a qualitative research project and how qualitative research design
can be integrated with quantitative approaches.

2.1. Overall course design

The course was organized in five lectures (2 × 45 min), followed by four seminars and self-studies supported by a course website and a
Facebook page. Of the 350 students enrolled in the course, about 220 usually attended the lectures, which were given twice every week.
The exam was organized on campus, and students could choose to write an essay in either qualitative or quantitative
method. The examination paper in qualitative methods in spring 2013 was to give an account of a qualitative research design model and
discuss what kind of function the research question has in this model. How the particular teaching and learning activities in the course were
aligned to the formative and the summative assessment task is presented in Fig. 1.

2.2. Question driven lecturing design

To encourage students to monitor and evaluate their own understanding of concepts, students were asked to discuss conceptual
questions in peer groups; then each student used a clicker to give their own answer. Clickers were handed out when the students entered the
lecture hall. In the first lecture, there was a short briefing about the technology and the purpose of using clicker questions in the course.
Every lecture had 4–6 feedback loops, as illustrated in Fig. 2.
The formative assessment activities (the questions) in the lecture were aligned to the learning objectives of the course (knowledge, skills
and general competence). The purpose of the questions was twofold: first, to address core concepts in the course readings and the
lecture material; second, to apply these concepts in different cases or contexts (written or video cases). Examples of
questions are presented in Fig. 3.
Students used 1–2 min to discuss the questions. A display of the results was intended to drive a plenary class discussion, with
comments, questions and clarification. This allowed the lecturer to modify explanations and adjust his focus, methods, progression and teaching
strategies. The lecturer's adjustments depended on the students' responses. In all cases, an explanation of the question and the
alternatives was given. An example of an assignment (clicker question) is to watch a video clip of a qualitative interview (Gibbs, 2013).
Students are asked to assess the interview in light of ten criteria for conducting good qualitative interviews presented by Kvale and
Brinkmann (2009). Figs. 4 and 5 give an example of a reflection question and the corresponding student responses. Displayed answers
are used as backdrops for a plenary group discussion.
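To make this feedback loop concrete, the sketch below (hypothetical Python, not the software actually used in the course) tallies the responses to one conceptual clicker question with a keyed alternative and suggests a next step for the lecturer. The 30–70% band for re-opening peer discussion is a common peer-instruction heuristic in the tradition of Schell, Lukoff, and Mazur (2013), not a rule taken from this course design.

```python
# Minimal sketch of one feedback loop: tally responses, display the
# distribution, and suggest how the lecturer might adjust.
# Hypothetical code; the thresholds are a peer-instruction heuristic.
from collections import Counter

def summarise_question(responses, correct_alternative):
    counts = Counter(responses)
    share_correct = counts[correct_alternative] / len(responses)
    if share_correct < 0.30:
        suggestion = "re-explain the concept before moving on"
    elif share_correct < 0.70:
        suggestion = "run another round of peer discussion"
    else:
        suggestion = "briefly comment on the alternatives and move on"
    return counts, share_correct, suggestion

# Example with invented data for a question with alternatives A-D:
responses = ["A"] * 62 + ["B"] * 55 + ["C"] * 20 + ["D"] * 13
counts, share, suggestion = summarise_question(responses, "B")
print(dict(counts), f"{share:.0%} correct:", suggestion)
```

Note that this logic only applies to questions with a keyed alternative; for reflection questions such as the one in Fig. 4, the displayed distribution is itself the object of the plenary discussion.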

Fig. 1. Course design.

Fig. 2. Question driven lectures.

Fig. 3. Examples of questions.



Fig. 4. Reflection question.

Fig. 5. Students' responses to reflection question.

3. Methods

3.1. The present study: A mixed method research approach

Design Based Research (DBRC, 2003) is used as the overall research design for the study. Using both qualitative and quantitative data in
this design-based research allows us to study interventions such as an ICT-supported feedback practice from different perspectives. More
specifically, the study is grounded on a mixed methods research design (Johnson, Onwuegbuzie, & Turner, 2007) where the quantitative and
qualitative elements (survey and interview) are integrated in the design.

3.2. The quantitative phase: Developing the survey

The purpose of the quantitative phase was twofold. First, to address the extent to which the students perceived that use of student
response systems in lectures supported the process of monitoring their own learning in the lecture, as well as the extent to which the
students made use of the feedback to support their course work. Second, we wanted to address whether there is a relationship between the
extent to which students reported that they used clickers to monitor their own learning, and the extent to which they reported that they
used the feedback in their course work.
In the first phase, quantitative data was collected via a survey. The survey was developed from a previous survey, used in a study
assessing students' perception of feedback and learning outcome in large lectures (Krumsvik & Ludvigsen, 2012). This study concluded
that students used clicker questions to identify concepts they did not understand. To explore these questions further, we developed a
survey and an interview guide addressing three broad themes: Perception of the lecture as a space for feedback, student use of feedback in
course work and teacher and peer dialogue.
In the first category, perception of the lecture as a space for feedback, questions were developed to capture the extent to which students
experienced that clickers supported them in monitoring their understanding of course topics, grasping the purpose of what they were supposed

to learn, clarifying misunderstandings, giving them insight into their strengths and weaknesses, or making them reflect more on the subject matter.
For the second category, student use of feedback in course work, we developed general survey questions addressing different ways students
used feedback in their course work, e.g. to read more, to clarify concepts they did not understand, to improve the way they study, or to
discuss with peers. In the third category, teacher and peer dialogue, questions addressed to what extent and how feedback clickers
supported the teacher and peer dialogue. Fig. 6 visualizes how the quantitative and the qualitative components are linked together, and
how the two components are supposed to shed light on each other:
The survey was also designed to explore any differences that might exist in students' opinions relating to a number of characteristics:
age, motivation, preference for presentation tools, gender, grade and academic experience. To ensure good quality in the wording of the
questions, external experts in quantitative methods were consulted. We asked the students to rank degrees of agreement with different
statements/claims using a seven-point Likert scale. To be able to capture students' reflections about their experience, feedback clickers
were used to collect data in real time immediately after the lecture. All students who attended the last of the five lectures were asked to
answer the survey.
Regarding ethical considerations in collecting and storing research data, this project was approved by the Norwegian Social Science
Data Services (NSD). Participation was voluntary, and before the data collection started, the students were given a verbal orientation about
the purpose of the survey and the storing of data.

Fig. 6. Survey and interview questions.



3.3. The qualitative phase

The purpose of the interview was to understand how students perceived the lecture as a space for feedback and how they worked with
the feedback in their course work. Phenomenological or ethnographic approaches can offer deeper insight into the thoughts and explanations
given by those who are engaged in the process of monitoring and regulating their learning processes than can be captured
using questionnaires alone (Dinsmore, Alexander, & Loughlin, 2008). Results of the survey were used to develop an interview guide for the
second phase. Some of the questions regarding monitoring and peer interaction were used in our previous work (Krumsvik & Ludvigsen,
2012). The interview guide had three broad themes of investigation: How clicker questions and peer discussions supported the learning of
concepts during the lecture, how students used feedback in their course work, and what role the student response system played in these two
processes.
Six semi-structured interviews were conducted. The sample is based on voluntary participation, and the sampling can be characterized
as purposive (Teddlie & Tashakkori, 2009). We chose informants from the quantitative sample for the follow-up interviews. Students
were recruited through an invitation at the end of the last lecture and an e-mail invitation to all the students participating in the course.
Six students, all women, volunteered: four first-year students (Ane, Ingrid, Karen and Marie) and two students with
more than one year of experience (Hege and Stine). All the informants had high grades from upper secondary school. The semi-structured
interviews lasted from 40 min to more than one hour and were conducted in the week following the last lecture. The interviews were
analyzed using a thematic analysis in NVivo 10. The six transcripts were coded by looking for different themes and sub-themes. The
number of sources coded, the number of references coded in each particular category, and the underlying themes are illustrated in the screenshot
below (Fig. 7).
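The tallying behind such a coding summary can be illustrated with a small sketch (hypothetical Python with invented coding records; the actual analysis was done in NVivo 10):

```python
# Sketch of the summary behind Fig. 7: for each theme, count the number
# of transcripts (sources) it appears in and the number of coded
# references. The records below are invented illustrations.
from collections import defaultdict

codings = [  # (transcript, theme), one tuple per coded reference
    ("Ane", "reflection on own learning during lecture"),
    ("Ane", "use of feedback in course work"),
    ("Hege", "reflection on own learning during lecture"),
    ("Hege", "reflection on own learning during lecture"),
    ("Karen", "peer discussion"),
]

references = defaultdict(int)
sources = defaultdict(set)
for transcript, theme in codings:
    references[theme] += 1
    sources[theme].add(transcript)

for theme, n_refs in references.items():
    print(f"{theme}: {len(sources[theme])} sources, {n_refs} references")
```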

3.4. Integration of qualitative and quantitative data

The purpose of the mixing of methods was to explain and add depth to findings in the survey and allow the students to bring in new
perspectives that were not captured in the questionnaire. This purpose can be characterized as both “complementary” and explanatory
(Greene, Caracelli, & Graham, 1989, p. 259) and the study can be characterized as a sequential, explanatory mixed-method design. In this
study, we prefer to use the metaphors conversation and dialogue, with the metaphor conversation referring to how one part of the study
informs the next phase, and the metaphor dialogue referring to going back and forth between the two data sets when investigating the research
questions (Bazeley & Kemp, 2012).
Onwuegbuzie and Johnson (2006) argue that “because mixed research involves combining complementary strengths and non-overlapping
weaknesses of quantitative and qualitative research, assessing the validity of findings is particularly complex; we call this the problem of inte-
gration” (Onwuegbuzie & Johnson, 2006, p. 48). The integration occurred in different stages throughout the entire project. First, in the
conceptualizing phase, when qualitative and quantitative research questions were introduced; then, when the results of the survey informed the
development of the interview guide; and, finally, in writing the conclusion based on both qualitative and quantitative data. We have aimed to
address the same research questions, albeit from different angles. Acknowledging both the quantitative and the qualitative tradition, we
have strived to be transparent regarding our research design, the stages of the data collection, data analysis and integration. The model
below (Fig. 8) visualizes the design of the study.

Fig. 7. Screenshot from NVivo 10, illustrating the themes coded.



Fig. 8. Sequential mixed methods design.

4. Data analysis: Findings from the quantitative phase

To analyse the data from the quantitative phase we used factor analysis, descriptive data analysis, mean difference analysis (t-test), and
correlational analysis. We conducted all analyses using SPSS Statistics 22.
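As an illustration of these steps outside SPSS, the following sketch shows how the two factor scores could be computed as item means before the tests reported below (Python; the CSV file and column names are hypothetical stand-ins for the actual data file):

```python
# Sketch of the scale construction step: factor scores as item means.
# File name and columns are hypothetical; the study used SPSS 22.
import pandas as pd

df = pd.read_csv("clicker_survey.csv")  # one row per respondent, items on a 1-7 scale

factor1_items = ["Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q18"]  # during the lecture
factor2_items = ["Q10", "Q11", "Q12", "Q13", "Q14", "Q15"]   # outside the lecture

df["factor1"] = df[factor1_items].mean(axis=1)
df["factor2"] = df[factor2_items].mean(axis=1)
print(df[["factor1", "factor2"]].describe())  # means and SDs (cf. Section 4.3)
```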

4.1. Participants

The sample consisted of 149 participants (women = 118, men = 21, missing = 10). Of the students, 25% were younger than 20 years, 63%
were 20–25 years old, and only 6 were 25 or older. All of the students had received high grades from upper
secondary school, and the majority (70%) were highly motivated to succeed in the course. Of the students participating in this
study, 73% were first-year students. When asked about the extent to which they preferred presentation tools, 47% preferred some use while
53% preferred much use.

4.2. Results of factor analysis

Although the sample is small (N = 149), the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) indicated that the data were suitable
for factor analysis (KMO = .807), and Bartlett's test of sphericity was significant (p < .001). Based on theory pertaining to formative assessment,
and in an effort to bring a simpler structure to the 19 items that were included in the survey, we performed a confirmatory factor
analysis. This analysis revealed two factors related to the timing of feedback: the first comprised questions regarding how feedback
impacts learning during the lecture, named How feedback impacts learning during lecture (Factor 1); the second comprised questions
regarding how feedback impacts learning outside the lecture, named How feedback impacts learning outside the lecture (Factor 2).
Questions addressing peer and teacher dialogue turned out to have a Cronbach's alpha too low to represent a separate scale. However,
one of these items, Q17, is included in factor 1.
Factor 1, How feedback impacts learning during lecture (Cronbach's alpha = .74), consisted of items Q1, Q2, Q3, Q4, Q5, Q6 and Q18 and
addressed how students used feedback to monitor their own learning during the lecture. Table 1 shows the distribution of responses of the
variables in factor 1.
Factor 2, How feedback impacts learning outside the lecture (Cronbach's alpha = .71), consisted of items Q10, Q11, Q12, Q13, Q14 and Q15 and
addressed the use of feedback to support students' course work. In this case, the majority of the responses were positive only on the general
question addressing the usefulness of feedback clickers to indicate what they need to work on, while the answers to other questions
targeting more explicit use of feedback are much more diverse. Table 2 shows the distribution of responses for the variables in factor 2.
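The reliability estimates reported above can be computed directly from the item matrix. A self-contained sketch of Cronbach's alpha follows (with random placeholder data, since the raw survey responses are not reproduced here):

```python
# Self-contained sketch of Cronbach's alpha: alpha = k/(k-1) *
# (1 - sum of item variances / variance of the summed scale).
# The random placeholder data stand in for the real item responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
items = rng.integers(1, 8, size=(149, 7)).astype(float)  # 149 respondents, 7 items
print(round(cronbach_alpha(items), 2))
```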

4.3. Descriptive statistics, mean differences, and correlational analysis for factor 1 and factor 2

The mean scores on factor 1 (How feedback impacts learning during lecture) and factor 2 (How feedback impacts learning outside the
lecture) were 5.3 (SD = .97) and 4.2 (SD = 1.09), respectively. A paired-samples t-test (df = 146) showed that the mean difference between these two
factors (1.10) was significant (p < .001). This indicates a gap between factor 1 and factor 2, meaning that students are more likely to
report that they use feedback during lectures to monitor their own learning, but that they do not apply the feedback to their own course
work to the same extent.
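A sketch of the paired-samples test reported above, again with hypothetical file and column names:

```python
# Sketch of the paired-samples t-test comparing the two factor scores.
import pandas as pd
from scipy import stats

df = pd.read_csv("clicker_survey_scored.csv")  # hypothetical file with factor scores
t_stat, p_value = stats.ttest_rel(df["factor1"], df["factor2"])
print(f"mean difference = {(df['factor1'] - df['factor2']).mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```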

Table 1
How feedback impacts learning during lecture (factor 1).

Distribution of responses (%) across the seven-point scale, from 1 = agree (left) to 7 = disagree (right)


Q1: Use of feedback clickers in class has helped me clarify misunderstandings 34.2 34.2 13.7 8.9 3.4 4.1 1.4
Q2: Use of feedback clickers gave me greater insight into how I managed academically 16.6 24.1 30.3 15.9 6.2 2.8 4.1
Q3: Use of feedback clickers made me reflect more on the subject matter 20.7 31.0 22.8 13.8 6.2 2.1 3.4
Q4: Use of feedback clickers made it clear to me what I should learn in the lecture in qualitative method 11.8 27.1 36.1 12.5 6.3 3.5 2.8
Q5: Use of feedback clickers made sure I was more attentive in the lectures 43.2 30.1 15.1 4.1 2.7 2.7 2.1
Q6: Feedback clickers helped me to understand my intellectual strengths and weaknesses 11.0 16.4 21.2 20.5 20.5 5.5 4.8
Q18: Use of feedback clickers led to more discussions between students during the lecture 61.6 17.8 11.0 4.1 2.7 0.7 2.1

A seven-point Likert scale was used.



Table 2
How feedback impacts learning outside the lecture (factor 2).

Distribution of responses (%) across the seven-point scale, from 1 = agree (left) to 7 = disagree (right)


Q10: Use of feedback clickers has been useful for my learning of the subject matter in qualitative method 40.1 23.1 21.1 6.1 6.1 3.4 0
Q11: Use of feedback clickers motivated me to work with the material I did not understand in qualitative method 7.6 11.0 24.1 20.7 17.9 7.6 11.0
Q12: Clickers helped me to improve the way I study in qualitative method 2.7 6.8 23.3 21.2 19.9 9.6 16.4
Q13: Use of feedback clickers motivated me to read the curriculum in qualitative method 2.8 11.8 12.5 16.7 13.9 13.2 29.2
Q14: Use of feedback clickers has shown me what I need to work on in qualitative method 20.0 26.2 30.3 12.4 6.9 4.1 0
Q16: If I responded incorrectly to a clicker question about a concept, I tried to clarify the concept after the lecture 13.8 17.2 19.3 11.7 10.3 10.3 17.2

A seven-point Likert scale was used.

A correlational analysis (Pearson) showed that there was a strong positive correlation between factor 1 and factor 2 (r = .68, p < .001). In
other words, students who used clicker questions to monitor their own learning during the lecture also more often used feedback to support
their own learning during the whole course. Furthermore, we found a negative correlation between the background variable student
experience and factor 1 (r = -.26, p < .001), as well as between student experience and factor 2 (r = -.17, p = .034), indicating that more experienced
students found that clickers supported them in their learning to a lesser extent than less experienced students did. Although these correlations
were significant, they are weak and should therefore be treated with caution.
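A sketch of this correlational step (hypothetical data; the experience column is an assumed coding of the background variable):

```python
# Sketch of the Pearson correlations reported above.
import pandas as pd
from scipy import stats

df = pd.read_csv("clicker_survey_scored.csv")  # hypothetical scored data
r, p = stats.pearsonr(df["factor1"], df["factor2"])
print(f"factor1 vs factor2: r = {r:.2f}, p = {p:.4f}")

# Background variable, e.g. years of prior study (column name assumed):
r_exp, p_exp = stats.pearsonr(df["experience"], df["factor1"])
print(f"experience vs factor1: r = {r_exp:.2f}, p = {p_exp:.4f}")
```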

5. Qualitative findings

The purpose of the qualitative phase was to examine more in depth how clicker questions and peer discussions supported students
in monitoring their own learning during the lecture, how students used feedback in their course work, and what role the student
response system played in these two processes. We also wanted to address the feedback gap that had been identified in the survey, in
terms of how students use feedback in their learning and, equally important, why they do not use feedback. First, we
present findings on how students experience the process of monitoring their own learning using clicker questions in lectures. Second,
we present how students in the sample used feedback in their own learning process. A presentation of the six students (referred to by
pseudonyms) is found in Appendix A.

5.1. How do students experience the process of monitoring their own learning using clicker questions in lectures?

Results from the survey showed that the mean score on factor 1 (How feedback impacts learning during lecture) was 5.3, indicating
that the majority of the students found that the use of clicker questions during the lectures supported them in monitoring their own
understanding. However, what lies behind these numbers? How do students explain this? All the students pointed out that the
questions supported them in identifying key concepts and basic ideas in the field of qualitative methods, and as a summary of the
different parts of the lecture and the curriculum. A first-year student used the metaphor of a key to explain what the clicker
questions meant to her:
It may be the key to pretty much. That is what the clicker questions are. They are the keys. (Ingrid)
Clickers have helped me identify main themes and characteristics (Stine).

A clear aim is connected to the forethought phase (Zimmerman & Labuhn, 2012) and to Hattie and Timperley's (2007) question 'where am I
going?’. All students emphasize that working with questions during the lectures is an active way of engaging with course material while
allowing time to reflect on concepts during the lecture:
It is seldom I sit in another lecture and think: Do I really know this? It doesn't happen, because you just sit and take in the
knowledge. ( … ) once it is being told to me, I get the feeling that I know this, right? It sounds logical, everything is put together,
so you feel like you can do everything that you're told, but if you are asked questions, you will become all the more aware of what
you are unsure of. (Hege)

The quote above is interesting. Hege uses different modes for describing the two contexts: traditional lectures and question-
driven lectures. When describing traditional lectures, she refers to a passive mode: "sit and take in the knowledge". When
describing question-driven lectures, she refers to an active mode: "but if you are asked questions, you will become all the more aware
of what you are unsure of”. She feels a commitment to reflect on and give answers to the questions posed in the lecture. This is a
typical quote from all the informants in the qualitative material; they use the questions to monitor their own understanding of
concepts and themes: "you maintain more control over what you know and don't know, you have more control, like metacognitions then"
(Karen). This makes it easier for them to know where they need to put in extra effort: “And then I become conscious of what I am
unsure of” (Ingrid). This quote refers to a metacognitive awareness of one's own understanding during the lectures and is a recurrent
view among all the informants in the interviews. The peer discussions are rated highly in the survey and are emphasized in the interviews
as the most useful aspect of using clicker questions during a lecture. Students experienced that participating in peer
discussions made their own and others' understanding of ideas clear. The opportunity to share and argue in favour of their own
views, as well as to discuss each other's understanding of concepts allowed them to see alternative ways of thinking about a
phenomenon or concept:

Very often there are different opinions. Then you have to argue why. Argue for what you believe, what they mean, so you get different
points of view (… ) So you get the more insight into how it is possible to think or to look at a problem or a question, other views.
(Karen)

This quote is chosen because it is typical of the sample. Karen describes how peer discussions are spaces in which the student gets
feedback on his/her own views and understandings, both by arguing for one's own thoughts and listening to the perspectives of others. In
the discussion, they can adjust and monitor their own understanding of the concepts discussed. One student experienced that she develops her
arguments during discussions and becomes aware of her own understanding:
It means a lot for how you organize your thoughts, for me at least. I notice that I cannot answer the questions until I discuss them out loud
( … ) you argue with someone about why (your ideas) are right, and then suddenly you find arguments for why it is right and why it is
wrong. (Ane)

In the quote above, she describes how she becomes aware of her own and her peers' views, and together they find arguments
for reaching a conclusion. Findings based on interviews indicated an improvement in the quality as well as the quantity of peer
discussions. When students described quality, four aspects were highlighted. Most important was the fact that the students had to
give an answer, and therefore engaged in the process of discussing questions in the first place. They listened to what the others had
to say and used this actively to adjust their own understanding and to draw their own conclusions. Questions were used to monitor
their own understanding of the concepts under discussion and to evaluate their own learning process. The quotes above are con-
nected to the performance and control phase (Zimmerman & Labuhn, 2012), and to the question of ‘how am I doing?’ (Hattie &
Timperley, 2007). It seems clear that discussions with peers are important in the design and that students value them. The re-
sponses highlight opportunities to listen to others and to argue for one's own understanding or views, to keep discussions on track
and to seek a joint conclusion.
All six students emphasized the value of the teacher discussing the results and explaining concepts after a clicker question session.
One student (Marie) argued that how the clickers had affected the lecturer was the most important aspect of using clickers in the lecture:
It is maybe not how it (use of clickers) affects me, but maybe how it affects the lecturer. Because when he sees the results, he gets a good
idea of what has been understood, and what has not been well understood, or if something is unclear. If he has been unclear, or if things
are difficult, he adjusts his lecture after the results and this is very positive. At least I think so. (Marie).

All the students were aware that the results of clicker questions allowed the lecturer to adjust his teaching, explain in more detail or
comment on the different alternatives. This gave them the opportunity to clarify, argue in favour of their own perspective and ask questions, and
the discussions gave them a sense of active participation. Clickers play an important role in this sense:
It's like the connection between the PowerPoints and lectures and the theme. ( … ) essentially the line between me and the curriculum
and between me and lecturer. (Ingrid)

Students are familiar with questions and peer discussions in lectures. However, are these different from clicker questions, and how do
students explain this? We asked the students to describe their engagement with questions with and without clickers. Although students
are used to questions in other lectures, they do not assess them in a similar way; they maintain that questions without clickers are

Fig. 9. Passive and active modes.



in some sense different from questions you actually have to answer (clicker questions), or as one student said: “in other lectures you are
not expected to be asked for anything” (Hege). And when there are questions in lectures, students do not feel that they actually have to
get involved or engage with them. When describing how the use of clicker questions in lectures supports their learning, students refer to
traditional lectures as a passive mode, while question-driven lectures represent an active mode. Again, we find the two dimensions
passive and active described. This is evident in all the interviews. In Fig. 9, the quotes are placed along these two dimensions. In passive
mode, students in the sample talk about traditional lectures. In active mode, students talk about question-driven lectures.
Reading the transcripts, it seems evident that using question-driven lecturing supports students in monitoring their own learning and
provides a sense of participation. Clicker questions mediate a space for productive peer discussion. Using clickers moves the lecture from
a passive mode to an active mode, enabling students to engage with their own understanding.

5.2. How do students use feedback in their own course work?

The opportunity to provide immediate feedback on the students' understanding of terms and concepts is highlighted as the most
important aspect of using feedback clickers. However, there is a difference between becoming aware of the terms or concepts that one does
not understand, and taking action in order to achieve a greater level of understanding. The interviews showed examples of various ways
students addressed feedback in their own learning. One approach is simply to use feedback as a guideline or an indicator of how they are
doing. Students with a more active approach used feedback to identify gaps in their own understanding and noted concepts or themes that
they needed to look into in more detail. Questions are also triggers for further discussion, and students discussed the questions with their
peers after lectures and during breaks.
You get the a-ha experience much earlier, I think it's very important, especially because you can also talk to others about it right away.
Right? (Marie).

An a-ha experience is a typical metacognitive activity and, in this case, a trigger for further exploration. During breaks, the questions,
the alternatives posed, and possible explanations for why they answered as they did are discussed.
He says we should discuss with the person sitting next to us every time, but the problem is that the discussion can continue
afterwards; we continue talking, discussing the issue, we do it anyway. We have to discuss it during the break and stuff like that
(Stine)

One stated that when she experiences a gap in her own understanding, her purpose for reading becomes to “check concepts” or “go
through the chapter again”. Some of the students said that they “read in a more focused way” and “focus more on details”. Only one of the
students had strategies beyond reading more or addressing difficult topics with her peers or with the lecturer. When Ane identified
concepts she did not understand, she wrote assignments connected to these topics: “I also write about the task, the topics that I
knew least. I always do. I always take the most difficult topics, to challenge myself”. In this case, the questions helped to focus her course
work.
During the course, Hege changed her learning strategies. First, she started to read in a more focused way; then she questioned herself as
she was working with the course material. She experienced that working with questions in the lectures raised her awareness of how
questions support her engagement with concepts, both in the lecture and when she is reading:
I have noticed that asking questions means a lot for whether I understand (the course material) Right? I become more conscious. (Hege)

Fig. 10. Quantification of qualitative data.



Fig. 11. Quantitative and qualitative findings.

Working with clickers has changed her view of learning and how she can learn:
This is something I will carry with me the rest of my life, in my work life as well. If I shall learn something, I can just ask: Ok, did I
understand this? Do I know this? Just to check for myself. I will use it the rest of my life. I think it is very nice. (Hege)
This supports the process of monitoring one's own understanding and is a key characteristic of self-regulated learning. The quotes above
can be associated with both the reflection phase (Zimmerman & Labuhn, 2012) and the question ‘where to go next?’ (Hattie & Timperley,
2007). The six students apply or emphasize different strategies; some have just one approach, while others apply several.
This is shown in Fig. 10, which illustrates the references coded for the node reflection on own learning during lecture and the node use of
feedback in course work. The model visualizes how much of the transcripts was coded at the different themes and shows that students in the
sample, both as individuals and as a group, are more concerned with how clicker questions support them in reflecting on their own understanding
during lectures than with actually applying feedback in their own course work.
We have found five approaches by which this practice supports students' work. First and most common, students use feedback to check if
their learning is on track. Second, they use it to address difficult concepts with their peers or teacher. Third, they use it to adjust and focus
their reading. Fourth, they use questions to identify difficult topics that they need to explore in more depth. Fifth, one student realized that
she needed to change her overall learning strategies; in this case, she discovered how asking more questions supports her learning. The
students who did not engage with the feedback had experienced that they had given correct (or mostly correct) answers, and therefore felt they
did not need to work more on the concepts. In the model below (Fig. 11), the main findings from the quantitative and the qualitative phases are
presented.

6. Discussions and conclusion

6.1. How do students experience the process of monitoring their own learning using clicker questions in lectures and how do they apply
feedback in their own course work?

In this article, we have had four areas of investigation. The first question was to what extent and how students experience the
process of monitoring their own learning using clicker questions in lectures. Students in the qualitative sample emphasized the
reflection questions as a space for feedback, to explain their thoughts and reflect on their own understanding. The questions were
developed to address key concepts in the curriculum. Students found that clicker questions supported them in identifying those
concepts and allowed them to be active and engage with the concepts during the lecture. Knowing the standards or the objectives of
the learning is the key to a formative feedback practice (Hattie & Timperley, 2007). Questions and peer-discussion allowed students to
reflect on their own learning process during the lecture. This is an example of how feedback clickers supported the creation of
“moments of contingency” (Leahy, Lyon, Thompson, & Wiliam, 2005, p. 6). Clicker questions can create those ‘moments’, allowing the
students to monitor and reflect on their learning during the lecture, or as Clark states: “the blend of monitoring and reflecting which
together permit the reshaping of what being worked on while working on it” (Clark, 2012, p. 212). The technology plays an important role
because possibilities of giving and getting feedback on answers created a sense of commitment, engagement and participation. Both in

the survey and in the interviews, peer discussion was highlighted as most useful for monitoring one's own understanding and making
learning visible. Reflecting on/monitoring one's own understanding through peer discussions created a sense of participation in the
lecture. Findings based on interviews indicated an improvement in the quality as well as the quantity of peer discussions when clickers
were used to foster peer discussion in lectures. Students said that they listened to what the others were saying and used this actively to
adjust their own understanding and to draw their own conclusions. They used discussions to monitor their own understanding of the
concepts discussed. All six informants emphasized these aspects. The students found that both the clicker and the question asked by the
lecturer played a significant role in their engagement. The fact that students had to choose an answer ensured that peer-to-peer
conversations were more focused and effective. An analysis of the conversations between the students could be useful for future
studies in order to establish how use of feedback clickers may contribute to the students' brainstorming. Important topics for further
research include ascertaining how lecturers are able to make sense of and use the feedback to adjust their teaching, both in the short
and long terms.
The second question was to what extent and how students applied feedback in their own course work. The survey questions
addressed different ways students used feedback in their course work, for example to read more, to clarify concepts they did not
understand, to improve the way they study, to discuss with peers, and to work on things they do not understand.
Findings in the survey indicated a feedback gap: a majority of the students like to get feedback on their own understanding
during lectures, but they do not report that they apply the feedback to their own course work to the same extent. This was also
reflected in the interviews and was in line with previous literature (Evans, 2013). It is interesting to note that for the general
questions, like “Use of feedback clickers has shown me what I need to work on in qualitative method” (Q14), the majority of the students
answered on the left side of the scale (agree), while for the more specific question, “Use of feedback clickers motivated me to read the
curriculum in qualitative method” (Q13), the majority answered on the right side of the scale (disagree). However, in the interviews, five
students said that they read in a more focused way. It might be that the clickers were not motivating them to read, but rather changed the
way they were reading, something which is not captured in the survey. For another specific question, “If I responded incorrectly to a
clicker question about a concept, I tried to clarify the concept after the lecture” (Q16), the students' answers are equally distributed along
the scale: about half of the students agree with the statement while the other half disagree. The qualitative findings showed that asking
peers after the lecture or during breaks, as well as checking the course literature, are examples of strategies students use for
clarifying concepts.
In the qualitative sample, all six students had examples of how feedback supported their course work in a way that allowed
them to adjust their learning trajectories. However, the six students had different strategies, from using feedback
passively, as a guide on how they are progressing, to using it strategically, to guide reading and to identify concepts for further
investigation or for changing learning strategies. Even though this project had a specific focus on use of feedback outside of the lecture,
it is clear that students were more interested in talking about what happened in the lecture than about how this supported their course work. To
be able to address the feedback gap it is important to know why students do not use feedback. The purpose of the qualitative data was
to explain and add depth to the quantitative findings. However, in the interviews we do not have much data explaining why students
did not use feedback in their course work. It might be that the students who agreed to participate in the interviews were among the
group of students who found the clickers most useful for their learning. If students are to be able to use feedback, it is also
important to design the questions appropriately. Questions designed to support a formative assessment practice should enhance
reflection and challenge the understanding of concepts, rather than having only a single correct answer. Another possibility is to use
other types of polling technologies, for example to poll texts, pictures, models/drawings or questions. This might change the dynamics
of the lecture and open other forums for dialogue. This is yet another area for further research in the area of technology-supported
formative assessment in lectures.
We found a negative correlation between the background variable student experience and factor 1 (How feedback impacts learning
during lecture) as well as between student experience and factor 2 (How feedback impacts learning outside the lecture), indicating that more
experienced students find that clickers supported them in their learning to a lesser extent than less experienced students. Although
these correlations were significant, they are weak and should therefore be treated with caution. Self-monitoring is a core aspect of the
process of becoming a self-regulated learner; therefore, it is not surprising that first-year students find that the question-driven lectures
support them in their learning to a greater extent than more experienced students do. Even though there has been a lot of research on
first-year students' experiences, less of this research concerns first-year students' experience of feedback practices (Nulty,
2011). First-year students face new challenges when they are introduced to an academic culture. In these processes, formative feed-
back plays an important role, both in clarifying standards to be achieved (Poulos & Mahony, 2008), as emotional support in the stu-
dents' course work (Robinson, Pope & Holyoak, 2011), and by supporting them in developing the skills needed to become self-regulated
learners (Nicol, 2009).
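
As an illustration of the kind of computation behind these coefficients, the correlation between student experience and a factor score can be obtained as in the minimal sketch below. The data and variable names are invented, and the sketch assumes a Pearson correlation on mean factor scores; the study reports only the resulting coefficients, not the computation itself.

    # Illustrative sketch only: hypothetical data, not the study's dataset.
    from scipy.stats import pearsonr

    experience = [1, 1, 2, 2, 3, 4, 5, 5]               # years of study (invented)
    factor1 = [4.6, 4.3, 4.0, 3.9, 3.4, 3.1, 2.9, 2.7]  # mean factor 1 item scores (invented)

    r, p = pearsonr(experience, factor1)
    print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported pattern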

6.2. Is there a relationship between the extent to which students report that they use clickers for monitoring their own learning and the extent
to which they use the feedback in their own learning?

The third question addressed whether there was a relationship between the extent to which students report that the use of student response systems supports their learning in lectures and the extent to which they report using feedback in their own learning. The quantitative findings indicated a significant positive correlation between factor 1 (how feedback impacts learning during the lecture) and factor 2 (how feedback impacts learning outside the lecture). This finding is also reflected in the interviews. Students who gave examples of how they actively and strategically used feedback to improve their understanding of concepts also emphasized the value of monitoring their own learning during the lecture, not only as an indicator of how they were doing: they also acknowledged questions in lectures as a real feedback space that invites them to reflect on their understanding of concepts. A goal for further development of the course design is to support students in recognizing different possible spaces for feedback, both in and outside the lectures.

Fig. 12. The role of technology.

6.3. How does technology change the conditions for the two processes?

The fourth question addressed the role that technology played in these two processes. The technology allowed the results to be displayed during the lecture. This created a sense of participation and triggered plenary class discussion. Students experienced possibilities to engage in real feedback dialogues, for example by explaining their thoughts, posing questions and challenging the teacher, as emphasized by Blair and McGinty (2013). The technology played an important role in changing the dynamics of the lecture on different levels: it provided a space for reflection and self-assessment and a room for peers to exchange and elaborate on each other's ideas, and the displayed results served as a catalyst for teacher and student interaction. Fig. 12 illustrates how the questions and the clickers played an important role in the process of creating a climate for a formative feedback practice in the lecture.

6.4. Implications

In this study, we have aimed to be transparent throughout our research design. This means that it is possible for others to replicate the study in other contexts. We found a mixed methods research approach to be useful for addressing students' perceptions of a formative feedback practice in lectures. Using mixed methods analysis allowed us to investigate the research questions from different perspectives. If they had been explored using only one method in isolation, we would not have been able to capture the complexity of the students' experiences. Thus, we tried to develop a coherent research design that connects pragmatism as the theoretical paradigm, mixed methods analysis as a strategy of enquiry, and the methods (survey and interviews) used for collecting the empirical material to answer the research questions. One important methodological implication of the study is that studying lectures using clickers to collect data can be an effective means of capturing the students' experiences during their learning processes; this improved the validity of the study. One important theoretical implication of the study is that moments of contingency can be made more explicit in this kind of technological learning environment. How clicker questions can facilitate discussions in lectures is an area for further research on technology-supported formative assessment in lectures.

Acknowledgements

This research was funded by the PhD-education at the University of Bergen (Org. No. 874 489 542). The research project is
situated in the Department of Education, in the Digital Learning Communities Research Group. We would like to thank Ole Johan
Eikeland for giving feedback on the survey. We would also like to thank the students who participated in the survey and interviews, as well
as colleagues at the University of Bergen and NATED (National Graduate School in Education) for valuable feedback in the writing
process.

Appendix A

Presentation of the six students:

Ane (first-year student)
• Use of clicker questions is an opportunity to process the material and provides space for reflection.
• Difficult questions support her learning to a larger extent than easy questions.
• Changed her reading strategies.
• Writes assignments when she identifies difficult concepts.
• Peer discussion useful for reflecting on concepts.
• Discusses questions during breaks.
• Most important: getting feedback on how she is doing.

Hege (second-year student)
• More conscious of her own understanding of concepts.
• Likes anonymity.
• Clicker questions provide an incentive to engage in less interesting topics.
• Values peer discussions.
• New strategies: works with questions in the textbooks.
• Discovered the value of asking questions when learning.
• Most important: getting feedback on understanding.

Ingrid (first-year student)
• Finds it difficult to talk in large lectures.
• Questions are keys to understanding the concepts.
• Discussions: arguing and listening to peers.
• Most important: the clicker is a personal tool, a connection between her, the material and the teacher.

Karen (first-year student)
• Finds the subject difficult.
• Afraid to talk in large lectures.
• Right answers are triggers for her to learn more.
• When she identifies difficult concepts, she writes them down and checks them after the lectures.
• Peer discussions allow her to discover different approaches to a problem.
• Most important: instant feedback.

Marie (first-year student)
• Clickers made the course material interesting.
• Does not raise her hand in the lecture because she is afraid to say something wrong.
• Easier to remember and apply the concepts and topics she discussed with others.
• Asks other students (during breaks) if there is anything she does not understand.
• New strategies: noted possible clicker questions and read in a more focused way.
• Most important: how the use of clickers has affected the teacher.

Stine (experience from previous university courses)
• Clickers made it easier to get to know fellow students.
• Afraid to raise a hand and talk in the lecture.
• Peer discussions: clicker questions give an incentive to reach agreement in discussions.
• A typical (non-clicker) lecture is something you listen to; this one is something you participate in.
• Knowing that she is not alone in not understanding everything makes it easier to discuss with others.
• Emphasizes the fun.

References

Andrade, H. L. (2010). Students as the definitive source of formative assessment: Academic self-assessment and the self-regulation of learning. In H. L. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment. London: Routledge.
Anthis, K. (2011). Is it the clicker, or is it the question? Untangling the effects of student response system use. Teaching of Psychology, 38(3), 189–193.
Bazeley, P., & Kemp, L. (2012). Mosaics, triangles and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research, 6(1), 55–72.
Black, P., & McCormick, R. (2010). Reflections and new directions. Assessment & Evaluation in Higher Education, 35(5), 493–499.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.
Blair, A., & McGinty, S. (2013). Feedback-dialogues: Exploring the student perspective. Assessment & Evaluation in Higher Education, 38(4), 466–476.
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712.
Boyle, J. T., & Nicol, D. J. (2003). Using classroom communication systems to support interaction and discussion in large class settings. Research in Learning Technology, 11(3).
Cain, J., Black, E. P., & Rohr, J. (2009). An audience response system strategy to improve student motivation, attention and feedback. American Journal of Pharmaceutical Education, 73(2), 1–7.
Caldwell, J. (2007). Clickers in the large classroom: Current research and best-practice tips. Life Sciences Education, 6(1), 9–20.
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.
Clark, I. (2012). Formative assessment: Assessment is for self-regulated learning. Educational Psychology Review, 24(2), 205–249.
Design-Based Research Collective, The. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation and self-regulated learning. Educational Psychology Review, 20(4), 391–409.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120.
Fluckiger, J., Tixier y Vigil, Y., Pasco, R., & Danielson, K. (2010). Formative feedback: Involving students as partners in assessment to enhance learning. College Teaching, 58, 136–140. http://dx.doi.org/10.1080/87567555.2010.484031
Gibbs, G. (2013). How to do a research interview. Retrieved 04.05.13 from http://www.youtube.com/watch?v=9t-_hYjAKww
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255–274.
Han, J. H., & Finkelstein, A. (2013). Understanding the effects of professors' pedagogical development with clicker assessment and feedback technologies and the impact on students' engagement and learning in higher education. Computers & Education, 65, 64–76.
Hattie, J., & Gan, M. (2011). Instruction based on feedback. In R. E. Mayer & P. A. Alexander (Eds.), Handbook of research on learning and instruction (pp. 249–271).
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
James, M. C., & Willoughby, S. (2011). Listening to student conversations during clicker questions: What you have not heard might surprise you! American Journal of Physics, 79(1), 123–132.
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133.
Jonsson, A. (2012). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76.
Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education, 53(3), 819–827.
Koriat, A. (2007). Metacognition and consciousness. In P. D. Zelazo, M. Moscovitch, & E. Thompson (Eds.), The Cambridge handbook of consciousness (pp. 289–325). Cambridge, UK: Cambridge University Press.
Krumsvik, R., & Ludvigsen, K. (2012). Formative assessment in plenary lectures. Nordic Journal of Digital Literacy, 1, 34–56.
Kvale, S., & Brinkmann, S. (2009). InterViews: Learning the craft of qualitative research interviewing. Thousand Oaks, California: Sage.
Kvernbekk, T. (2011). Til forelesningens forsvar. In T. Kvernbekk (Ed.), Humaniorastudier i pedagogikk (pp. 203–226). Oslo: Abstrakt forlag AS.
Lantz, M. E. (2010). The use of 'clickers' in the classroom: Teaching innovation or merely an amusing novelty? Computers in Human Behavior, 26(4), 556–561.
Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment, minute by minute, day by day. Educational Leadership, 63(3), 19–24.
MacGeorge, E. L., Homan, S. R., Dunning, J. B., Jr., Elmore, D., Bodie, G. D., Evans, E., et al. (2008). Student evaluation of audience response technology in large lecture classes. Educational Technology Research and Development, 56(2), 125–145.
Metcalfe, J. (2000). Metamemory: Theory and data. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 197–211). London, UK: Oxford University Press.
Ministry of Education and Research. (2000–2001). Do your duty – Claim your rights: Quality reform in higher education. White Paper No. 27. Oslo: Statens Forvaltningsteneste.
Mollborn, S., & Hoekstra, A. (2010). 'A meeting of minds': Using clickers for critical thinking and discussion in large sociology classes. Teaching Sociology, 38(1), 18–27.
Morse, J., Ruggieri, M., & Whelan-Berry, K. (2010). Clicking our way to class discussion. American Journal of Business Education, 3(3), 99–108.
Nelson, C., Hartling, L., Campbell, S., & Oswald, A. E. (2012). The effects of audience response systems on learning outcomes in health professions education. A BEME systematic review: BEME Guide No. 21. Medical Teacher, 34(6), 386–405.
Nicol, D. (2009). Assessment for learner self-regulation: Enhancing achievement in the first year using learning technologies. Assessment & Evaluation in Higher Education, 34(3), 335–352.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122.
Nielsen, K. L. (2012). Student response systems in science and engineering education (Doctoral dissertation). Norwegian University of Science and Technology.
Nulty, D. D. (2011). Peer and self-assessment in the first year of university. Assessment & Evaluation in Higher Education, 36(5), 493–507.
Oigara, J., & Keengwe, J. (2013). Students' perceptions of clickers as an instructional tool to promote active learning. Education and Information Technologies, 18(1), 15–28.
Onwuegbuzie, A. J., & Johnson, R. B. (2006). The validity issue in mixed research. Research in the Schools, 13(1), 48–63.
Orsmond, P., Maw, S. J., Park, J. R., Gomez, S., & Crook, A. C. (2013). Moving feedback forward: Theory to practice. Assessment & Evaluation in Higher Education (ahead-of-print), 1–13.
Pachler, N., Daly, C., Mor, Y., & Mellar, H. (2010). Formative e-assessment: Practitioner cases. Computers & Education, 54(3), 715–721.
Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students' perspective. Assessment and Evaluation in Higher Education, 33(2), 143–154.
Robinson, S., Pope, D., & Holyoak, L. (2011). Can we meet their expectations? Experiences and perceptions of feedback in first year undergraduate students. Assessment & Evaluation in Higher Education (ahead-of-print), 1–13.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
Schell, J., Lukoff, B., & Mazur, E. (2013). Catalyzing learner engagement using cutting-edge classroom response systems in higher education. Cutting-edge Technologies in Higher Education, 6, 233–261.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
Stödberg, U. (2012). A research review of e-assessment. Assessment & Evaluation in Higher Education, 37(5), 591–604.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Los Angeles: Sage.
Voelkel, S., & Bennett, D. (2014). New uses for a familiar technology: Introducing mobile phone polling in large classes. Innovations in Education and Teaching International, 51(1), 46–58.
Zimmerman, B. J., & Labuhn, A. S. (2012). Self-regulation of learning: Process approaches to personal development. In K. R. Harris, S. Graham, & T. Urdan (Eds.), APA educational psychology handbook (Vol. 1, pp. 399–425). Washington, DC: American Psychological Association.
II
Behind the scenes: Unpacking student discussion and
critical reflection in lectures
Kristine Ludvigsen, Rune Johan Krumsvik and Jens Breivik
Kristine Ludvigsen is an assistant professor. Her research interests include how the use of digital tools facilitates formative assessment and feedback practices. Rune Johan Krumsvik is Professor of Education and Head of the Digital Learning Communities research group (DLC) at the University of Bergen. Krumsvik is also Honorary Research Fellow at the University of Bristol. His main research interests are formative assessment, digital learning and doctoral education. Jens Breivik is an associate professor of education. His area of research is teaching and learning in higher education. His background is in the scholarship of teaching and learning in higher education and the philosophy of education. He has a particular interest in the analysis of students' critical thinking and argumentation skills. He is a member of the research group Pedagogical Philosophy at The Arctic University of Norway. Address for correspondence: Kristine Ludvigsen, Department of Education, Christiesgt. 13, Box 7807, 5020 Bergen, Norway. Email: [email protected]

Abstract
This study investigated the characteristics of peer discussions used to support formative assessment in lectures, facilitated by a student response system, in an undergraduate qualitative methods course for psychology students. The intent was to examine the characteristics of peer discussions in which student response systems are used to facilitate the practice of formative assessment in lectures. The research was guided by the following research questions: (1) What patterns of talk can be identified in the discussions? (2) How do the students use subject-specific vocabulary in the discussions? (3) How is the students' understanding of the subject matter displayed in these discussions? To examine the characteristics of peer interactions, student discussions were recorded, and 87 of them were analysed. The concept of exploratory talk (Littleton & Mercer, 2013) was used as a lens to examine the discussions. In 62 of the 87 discussions, the students exchanged ideas and elaborated on their peers' ideas and understanding of the concepts. In the remaining 25 discussions, the process of reasoning was less visible. The findings are relevant for teaching designs that aim to use digital tools to facilitate formative assessment.

Practitioner Notes
What is already known about this topic:
• Student response systems can support formative assessment and feedback in
lectures.
• The most common approaches used in research on student response systems to support formative assessment are questionnaires or interviews.
• Few studies have provided detailed analyses of clicker-supported peer
discussions.

What the paper adds:


• Provides insights into the micro-processes in clicker-supported discussions.
• Critically discusses the role of discussions facilitated by student response
systems to support formative feedback in the classroom. It also explores their
role in making understanding visible for the students and the lecturer.
• The article contributes to scholarship in this field by drawing attention to the
qualities of the activities created for formative assessment.
• Discusses the validity of the inferences drawn from the use of student response
systems in the classroom.

Implications for policy and practice:


• Analysis of discussions shows that there is not necessarily a correlation between
aggregated answers and students' understanding. To ensure that valid inferences
can be drawn from activities as a basis for feedback, the questions used and
threats to the validity of possible inferences should be critically examined.

Introduction
This study investigated the characteristics of peer discussions facilitated by a student response
system (Turning Point) to support formative assessment in lectures, in a qualitative methods
course for undergraduate psychology students. A key process in formative assessment is to
make learning visible through ‘moments of contingency’ (Black & Wiliam, 2009, p. 10),
situations and activities in which students are encouraged to articulate their thinking and
understanding so information generated from these activities can be used to shape ongoing
teaching and learning activities. Furthermore, formative assessment should support self-
regulated learning (Boud & Soler, 2016; Clark, 2012; Evans, 2013). A vital question when examining formative assessment activities is to what extent they provide students with opportunities to share their thinking, and what opportunities become available to students and teachers to draw inferences from these activities to shape teaching and learning (Furtak et al., 2016) – referred to as moments of contingency.
In lectures, formative assessment activities often include the use of different kinds of student response systems that elicit quantitative or qualitative answers to questions. The value of student response systems for supporting formative assessment lies in the technological opportunities to make the students' reflections visible through interaction and problem solving (Egelandsdal et al., 2019). Hattie and Timperley (2007) conceptualised feedback as
‘information provided by an agent (a teacher, a peer, a book, a parent, oneself, experience) regarding aspects of one’s performance or understanding’ (p. 81). Asking questions and
explaining their thoughts could create awareness and increase the students’ understanding of
the topic under discussion. Therefore, participation in discussions provides opportunities for receiving feedback at both the task level, which is often corrective of individual or group performance (p. 95), and the self-regulation level, which refers to the students' ability to monitor and regulate their own learning processes, including opportunities to 'create internal feedback and to self-assess' (p. 95) their progress towards achieving learning goals, which is essential for students' ability to make decisions about their learning.
A growing body of empirical research suggests that the use of student response systems in
lectures can enhance both the quality and the quantity of peer discussions (Chien, Chang &
Chang, 2016). Participation in discussions allows students to argue and to provide
justifications, to question their own and others’ assumptions, to co-create knowledge, to
explain and to clarify the subject matter. When students make their thinking explicit in peer discussions, they are exposed to different ways of thinking, which can help them become aware of their own understanding and make better-informed decisions about their learning process (Dawson et al., 2018).
Using student response systems provides lecturers with an awareness of their students' understanding of the course material, which supports a contingent teaching approach in lectures (Chien et al., 2016; Dawson et al., 2018; Hunsu et al., 2016; Liu et al., 2017). Often, a straightforward relation is assumed between the collection of answers and the process of providing feedback:
One of the key benefits of using an ARS is that instruction can be modified based on
student feedback gathered throughout a class (…) If feedback from a majority of
students indicates that confusion or misconceptions are evident, an experienced
instructor can offer alternative explanations of the concepts in question (Kay & LeSage, 2009, p. 822).
Inferences are based on what we can observe, and thus they are characterised by uncertainty
(Bennett, 2011). Findings derived from recordings of student discussions suggest that voting
for the correct option did not necessarily demonstrate an understanding of the topic under
discussion, and an incorrect answer did not necessarily indicate inadequate understanding of
the concept (Knight, Wise & Southard, 2015; Nielsen, 2012; Wood, Galloway, Hardy, &
Sinclair, 2014). This can lead to misleading feedback, both for students and for lecturers (Nielsen, 2012), which problematises the validity of the inferences drawn from multiple-choice clicker questions designed for formative assessment in lectures. Studies that have used recordings of peer discussions have found that instructing students to argue, instead of simply asking them to discuss, can improve argumentation (McDonough & Foote, 2015). The evidence suggests that groups engage in more argumentation when they are required to justify a position rather than merely to discuss a topic (Knight, Wise & Southard, 2015) and that an initial thinking period increases the number of arguments in a discussion (Nielsen, Hansen & Stav, 2016). In identifying the features of high- and low-quality discussions (based on the
level of reasoning in each discussion), Knight, Wise and Southard (2013), Knight et al. (2015)
and James and Willoughby (2011) each found that the quality of discussions was not
dependent on the cognitive level of the questions posed.
The current study
The research reported in this article is part of a design-based research project (Barab & Squire, 2004) that explored the role of discussion-based activities in supporting a dialogic approach to formative feedback in the lecture. It also assessed the quality of dialogue among students and between students and lecturers. The point of departure for the project was to address a challenge identified in practice as well as by theory and prior research: the transformation of the lecture from a mode of transmission to a format that includes student-active learning approaches. For this purpose, the project aimed to incorporate case-based activities in lectures to promote critical reflection, to connect the course material to the students' own language and experiences, and to provide opportunities for feedback.

In three previous studies based on surveys and interviews from the same course (Krumsvik & Ludvigsen, 2012; Ludvigsen et al., 2015; Ludvigsen et al., 2019), students claimed that the quality of the discussions had improved compared to discussions without the support of a student response system. The reason for this was that the alternatives in the questions posed helped structure the discussion, and the fact that they had to submit a response made the activities feel more authentic than discussions that did not require a response (Krumsvik & Ludvigsen, 2012). Discussions among peers were highlighted as valuable for reflecting on one's own understanding. One student explained: 'I notice that I cannot answer the questions until I discuss them out loud […] You argue with someone about why [your ideas] are right, and then suddenly you find arguments for why it is right and why it is wrong' (Ane) (Ludvigsen et al., 2015, p. 47). The act of explaining to a peer is a way of explaining one's perspectives to oneself: 'Even though you remember the words, then you should explain it to others, then they ask what it means, and then you realise that you did not know, then you notice' (S3) (Ludvigsen et al., 2019, p. 11). Articulating their thinking through explanations to their peers helps students to reflect on their learning, as stated by another student: 'I can sit and read or hear and believe that I understand these things. But, if you are to formulate yourself, with no help in front of you, then I realise if I understand' (S2) (Ludvigsen et al., 2019, p. 11). The quotations illustrate how students are drawn into reflection on their own thinking and understanding, which is vital in establishing a formative feedback practice in lectures.
According to students, discussions provide opportunities for sharing perspectives, arguing,
explaining and listening, which are all indicators of high-quality discussions (Hennessy et al.,
2016). The discussions were short, lasting only one or two minutes. This created the desire to
explore them in greater detail. What would argumentation and explanation look like in these
discussions?
In their meta-study of how the use of student response systems supports learning, Chien et al. (2016) argue that peer discussion should be examined from a social aspect: 'Future studies are needed to investigate how students interact with peers within the context of clicker integrated instruction research on this line will also be helpful to understand how the use of student response systems meditate the process and outcome of peer discussion' (Chien et al., 2016, p. 15). Such studies are also essential because there might be a gap between how activities play out and the lecturers' and students' perceptions of them (Nielsen, Hansen & Stav, 2012). Exploring the activities as they unfold might give valuable insights into discussions supported by response systems that are not possible to identify using other methods.
Recording students’ discussions is important for understanding what is achieved, to arrive at a
nuanced understanding and identify affordances as well as constraints to be assessed in
relation to the purpose of using the technology. Recording discussions helps us pay attention
to the qualities of the new spaces these tools might provide. Knowledge of the characteristics
of peer discussions stimulated by multiple-choice questions is also essential to inform practice
and to promote high-quality discussions (Barth-Cohen et al., 2016; Knight et al., 2015; Wood
et al., 2014). Furthermore, understanding how tools shape feedback practices is crucial for
research into feedback in higher education in general (Evans, 2013).
The goal of this study was thus to examine the characteristics of peer discussions facilitated by student response systems to support formative assessment in the lecture. The research was guided by the following research questions: (1) What patterns of talk can be identified in the discussions? (2) How do the students use subject-specific vocabulary in the discussions? (3) How is the students' understanding of the subject matter displayed in these discussions?
By exploring these questions, the article offers insights into the characteristics of peer discussions supported by student response systems and situates the research within the context of formative assessment.

Context, methods and analysis of the discussions

The context was a qualitative methods course for undergraduate psychology students. For
students to judge the choices that must be made when approaching the qualitative research
process, critical reflection is crucial (Cooper, Chenail & Fleming, 2012; Cooper, Fleisher &
Cotton, 2012). To provide hands-on experience and to address such core concepts as sample,
validity and triangulation, the students were engaged in a discussion of authentic case
questions. Each lecture started with the introduction of a theme or concepts (a ‘mini-lecture’).
A question for peer discussion followed; the students were given between one and two
minutes to discuss each question. Each student was given a device (‘clicker’) to respond
(‘vote’) to the multiple-choice questions through a student response system (Turning Point).
The answers were aggregated and projected onto the screen for clarification and whole-class
discussion.

Two categories of questions were used. In the textbook questions, the students were asked to
answer a multiple-choice question that addressed the definitions, basic concepts and
characteristics of qualitative methods as presented in assigned readings. In the case questions,
the students answered a multiple-choice question that addressed the procedures involved in
qualitative methods and their application in authentic contexts. Both categories included
questions that allowed for multiple responses (‘multiple responses’) and others that had only
one correct answer (‘closed’). Figure 1 illustrates a case question (Gibbs, 2013), and Figure 2
presents the responses.

Figure 1. A question allowing multiple responses. Figure 2. The corresponding responses.

Each lecture included sequences of four to six questions. The lecturer (the second author of
this article) is a professor with extensive experience in the use of technology to support and
enhance teaching and learning in higher education. Approximately 20 minutes of each lecture
was devoted to discussion-based activities. Before the discussions, the students were asked to
argue for their views; however, they were not provided guidelines on how to engage in exploratory talk. The activities were supposed to be dialogic in that they provided students
with the space to engage with the material in the peer groups (Wegerif, 2013); they were also
designed to gain insights into the students’ thinking (to create moments of contingency).
However, there is some tension between the infinite possibilities for multiple voices to be part
of a dialogue and the closure needed to achieve structured learning objectives. This tension
includes completion of the assignment within a teaching design developed to facilitate
formative assessment through questions with only one correct answer.

Data collection
Discussions were gathered from six lectures spread over two semesters. In both classes,
undergraduate students were given an introductory course in qualitative methods, with the
same lecturer, and they discussed the same types of questions using the same course material
and within the same context. Discussions were conducted over two semesters of classes so
that more discussions could be included in the study than we were able to collect during only
one semester. The sample was based on voluntary participation; thus, it was a convenience
sample (Creswell, 2012). The students received written and verbal information about the
study before the lecture began. Audio recorders were provided at the beginning of each
lecture. Students who chose to participate received a short briefing on the recorders; they then
decided whether to record a discussion. While 96 discussions were collected, not all of the
recordings could be analysed because of audio quality issues. The analysis included 87
discussions distributed across 21 different questions asked in class. Of these, 38 addressed questions allowing multiple correct answers, and 49 addressed questions with one correct answer.

The study was performed in the authentic context of a lecture. Students were asked to discuss
with students sitting beside them; for that reason, the discussion groups consisted of two to
four students each. The number of students in each group depended on how they were seated
in the auditorium, which reflects common practice in these lectures. An ethical concern was that the students might have found the questions stressful to discuss or that they might not have spoken as freely as they would have if the discussions had not been recorded. However, as the students were learning about qualitative research methods in this class, volunteering to participate would give them valuable experience in understanding the challenges and opportunities of using audio recordings of discussions as a research method.

Data analysis procedures


The analysis of the discussions followed five major steps. The first was listening to the recordings several times to become familiar with the discussions. The material was then transcribed. Most of the discussions were short: between 40 seconds and two minutes. Second, the transcripts were read to examine their characteristics. Patterns of talk were coded using the framework of exploratory talk (Mercer, 2004). Examples of how discussions were coded are provided in Appendix A. Third, the transcripts showed that the students used subject-specific vocabulary in different ways. Three codes emerged through the reading of the discussion transcripts: no use of subject-specific vocabulary, references to subject-specific vocabulary, and application or definition of subject-specific vocabulary. Fourth, a majority of the discussions revealed student uncertainty. Questions and statements before the students' 'votes' reflected the emergence of their questions and insecurities. Each of these occurrences was coded as 'uncertainty'. In the discussions in which the students knew the answers, indicated their subjective certainty or stated that the assignment was easy, the transcripts were coded as 'confidence'. Fifth, the distribution of (1) uncertainty and (2) subject-specific vocabulary across the patterns of talk was examined. The steps in this analysis are illustrated in Figure 3. Examples of this coding are provided in Appendix 2.

Figure 3. Data analysis procedure.

Analytical framework for examining exploratory talk


Several frameworks are available for analysing the quality of student discussion (for a review,
see Hennessy et al., 2016). These frameworks can be distinguished based on their emphasis
on changes in individual thinking, emergent understanding within a group or single utterances
or episodes of talk (Mercer, Littleton & Wegerif, 2004). The goal in this study was to examine
the students’ sharing of ideas and their processes of collaborative knowledge building. The
unit of analysis was the discussion, and the patterns of talk were examined through
sociocultural discourse analysis (Littleton & Mercer, 2013). This methodology is suited to
analysing patterns of talk among participants engaged in problem solving (Mercer, 2004). It
uses a quantitative approach to compare discussions under different conditions and a
qualitative approach to examine student engagement in idea sharing and knowledge co-
construction in a specific context (Mercer, 2004). As an indicator of the quality of educational talk, Mercer (2004) refers to three modes as prototypes: disputational, cumulative and
exploratory. Exploratory talk is characterised by the discussants’ critical engagement with one
another’s ideas. The arguments or the reasoning is explicit or accountable in the discussions.
The students offer alternative views or hypotheses, and they participate with the purpose of
‘joint consideration’ (Mercer, 2004, p. 46). Cumulative talk is characterised by ‘repetitions,
confirmations and elaborations’ that build on one another uncritically (Mercer, 2004, p. 46).
By contrast, disputational talk is characterised by disagreements, interruptions and individual
decision-making. Students do not ask follow-up questions or make additional contributions.
Their utterances are short, lack justification and are often confrontational (Mercer, 2004). The
idea of ‘exploratory talk’ is associated with the idea of a ‘dialogic space’ (Wegerif, 2013). In
a dialogical space, cumulative talk provides a widening of the space, and exploratory talk
provides a deepening of the space (Wegerif, 2013).
The framework of exploratory talk is mostly used when examining the quality of talk in schools (Littleton & Mercer, 2013); however, the same patterns of talk have been identified in studies examining the quality of educational dialogue in the context of higher education (Havnes, Christiansen, Bjørk & Hessevaagbakke, 2016) and in workplace settings (Littleton & Mercer, 2013). Since this framework is mostly used within a school setting, our study contributes to bringing this literature into the analysis of student peer discussions in higher education.

The concepts of exploratory talk were operationalised in this study through the coding scheme (Appendix 2). This coding scheme was inspired by the 'Cam–UNAM Scheme for Educational Dialogue Analysis (SEDA: ©2015), developed by Sara Hennessy and Sylvia Rojas-Drummond' (Hennessy et al., 2016, p. 42). The purpose of the scheme developed by Hennessy et al. (2016) was to 'distil out the essence of dialogic interactions and operationalise them in the form of a new scheme of systematic indicators for these productive forms of educational dialogue' (p. 42). The SEDA framework thus describes indicators of qualities of talk (the complete coding scheme can be found at http://tinyurl.com/BAdialogue). Four of these (invite elaboration or reasoning; build on others' ideas; make reasoning explicit; positioning and coordination) are, from our point of view, congruent with the qualities described as exploratory talk in the literature (Mercer, 2004). Furthermore, the scheme was suited to the material, and we found the descriptions of each of the categories useful for examining qualities in our own context (peer discussions). We used the condensed scheme as presented on the webpage, with some adjustments; for example, the qualities described under 'positioning and coordination' were put under the heading 'make reasoning explicit'.

To limit possible biases in researching our own practice (Sikes, 2006), one researcher from outside our own institution was involved in the analysis of the discussions. There were two coders. To ensure a shared understanding of the coding scheme, the coders coded 10 discussions together. The coders then did a close reading of all the discussions and coded each transcript individually with regard to 'pattern of talk'. NVivo 11 was used to organise the coding of the discussions. Before analysis, each transcript was coded by assignment type (questions with one correct answer or multiple correct answers, and textbook or case questions) in accordance with the coding scheme. In some cases, a discussion was characterised by more than one pattern of talk; such discussions were coded as exploratory if indicators of exploratory talk were present in the transcript. Last, inter-rater reliability for patterns of talk was evaluated using Cohen's kappa coefficient, which was chosen because the coding was categorical rather than continuous. The inter-rater reliability, calculated using IBM SPSS Statistics, was a Cohen's kappa coefficient of 0.69, which can be considered substantial agreement (De Wever, Schellens, Valcke, & Van Keer, 2006). Disagreements were resolved through discussion: the transcript was read aloud, each coder provided arguments for their interpretation, and the transcript was compared with other discussions we agreed upon, until each disagreement was resolved. To secure the validity of our conclusions and to be open to different possible ways of interpreting the discussion transcripts, we have also presented excerpts and analyses of the peer discussions in several workshops involving experts in the fields of higher education and interaction.
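
Cohen's kappa corrects the raw agreement rate for the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). As a minimal sketch of how such a check can be reproduced (the study itself used IBM SPSS Statistics; the coder labels below are invented for illustration), the coefficient can be computed from the two coders' category labels:

    # Minimal sketch of an inter-rater reliability check, with invented labels.
    # kappa = (p_o - p_e) / (1 - p_e): p_o is the observed agreement rate and
    # p_e the agreement expected by chance from each coder's label frequencies.
    from sklearn.metrics import cohen_kappa_score

    coder_1 = ["exploratory", "cumulative", "exploratory", "exploratory", "cumulative"]
    coder_2 = ["exploratory", "cumulative", "cumulative", "exploratory", "cumulative"]

    print(round(cohen_kappa_score(coder_1, coder_2), 2))  # 0.62 for these toy labels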
Findings

The following section is in five parts. First, we provide a general characterisation of the material. Second, we provide examples of patterns of talk identified in the material. The third section discusses the students' application of subject-specific vocabulary. Fourth, we present how uncertainty was expressed in the discussions. The fifth section presents the distribution of subject-specific vocabulary and occurrences of uncertainty across the different patterns of talk.

General characteristics
Students used the answer options to structure the discussions, and they argued about the
merits of each. In general, the discussions were of high quality. Another characteristic was the
students’ completion of one another’s sentences, a phenomenon that was interesting in this
context because it revealed the students’ collaborative thinking processes as presented in the
examples below:
Example 1a.

Line  Student  Utterance
1     S1:      The interview guide. It's the questions.
2     S2:      It's the interview question.
3     S3:      In a way, it's the order.

Example 1b.

Line  Student  Utterance
1     S1:      You can measure stress or
2     S2:      Pulse or
3     S3:      Perspiration

Characteristics of cumulative and exploratory discussions


Of the 87 discussions, 25 were coded as cumulative talk. In some of the cases, the students
knew the answers; therefore, discussion was not required. In others, they focused more on
finding the right answer than making their reasoning visible to their peers: they did not justify
or explain their claims. Only a few students addressed clarifications or justifications of the
arguments, and in some cases follow-up questions were not asked. Subject-specific terms
were used only at a superficial level, and the reasoning was visible only to a limited degree, as
indicated in Example 1.
Example 1. Cumulative talk

Line  Student  Utterance
1     S1:      It's not Two. It has to be One or Two? One?
2     S2:      Yes.
3     S1:      It can't be Two.
4     S2:      No, because they're going to find out. . . .
5     S1:      One, or Three?
6              Hmm.
7     S3:      I think it's Number One.
8     S2:      Should we choose One then?
9     S1:      I think so. I'll go for that.
10    S2:      It's not Number Two. And it's not Number Three.
11    S3:      Then it has to be Number One.
12    S1:      I choose Number One.

(Annotation: The students did not use subject-specific terms or argue for their claims.)

In Example 1, the discussion was characterised by cumulative talk. The students did not use subject-specific terms. They suggested answer alternatives by referring to their numbers rather than explaining and presenting reasons for their suggestions.
The dramaturgy was simple: (1) opening, (2) suggestions for voting, and (3) votes.

Exploratory talk
In 62 of the 87 discussions, the students exchanged ideas and elaborated on one another’s
ideas and understanding of course concepts. In making their ideas visible, the students
enabled others to connect to these ideas, to build on them, to criticise them and to argue for or
against them, thereby allowing for the development of multiple perspectives. The most
striking characteristic of these discussions was students building on one another’s arguments
to come to a consensus. This was evident when they were arguing for one another’s claims
and completing one another’s sentences. Second, the students’ reasoning was visible to
everyone in the group. This feature became evident in their justifications of their own or their
peers’ claims. Third, they asked for clarifications, or they addressed concepts that were
unclear. However, there were only a few examples of the students being critical of one
another’s arguments. In the next discussion (Example 2), the group discussed the question
after watching a news report about the off-task use of information and communication
technology (ICT) in upper-secondary schools. The question invited the students to identify the
features of quantitative and qualitative research questions.

Example 2. Exploratory talk

Line  Student  Utterance
1     S1:      What do you think?
2     S2:      I think Number Four?
3     S1:      Why?
4     S2:      Because the relationship is quantitative, and to what extent it's also quantitative, and the first is also totally yes or no questions.
5     S3:      But how does the teacher experience the student's off-task ICT use in class, is qualitative, and then they don't problematize, and then it's not normative. Definitely, so? What do you think?
6     S2:      I don't know, and I haven't started the process yet. I spent so much time reading this thing. Yes, I think Four is the best option: the point that it's particular. Experience?
7     S3:      What is?
8     S2:      It's qualitative.
9     S3:      Phenomenological.

Example 2 was characterised by exploratory talk. S1 asked for a justification for choosing Number Four. The sequence of talk that brought the discussion into the explorative mode was characterised by the opening of the discussion space when S1 asked for a justification of S2's claim, 'I think Number Four.' S2 and S3 followed up by introducing new arguments in support of their shared conclusion. The discussions exhibited a typical dramaturgy. First, one of the students invited the others into the discussion, and the students presented their immediate thoughts without explanations or justifications. This mode changed if or when problems were encountered or questions arose; to progress in these situations, the students needed to make arguments or explain their thoughts. This was the most exploratory part of the discussion. This confirms previous findings that exploratory talk is associated with productive ways of addressing the object of learning (Havnes et al., 2016; Littleton & Mercer, 2013).

Table 1 displays the distribution of the patterns of talk engendered by the questions with one correct answer and the questions allowing multiple correct answers. The discussions featuring exploratory talk were most likely to be generated by the questions allowing multiple correct answers.

Table 1: Patterns of talk by questions with one correct answer and questions allowing multiple correct answers

                                                   Cumulative (25)   Exploratory (62)
Questions with one correct answer (49)                   21                 28
Questions allowing multiple correct answers (38)          4                 34
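
As a worked check of this observation, the share of exploratory talk for each question type follows directly from the counts in Table 1 (a small illustrative snippet; the paper itself reports only the raw counts):

    # Worked check using only the counts in Table 1.
    closed = {"cumulative": 21, "exploratory": 28}    # one correct answer
    multiple = {"cumulative": 4, "exploratory": 34}   # multiple correct answers

    for label, counts in (("one correct answer", closed),
                          ("multiple correct answers", multiple)):
        share = counts["exploratory"] / sum(counts.values())
        print(f"Questions with {label}: {share:.0%} exploratory")
    # Prints 57% vs. 89%: open questions generated exploratory talk far more often.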

Investigating the differences among the groups was beyond the scope of this study. However,
it was noteworthy that while some groups engaged in exploratory discussions regardless of the
questions, others addressed all of the questions superficially.

Use of subject-specific terms


There were discussions in which the students used no subject-specific vocabulary (3), referred
to a course-related concept without any elaboration (33) or applied or defined the subject-
related concepts they mentioned (51). In the discussion provided as Example 3, the
assignment was to discuss the following question: What are the two most common threats to
validity in qualitative research? The possible responses were the following: (1) researcher bias
and reactivity, (2) respondent validation and triangulation and (3) internal and external
validation.

Example 3. The students applied and defined subject-related concepts

Line  Student  Utterance
1     S1:      It has to be a . . . Excuse me. What do you think?
2     S2:      The researcher's trustworthiness? The influence of the researcher?
3     S1:      Yes. Then it's the two. Yes. It's a common threat with the method in general, isn't it?
4     S2:      However, particularly, in the qualitative, I guess? I think One.
5     S1:      For triangulation.
6     S2:      What is it?
7     S1:      It's something you should use to get good validity, to have different entrance angles, different methods, for example. The way she's here now and making the recording, she could have chosen to ask us how it was to discuss the lecture. Hmm . . . (Annotation: S1 explains the concept of triangulation and provides an example.)
8     S2:      Moreover, got a response and used it as an answer. But when she also records what's the truth, that strengthens the validity in a way, so it's the opposite.
9     S1:      It increases the quality.
10    S2:      Yes. It's not a threat.

The two students discussed the reasons why triangulation was not a threat to validity in qualitative research. This demonstrated the process of proposing both a definition and an example. The students were inviting each other into the discussion space and making their thinking visible, both by posing questions (S2) and by defining the concept and providing examples (S1).
An examination of the distribution of subject-specific terms across the patterns of talk indicated that the discussions coded as exploratory talk were more likely to feature application, definitions or examples than the discussions coded as cumulative (Table 2).
Table 2: Use of subject-specific vocabulary in the discussions

                   No use   Reference   Application/Definition/Examples
Cumulative (25)      3         20                    2
Exploratory (62)     0         13                   49

Uncertainty
Most of the recorded discussions revealed instances of the students’ uncertainty. Below is an
example of a discussion on the role of the Helsinki Declaration.

Example 4. How uncertainty was expressed

Line  Student  Utterance
1     S1:      Nuremberg. Is it something to do with World War Two?
2     S2:      I don't know what it is.
3     S1:      I think it's One.
4     S3:      Vulnerable group? What's that?
5     S1:      What does that mean?
6     S4:      I don't know, and I just think Nuremberg. I feel it was something with a trial with Nazism.
7     S1:      The Nuremberg convention. I don't know. I remember from history . . . because of all the things that happened in the concentration camps.
8     S2:      That it's vulnerable groups, do you think?
9     S1:      Possibly.
10    S2:      Anyway, what are vulnerable groups?
11    S1:      Yes. That also. Is it children?
12    S4:      Yes. This needs to be specified.
13    S2:      Is it minorities, Sami?

In this example, we see that each perspective added to the discussion engendered a new question.

In the corpus, there were as many as 20 occurrences of students' explicit statements of not understanding the topic in the moment before giving their votes. The following are examples: 'Honestly, I don't have the slightest idea.' 'We're gambling.' 'I say One, and then what if it's wrong?' 'Frankly, I don't know. My heart says Four.' 'Ah, I don't want him to ask me to explain this.' 'We say Three, and then we have to sink into a hole in the ground if it's wrong.' 'I guess. . . .' 'Should we each take One?' These episodes in the discussions are interesting and important. They indicate that even though the students had chosen an answer, they expressed uncertainty about whether it was right or wrong. Table 3 presents the occurrences of expressed confidence and uncertainty across the patterns of talk.

Table 3: Expressions of uncertainty in the discussions

                   Confidence (19)   Lack of confidence (68)
Cumulative (25)           5                    20
Exploratory (62)         14                    48

Summary of the findings


The analysis showed the following:

• Almost all of the discussions focused on the assignment.


• The students expressed uncertainty in a majority of the discussions (68 of 87), and
insecurity was evident with all types of questions.
• The students structured their discussions around the answer options.
• The alternatives were used both to open discussions (to clarify and explain concepts)
and to shut down discussions (by referring to numbers only).
• Characteristics of exploratory talk were identified in 62 discussions.
• Most of the exploratory discussions were generated by questions allowing for more
than one correct answer.
• The use of subject-specific terms beyond mere references to a concept was most likely
to be found in discussions with characteristics of exploratory talk.

The next section focuses on the role of discussions in formative feedback practice. Guidelines
for practice and further research are suggested.

Discussion and conclusion


By connecting 'moments of contingency' to exploratory talk, we draw attention to the quality of the activities employed to support formative assessment in lectures. The idea of moments of contingency emphasises that such activities should be used to support and adjust learning and teaching activities. Analysing moments of contingency using the framework of 'exploratory talk' enables us to critically examine the quality of the dialogues enabled by this teaching design. This helps us to focus on how such activities can enhance contingent teaching and formative assessment.

Our study shows that the alternatives offered in multiple-choice questions have the power both to trigger and open discussions and to limit them. In some of the analysed discussions, students opened up dialogues by using the alternatives as means for clarifying concepts or arguing for their views. In others, we found that the alternatives discouraged the students from articulating their knowledge and sharing their thinking. For example, some students were simply guessing at an answer without elaborating, or they told each other what to vote for without offering an explanation, referring only to numbers: 'sure, it has to be c'. This is similar to findings by Wood et al. (2014), Knight et al. (2013), James and Willoughby (2011) and McDonough and Foote (2015).
When students use the multiple-choice alternatives this way, it leads them to a superficial approach to their learning rather than opening up spaces for reflection. To support formative assessment and deep learning, it is vital to stimulate dialogues where students articulate and share their understanding. This happens when the students use the alternatives as points of departure for clarification and argumentation. Open-ended questions enabled more exploratory talk and should thus be used to create moments of contingency.
A common claim in the research literature on the use of student response systems is that aggregated responses to multiple-choice questions provide feedback to the lecturer about the students' knowledge and understanding (Chien et al., 2016; Dawson et al., 2018; Hunsu et al., 2016; Liu et al., 2017). Our data analysis revealed considerable uncertainty in a majority of the discussions, even though the students voted for the right answer (in questions with only one correct answer) or a reasonable answer (in questions allowing multiple correct answers). Regarding formative assessment, this is important for several reasons: (1) Exploring uncertainty allows students to become aware of and reflect on their own understanding. (2) Our analysis demonstrates that conclusions based on clicker responses might be fragile and that the response students give by choosing an answer provides limited information about their understanding of subject concepts. This raises questions about the validity of the inferences that can be drawn from the aggregated responses to clicker questions. Wood et al. (2014), James and Willoughby (2011) and Knight et al. (2013) have expressed similar concerns. Dall'Alba and Bengtsen (2019) argue that underneath what is visible or apparent 'we might become aware [of] disconnected thoughts, broken arguments and doubt' (p. 1486). When this uncertainty is brought to the scene, there is a potential to open 'moments of contingency' and allow the lecturer to enter into dialogue with the students' thinking. We argue that when the students' responses are received through a system without options for the lecturer to unpack the reasoning behind the aggregated responses, there is a risk of neglecting valuable opportunities for learning. In any activity created for the purpose of formative assessment, engaging in dialogue with students allows more sensitivity towards students' ideas, which helps in drawing inferences and strengthens the possibilities for lecturers to follow up student responses in a formative way. Embracing activities that allow both aggregated and qualitative answers, or including different ways of displaying knowledge, would allow complexities to emerge; this would provide richer insights into the students' understanding and thereby allow other inferences to be drawn.
Limitations
This study was conducted in an authentic setting and thus has several limitations regarding design, data collection and analysis. Video recordings of the lectures might have provided greater insight into the quality of the discussions; for example, body language would have been captured, which could have provided valuable information. Also, we did not know what each individual student voted. Had this information been connected to the discussions, we would have been able to relate the quality of the discussions to how students responded. Furthermore, the students recorded their own discussions: some groups decided to record their discussions, while other groups chose not to. It is therefore possible that students chose not to record discussions in which they were unsure of the answers, which could have influenced the results. The use of audio recorders may also have affected the quality of the discussions (e.g., students may have tried their best to engage in a productive discussion or been afraid to talk if they were unsure).

Implications for practice
Creating, recognising and capitalising on such moments as an integrated part of learning activities helps teachers adjust their teaching to the needs of their students and helps students make decisions about their own learning process. When using a student response system as part of a formative feedback practice, the key questions to consider are to what extent and how the activities allow students to share their thinking, and what kind of thinking is being shared. To secure the quality of the inferences that can be drawn from the activities, it could be a strength to include questions on confidence and/or to design an environment where uncertainty and questions might surface, and to take time to clarify and elaborate on these questions. To ensure that valid inferences can be drawn from activities as a basis for feedback, the questions used and the threats to the validity of possible inferences should be critically examined.
Implications for Research
Future studies should compare clicker-supported discussions and discussions without technology within the same overall teaching design. A follow-up study could design typologies of questions to investigate more rigorously how question type influences student discourse. Further studies should also explore the role of the lecturer and how lecturers use the quantitative or qualitative information received through these systems to adjust their teaching to the needs of the students.

Acknowledgements
First, we would like to thank the students who participated in the recorded peer discussions. We would also like to thank the five anonymous reviewers for their insightful comments, which surely contributed to the quality of the article.

Declaration of Conflicts of Interest


The author(s) declare that there are no potential conflicts of interest regarding the research,
authorship and publication of this article.

Funding
This research was funded by the University of Bergen. The research is situated in the Digital Learning Communities research group.

References
Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1–14.
Barth-Cohen, L. A., Smith, M. K., Capps, D. K., Lewin, J. D., Shemwell, J. T., & Stetzer, M.
R. (2016). What are middle school students talking about during clicker questions?
Characterizing small-group conversations mediated by classroom response systems.
Journal of Science Education and Technology, 25(1), 50–61.

Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability (formerly: Journal of Personnel Evaluation in Education), 21(1), 5–31.

Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in
Higher Education, 41(3), 400-413.

Chien, Y. T., Chang, Y. H., & Chang, C. Y. (2016). Do we click in the right way? A meta-
analytic review of clicker-integrated instruction. Educational Research Review, 17, 1–
18.
Clark, I. (2012). Formative assessment: Assessment is for self-regulated learning. Educational
Psychology Review, 24(2), 205-249.

Cooper, R., Fleischer, A., & Cotton, F. A. (2012). Building connections: An interpretative
phenomenological analysis of qualitative research students’ learning experiences. The
Qualitative Report, 17(17), 1-16
Cooper, R., Chenail, R. J., & Fleming, S. (2012). A grounded theory of inductive qualitative
research education: Results of a meta-data-analysis. The Qualitative Report, 17(52), 1-
26.
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson Education.
Crompton, H., & Burke, D. (2018). The use of mobile learning in higher education: A
systematic review. Computers & Education, 123, 53–64.
Dall’Alba, G., & Bengtsen, S. (2019). Re-imagining active learning: Delving into darkness.
Educational Philosophy and Theory, 51(14), 1–13.
Dawson, P., Henderson, M., Ryan, T., Mahoney, P., Boud, D., Phillips, M., & Molloy, E. (2018). Technology and feedback design. In M. Spector, B. Lockee, & M. Childress (Eds.), Learning, Design, and Technology (pp. 1–45). Cham: Springer.
De Wever, B., Schellens, T., Valcke, M., & Van Keer, H. (2006). Content analysis schemes to
analyze transcripts of online asynchronous discussion groups: A review. Computers &
Education, 46(1), 6–28.
Egelandsdal, K., Ludvigsen, K., & Ness, I. J. (2019). Clicker interventions in large lectures in higher education. In M. Spector, B. Lockee, & M. Childress (Eds.), Learning, Design, and Technology. Cham: Springer.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120.

Furtak, E. M., Kiemer, K., Circi, R. K., Swanson, R., de León, V., Morrison, D., & Heredia, S.
C. (2016). Teachers’ formative assessment abilities and their relationship to student
learning: findings from a four-year intervention study. Instructional Science, 44(3), 267-
291.
Gibbs, G. (2013). How to conduct a qualitative interview. Retrieved 4 March 2013 from https://www.youtube.com/watch?v=9t-_hYjAKww
Han, J. H., & Finkelstein, A. (2013). Understanding the effects of professors’ pedagogical development with Clicker Assessment and Feedback technologies and the impact on students’ engagement and learning in higher education. Computers & Education, 65, 64–76.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Havnes, A., Christiansen, B., Bjørk, I. T., & Hessevaagbakke, E. (2016). Peer learning in higher
education: Patterns of talk and interaction in skills centre simulation. Learning, Culture
and Social Interaction, 8, 75–87.
Hennessy, S., Rojas-Drummond, S., Higham, R., Torreblanca, O., Barrera, M. J., Marquez, A.
M., . . . Ríos, R. M. (2016). Developing an analytic coding scheme for classroom
dialogue across educational contexts. Learning, Culture and Social Interaction, 9, 16–
44.
Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience
response systems (clicker-based technologies) on cognition and affect. Computers &
Education, 94, 102-119.
James, M. C., & Willoughby, S. (2011). Listening to student conversations during clicker
questions: What you have not heard might surprise you! American Journal of Physics,
79(1), 123–132.
Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience
response systems: A review of the literature. Computers & Education, 53(3), 819-827.
Knight, J. K., Wise, S. B., & Southard, K. M. (2013). Understanding clicker discussions:
Student reasoning and the impact of instructional cues. CBE—Life Sciences Education,
12(4), 645–654.
Krumsvik, R., & Ludvigsen, K. (2012). Formative e-assessment and learning outcome in higher
education. Nordic Journal of Digital Literacy 1(6), 36–54.
Littleton, K., & Mercer, N. (2013). Interthinking: Putting talk to work. London: Routledge.
Liu, C., Chen, S., Chi, C., Chien, K. P., Liu, Y., & Chou, T. L. (2017). The effects of clickers with different teaching strategies. Journal of Educational Computing Research, 55(5), 603–628.
Ludvigsen, K., Krumsvik, R., & Furnes, B. (2015). Creating Formative Feedback Spaces in
Large Lectures, Computers and Education, 88, 48–63.
Ludvigsen, K., Ness, I. J., & Timmis, S. (2019). Writing on the wall: How the use of technology can open dialogical spaces in lectures. Thinking Skills and Creativity, 34, 100559. https://doi.org/10.1016/j.tsc.2019.02.007
McDonough, K., & Foote, J. A. (2015). The impact of individual and shared clicker use on students’ collaborative learning. Computers and Education, 86, 236–249.
Mercer, N. (2004). Sociocultural discourse analysis. Journal of Applied Linguistics, 1(2),
137–168.
Mercer, N., Littleton, K., & Wegerif, R. (2004) Methods for studying the processes of
interaction and collaborative activity in computer-based educational activities.
Technology, Pedagogy and Education, 13(2), 193–209.

Nielsen, K. L., Hansen, G., & Stav, J. B. (2016). How the initial thinking period affects
student argumentation during peer instruction: Students’ experiences versus
observations. Studies in Higher Education, 41(1), 124–138.
Nielsen, K. L., Hansen-Nygård, G., & Stav, J. B. (2012). Investigating peer instruction: How the initial voting session affects students’ experiences of group discussion. ISRN Education, 2012, 1–9.
Nielsen, K. L. (2012). Student response systems in science and engineering education
(doctoral dissertation). Norwegian University of Science and Technology.
Sikes, P. (2006). On dodgy ground? Problematics and ethics in educational research.
International Journal of Research & Method in Education, 29(1), 105-117.
Wegerif, R. (2013). Dialogic: Education for the Internet age. Cornwall: Routledge.
Wood, A. K., Galloway, R. K., Hardy, J., & Sinclair, C. M. (2014). Analyzing learning during
peer instruction dialogues: A resource activation framework. Physical Review Special
Topics—Physics Education Research, 10(2), 1–15.

Coding scheme for uncertainty

CODE: Confident (1)
Example:
S1: I first thought of Number Four.
S2: Yes. Me too.
S1: Because Number Three is qualitative.
(…)
S2: You must choose someone who’s purposeful for what you should study.
S1: As he said, you can’t take people working on the floor to find out how much time they spend on social media during work hours. I think it’s Four.
S2: Yes. It has to be that because you should have someone who knows what it’s like to be schizophrenic.
S1: Then you have to choose someone with schizophrenia.

CODE: Uncertain (2)
Description: Uncertainty is demonstrated in the transcript, or directly articulated in the transcript.
Examples:
S1: Nuremberg. Is it something with World War Two?
S2: I don’t know what it is.
S1: I think it’s One.
S3: Vulnerable groups? What’s that? As? What does that mean?
S1: I don’t know. I just think Nuremberg. I feel it was something with a trial with Nazis.
S4: Yes, that’s right.
S1: The Nuremberg convention. I don’t know. I remember from history . . . because of all the things that happened in the concentration camps.
S2: That it’s vulnerable groups, do you think?
S1: Possibly.
S2: But what are vulnerable groups?
S1: Yes, that also. Is it children?
OR
S1: I don’t have the slightest clue.
S2: Nor do I. Honestly, I don’t have a clue.

Coding scheme for use of subject-specific terms

CODE: No use (0)
Description: Students do not use subject-specific terms.
Example:
S1: It’s not Two. It has to be One? Or Two?
S2: Yes.
S1: It can’t be Two.
S2: Because they’re going to find out.
S1: One, or Three?
S2: Hmm.
S3: I think it’s Number One.
S1: Should we take Number One then?
S3: I think so. I chose Number One.
S2: It is definitely not Two, and neither is it Three. Then it has to be . . .
S1: I think it’s Number One.
S1: Sure. It’s not Two!
S2: No.

CODE: Refer (1)
Description: Students only refer to subject-specific terms.
Example:
S1: Because that’s a correlation, and we’re not interested in that.
S2: No, it should be a particular phenomenon. And Number Three is a general phenomenon.
S1: Yes, it is.
S2: And Number Two is a correlation then.
S1: Yea, and that is definitely wrong.
S2: It’s particular research questions?

CODE: Apply/define (2)
Description: Students provide definitions and examples.
Example:
S2: For triangulation.
S1: What’s that?
S2: That’s something you should use to get good validity, to get different angles, to get different methods, for example. Like her who’s recording: She could have chosen to ask us what it was like to discuss the lecture.
S1: Hmm.
S2: And got answers and used them as an answer key. But when she adds records, that strengthens the validity in a way, so that’s the opposite. That increases the validity.
S1: Yes. And that is not a threat.
Coding scheme for pattern of talk

Disputational talk
Indicators:
• Disagreements
• Interruptions
• Individual decision-making
• No use of follow-up questions or contributions
• Short confrontational statements without justifications

Cumulative talk
Indicators:
• Repetitions
• Confirmations and elaborations
• Uncritical building on contributions
Example:
S1: This time we have to have the right answer! It has to be right!
[Quiet]
S2: I am completely exhausted! I can’t.
[Quiet]
S2: Isn’t it Number One? No.
S3: Number One and Number Two are quantitative.
S1: Two, quantitative?
S3: Yes. Two is quantitative.
S2: Then I would go for Number One.
S1: Okay.
S3: Could be both?
S1: No.

Exploratory talk¹
Indicators:
Invite elaboration or reasoning:
• Invite elaboration/building on contributions
• Ask for explanations or justifications of others’ contributions
Build on ideas:
• (Dis)agree with/evaluate others’ contributions
• Ask for clarifications or elaborations
• Build on/clarify own or others’ contributions
Make reasoning explicit:
• Explain or justify own or others’ contributions
• Provide examples, evaluate alternative views, challenge others
Example:
S1: I choose sensitive . . .
S4: Whether it was . . . or whether she was nervous, or what?
S2: Perhaps because she wasn’t open at the start and presented what it was all about. So, was the interviewee really like that? She seems . . . maybe she was a little sceptical?
S4: Yes . . .
S2: Wondering what’s going on.
S4: Yes. It’s what I was wondering: about openness. But she wasn’t sensitive. It’s sort of even worse.
S2: Yeah . . . It’s the one I noticed as well. It was both in a way. Nothing compassionate when she said it was stressful, so no: ‘Yes, I can imagine. Tell me more about it.’ So, in a sense, both that she’s somehow sensitive to her feelings, but also that she wasn’t in a way picking up . . .
S4: Picking up signals. But also, that she didn’t talk about it anymore afterwards either.

¹ Modes of talk (Mercer, 2004). Exploratory talk was operationalised using the SEDA scheme (Hennessy et al., 2016, p. 27).
III
Thinking Skills and Creativity 34 (2019) 100559


Writing on the wall: How the use of technology can open dialogical spaces in lectures
Kristine Ludvigsen a,⁎, Ingunn Johanne Ness b, Sue Timmis c
a University of Bergen, Faculty of Psychology, Department of Education, Christiesgt. 13, PO Box 7807, 5020 Bergen, Norway
b SLATE, the Centre for the Science of Learning & Technology, Faculty of Psychology, University of Bergen
c University of Bristol, School of Education, Helen Wodehouse Building, Berkeley Square, Clifton BS8 1JA, United Kingdom

ARTICLE INFO

Keywords: Dialogical space; Higher education; Peer discussions; Technology; Feedback; Lectures; Creative knowledge building

ABSTRACT

This article discusses experiences using an online collaborative whiteboard to provide dialogical spaces (Wegerif, 2013) for students to reflect on their understanding of concepts in lectures in two higher-education courses: one in psychology and the other in teacher education. When describing dialogical spaces, the following terms are crucial: opening (how the dialogical space is enabled), widening (how many different voices and perspectives it allows for) and deepening (the extent of critical reflections that it provides). The research question is: ‘What kind of affordances are there in using a collaborative whiteboard to support the dimensions of opening, widening and deepening dialogical spaces in lectures?’ Audio recordings of peer discussions, material produced in lectures, focus-group interviews with students and course evaluations from teachers are used to examine the activities through the analytical lenses of opening, widening and deepening dialogical spaces. The focus is on how creative knowledge processes are stimulated through dialogue. Based on the two cases, we argue that opening dialogical spaces provides students with rich possibilities to reflect on concepts and develop arguments, thereby providing feedback on students’ understanding of course content. Students bring a range of perspectives and experiences to the scene, thereby widening such spaces. For lecturers, the critical point was to deepen the spaces and orchestrate a dialogue with students. We found the concept of a dialogical space to be fruitful for planning and assessing discussion-based activities in the context of the lecture format.

1. Introduction

This article discusses the affordances of using a collaborative online whiteboard (flinga.fi) for opening, widening and deepening
dialogical spaces (Wegerif, 2013) in the context of lecturing in higher education. Creating dialogical spaces in educational settings
requires engaging students in activities where ideas, perspectives and voices can confront and challenge each other (Dysthe, 2006).
The crucial dimensions for describing dialogical spaces are the concepts of opening (how dialogical spaces are enabled), widening
(how many different voices and perspectives each space allows) and deepening (the extent of reflections that these spaces provide).
Despite criticisms that the traditional lecture format is passive and fails to activate students’ learning processes (Freeman et al., 2014),
and that the format is subject to structural constraints (Bligh, 1998), it is a commonly used teaching method in higher education (Friesen,
2011; Harrington & Zakrajsek, 2017). The research literature on lectures places increasing emphasis on the value of students being active in constructing their knowledge (Cavanagh, 2011; McQueen & McMillan, 2018; Roberts, 2017). Common ways for lecturers to promote


Corresponding author.
E-mail addresses: [email protected] (K. Ludvigsen), [email protected] (I.J. Ness), [email protected] (S. Timmis).

https://doi.org/10.1016/j.tsc.2019.02.007
Received 31 July 2018; Received in revised form 29 December 2018; Accepted 22 February 2019
Available online 10 April 2019

Fig. 1. The interface of the collaborative whiteboard in a lecture hall.

spaces for students to reflect on their understanding of content in the lecture are to include peer and whole-class discussions and questions (Mazur, 1997), or to engage students in writing assignments (Stead, 2005). By participating in discussion-based activities, students can
articulate, justify and develop their reasoning and assess their ideas in relation to others by questioning their own and others’ arguments
(Wegerif & Yang, 2011). Sharing information in a group also allows for increased sensitivity to different possible ways of thinking, as well
as co-creation of knowledge (Littleton & Mercer, 2013). So, how and why should we do this in lectures?
During lectures, different activities and tools hold different potential to facilitate a dialogical approach to teaching. Flinga
(https://flinga.fi/) is one such tool, an online collaborative whiteboard where students can share ideas via their phones or other
online devices and send them to a shared online screen projected during the lecture (Fig. 1). The link is shared via an access code and
students do not need an account to access the link. In addition to text, the application allows participants to post pictures or models,
as well as to make drawings or create links between contributions. The contributions can be presented in different shapes and colours,
and can be moved, edited or connected. The students and the lecturer can navigate around the board and work on different areas
simultaneously. Content can be exported so that students and teachers can review and use it as a resource. The website’s technical
interface is illustrated in Fig. 2 below:

Fig. 2. Technical description of the interface.


Writing on shared screens, or using other technologies that allow students to post comments and questions in lectures, supports
increased interaction among students, and between students and lecturers (Baron, Bestbier, Case, & Collier-Reed, 2016; Bry and Pohl,
2017; Cacchione, 2015; Ebner, Lienhardt, Rohs, & Meyer, 2010; Gao, Luo, & Zhang, 2012; Jeong et al., 2015; Neustifter, Kukkonen,
Coulter, & Landry, 2016; Ruismäki, Salomaa, & Ruokonen, 2015; Yates, Birks, Woods, & Hitchins, 2015). This supports reflective
thinking, collaboration and co-creation of knowledge (Sandström, Eriksson, Lonka, & Nenonen, 2016) while providing a safe, informal, process-oriented learning atmosphere (Elavsky, Mislan, & Elavsky, 2011; Yates et al., 2015). Studies have found that students
ask more questions in a shared-screen environment than they do during traditional lectures (Pohl, Gehlen-Baum, & Bry, 2012) and
that they feel ownership of the discourse that plays out in these activities (Sandström et al., 2016). By reading other students’
questions, a student might become aware of others’ challenges. This might create a sense of connectedness and a feeling of shared
work to understand concepts (Aagard, Bowen, & Olesova, 2010; Baron et al., 2016; Pohl, 2015).
While the objective of such technology is to create opportunities for students and teachers to interact, not all interactions can be referred to as dialogical (Dysthe, 2006). Exchanging ideas in peer discussions and externalising thoughts on a shared screen allow for different perspectives and opportunities to engage in dialogues (Rasmussen & Hagen, 2015; Rasmussen, 2016). Using a collaborative online whiteboard could potentially transform lectures by allowing students to share knowledge, questions and ideas in ways that otherwise would not be possible. When students share their ideas, these ideas can be reflected upon and connected to build collaborative knowledge. Examining the micro-processes that occur during activities to find out what is achieved in them is important in recognising the potential for discussion-based activities during lectures and is vital in making informed decisions on how to improve teaching design (Wegerif, 2013). In this study, we explore how using a shared online collaborative whiteboard allows students to share their thinking and reflect on course concepts from lectures. The guiding research question was: What kind of affordances are there in using a collaborative whiteboard to support the dimensions of opening, widening and deepening dialogical spaces in lectures? To address this question, we explore the interactions that using such technology affords, as well as how students and lecturers perceive them.
In the following section, we present the concept of creative knowledge processes among different voices, as presented by Ness
(2016); the idea of dialogical space, as interpreted by Wegerif (2013); and how we use the affordance concept. The discussion is based
on two empirical cases: Case 1, ‘Qualitative Methods’, in which the collaborative whiteboard was used to support peer and whole-class
discussions in an introductory course in qualitative methods for undergraduate psychology students, and Case 2, ‘Different Paths to
Learning’, in which the collaborative whiteboard was used to support peer and whole-class discussions in lectures in an undergraduate
teacher-education programme. In both cases, the concepts of opening, widening and deepening dialogic spaces are used as analytical
tools to examine affordances of using the technology.

1.1. Creative knowledge processes

The concept of creative knowledge processes refers to the processes involved in the creative tension between perspectives. The
concept comes from research on groups working toward developing innovative ideas (Ness, 2016). Within the sociocultural perspective, creative knowledge processes are inherently social, as ideas develop through a combined and relational process of co-construction of meaning and knowledge enhancement through dialogue. The concept is rooted in empirical research on creative knowledge development (Ness & Søreide, 2014), which has a Bakhtinian understanding of knowledge development and refers to how
knowledge among learners is created when different voices confront and acknowledge each other. When different voices confront
each other, new knowledge and ideas emerge between learners. Ness (2016) coined the term Room of Opportunity to describe how
when social languages meet, differences emerge, and we get what Bakhtin refers to as ‘alterity’. To create something new, it is
insufficient merely to have many voices, i.e., voices must confront each other and create dissonance as well (Bakhtin, 1984; Ness &
Søreide, 2014). Creativity peaks in the Room of Opportunity when participants engage in dialogue and push the boundaries of
everyone’s knowledge (Ness, 2016). When participants challenge each other, ask open questions and explore different perspectives,
creative knowledge processes are stimulated.

1.2. Dialogical spaces

Wegerif and Yang (2011, p. 1) draw from Bakhtin (1895–1975) when they define a dialogical space as ‘possibilities that open up
when two or more incommensurate perspectives are held together in the creative tension of a dialogue’. Bakhtin’s view on dialogue
includes both an ontological and epistemological understanding (Ness, 2016; Wegerif, 2013). For Bakhtin, dialogue is both ‘a fact of
life’ and an ideal to strive for (Ness, 2016, p. 33). The concept of dialogue is connected to the concept of polyphony, a process in which
different voices interact, with the tension between voices acknowledged (Ness, 2016). Wegerif (2013) uses the term ‘dialogic gap’ to
refer to the appearance of different perspectives: ‘The moment there are at least two perspectives, then the gap between them opens
up the possibility of an infinite number of possible new perspectives and new insights’ (Wegerif, 2013, p. 21). In the interactions
among different perspectives, these perspectives can develop further, and new perspectives might emerge (Wegerif, 2013).
Utterances are the core of all dialogues (Bakhtin, 1984). Since voices respond to someone or something in the past, present or
future, one can talk about voices as more or less dialogical or more or less monological (Bakhtin, 1984). While the monological
perspective is single-voiced or closed, minimising the possibilities for responsiveness, the dialogical perspective allows for a multitude of voices and opens the possibilities for challenge, responsiveness and criticism (Bakhtin, 1984; Bakhtin & Slaattelid, 1998).
A dialogue must include multiple voices and/or perspectives, and its meaning resides in the spaces between them (Wegerif, 2013).
Drawing on Wegerif (2013), a dialogical space is both a philosophical idea and a practical idea of how to facilitate dialogue. In an


educational setting, a dialogical space can be viewed as practical, such as during a lecture, where students can be encouraged to share
their ideas for reflection (Wegerif, 2013). When describing dialogical spaces, opening refers to designing teaching environments that
allow students to exchange ideas. Widening refers to how many possible voices and perspectives are available (Wegerif, 2013). By
asking students to raise their hands, it is possible to gain a few perspectives. By asking every student to share an idea, we widen the
space extensively. Deepening refers to the degree of reflection on perspectives, and on the dialogue process itself (Wegerif, 2013).
Different degrees of reflection on perspectives may exist, from a teaching design that is open to differences but only lists perspectives
and ideas, to a design that attempts to group, compare, contrast or connect ideas to a broader discourse (Scott, Mortimer, & Aguiar,
2006). In teaching design, the degree of reflection on ideas might evolve over time – during a lecture or across or between lectures,
e.g., merely by starting to collect different ideas and reflecting on them in a later sequence (Scott et al., 2006).
By providing a variety of perspectives, one can increase the degree of reflection. With deeper reflection, you can increase the
number of perspectives (Wegerif, 2013). With the widening and deepening of the dialogical space, differences might become visible,
and one can question assumptions and ideas (Wegerif & Yang, 2011):
‘Viewed from the outside, all dialogues are different, but experienced from the inside, they all share something in common, which
is the infinite potential to be drawn into self-questioning and reflection, which we referred to as the idea of the infinite other as a
potentially emerging voice within all dialogues’ (Wegerif & Yang, 2011, p. 2).
Another idea from Bakhtin is that of the superaddressee. Drawing on Wegerif’s (2013) interpretations, a superaddressee is present in
the dialogue by virtue of being able to listen to himself or herself while speaking. Listening to yourself as if you were another person,
and considering what you say from the perspective of a witness position, allows you to assess your own thinking and understanding
(Wegerif, 2013, p. 48).
In a dialogical approach to teaching, tension always exists between the infinite possibilities for multiple voices to appear and the
reified closure that accompanies structured learning outcomes, formative and summative assessments (Biggs & Tang, 2011). Drawing
on Alexander (2017, p. 5), a dialogical approach to teaching should be characterised by being: a) collective, ‘a site of joint learning and enquiry’; b) reciprocal, with students given opportunities to voice their thoughts, ‘listen to each’ other and ‘consider alternative viewpoints’; c) supportive, in which students can voice ideas freely; d) cumulative, in which ‘participants build on their own and each other’s contributions and chain them into coherent lines of thinking and understanding’; and e) purposeful, with discussions planned and structured toward certain learning outcomes (Alexander, 2017). To open and orchestrate dialogical spaces in lectures, the lecturer needs to consider the extent to which, and ways in which, students can share their thinking and understanding with each other and the lecturer, so that students and the lecturer can reflect upon these different perspectives, and the extent to which these activities address and support the intended learning outcomes.

1.3. Affordances

In lectures, different activities and tools hold different potentials to facilitate a dialogical approach to teaching. Different approaches to stimulating dialogue allow for different affordances to be discovered. A gap might exist between the theoretical
potential for using a particular technology, and the potential that a lecturer can identify and understand, the extent to which a
lecturer can realise that potential in their teaching, and the reality of how the use of technology plays out, intended or unintended,
among students (Kirschner, Martens, & Strijbos, 2004). Therefore, an affordance cannot be set in advance; it emerges in the context in
which it is embedded (Bloomfield, Latham, & Vurdubakis, 2010). Exploring technological affordances to open dialogical spaces
requires that technical features, the technology’s purpose, underlying theoretical assumptions on how students learn and notions of
how to stimulate dialogue are addressed. In this article, affordances of using technology will be discussed both as a theoretical
potential, as perceived by students and lecturers, and as affordances that we can identify when analysing interactions and material
produced in lectures.

1.4. Analytical tools

Across the two cases, we used the dimensions of opening, widening and deepening dialogical spaces as analytical concepts. As
suggested by Wegerif and Yang (2011), an analysis of a dialogical space explores the extent to which the activities facilitate opening
an environment in which ideas, perspectives and voices can be presented, confronted and challenged. Wegerif argues that in spoken
and written text, ‘it is even possible to feel the space opening, widening, deepening and closing down – each shift often as a direct shift of what
people say and the way they say them’ (Wegerif, 2013, p. 152). For this article, we examine the extent to which the activities allow
thinking/ideas to be shared, and the extent to which they allow for a different degree of critical reflection among perspectives.

2. Context and methods

In this section, we describe the research design, the participants and how the data were collected and analysed. The teaching
design for the two courses is described in Sections 3.1 and 3.2.

2.1. Data and analysis

The research reported in this article is part of a larger design-based research (DBR) project, initiated in 2011, in which the
particular focus has been to explore how discussion-based activities support creation of a formative feedback practice in lectures


Table 1
Descriptions of the two cases.

Case 1, ‘Qualitative Methods’: a qualitative-method course for undergraduate psychology students in which a student response system (Turning Point) and a collaborative writing tool (Flinga) were used to support peer discussions to reflect on a concept in qualitative methods.
Data and analysis:
• 15 audio-recorded and transcribed peer discussions supported by an online collaborative whiteboard. Each of the discussions was the unit of analysis.
• Focus-group interview with four students. Each turn during the interview was the unit of analysis.
• The concepts opening, widening and deepening are used as analytical lenses.

Case 2, ‘Different Paths to Learning’: lectures in a teacher-education programme.
Data and analysis:
• Seven Flinga boards: analyses of written contributions from students.
• Evaluations from teachers in which we discussed the experiences. We used the dimensions of opening, widening and deepening dialogical spaces as an analytical lens for analysing our experiences and identifying challenges.

within higher education (Krumsvik, 2012; Krumsvik & Ludvigsen, 2012; Ludvigsen, Krumsvik, & Furnes, 2015; Egelandsdal &
Krumsvik, 2017; Ludvigsen & Krumsvik, in review). DBR emphasises using different approaches to examine learning and interactions
in an authentic setting (Barab & Squire, 2004) and includes cycles of testing and improvements in practice, increased theoretical
insight, as well as insight to improve the intervention (Anderson & Shattuck, 2012). In this project, the intervention was to make a
literature-informed adjustment to an already established practice.

2.2. The two cases

In Case 1, ‘Qualitative Methods’, the principal source of data was audio recordings of peer discussions and a focus-group interview
with students. In Case 2, ‘Different Paths to Learning’, the principal data source was teachers’ course evaluations and material
produced during lectures (Table 1).

2.2.1. Analysing Flinga-supported discussions


The analysis of the Flinga-supported discussions was conducted through a close reading of the transcripts, using the dimensions of widening and deepening to identify the discussions’ characteristics, as illustrated in the example below (Table 2).

2.2.2. Analysing the focus-group interview


We conducted a focus-group interview with four of the students participating in the discussions to identify how they perceived the use of the online collaborative whiteboard to support their learning. The sample is based on voluntary participation and can be characterised as a convenience sample, ‘based in a specific purpose, rather than randomly’ (Tashakkori & Teddlie, 2003, p. 713). At the beginning of the interview, the students were asked to create a mind map of their experiences. The resulting mind maps are shown in the figure below (Fig. 3).
Issues raised in the mind maps were used as a point of departure for the discussion. The focus-group discussion was transcribed
verbatim and provided 20 pages of material. NVivo was used to support a thematic analysis of the focus-group interview. In NVivo,
codes are referred to as nodes. To analyse the interview, we coded each turn. If a student raised several ideas, the turn was coded at
various nodes. In total, 25 nodes emerged, which we grouped into 14 broad themes (Table 3). Even though this article focuses on the
use of the collaborative whiteboard, we also included students’ perceptions on the use of the student response system because using
the student response system was part of the teaching design.
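As a purely illustrative sketch of the grouping step described above, the coding can be thought of as a two-level structure: each interview turn carries one or more nodes, and each node is assigned to a broader theme. The Python sketch below is our own illustration, not the NVivo workflow itself; the node and theme labels are taken from Table 3 of this article, but the particular node-to-theme pairings shown here are assumptions made only for the sake of the example:

# Illustrative two-level coding structure: turns -> nodes -> themes.
# Labels follow Table 3; the specific pairings below are assumed for
# the example and do not reproduce the actual analysis.
node_to_theme = {
    "You want to achieve": "Easy",
    "Helped me understand": "Self-assessment",
    "You get different points of view": "Awareness of differences",
    "To write something, you have to think further": "Self-reflection",
}

# A coded turn may carry several nodes (one turn can raise several ideas).
coded_turns = [
    ["You want to achieve"],
    ["You get different points of view", "Helped me understand"],
    ["To write something, you have to think further"],
]

# Count how often each theme occurs across the coded turns.
theme_counts = {}
for turn in coded_turns:
    for node in turn:
        theme = node_to_theme[node]
        theme_counts[theme] = theme_counts.get(theme, 0) + 1

print(theme_counts)
# {'Easy': 1, 'Awareness of differences': 1, 'Self-assessment': 1, 'Self-reflection': 1}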
The guiding research question was: What kind of affordances are there in using a collaborative whiteboard to support the di-
mensions of opening, widening and deepening dialogical spaces in lectures? To address this question, we first describe some of the
discussions’ characteristics that the collaborative whiteboard supports. Second, to illustrate how the activities supported the di-
mension of opening, widening and deepening dialogical spaces, we present examples from three of the peer discussions, which were
chosen because they illustrate how using the online collaborative whiteboard has the potential to open a shared space of reflection in
the lecture. Third, we describe students' experiences with the activities.
While Case 1 focused on students’ interactions and how they perceived the activities to support their learning processes, Case 2 focused on how the lecture built on students’ voices. In both cases, the concepts of opening, widening and deepening are used as analytical lenses. In the following section, we provide a description of how the data were analysed.

2.2.3. Teachers’ evaluation


Case 2, ‘Different Paths to Learning’, reports on four lecturers’ experiences using the online collaborative whiteboard to support discussion-based activities in lectures. To examine the lecturers’ experiences, we arranged an evaluation meeting, which was conducted in three parts. First, the lecturers were shown the Flinga boards from their lectures, which we used as a point of departure for a discussion of their experiences with the activities. Second, we discussed the experiences along the dimensions of widening and deepening dialogical spaces. Third, we discussed the challenges they faced in using the online collaborative whiteboard to support discussion. The meeting was audio-recorded, transcribed and analysed.


Table 2
Analysis of the transcripts.

Fig. 3. Mind map of students’ experiences of using Flinga in lectures.


Table 3
Nodes and themes in the qualitative analysis of the focus-group interview.

Mode: Student response systems
Nodes: ‘You want to achieve’; ‘Feedback on right and wrong’; ‘Helped me understand’; ‘Connected to coursework’; ‘Address feedback’; ‘Alternatives open up the discussion’; ‘You eliminate’; ‘When you explain something for others, you explain it for yourself’; ‘You listen’; ‘You get different points of view’; ‘Use questions in coursework’; ‘It is an active way of working with the material’
Themes: Easy; Feedback on right/wrong; Structured discussions; Self-assessment; Self-reflection; Co-creating knowledge; Student active approach; Connectedness; Align coursework; Awareness of differences

Mode: Online collaborative whiteboard
Nodes: ‘You become aware of other points of view’; ‘You become aware (of) nuances’; ‘You contribute to the lecture’; ‘Challenging when you do not have alternatives’; ‘We discuss other students’ posts’; ‘To write something, you have to think further’; ‘Silence breaks for thinking/writing’; ‘You really want to contribute’; ‘You have to think more’; ‘You get feedback on something you have contributed’; ‘I used other students’ contributions in (my) own writing’; ‘You can work on the material’; ‘It is activating’
Themes: Challenging; Feedback on contribution; Unstructured discussions; Self-assessment; Self-reflection; Co-creating knowledge; Student active approach; Connectedness; Align coursework; Awareness of differences; Awareness for nuances; Students contribute content

3. Findings

In this section, we present and discuss the two cases before we propose suggestions for future research and practice.

3.1. Case 1: ‘Qualitative Methods’

Case 1 is a course in qualitative methods for undergraduate psychology students in which a student response system (Turning
Point) and a collaborative writing tool (Flinga) were used to support peer discussions to reflect on a concept within qualitative
methods. A core skill in learning about qualitative research is to engage in critical reflection (Cooper, Fleischer, & Cotton, 2012;
Cooper, Chenail et al., 2012). Critical reflection is important for students to be able to make informed choices on how to approach the
different steps of the qualitative research process, such as developing research questions, choosing a sample, conducting interviews
and observations, coding, interpreting and analysing data, judging validity and examining possible ethical challenges (Cooper,
Fleischer et al., 2012; Cooper, Chenail et al., 2012). To learn qualitative methods, students need to have an active approach and they
should be able to connect concepts they learn to their prior knowledge and experiences (Cooper, Fleischer et al., 2012; Cooper,
Chenail et al., 2012). Each lecture started with the lecturer introducing a theme or a concept, followed by a multiple-choice question inviting
students to apply these concepts to different cases or contexts. To expand the opportunities for the exchange and comparison of ideas,

Fig. 4. Design of activities during the lecture.


we invited students to voice their ideas on the online collaborative whiteboard by answering prompts such as ‘Share issues regarding
ethics in doing observations’. Each lecture contained several sequences with lecturing, discussions of multiple-choice questions and
writing. As such, the activities move between a traditional lecture format and opening up spaces for shared reflection on the topics
introduced in the lecture, as illustrated in Fig. 4 below.

3.1.1. Characteristics of discussions


The 15 discussions were characterised by different activities, e.g., students finding out how the interface worked, quiet breaks for
writing, and deciding what to write (as in Example 2). Students typically discussed a problem that they identified themselves (as in
Example 1). In addition to their discussions, the students read or analysed other students’ posts as they appeared on the screen (as in
Example 3).
Example 1: Widening and deepening the discussion
The first discussion (Example 1) illustrates both a widening dimension, in which the students introduce different perspectives, and a deepening dimension, in which they elaborate on these perspectives before introducing new topics to the discussion.

The discussion above opens when S1 introduces a familiar topic, examples from reality TV, then discusses whether situations exist in which one should not be observed (lines 1–13). The discussion widens with more elaboration on the phenomenon of being observed in reality TV. From this, the students move the discussion toward observation challenges in the context of research and conclude that people tend to change behaviour when they are being observed (line 14). However, the students agree that this effect fades over time (lines 19–21), which is an example of how they deepen their discussion. Then S2 widens the discussion by introducing a new topic: What if you observe something that is illegal (lines 23–25)? They pause to write, then resume when S1 follows up with an example of this by referring to teachers: if they see students with bruises. S2 completes the statement from S1 by adding that they are obliged to take action (line 27). S1 agrees and writes down the post. S2 elaborates further on the phenomenon of changing behaviour by questioning whether it is an ethical problem (line 34). Again, S1 completes S2’s utterance, suggesting it is a validity problem rather than an ethical problem, which is an example of deepening in the discussion. They also read other groups’ posts on the same topic (line 38), thereby broadening their own discussion.
Example 2: How viewing other students’ posts contributes to widening perspectives on the board
Example 2: How viewing other students’ posts contributes to widening perspectives on the board
The discussion below illustrates how the students, in silence, analyse the board and try to introduce a new perspective, thus contributing to widening the dialogical space.

The two students are watching ideas as they appear on the screen. S1 indicates that everything is said already (line 1), and S2
replies that there must be important things that are yet to be said (line 2). After this, the students take a quiet break, during which we
only hear typing. After two minutes, S2 presents an ethical question about when to stop an observation and intervene in a critical
situation (lines 4–11). S1 agrees and suggests they share the idea on the whiteboard. This example shows that by assessing other
students’ contributions, they search for ethical challenges other than what was presented, and are therefore able to widen the
dialogue space by making contributions to it.
Example 3: How students use contributions from another group to widen their own discussion
The next example shows how students broaden their discussion by reading and assessing other students’ contributions. In the
example, two students discuss ethical considerations in conducting qualitative interviews. The assignment is: ‘Share issues regarding
ethics in doing observations’.


This discussion illustrates how other students’ contributions feed into a group’s own discussion. The two students read contributions displayed on the board. In line 1, S1 reads two posts without commenting on them. S2 reads a third post, on which they elaborate (lines 4–8), before they read and comment on a fourth contribution (line 9). S1 asks S2 whether he or she has anything to add. S2 reads a new post, which they discuss in more depth. This example illustrates that ideas from other groups’ discussions feed into their own and serve as a catalyst for S1 and S2’s discussion, thereby providing both a widening and a deepening of the discussion.

3.1.2. Students’ perceptions


The students experienced discussions supported by the student response system as spaces for them to develop their thinking to
help elicit a more nuanced understanding, as stated in the focus group interview: ‘I have had my opportunities to show what I know.
They can show what they know. We can compare with each other, and then we get a broader understanding of what it is all about’
(S4). The students emphasised the value of contributing different perspectives and understandings to the group: ‘One reads different
things, you know different things, you notice different things, you get different perspectives than you maybe know’ (S1). Students
value the process of explaining their understanding to peers because it makes them aware of their own thinking, as in the quotes
below:
It is then (that) I realise that I have understood it in a way. I can sit and read or hear and believe that I understand these things.
But, if you are to formulate yourself, with no help in front of you (e.g. notes or books, etc.), then I realise if I understand (S2).
Even though you remember the words, when you should explain it to others, then they ask what it means, and then you realise
that you did not know, then you notice (S3).
Students emphasised how they listened to themselves as they explained concepts to their peers, becoming aware of their understanding from a ‘witness position’ (Wegerif, 2016, p. 32). This also might be the case when a student reads his or her post on the screen and sees his or her contribution at a distance and in the context of other students’ contributions, as Sandström et al. (2016)
noted. Students used the discussions to reflect on their understanding and identify aspects that they needed to review. Below is a
sequence from the interview.


In the focus group interview, we asked the students to make a mind map of their experiences of using Flinga in their lectures. The
main perspectives reflected in the mind maps were that they could participate by contributing with their ideas and that they became
aware of different viewpoints.
Flinga allowed students to contribute to the lecture, alerting them to nuances and an awareness of ways of understanding that
differed from their own: ‘Your understanding is one thing, but their understanding is something completely different, on what these
things are about’ (S4). ‘I liked that you could see that they had different suggestions (…) When we had a discussion about ethics, then
there were lots of different opinions’ (S2). Students appreciated being allowed to share ideas that were not covered in the course
readings:
(…) Things are not black and white, or wrong – you can present your own thoughts and ideas. It helps you to understand, and you
get the time to reflect on things that you should learn. So, I think it should have been used to a greater extent because it activates
the students. Yes. You are not just sitting there, and someone is telling you how things are (S1).
For the students, the process of formulating ideas to share on Flinga was the most valuable way to reflect on what they had
learned. This can also be connected to deepening the dialogical space, as two students experienced in the sequence below:

They had to ‘think further’ or process more deeply: ‘It requires more, having to formulate sentences, and then you have to at least understand what you are talking about’ (S4). This can be connected to a sense of participation: ‘It gives a sense of achievement when you are able to write something. It gives, in a sense, you have contributed something’.
The act of sharing their thinking by writing posts forced them to articulate their thinking and, thus, increased the possibility for questions to appear or, as Nygaard (2015, p. 24) stated beautifully: ‘[…] putting word on paper makes us think things through […] suddenly gaps in logic became visible. Things we thought we knew thwart our every attempt to describe them’. Students emphasised that they became aware of differences and nuances by reading and discussing other students’ posts, and they compared other students’ contributions to their own. Seeing other contributions inspired them to open discussions in their peer groups:
You can focus a little on what you think is interesting, then take the discussion in that direction instead of another direction (S2).
It helped to discuss the other students’ posts. It was really very fine, really (S4).
I feel that there was a way to get started then, not necessarily that I think that what they wrote was right, but if I did not agree, I
could say, ‘No, what, why did they write this’ or ‘yes, it was really good’. So, we began to discuss what we would write, and what
we would not write (S1).
Even though it was quiet from the start (…) you think really hard in a way. You would give input; it takes time to formulate in a
good way in writing (…) you actually need, the need to work a bit more with it, to think of anything to write (S3).
The students also emphasised that working on the material connected the lectures to other activities in the course, such as reading and writing, as these students explained:
It helped in writing my exam, and to remember what I had written, and what other students had written. In addition to what was
written in the book, I could use it in the discussion, to show that I was able to reflect (…) I noted the post that I found to be most
relevant and maybe those that were commented on by the lecturer. I noted them, and used some of them in my exam (S4).
When you are given a chance to work on the material, then you remember it better. When you read the book about something,
then you can connect it to the lecture (…) then I can review my notes. I remember this is what I wrote, then I was given an
opportunity to connect everything. When I think about other subjects, I think: This is the lecture. This is the book. This is the
exam. But now, I get a real thread between everything (S2).


Students found it difficult to discuss ideas with their peers during the Flinga sessions because of the challenge of formulating ideas into short posts, with no starting point or structure to focus on, unlike the discussions supported by clickers. Despite this, the students
experienced, in both discussion formats, the act of explaining ideas to peers and listening to other perspectives, writing ideas, seeing
and assessing other students’ posts and hearing lecturers’ comments – all as spaces for them to reflect on their own learning and
thinking by considering their own understanding. Interestingly, and in line with extant literature (Baron et al., 2016; Pohl, 2015), the
students found that using discussion-based activities in the lecture supported a feeling of participation and belonging in a group:

At the end of the interview, we asked the students to agree on how and why these activities supported their learning. Their points were: 1) it provided feedback on their own understanding and encouraged them to think for themselves; 2) it gave students more control over their own learning; 3) it should be used in different subjects to provide students with different perspectives; and 4) it creates a feeling of belonging.
Students also used the online collaborative whiteboard to comment on the organisation of the course. In a few instances, students posted pictures and jokes; on one occasion, some students arranged their posts into the shape of a heart (see Fig. 5), illustrating the multimodal affordances of the collaborative online whiteboard. This playfulness created an enjoyable atmosphere, although the lecturer could not see it at the time because his screen was zoomed in on only a few posts.

3.1.3. Summary: case 1

Flinga had the potential to open the space in the first place by including the lecturer and by bringing in perspectives from all the student groups. When students are asked to share their views in Flinga, it broadens the dialogical space. This might provide students with a richer picture of possible ways to understand a phenomenon than what occurs in the peer-group and whole-class discussions alone. As illustrated in the case outlined here, the use of shared online whiteboards provided different affordances for creating dialogical spaces. These can be grouped as shown in Table 4.

Fig. 5. How students used the online shared whiteboard.

Table 4
How use of the technology affords the dimensions of widening and deepening.

Widening:
• Students make an effort to add different perspectives to the collaborative whiteboard.
• Students experienced awareness of nuances in their own and in others’ views.
• Perspectives from all the students participating in the lectures are brought in.
• Students include perspectives from other groups in their peer discussions.
• The lecturer is included.

Deepening:
• Whole-group discussions were held.
• Students sort and categorise contributions.
• Writing adds reflection to the space.
• It helped students connect to each other’s ideas, both in the lecture and in coursework outside the lecture.
• The lecturer asks for justifications and examples and helps connect the contributions to research literature and theory.
• Students argue for their views.

3.2. Case 2: “Different paths to learning”

The course’s objective is for students to be able to discuss examples from practice in light of research and theory, and to articulate an
informed and critical view of different teaching situations. During each lecture, we asked the students to discuss questions in groups
and share their ideas on the collaborative whiteboard. Flinga was used to allow students to share their ideas and to connect those
ideas to theoretical concepts. Examples of writing tasks asked students to formulate definitions expressing their understanding of a phenomenon,
e.g., ‘Suggest a definition of learning’. We also asked the students to answer questions or argue for and against statements such as ‘All
learning is good learning’. To problematise and identify different or conflicting views, we asked the students to discuss practical or
ethical issues related to classroom situations presented by video, using questions such as ‘What can the teacher do in this situation?’
The pictures below (Fig. 6) illustrate how we used the collaborative online whiteboard in the teacher-education course:
Approximately 30 minutes of the 2 × 45-minute lectures were dedicated to Flinga-supported discussions. The lecturers were using
Flinga for the first time. The first author facilitated the process and supported the lecturers by guiding them and the students in using
the platform. We visualised this approach in the model below (Fig. 7):

Fig. 6. How we organised the discussion sessions.


Fig. 7. The structure of a Flinga session.

3.2.1. Lecturers’ experiences


In the evaluation meeting with the lecturers, we used the material produced (the Flinga board) as a point of departure for discussing the lecturers’ experiences, focusing on a) how they perceived the technology to support activities for students to share their thinking (widening dimension) and b) how they approached the perspectives (deepening dimension). We also asked them about their experiences in general and, finally, we used the widening and deepening of dialogue spaces as analytical lenses through which to identify opportunities and challenges.
It was easy for students to bring a wide range of perspectives and experiences to the scene, thereby opening and widening the dialogical space. In the auditorium, the lecturers adopted the following strategies for engaging with the posts and reflecting on the perspectives: (1) read a post aloud, ask for examples and elaborations, and connect the ideas to theory, research and best practice in teaching; (2) go in depth by unpacking only one of the contributions and invite whole-class discussion; (3) identify conflicting views to stimulate whole-group discussion; and (4) ask students to add a written justification or examples to the ideas expressed in their posts. The following section provides examples of these approaches.
Example 1
A total of 100 students participated in the lecture. The assignment was to provide examples from their own experiences with feedback. We asked the students to post examples of feedback that supported their learning in the green boxes and feedback that was not useful in the red boxes. The students had five minutes to discuss and write their posts. The lecturers sorted the perspectives as they were posted to the Flinga board. The posts covered a range of experiences. Feedback that supported learning needed to ‘be relevant’ and ‘practical’, ‘should inform you on how to improve’, ‘must adjust to the student’, ‘should not focus on the grade’, ‘should be specific’, ‘should use examples’, ‘should be face-to-face’, should recognise that students have ‘different needs and that the feedback should be differentiated’ and should provide ‘clear instructions’. Ineffective feedback focuses ‘only on grades’ or ‘negative things’, is ‘not relevant’, is ‘not specific’, is ‘too critical’, uses an ‘emoji to be hip’, offers ‘only positive feedback’ or ‘no justification’ or is simply ‘too much’ feedback. The students’ ideas reflected ideas informed by research on formative assessment. By stimulating dialogue among students and unpacking the posts shared on the board, the lecturers could connect the experiences to the research by building on students’ contributions (Fig. 8).
Example 2
In the example below (Fig. 9), the students were asked to ‘suggest a definition of learning’.

Fig. 8. What are your experiences with feedback? Provide examples of feedback that supports learning and feedback that does not support learning.

Fig. 9. A Flinga board with posts for the prompt ‘Suggest a definition of learning’.

Students brought different perspectives to the exercise on how they define learning, including: ‘to gain knowledge’; ‘to understand’; ‘to develop skills’; ‘an experience’; ‘a process’; ‘a small change in understanding’; and ‘like a rose’. This served as a point of departure for sorting out the different perspectives in light of the acquisition and participation metaphors of learning (Sfard, 1998). By sorting their responses according to these metaphors, students were able to grasp the two ways of looking at learning and also recognise that their own ideas might be reflected within one of the two perspectives. Although many of the posts were easy to place, some fell between concepts. The lecturer explained how she handled this:
I tried to make a grey zone then. These are challenges you might discuss with the students (…) It may open up even more dialogue:
You can ask them to move posts, that they are allowed to move the posts, but then they must justify it in one way or another. Your
students may disagree. This can be the point of departure for a dialogue (Mari).
Example 3
Another approach was to identify conflicting views and use them as a point of departure for a whole-class discussion. The lecturer
asked the students to elaborate on their posts and explain their reasoning. This opened up a discussion in which differing views were
confronted. To facilitate the reflective process in the peer groups, we asked students to justify their ideas by providing reasons or examples when writing their posts. With justifications and examples included in the posts, it was easier for lecturers to connect to the students’ ideas.
Example 4
One of the teachers used the interface to code the posts as good (green) or bad (yellow) learning strategies (Fig. 10). Students were asked to share their experiences with learning strategies, providing an opportunity to deepen the space by putting ideas in relation to each other. For example, they started to organise related posts together. These actions were unplanned and showed how students shaped the use of the technology as the activities spontaneously emerged. In the lecture, the students brought in a variety of different experiences, and the lecturer sorted the posts into individual and collaborative learning strategies:
The idea was to ask students about the learning strategies they used. The aim was to sort the responses into different types of learning strategies, such as social and cognitive strategies. I focused on the answers that I thought were most interesting or revealed things that were different and that I wanted to give more depth. That was what I found to be the easiest way to approach it (Jon).
Across the examples, the students were willing to argue for their beliefs, raise concerns and give examples. The lecturers found it challenging to handle students’ individual experiences and focus on theoretical understanding simultaneously. They also felt that they had limited time to decide how to connect perspectives and to conduct analysis on the spot:
I experienced it as a bit intense when too many perspectives come onto the board. You feel you have a short time to analyse it (…) it is not easy to do an analysis very fast. (…) You may come up with things you are not happy with afterward, or things you can discuss or disagree with (…) I felt I had to hurry to sort and categorise the answers (Mari).
It is nice to think that they will come up with numerous perspectives so that you will say, ‘See, there’s an example of this!’ It is harder in practice when you are in the lecture hall (Jon).
It is the nature of this activity that you cannot prepare for everything and that one must be open to unpredictable things happening. It is a delicate balance between what can be planned and what one has to do spontaneously. Flinga gave me a lecture on students’ thoughts. I am delighted with how I managed to utilise this opportunity in the lecture. In retrospect, I think I spent too little time on the follow-up phase (…) I think it requires a good academic overview, a humble and open attitude that one cannot ‘do everything’ and that you keep calm and keep a cool head when unpredictable things happen (Tomas).

Fig. 10. How the interface is used to code the posts.
The lecturers also expressed a feeling of discomfort associated with doing something in a new way, whether using the technology itself, not knowing what the students were going to write or being uncertain whether they would manage to orchestrate a dialogue between ideas and capitalise on students’ contributions.

3.2.2. Summary: case 2

Based on the lecture experiences, it was easy to use the shared whiteboard to support the widening dimension. The critical point was how to approach the deepening dimension: the lecturers found it challenging to do on-the-spot analysis that entailed managing students’ experiences while focusing on theoretical understanding. Based on this insight, we would work systematically toward developing a design with greater dialogue among ideas by using structures in the interface to support reflection on perspectives within the peer groups, as well as in the whole-group discussion. To develop our teaching design, we would view widening and deepening the dialogue space as a process that continues beyond the time and space of the lecture session.

4. Discussion and conclusion

Opening the dialogical space occurs by inviting students to voice their opinions and ideas in written posts. Taking this further by asking students to justify, explain and elaborate on their perspectives allows for more sophisticated ways of unpacking differences in
perspectives, thereby allowing complexities to surface and providing opportunities to deepen the dialogical space. Using a colla-
borative, online whiteboard has the potential to transform lectures by allowing students to share knowledge, questions and ideas in
ways that otherwise would not be possible. Based on our cases, we argue that the technology has the potential to transform the
lecture into a space of dialogue and reflection, open for students to participate in activities where they can connect new ideas to
previous knowledge and experiences. Using a collaborative whiteboard can potentially change how students and lecturers interact
and how students’ ideas interact with each other. We argue that opening dialogical spaces provides students with rich possibilities to
reflect on concepts, develop arguments and obtain feedback on their own understanding of course content. In this method of
teaching, creative knowledge processes are encouraged (Ness, 2016), and several possible dialogical spaces can potentially open up.
Based on our two cases, we have illustrated that using the online collaborative whiteboard supports increased interaction among
students, and between students and lecturers. This would have been impossible without such technology. The activities supported a
process-oriented and safe learning atmosphere for the students to engage in critical reflection. A dialogue space was created, where
students could articulate their ideas in peer groups, formulate them into written text, view perspectives from their peers, and discuss
with their lecturer. This was also recognised by Yates et al. (2015) and Elavsky et al. (2011). Students gained ownership of the discourse that played out, which has also been recognised by Sandström et al. (2016). Students also gained awareness of different views and nuances, along with a sense of connectedness, as recognised by Pohl (2015) and Baron et al. (2016). This article extends previous research by adding insight into the micro-processes occurring among students during the activities. Through our analysis we
have shown how the use of such tools provides affordances to open, widen, and deepen dialogue spaces.

4.1. Usefulness of using dialogical-space concept as an analytical tool

We found the idea of a dialogical space to be stimulating as a ‘thinking tool’ for examining activities to promote dialogue in
lectures. We found the concept of dialogical space to be closely connected to creativity in the way it stimulates exploration and a divergent thinking mode. In traditional creativity theory, divergent thinking focuses on exploring many possible solutions (Guilford,
1967).

4.2. Implications for practice

Deepening the dialogue space must be viewed as shared work between students and lecturers. It is important to emphasise that
affordances arise when everyone participates. Understanding is then created among students and lecturers.
The lecturer’s role is essential to facilitate depth when using Flinga, e.g., by helping connect different perspectives and finding
conflicting views and opinions. As we found in Case 2, the lecturer needs to be open to unpredictable occurrences and find a balance
between what can be planned and what one must do spontaneously. Another way of thinking about deepening spaces is that text
produced in lectures might bridge other lectures and course activities. A widening of the space might serve a purpose in one lecture,
while in the next lecture, the lecturer can elaborate on the perspectives and add more depth. This method of teaching corresponds to
the concept of ‘just-in-time teaching’ (Novak & Patterson, 2010), in which digital tools are used to provide insight into how
students understand a topic before the lecture.
Processes for reflecting on perspectives might be viewed as a continuous process in which the lecturer can connect with students’
perspectives and use them to calibrate students' discourse, moment to moment, in or across courses, and for students to review
content after lectures and during other course activities.
An important aspect of helping students co-construct knowledge is to address the underlying conditions for this to succeed, i.e.,
they must be open with each other, curious about each other’s opinions and perspectives, view each other as resources in the
discussions and not as ‘threats’ and have respect for each other’s opinions (Ness, 2016). We suggest that students become familiar
with the tools before they use them during lectures, and students should understand the activities’ purpose. To save valuable time, the
links to the shared online whiteboard could be distributed to the students prior to the lecture.

4.3. Limitations and implications for research

This study was conducted in an authentic setting and has several limitations. First, our two cases are small. In case 1, we have only
15 discussions, and the conclusions from case 2 are based on four teachers using the collaborative whiteboard for the first time.
Nevertheless, we argue that the data collected across the cases provide valuable insight into the affordances of using such tools to open dialogue spaces in lectures and allow us to make informed choices on how to improve our teaching design. Another limitation is that we only
used audio recordings to capture students’ discussions. If we had video-recorded the lectures, we would have had better insights into
how the processes played out. Future studies might also record each group discussion by using a head camera to video-record
activities to capture how students navigate the shared space, how they approach other students’ posts and how different tools
(laptops, books, and paper) might influence the affordance of the whiteboard use.
In future designs, the students’ discussions could be connected to their contributions to assess how ideas from the discussions are
reflected in their posts. Video of the interface as the writing appears also would provide valuable information on how these tools feed
into classroom dialogue because we would be able to identify how actions played out on the screen feed into peer discussions, as well
as in the whole-group dialogue. The time dimensions also are critical: We suggest that the activities should be observed for a more
extended period, e.g., a course, semester or year, to assess how points made earlier in the lecture, verbal or written, are picked up in
discussions. The explorative level of creative knowledge processes in the student groups perhaps would vary over a semester.
Research on multidisciplinary group members (Ness, 2016) showed that as the group members got to know each other over a period,
they also seemed to become more comfortable and active in the group dialogues, enhancing their creativity as a group, as they
learned from each other. We suggest research designs that allow for capturing changes in student contributions, both in voicing their
opinions and writing down their arguments, as well as whole-class discussions that assess whether they change over time. How
activities feed into other course activities (seminars, assignments and exams), how students work individually, how activities afford
sharing and reflecting on perspectives in peer groups (among peers) and in whole-class discussions (among students and lecturers), as
well as the dynamics among these activities, are worth further exploration.

Declaration of conflicting interests

The author(s) declare no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding

This research was funded by the Faculty of Psychology, University of Bergen, and is situated in the Digital Learning Communities Research Group.

Acknowledgements

We would like to thank the participating students, as well as Rune Krumsvik, Olga Dysthe, Jens Breivik, Cecilie Enquist Jensen,
Kjetil Egelansdal, Fride Haram Klykken, Trude Løvskard, Rob Grey, Eva Vass, and the two anonymous reviewers for valuable and
inspiring feedback during different stages of the writing process.


References

Aagard, H., Bowen, K., & Olesova, L. (2010). Hotseat: Opening the backchannel in large lectures. Educause Quarterly, 33(3), 2.
Alexander, R. (2017). Developing dialogic teaching: Process, trial, outcomes. Paper Presented at the 17th Biennial EARLI Conference.
Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25.
Bakhtin, M. M., & Slaattelid, R. (1998). Spørsmålet om talegenrane. Bergen: Ariadne forlag.
Bakhtin, M. M. (1984). Problems of Dostoevsky’s poetics (C. Emerson, Ed.). Minneapolis, MN: University of Minnesota Press.
Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. Journal of the Learning Sciences, 13(1), 1–14.
Baron, D., Bestbier, A., Case, J. M., & Collier-Reed, B. I. (2016). Investigating the effects of a backchannel on university classroom interactions: A mixed-method case
study. Computers & Education, 94, 61–76.
Bligh, D. A. (1998). What’s the use of lectures? San Francisco: Intellect Books.
Bloomfield, B. P., Latham, Y., & Vurdubakis, T. (2010). Bodies, technologies and action possibilities: When is an affordance? Sociology, 44(3), 415–433.
Bry, F., & Pohl, A. Y. S. (2017). Large class teaching with Backstage. Journal of Applied Research in Higher Education, 9(1), 105–128.
Cacchione, A. (2015). Creative use of Twitter for dynamic assessment in language learning classroom at the university. Interaction Design and Architecture Journal, 24,
145–161.
Cavanagh, M. (2011). Students’ experiences of active engagement through cooperative learning activities in lectures. Active Learning in Higher Education, 12(1), 23–33.
Cooper, R., Chenail, R. J., & Fleming, S. (2012). A grounded theory of inductive qualitative research education: Results of a meta-data analysis. Qualitative Report,
17(52).
Cooper, R., Fleischer, A., & Cotton, F. A. (2012). Building connections: An interpretative phenomenological analysis of qualitative research students’ learning ex-
periences. Qualitative Report, 17(17), 1.
Dysthe, O. (2006). Bakhtin og Pedagogikken. Norsk Pedagogisk Tidsskrift, 90(06), 456–469.
Ebner, M., Lienhardt, C., Rohs, M., & Meyer, I. (2010). Microblogs in Higher Education: A chance to facilitate informal and process-oriented learning? Computers &
Education, 55(1), 92–100.
Egelandsdal, K., & Krumsvik, R. J. (2017). Peer discussions and response technology: Short interventions, considerable gains. Nordic Journal of Digital Literacy,
12(01–02), 19–30.
Elavsky, C. M., Mislan, C., & Elavsky, S. (2011). When talking less is more: Exploring outcomes of Twitter usage in the large‐lecture hall. Learning, Media and
Technology, 36(3), 215–233.
Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., et al. (2014). Active learning increases student performance in science, engineering
and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415.
Friesen, N. (2011). The lecture as a transmedial pedagogical form: A historical analysis. Educational Researcher, 40(3), 95–102.
Gao, F., Luo, T., & Zhang, K. (2012). Tweeting for learning: A critical analysis of research on microblogging in education published in 2008–2011. British Journal of
Educational Technology, 43(5), 783–801.
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Harrington, C., & Zakrajsek, T. (2017). Dynamic lecturing: Research-based strategies to enhance lecture effectiveness. Stylus Publishing, LLC.
Jeong, Y. J., Ji, S., Lee, Y., Kwon, S., K. H, & Jeon, J. W. (2015). Smartphone response system using twitter to enable effective interaction and improve engagement in
large classrooms. IEEE Transactions on Education, 58(2), 98–103.
Kirschner, P. A., Martens, R. L., & Strijbos, J.-W. (2004). CSCL in higher education? A framework for designing multiple collaborative environments. In P. Dillenbourg,
J.-W. Strijbos, P. A. Kirschner, & R. L. Martens (Eds.). Computer-supported collaborative learning: Vol. 3. What we know about CSCL: And implementing it in Higher
education (pp. 3–29). Boston, MA: Kluwer Academic Publishers.
Krumsvik, R. (2012). Feedback clickers in plenary lectures: A new tool for formative assessment? In Transformative approaches to new technologies and student diversity in futures oriented classrooms (pp. 191–216). Dordrecht: Springer.
Krumsvik, R. J., & Ludvigsen, K. (2012). Formative E-assessment in plenary lectures. Nordic Journal of Digital Literacy, 7(01), 36–54.
Littleton, K., & Mercer, N. (2013). Interthinking: Putting talk to work. London: Routledge.
Ludvigsen, K., & Krumsvik, R. J. (2019). Behind the scenes: Unpacking student discussion in lectures (in review).
Ludvigsen, K., Krumsvik, R., & Furnes, B. (2015). Creating formative feedback spaces in large lectures. Computers & Education, 88, 48–63.
Mazur, E. (1997). Peer instruction (pp. 9–18). Upper Saddle River, NJ: Prentice Hall.
McQueen, H. A., & McMillan, C. (2018). Quectures: Personalised constructive learning in lectures. Active Learning in Higher Education. https://doi.org/10.1177/
1469787418760325.
Ness, I. J. (2016). The Room of Opportunity: Understanding how knowledge and ideas are constructed in multidisciplinary groups working with developing innovative ideas. PhD thesis. Norway: University of Bergen.
Ness, I., & Søreide, G. (2014). The room of opportunity: Understanding phases of creative knowledge processes in innovation. The Journal of Workplace Learning, 26(8),
545–560.
Neustifter, R., Kukkonen, T., Coulter, C., & Landry, S. (2016). Introducing backchannel technology into a large undergraduate course. Canadian Journal of Learning and
Technology, 42(1), 1–22.
Novak, G., & Patterson, E. (2010). An introduction to just-in-time-teaching. In S. Simkins, & M. Maier (Eds.). Just-in-time teaching: across the disciplines, across the
academy (pp. 3–24). Sterling, VA: Stylus Publishing.
Nygaard, L. (2015). Writing for scholars: A practical guide to making sense & being heard. London: Sage.
Pohl, A. (2015). Fostering awareness and collaboration in large-class lectures. Doctoral dissertation. Germany: Ludwig-Maximilian University of Munich.
Pohl, A., Gehlen-Baum, V., & Bry, F. (2012). Enhancing the digital backchannel Backstage on the basis of a formative user study. International Journal of Emerging
Technologies in Learning (iJET), 7(1), 33–41.
Rasmussen, I. (2016). Microblogging as partner(s) in teacher-student dialogues. In Educational technology and polycontextual bridging (pp. 63–82). Rotterdam, Netherlands: SensePublishers.
Rasmussen, I., & Hagen, Å. (2015). Facilitating students’ individual and collective knowledge construction through microblogs. International Journal of Educational
Research, 72, 149–161.
Roberts, D. (2017). Higher education lectures: From passive to active learning via imagery? Active Learning in Higher Education. https://doi.org/10.1177/
1469787417731198.
Ruismäki, H., Salomaa, R. L., & Ruokonen, I. (2015). Minerva Plaza: A new technology-rich learning environment. Procedia-Social and Behavioural Sciences, 171,
968–981.
Sandström, N., Eriksson, R., Lonka, K., & Nenonen, S. (2016). Usability and affordances for inquiry-based learning in a blended learning environment. Facilities, 34(7/
8), 433–449.
Scott, P. H., Mortimer, E. F., & Aguiar, O. G. (2006). The tension between authoritative and dialogic discourse: A fundamental characteristic of meaning-making
interactions in high school science lessons. Science Education, 90(4), 605–631.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education, 6(2), 118–131.
Tashakkori, A., & Teddlie, C. (2003). Handbook of mixed methods in social & behavioral research. Thousand Oaks, Calif: SAGE Publications.
Wegerif, R. (2013). Dialogic: Education for the internet age. London: Routledge.
Wegerif, R., & Yang, Y. (2011). Technology and dialogic space: Lessons from history and from the ‘Argunaut’ and ‘Metafora’ projects. Long Papers, 312–318.
Yates, K., Birks, M., Woods, C., & Hitchins, M. (2015). #Learning: The use of back channel technology in multi-campus nursing education. Nurse Education Today,
35(9), e65–e69.

Appendix A: The digital tools used in this project

STUDENT RESPONSE SYSTEM
• The lecturer can pose a multiple-choice question.
• Each student is given a device (‘clicker’) to respond (‘vote’).
• The answers are aggregated and projected onto the screen for follow-up and whole-class discussion.

ONLINE SHARED COLLABORATIVE WHITEBOARD
• The lecturer poses questions.
• Students can share ideas via their online devices and send them to a shared online screen projected during the lecture.
• Participants can post pictures as well as make drawings or create links between contributions.
• The contributions can be presented in different shapes and colours, and can be moved, edited or connected.
Appendix B: Search strings used to find research literature

Free search words and thesaurus terms (TH) were used. Based on several test searches, the following three search strings were used:

Context:
Lecture* OR "Lecture method"

Formative assessment and dialogue:
"peer* N3 discussion*" OR "class* N3 discussion*" OR "Discussion (Teaching Technique)" OR "Discussion" OR "nongraded student evaluation" OR "Feedback (Response)" OR feedback OR "Formative Evaluation" OR "Student Evaluation" OR "Self Evaluation (Individuals)" OR "self evaluation" OR "Alternative Assessment" OR "Informal Assessment" OR "Cooperative Learning" OR "accountable talk" OR "explor* talk*" OR dialog* OR "active learning" OR "self assessment" OR "Formative assessment" OR Metacognition OR "Interaction" OR "Student Participation"

Technology support:
Twitter OR "mobile technolog*" OR microblog* OR flinga OR samtavla OR talkwall OR "technology N2 education*" OR "Technology Uses in Education" OR "Educational Technology" OR "technology integration" OR "shar* screen*" OR backchannel* OR "online N2 whiteboard*" OR "electronic voting system*" OR "Response System*" OR "Audience Response Systems" OR clicker* OR collaboration OR "Cooperation"

The terms ‘student engagement’, ‘student involvement’ and ‘student participation’, which might have been relevant to include, were removed because they generated too few specific results. The search strings were combined, and I also searched on single terms.
Appendix C: Flingaboard: Is learning always a good thing?
Appendix D: Letter to the participants in Study 2 and Study 3

Request to participate in the research project ‘Technology enhanced peer discussions in plenary lectures’

In connection with my doctoral work at the Department of Education, University of Bergen, I am conducting a project on the use of feedback clickers in lectures. In this part of the project, we examine how students use group discussions in the lecture. Participation in the first part of the project means that you give oral consent to the digital recording of discussions you take part in during the lecture. You may withdraw at any time, or ask that the discussions you have taken part in be deleted.

Participation in the second part of the project means taking part in a focus-group interview. The interview will be recorded digitally. You may withdraw at any time, or ask that the discussions you have taken part in be deleted.

You will not be asked to state your name. Data will be stored in accordance with the guidelines of the Norwegian Social Science Data Services (NSD). Audio recordings and transcriptions will be kept in password-protected files or locked cabinets. The doctoral project is expected to be completed in April 2018. When the project has ended, all audio files will be deleted.

If you have any questions about this project, or would like to be informed of the results when they are available, please contact me by e-mail at [email protected] or by phone at 55 58 39 10.

Kind regards,

Kristine Ludvigsen
Appendix E: Interview guide, Study 3

Interview guide for the focus group

Experiences (10 minutes)
Opening question (a general question everyone must answer):
Briefly describe your background and age.

Introductory questions (each participant draws a mind map):
What is your experience with the use of discussions in the lecture? (support for learning in and outside the lecture)
What is your experience with the use of clickers in the lecture? (support for learning in and outside the lecture)
What is your experience with the use of Flinga in the lecture? (support for learning in and outside the lecture)

Key questions:
Discussions: What does it take for a discussion to be good? Why are discussions useful?
Flinga: What was it like to write answers together?
What did you learn from seeing the other students’ answers? Do you have examples of this?
Did you experience that Flinga helped you clear up misunderstandings? A better understanding of concepts? How?
Can the use of Flinga matter for how you as students talk about the subject outside the lecture? How? Why? Why not?
What significance did it have for your further work in the course?
Did you discuss differently when there were clicker questions and when there were Flinga questions?
What is most useful for making understanding in the course visible? Your own/your fellow students’? What is most useful for the lecturer, do you think?
How are the two ways of working (clickers/Flinga) different? Do they support learning in different ways? How? Do you have examples? (in and outside the lecture) Did it matter for how you worked with the course?

All-things-considered questions:
All in all, what is most important to you?
Can you summarise your experiences with these activities in the lecture?

Summary questions:
Have I understood it correctly?
Can it be interpreted this way?

Closing questions:
Is there anything we have forgotten?
Anything we have not talked about that we should have?
Appendix F: One of the mind maps

DISCUSS AND CLICK
• It contributed to good discussions with my friends. We told each other what we thought, and then voted.
• I used elimination and became aware that I knew more than I thought I did.
• It showed that I did understand.
• I appreciate that the lecture included several questions on the same topic.
• It started discussions and improved my understanding.
• It helped me to understand.
• It helped me read after the lecture.
• I could participate with my own ideas.

DISCUSS AND WRITE
• I used it for my exam.
• It was engaging.
• I remembered better the things we had discussed on Flinga.
• It started discussions.
• The lecturer reviewed what we had written.
• Nice to see other views.
• The lecturer elaborated on the posts; it was good.
• I liked that we could discuss with the person beside us first.
Appendix G: NSD confirmation letter
Appendix H: NSD: Confirmation letter for change of method
Doctoral Theses at The Faculty of Psychology,
University of Bergen

1980 Allen, Hugh M., Dr. philos. Parent-offspring interactions in willow grouse (Lagopus
L. Lagopus).

1981 Myhrer, Trond, Dr. philos. Behavioral Studies after selective disruption of
hippocampal inputs in albino rats.

1982 Svebak, Sven, Dr. philos. The significance of motivation for task-induced tonic
physiological changes.

1983 Myhre, Grete, Dr. philos. The Biopsychology of behavior in captive Willow
ptarmigan.

Eide, Rolf, Dr. philos. PSYCHOSOCIAL FACTORS AND INDICES OF


HEALTH RISKS. The relationship of psychosocial
conditions to subjective complaints, arterial blood
pressure, serum cholesterol, serum triglycerides and
urinary catecholamines in middle aged populations in
Western Norway.

Værnes, Ragnar J., Dr. philos. Neuropsychological effects of diving.

1984 Kolstad, Arnulf, Dr. philos. Til diskusjonen om sammenhengen mellom sosiale
forhold og psykiske strukturer. En epidemiologisk
undersøkelse blant barn og unge.

Løberg, Tor, Dr. philos. Neuropsychological assessment in alcohol dependence.

1985 Hellesnes, Tore, Dr. philos. Læring og problemløsning. En studie av den


perseptuelle analysens betydning for verbal læring.

Håland, Wenche, Dr. philos. Psykoterapi: relasjon, utviklingsprosess og effekt.

1986 Hagtvet, Knut A., Dr. philos. The construct of test anxiety: Conceptual and
methodological issues.

Jellestad, Finn K., Dr. philos. Effects of neuron specific amygdala lesions on fear-
motivated behavior in rats.

1987 Aarø, Leif E., Dr. philos. Health behaviour and sosioeconomic Status. A survey
among the adult population in Norway.

Underlid, Kjell, Dr. philos. Arbeidsløyse i psykososialt perspektiv.

Laberg, Jon C., Dr. philos. Expectancy and classical conditioning in alcoholics'
craving.

Vollmer, Fred, Dr. philos. Essays on explanation in psychology.

Ellertsen, Bjørn, Dr. philos. Migraine and tension headache: Psychophysiology,


personality and therapy.

1988 Kaufmann, Astrid, Dr. philos. Antisosial atferd hos ungdom. En studie av psykologiske
determinanter.

I
Mykletun, Reidar J., Dr. philos. Teacher stress: personality, work-load and health.

Havik, Odd E., Dr. philos. After the myocardial infarction: A medical and
psychological study with special emphasis on perceived
illness.

1989 Bråten, Stein, Dr. philos. Menneskedyaden. En teoretisk tese om sinnets


dialogiske natur med informasjons- og
utviklingspsykologiske implikasjoner sammenholdt med
utvalgte spedbarnsstudier.

Wold, Bente, Dr. psychol. Lifestyles and physical activity. A theoretical and
empirical analysis of socialization among children and
adolescents.
1990 Flaten, Magne A., Dr. psychol. The role of habituation and learning in reflex
modification.

1991 Alsaker, Françoise D., Global negative self-evaluations in early adolescence.


Dr. philos.
Kraft, Pål, Dr. philos. AIDS prevention in Norway. Empirical studies on
diffusion of knowledge, public opinion, and sexual
behaviour.
Endresen, Inger M., Dr. philos. Psychoimmuniological stress markers in working life.

Faleide, Asbjørn O., Dr. philos. Asthma and allergy in childhood. Psychosocial and
psychotherapeutic problems.

1992 Dalen, Knut, Dr. philos. Hemispheric asymmetry and the Dual-Task Paradigm:
An experimental approach.

Bø, Inge B., Dr. philos. Ungdoms sosiale økologi. En undersøkelse av 14-16
åringers sosiale nettverk.

Nivison, Mary E., Dr. philos. The relationship between noise as an experimental and
environmental stressor, physiological changes and
psychological factors.

Torgersen, Anne M., Dr. philos. Genetic and environmental influence on temperamental
behaviour. A longitudinal study of twins from infancy to
adolescence.

1993 Larsen, Svein, Dr. philos. Cultural background and problem drinking.

Nordhus, Inger Hilde, Dr. Family caregiving. A community psychological study with
philos. special emphasis on clinical interventions.

Thuen, Frode, Dr. psychol. Accident-related behaviour among children and young
adolescents: Prediction and prevention.

Solheim, Ragnar, Dr. philos. Spesifikke lærevansker. Diskrepanskriteriet anvendt i


seleksjonsmetodikk.

Johnsen, Bjørn Helge, Brain assymetry and facial emotional expressions:


Dr. psychol. Conditioning experiments.

1994 Tønnessen, Finn E., Dr. philos. The etiology of Dyslexia.

Kvale, Gerd, Dr. psychol. Psychological factors in anticipatory nausea and


vomiting in cancer chemotherapy.

II
Asbjørnsen, Arve E., Structural and dynamic factors in dichotic listening: An
Dr. psychol. interactional model.

Bru, Edvin, Dr. philos. The role of psychological factors in neck, shoulder and
low back pain among female hospitale staff.

Braathen, Eli T., Dr. psychol. Prediction of exellence and discontinuation in different
types of sport: The significance of motivation and EMG.

Johannessen, Birte F., Det flytende kjønnet. Om lederskap, politikk og identitet.


Dr. philos.

1995 Sam, David L., Dr. psychol. Acculturation of young immigrants in Norway: A
psychological and socio-cultural adaptation.

Bjaalid, Inger-Kristin, Dr. philos. Component processes in word recognition.

Martinsen, Øyvind, Dr. philos. Cognitive style and insight.

Nordby, Helge, Dr. philos. Processing of auditory deviant events: Mismatch


negativity of event-related brain potentials.

Raaheim, Arild, Dr. philos. Health perception and health behaviour, theoretical
considerations, empirical studies, and practical
implications.

Seltzer, Wencke J., Dr. philos. Studies of Psychocultural Approach to Families in


Therapy.

Brun, Wibecke, Dr. philos. Subjective conceptions of uncertainty and risk.

Aas, Henrik N., Dr. psychol. Alcohol expectancies and socialization:


Adolescents learning to drink.

Bjørkly, Stål, Dr. psychol. Diagnosis and prediction of intra-institutional


aggressive behaviour in psychotic patients

1996 Anderssen, Norman, Physical activity of young people in a health perspective:


Dr. psychol. Stability, change and social influences.

Sandal, Gro Mjeldheim, Coping in extreme environments: The role of personality.


Dr. psychol.
Strumse, Einar, Dr. philos. The psychology of aesthetics: explaining visual
preferences for agrarian landscapes in Western Norway.

Hestad, Knut, Dr. philos. Neuropsychological deficits in HIV-1 infection.

Lugoe, L.Wycliffe, Dr. philos. Prediction of Tanzanian students’ HIV risk and
preventive behaviours

Sandvik, B. Gunnhild, Fra distriktsjordmor til institusjonsjordmor. Fremveksten


Dr. philos. av en profesjon og en profesjonsutdanning

Lie, Gro Therese, Dr. psychol. The disease that dares not speak its name: Studies on
factors of importance for coping with HIV/AIDS in
Northern Tanzania

Øygard, Lisbet, Dr. philos. Health behaviors among young adults. A psychological
and sociological approach

Stormark, Kjell Morten, Emotional modulation of selective attention:


Dr. psychol. Experimental and clinical evidence.

III
Einarsen, Ståle, Dr. psychol. Bullying and harassment at work: epidemiological and
psychosocial aspects.

1997 Knivsberg, Ann-Mari, Dr. philos. Behavioural abnormalities and childhood


psychopathology: Urinary peptide patterns as a potential
tool in diagnosis and remediation.

Eide, Arne H., Dr. philos. Adolescent drug use in Zimbabwe. Cultural orientation in
a global-local perspective and use of psychoactive
substances among secondary school students.

Sørensen, Marit, Dr. philos. The psychology of initiating and maintaining exercise
and diet behaviour.

Skjæveland, Oddvar, Relationships between spatial-physical neighborhood


Dr. psychol. attributes and social relations among neighbors.

Zewdie, Teka, Dr. philos. Mother-child relational patterns in Ethiopia. Issues of


developmental theories and intervention programs.

Wilhelmsen, Britt Unni, Development and evaluation of two educational


Dr. philos. programmes designed to prevent alcohol use among
adolescents.

Manger, Terje, Dr. philos. Gender differences in mathematical achievement among


Norwegian elementary school students.

1998 Lindstrøm, Torill Christine, «Good Grief»: Adapting to Bereavement.


V Dr. philos.

Skogstad, Anders, Dr. philos. Effects of leadership behaviour on job satisfaction,


health and efficiency.

Haldorsen, Ellen M. Håland, Return to work in low back pain patients.


Dr. psychol.

Besemer, Susan P., Dr. philos. Creative Product Analysis: The Search for a Valid Model
for Understanding Creativity in Products.

H Winje, Dagfinn, Dr. psychol. Psychological adjustment after severe trauma. A


longitudinal study of adults’ and children’s posttraumatic
reactions and coping after the bus accident in
Måbødalen, Norway 1988.

Vosburg, Suzanne K., The effects of mood on creative problem solving.


Dr. philos.

Eriksen, Hege R., Dr. philos. Stress and coping: Does it really matter for subjective
health complaints?

Jakobsen, Reidar, Dr. psychol. Empiriske studier av kunnskap og holdninger om hiv/aids


og den normative seksuelle utvikling i ungdomsårene.

1999 Mikkelsen, Aslaug, Dr. philos. Effects of learning opportunities and learning climate on
V occupational health.

Samdal, Oddrun, Dr. philos. The school environment as a risk or resource for
students’ health-related behaviours and subjective well-
being.

Friestad, Christine, Dr. philos. Social psychological approaches to smoking.

Ekeland, Tor-Johan, Dr. philos. Meining som medisin. Ein analyse av placebofenomenet
og implikasjoner for terapi og terapeutiske teoriar.

IV
H Saban, Sara, Dr. psychol. Brain Asymmetry and Attention: Classical Conditioning
Experiments.

Carlsten, Carl Thomas, God lesing – God læring. En aksjonsrettet studie av


Dr. philos. undervisning i fagtekstlesing.

Dundas, Ingrid, Dr. psychol. Functional and dysfunctional closeness. Family


interaction and children’s adjustment.

Engen, Liv, Dr. philos. Kartlegging av leseferdighet på småskoletrinnet og


vurdering av faktorer som kan være av betydning for
optimal leseutvikling.

2000 Hovland, Ole Johan, Dr. philos. Transforming a self-preserving “alarm” reaction into a
V self-defeating emotional response: Toward an integrative
approach to anxiety as a human phenomenon.

Lillejord, Sølvi, Dr. philos. Handlingsrasjonalitet og spesialundervisning. En analyse


av aktørperspektiver.

Sandell, Ove, Dr. philos. Den varme kunnskapen.

Oftedal, Marit Petersen, Diagnostisering av ordavkodingsvansker: En


Dr. philos. prosessanalytisk tilnærmingsmåte.

H Sandbak, Tone, Dr. psychol. Alcohol consumption and preference in the rat: The
significance of individual differences and relationships to
stress pathology

Eid, Jarle, Dr. psychol. Early predictors of PTSD symptom reporting;


The significance of contextual and individual factors.

2001 Skinstad, Anne Helene, Substance dependence and borderline personality


V Dr. philos. disorders.

Binder, Per-Einar, Dr. psychol. Individet og den meningsbærende andre. En teoretisk


undersøkelse av de mellommenneskelige
forutsetningene for psykisk liv og utvikling med
utgangspunkt i Donald Winnicotts teori.

Roald, Ingvild K., Dr. philos. Building of concepts. A study of Physics concepts of
Norwegian deaf students.

H Fekadu, Zelalem W., Dr. philos. Predicting contraceptive use and intention among a
sample of adolescent girls. An application of the theory
of planned behaviour in Ethiopian context.

Melesse, Fantu, Dr. philos. The more intelligent and sensitive child (MISC)
mediational intervention in an Ethiopian context: An
evaluation study.

Råheim, Målfrid, Dr. philos. Kvinners kroppserfaring og livssammenheng. En


fenomenologisk – hermeneutisk studie av friske kvinner
og kvinner med kroniske muskelsmerter.

Engelsen, Birthe Kari, Measurement of the eating problem construct.


Dr. psychol.

Lau, Bjørn, Dr. philos. Weight and eating concerns in adolescence.

2002 Ihlebæk, Camilla, Dr. philos. Epidemiological studies of subjective health complaints.
V

V
Rosén, Gunnar O. R., The phantom limb experience. Models for understanding
Dr. philos. and treatment of pain with hypnosis.

Høines, Marit Johnsen, Fleksible språkrom. Matematikklæring som tekstutvikling.


Dr. philos.

Anthun, Roald Andor, School psychology service quality.


Dr. philos. Consumer appraisal, quality dimensions, and
collaborative improvement potential

Pallesen, Ståle, Dr. psychol. Insomnia in the elderly. Epidemiology, psychological


characteristics and treatment.

Midthassel, Unni Vere, Teacher involvement in school development activity. A


Dr. philos. study of teachers in Norwegian compulsory schools

Kallestad, Jan Helge, Dr. Teachers, schools and implementation of the Olweus
philos. Bullying Prevention Program.

H Ofte, Sonja Helgesen, Right-left discrimination in adults and children.


Dr. psychol.

Netland, Marit, Dr. psychol. Exposure to political violence. The need to estimate our
estimations.

Diseth, Åge, Dr. psychol. Approaches to learning: Validity and prediction of


academic performance.

Bjuland, Raymond, Dr. philos. Problem solving in geometry. Reasoning processes of


student teachers working in small groups: A dialogical
approach.

2003 Arefjord, Kjersti, Dr. psychol. After the myocardial infarction – the wives’ view. Short-
V and long-term adjustment in wives of myocardial
infarction patients.

Ingjaldsson, Jón Þorvaldur, Unconscious Processes and Vagal Activity in Alcohol


Dr. psychol. Dependency.

Holden, Børge, Dr. philos. Følger av atferdsanalytiske forklaringer for


atferdsanalysens tilnærming til utforming av behandling.

Holsen, Ingrid, Dr. philos. Depressed mood from adolescence to ’emerging


adulthood’. Course and longitudinal influences of body
image and parent-adolescent relationship.

Hammar, Åsa Karin, Major depression and cognitive dysfunction- An


Dr. psychol. experimental study of the cognitive effort hypothesis.

Sprugevica, Ieva, Dr. philos. The impact of enabling skills on early reading acquisition.

Gabrielsen, Egil, Dr. philos. LESE FOR LIVET. Lesekompetansen i den norske
voksenbefolkningen sett i lys av visjonen om en
enhetsskole.

H Hansen, Anita Lill, Dr. psychol. The influence of heart rate variability in the regulation of
attentional and memory processes.

Dyregrov, Kari, Dr. philos. The loss of child by suicide, SIDS, and accidents:
Consequences, needs and provisions of help.

2004 Torsheim, Torbjørn, Student role strain and subjective health complaints:
V Dr. psychol. Individual, contextual, and longitudinal perspectives.

VI
Haugland, Bente Storm Mowatt Parental alcohol abuse. Family functioning and child
Dr. psychol. adjustment.

Milde, Anne Marita, Dr. psychol. Ulcerative colitis and the role of stress. Animal studies of
psychobiological factors in relationship to experimentally
induced colitis.

Stornes, Tor, Dr. philos. Socio-moral behaviour in sport. An investigation of


perceptions of sportspersonship in handball related to
important factors of socio-moral influence.

Mæhle, Magne, Dr. philos. Re-inventing the child in family therapy: An investigation
of the relevance and applicability of theory and research
in child development for family therapy involving children.

Kobbeltvedt, Therese, Risk and feelings: A field approach.


Dr. psychol.
2004 Thomsen, Tormod, Dr. psychol. Localization of attention in the brain.
H
Løberg, Else-Marie, Functional laterality and attention modulation in
Dr. psychol. schizophrenia: Effects of clinical variables.

Kyrkjebø, Jane Mikkelsen, Learning to improve: Integrating continuous quality


Dr. philos. improvement learning into nursing education.

Laumann, Karin, Dr. psychol. Restorative and stress-reducing effects of natural


environments: Experiencal, behavioural and
cardiovascular indices.

Holgersen, Helge, PhD Mellom oss - Essay i relasjonell psykoanalyse.

2005 Hetland, Hilde, Dr. psychol. Leading to the extraordinary?


V Antecedents and outcomes of transformational
leadership.

Iversen, Anette Christine, Social differences in health behaviour: the motivational


Dr. philos. role of perceived control and coping.

2005 Mathisen, Gro Ellen, PhD Climates for creativity and innovation: Definitions,
H measurement, predictors and consequences.

Sævi, Tone, Dr. philos. Seeing disability pedagogically – The lived experience of
disability in the pedagogical encounter.

Wiium, Nora, PhD Intrapersonal factors, family and school norms:


combined and interactive influence on adolescent
smoking behaviour.

Kanagaratnam, Pushpa, PhD Subjective and objective correlates of Posttraumatic


Stress in immigrants/refugees exposed to political
violence.

Larsen, Torill M. B. , PhD Evaluating principals` and teachers` implementation of


Second Step. A case study of four Norwegian primary
schools.

Bancila, Delia, PhD Psychosocial stress and distress among Romanian


adolescents and adults.

2006 Hillestad, Torgeir Martin, Normalitet og avvik. Forutsetninger for et objektivt


V Dr. philos. psykopatologisk avviksbegrep. En psykologisk, sosial,
erkjennelsesteoretisk og teorihistorisk framstilling.

VII
Nordanger, Dag Øystein, Psychosocial discourses and responses to political
Dr. psychol. violence in post-war Tigray, Ethiopia.

Rimol, Lars Morten, PhD Behavioral and fMRI studies of auditory laterality and
speech sound processing.

Krumsvik, Rune Johan, ICT in the school. ICT-initiated school development in


Dr. philos. lower secondary school.

Norman, Elisabeth, Dr. psychol. Gut feelings and unconscious thought:


An exploration of fringe consiousness in implicit
cognition.

Israel, K Pravin, Dr. psychol. Parent involvement in the mental health care of children
and adolescents. Emperical studies from clinical care
setting.

Glasø, Lars, PhD Affects and emotional regulation in leader-subordinate


relationships.

Knutsen, Ketil, Dr. philos. HISTORIER UNGDOM LEVER – En studie av hvordan


ungdommer bruker historie for å gjøre livet meningsfullt.

Matthiesen, Stig Berge, PhD Bullying at work. Antecedents and outcomes.

2006 Gramstad, Arne, PhD Neuropsychological assessment of cognitive and


H emotional functioning in patients with epilepsy.

Bendixen, Mons, PhD Antisocial behaviour in early adolescence:


Methodological and substantive issues.

Mrumbi, Khalifa Maulid, PhD Parental illness and loss to HIV/AIDS as experienced by
AIDS orphans aged between 12-17 years from Temeke
District, Dar es Salaam, Tanzania: A study of the
children’s psychosocial health and coping responses.

Hetland, Jørn, Dr. psychol. The nature of subjective health complaints in


adolescence: Dimensionality, stability, and psychosocial
predictors

Kakoko, Deodatus Conatus Voluntary HIV counselling and testing service uptake
Vitalis, PhD among primary school teachers in Mwanza, Tanzania:
assessment of socio-demographic, psychosocial and
socio-cognitive aspects

Mykletun, Arnstein, Dr. psychol. Mortality and work-related disability as long-term


consequences of anxiety and depression: Historical
cohort designs based on the HUNT-2 study

Sivertsen, Børge, PhD Insomnia in older adults. Consequences, assessment


and treatment.

2007 Singhammer, John, Dr. philos. Social conditions from before birth to early adulthood –
V the influence on health and health behaviour

Janvin, Carmen Ani Cristea, Cognitive impairment in patients with Parkinson’s


PhD disease: profiles and implications for prognosis

Braarud, Hanne Cecilie, Infant regulation of distress: A longitudinal study of


Dr.psychol. transactions between mothers and infants

Tveito, Torill Helene, PhD Sick Leave and Subjective Health Complaints

VIII
Magnussen, Liv Heide, PhD Returning disability pensioners with back pain to work

Thuen, Elin Marie, Dr.philos. Learning environment, students’ coping styles and emotional and behavioural problems. A study of Norwegian secondary school students.

Solberg, Ole Asbjørn, PhD Peacekeeping warriors – A longitudinal study of Norwegian peacekeepers in Kosovo

2007 H Søreide, Gunn Elisabeth, Dr.philos. Narrative construction of teacher identity

Svensen, Erling, PhD WORK & HEALTH. Cognitive Activation Theory of Stress applied in an organisational setting.

Øverland, Simon Nygaard, PhD Mental health and impairment in disability benefits. Studies applying linkages between health surveys and administrative registries.

Eichele, Tom, PhD Electrophysiological and Hemodynamic Correlates of Expectancy in Target Processing

Børhaug, Kjetil, Dr.philos. Oppseding til demokrati. Ein studie av politisk oppseding i norsk skule.

Eikeland, Thorleif, Dr.philos. Om å vokse opp på barnehjem og på sykehus. En undersøkelse av barnehjemsbarns opplevelser på barnehjem sammenholdt med sanatoriebarns beskrivelse av langvarige sykehusopphold – og et forsøk på forklaring.

Wadel, Carl Cato, Dr.philos. Medarbeidersamhandling og medarbeiderledelse i en lagbasert organisasjon

Vinje, Hege Forbech, PhD Thriving despite adversity: Job engagement and self-care among community nurses

Noort, Maurits van den, PhD Working memory capacity and foreign language acquisition

2008 V Breivik, Kyrre, Dr.psychol. The Adjustment of Children and Adolescents in Different Post-Divorce Family Structures. A Norwegian Study of Risks and Mechanisms.

Johnsen, Grethe E., PhD Memory impairment in patients with posttraumatic stress disorder

Sætrevik, Bjørn, PhD Cognitive Control in Auditory Processing

Carvalhosa, Susana Fonseca, PhD Prevention of bullying in schools: an ecological model

2008 H Brønnick, Kolbjørn Selvåg Attentional dysfunction in dementia associated with Parkinson’s disease.

Posserud, Maj-Britt Rocio Epidemiology of autism spectrum disorders

Haug, Ellen Multilevel correlates of physical activity in the school setting

Skjerve, Arvid Assessing mild dementia – a study of brief cognitive tests.

Kjønniksen, Lise The association between adolescent experiences in physical activity and leisure time physical activity in adulthood: a ten year longitudinal study

Gundersen, Hilde The effects of alcohol and expectancy on brain function

Omvik, Siri Insomnia – a night and day problem

2009 V Molde, Helge Pathological gambling: prevalence, mechanisms and treatment outcome.

Foss, Else Den omsorgsfulle væremåte. En studie av voksnes væremåte i forhold til barn i barnehagen.

Westrheim, Kariane Education in a Political Context: A study of Knowledge Processes and Learning Sites in the PKK.

Wehling, Eike Cognitive and olfactory changes in aging

Wangberg, Silje C. Internet based interventions to support health behaviours: The role of self-efficacy.

Nielsen, Morten B. Methodological issues in research on workplace bullying. Operationalisations, measurements and samples.

Sandu, Anca Larisa MRI measures of brain volume and cortical complexity in clinical groups and during development.

Guribye, Eugene Refugees and mental health interventions

Sørensen, Lin Emotional problems in inattentive children – effects on cognitive control functions.

Tjomsland, Hege E. Health promotion with teachers. Evaluation of the Norwegian Network of Health Promoting Schools: Quantitative and qualitative analyses of predisposing, reinforcing and enabling conditions related to teacher participation and program sustainability.

Helleve, Ingrid Productive interactions in ICT supported communities of learners

2009 H Skorpen, Aina, and Øye, Christine Dagliglivet i en psykiatrisk institusjon: En analyse av miljøterapeutiske praksiser

Andreassen, Cecilie Schou WORKAHOLISM – Antecedents and Outcomes

Stang, Ingun Being in the same boat: An empowerment intervention in breast cancer self-help groups

Sequeira, Sarah Dorothee Dos Santos The effects of background noise on asymmetrical speech perception

Kleiven, Jo, dr.philos. The Lillehammer scales: Measuring common motives for vacation and leisure behavior

Jónsdóttir, Guðrún Dubito ergo sum? Ni jenter møter naturfaglig kunnskap.

Hove, Oddbjørn Mental health disorders in adults with intellectual disabilities - Methods of assessment and prevalence of mental health disorders and problem behaviour

Wageningen, Heidi Karin van The role of glutamate on brain function

Bjørkvik, Jofrid God nok? Selvaktelse og interpersonlig fungering hos pasienter innen psykisk helsevern: Forholdet til diagnoser, symptomer og behandlingsutbytte

Andersson, Martin A study of attention control in children and elderly using a forced-attention dichotic listening paradigm

Almås, Aslaug Grov Teachers in the Digital Network Society: Visions and Realities. A study of teachers’ experiences with the use of ICT in teaching and learning.

Ulvik, Marit Lærerutdanning som danning? Tre stemmer i diskusjonen

2010 V Skår, Randi Læringsprosesser i sykepleieres profesjonsutøvelse. En studie av sykepleieres læringserfaringer.

Roald, Knut Kvalitetsvurdering som organisasjonslæring mellom skole og skoleeigar

Lunde, Linn-Heidi Chronic pain in older adults. Consequences, assessment and treatment.

Danielsen, Anne Grete Perceived psychosocial support, students’ self-reported academic initiative and perceived life satisfaction

Hysing, Mari Mental health in children with chronic illness

Olsen, Olav Kjellevold Are good leaders moral leaders? The relationship between effective military operational leadership and morals

Riese, Hanne Friendship and learning. Entrepreneurship education through mini-enterprises.

Holthe, Asle Evaluating the implementation of the Norwegian guidelines for healthy school meals: A case study involving three secondary schools

2010 H Hauge, Lars Johan Environmental antecedents of workplace bullying: A multi-design approach

Bjørkelo, Brita Whistleblowing at work: Antecedents and consequences

Reme, Silje Endresen Common Complaints – Common Cure? Psychiatric comorbidity and predictors of treatment outcome in low back pain and irritable bowel syndrome

Helland, Wenche Andersen Communication difficulties in children identified with psychiatric problems

Beneventi, Harald Neuronal correlates of working memory in dyslexia

Thygesen, Elin Subjective health and coping in care-dependent old persons living at home

Aanes, Mette Marthinussen Poor social relationships as a threat to belongingness needs. Interpersonal stress and subjective health complaints: Mediating and moderating factors.

Anker, Morten Gustav Client directed outcome informed couple therapy

Bull, Torill Combining employment and child care: The subjective well-being of single women in Scandinavia and in Southern Europe

Viig, Nina Grieg Tilrettelegging for læreres deltakelse i helsefremmende arbeid. En kvalitativ og kvantitativ analyse av sammenhengen mellom organisatoriske forhold og læreres deltakelse i utvikling og implementering av Europeisk Nettverk av Helsefremmende Skoler i Norge

Wolff, Katharina To know or not to know? Attitudes towards receiving genetic information among patients and the general public.

Ogden, Terje, dr.philos. Familiebasert behandling av alvorlige atferdsproblemer blant barn og ungdom. Evaluering og implementering av evidensbaserte behandlingsprogrammer i Norge.

Solberg, Mona Elin Self-reported bullying and victimisation at school: Prevalence, overlap and psychosocial adjustment.

2011 V Bye, Hege Høivik Self-presentation in job interviews. Individual and cultural differences in applicant self-presentation during job interviews and hiring managers’ evaluation

Notelaers, Guy Workplace bullying. A risk control perspective.

Moltu, Christian Being a therapist in difficult therapeutic impasses. A hermeneutic phenomenological analysis of skilled psychotherapists’ experiences, needs, and strategies in difficult therapies ending well.

Myrseth, Helga Pathological Gambling - Treatment and Personality Factors

Schanche, Elisabeth From self-criticism to self-compassion. An empirical investigation of hypothesized change processes in the Affect Phobia Treatment Model of short-term dynamic psychotherapy for patients with Cluster C personality disorders.

Våpenstad, Eystein Victor, dr.philos. Det tempererte nærvær. En teoretisk undersøkelse av psykoterapeutens subjektivitet i psykoanalyse og psykoanalytisk psykoterapi.

Haukebø, Kristin Cognitive, behavioral and neural correlates of dental and intra-oral injection phobia. Results from one treatment and one fMRI study of randomized, controlled design.

Harris, Anette Adaptation and health in extreme and isolated environments. From 78°N to 75°S.

Bjørknes, Ragnhild Parent Management Training-Oregon Model: intervention effects on maternal practice and child behavior in ethnic minority families

Mamen, Asgeir Aspects of using physical training in patients with substance dependence and additional mental distress

Espevik, Roar Expert teams: Do shared mental models of team members make a difference

Haara, Frode Olav Unveiling teachers’ reasons for choosing practical activities in mathematics teaching

2011 H Hauge, Hans Abraham How can employee empowerment be made conducive to both employee health and organisation performance? An empirical investigation of a tailor-made approach to organisation learning in a municipal public service organisation.

Melkevik, Ole Rogstad Screen-based sedentary behaviours: pastimes for the poor, inactive and overweight? A cross-national survey of children and adolescents in 39 countries.

Vøllestad, Jon Mindfulness-based treatment for anxiety disorders. A quantitative review of the evidence, results from a randomized controlled trial, and a qualitative exploration of patient experiences.

Tolo, Astrid Hvordan blir lærerkompetanse konstruert? En kvalitativ studie av PPU-studenters kunnskapsutvikling.

Saus, Evelyn-Rose Training effectiveness: Situation awareness training in simulators

Nordgreen, Tine Internet-based self-help for social anxiety disorder and panic disorder. Factors associated with effect and use of self-help.

Munkvold, Linda Helen Oppositional Defiant Disorder: Informant discrepancies, gender differences, co-occurring mental health problems and neurocognitive function.

Christiansen, Øivin Når barn plasseres utenfor hjemmet: beslutninger, forløp og relasjoner. Under barnevernets (ved)tak.

Brunborg, Geir Scott Conditionability and Reinforcement Sensitivity in Gambling Behaviour

Hystad, Sigurd William Measuring Psychological Resiliency: Validation of an Adapted Norwegian Hardiness Scale

2012 V Roness, Dag Hvorfor bli lærer? Motivasjon for utdanning og utøving.

Fjermestad, Krister Westlye The therapeutic alliance in cognitive behavioural therapy for youth anxiety disorders

Jenssen, Eirik Sørnes Tilpasset opplæring i norsk skole: politikeres, skolelederes og læreres handlingsvalg

Saksvik-Lehouillier, Ingvild Shift work tolerance and adaptation to shift work among offshore workers and nurses

Johansen, Venke Frederike Når det intime blir offentlig. Om kvinners åpenhet om brystkreft og om markedsføring av brystkreftsaken.

Herheim, Rune Pupils collaborating in pairs at a computer in mathematics learning: investigating verbal communication patterns and qualities

Vie, Tina Løkke Cognitive appraisal, emotions and subjective health complaints among victims of workplace bullying: A stress-theoretical approach

Jones, Lise Øen Effects of reading skills, spelling skills and accompanying efficacy beliefs on participation in education. A study in Norwegian prisons.

2012 H Danielsen, Yngvild Sørebø Childhood obesity – characteristics and treatment. Psychological perspectives.

Horverak, Jøri Gytre Sense or sensibility in hiring processes. Interviewee and interviewer characteristics as antecedents of immigrant applicants’ employment probabilities. An experimental approach.

Jøsendal, Ola Development and evaluation of BE smokeFREE, a school-based smoking prevention program

Osnes, Berge Temporal and Posterior Frontal Involvement in Auditory Speech Perception

Drageset, Sigrunn Psychological distress, coping and social support in the diagnostic and preoperative phase of breast cancer

Aasland, Merethe Schanke Destructive leadership: Conceptualization, measurement, prevalence and outcomes

Bakibinga, Pauline The experience of job engagement and self-care among Ugandan nurses and midwives

Skogen, Jens Christoffer Foetal and early origins of old age health. Linkage between birth records and the old age cohort of the Hordaland Health Study (HUSK)

Leversen, Ingrid Adolescents’ leisure activity participation and their life satisfaction: The role of demographic characteristics and psychological processes

Hanss, Daniel Explaining sustainable consumption: Findings from cross-sectional and intervention approaches

Rød, Per Arne Barn i klem mellom foreldrekonflikter og samfunnsmessig beskyttelse

2013 V Mentzoni, Rune Aune Structural Characteristics in Gambling

Knudsen, Ann Kristin Long-term sickness absence and disability pension award as consequences of common mental disorders. Epidemiological studies using a population-based health survey and official ill health benefit registries.

Strand, Mari Emotional information processing in recurrent MDD

Veseth, Marius Recovery in bipolar disorder. A reflexive-collaborative exploration of the lived experiences of healing and growth when battling a severe mental illness

Mæland, Silje Sick leave for patients with severe subjective health complaints. Challenges in general practice.

Mjaaland, Thera At the frontiers of change? Women and girls’ pursuit of education in north-western Tigray, Ethiopia

Odéen, Magnus Coping at work. The role of knowledge and coping expectancies in health and sick leave.

Hynninen, Kia Minna Johanna Anxiety, depression and sleep disturbance in chronic obstructive pulmonary disease (COPD). Associations, prevalence and effect of psychological treatment.

Flo, Elisabeth Sleep and health in shift working nurses

Aasen, Elin Margrethe From paternalism to patient participation? The older patients undergoing hemodialysis, their next of kin and the nurses: a discursive perspective on perception of patient participation in dialysis units

Ekornås, Belinda Emotional and Behavioural Problems in Children: Self-perception, peer relationships, and motor abilities

Corbin, J. Hope North-South Partnerships for Health: Key Factors for Partnership Success from the Perspective of the KIWAKKUKI

Birkeland, Marianne Skogbrott Development of global self-esteem: The transition from adolescence to adulthood

2013 H Gianella-Malca, Camila Challenges in Implementing the Colombian Constitutional Court’s Health-Care System Ruling of 2008

Hovland, Anders Panic disorder – Treatment outcomes and psychophysiological concomitants

Mortensen, Øystein The transition to parenthood – Couple relationships put to the test

Årdal, Guro Major Depressive Disorder – a Ten Year Follow-up Study. Inhibition, Information Processing and Health Related Quality of Life

Johansen, Rino Bandlitz The impact of military identity on performance in the Norwegian armed forces

Bøe, Tormod Socioeconomic Status and Mental Health in Children and Adolescents

2014 V Nordmo, Ivar Gjennom nåløyet – studenters læringserfaringer i psykologutdanningen

Dovran, Anders Childhood Trauma and Mental Health Problems in Adult Life

Hegelstad, Wenche ten Velden Early Detection and Intervention in Psychosis: A Long-Term Perspective

Urheim, Ragnar Forståelse av pasientaggresjon og forklaringer på nedgang i voldsrate ved Regional sikkerhetsavdeling, Sandviken sykehus

Kinn, Liv Grethe Round-Trips to Work. Qualitative studies of how persons with severe mental illness experience work integration.

Rød, Anne Marie Kinn Consequences of social defeat stress for behaviour and sleep. Short-term and long-term assessments in rats.

Nygård, Merethe Schizophrenia – Cognitive Function, Brain Abnormalities, and Cannabis Use

Tjora, Tore Smoking from adolescence through adulthood: the role of family, friends, depression and socioeconomic status. Predictors of smoking from age 13 to 30 in the “The Norwegian Longitudinal Health Behaviour Study” (NLHB)

Vangsnes, Vigdis The Dramaturgy and Didactics of Computer Gaming. A Study of a Medium in the Educational Context of Kindergartens.

Nordahl, Kristin Berg Early Father-Child Interaction in a Father-Friendly Context: Gender Differences, Child Outcomes, and Protective Factors related to Fathers’ Parenting Behaviors with One-year-olds

2014 H Sandvik, Asle Makoto Psychopathy – the heterogeneity of the construct

Skotheim, Siv Maternal emotional distress and early mother-infant interaction: Psychological, social and nutritional contributions

Halleland, Helene Barone Executive Functioning in adult Attention Deficit Hyperactivity Disorder (ADHD). From basic mechanisms to functional outcome.

Halvorsen, Kirsti Vindal Partnerskap i lærerutdanning, sett fra et økologisk perspektiv

Solbue, Vibeke Dialogen som visker ut kategorier. En studie av hvilke erfaringer innvandrerungdommer og norskfødte med innvandrerforeldre har med videregående skole. Hva forteller ungdommenes erfaringer om videregående skoles håndtering av etniske ulikheter?

Kvalevaag, Anne Lise Fathers’ mental health and child development. The predictive value of fathers’ psychological distress during pregnancy for the social, emotional and behavioural development of their children

Sandal, Ann Karin Ungdom og utdanningsval. Om elevar sine opplevingar av val og overgangsprosessar.

Haug, Thomas Predictors and moderators of treatment outcome from high- and low-intensity cognitive behavioral therapy for anxiety disorders. Association between patient and process factors, and the outcome from guided self-help, stepped care, and face-to-face cognitive behavioral therapy.

Sjølie, Hege Experiences of Members of a Crisis Resolution Home Treatment Team. Personal history, professional role and emotional support in a CRHT team.

Falkenberg, Liv Eggset Neuronal underpinnings of healthy and dysfunctional cognitive control

Mrdalj, Jelena The early life condition. Importance for sleep, circadian rhythmicity, behaviour and response to later life challenges

Hesjedal, Elisabeth Tverrprofesjonelt samarbeid mellom skule og barnevern: Kva kan støtte utsette barn og unge?

2015 V Hauken, May Aasebø «The cancer treatment was only half the work!» A Mixed-Method Study of Rehabilitation among Young Adult Cancer Survivors

Ryland, Hilde Katrin Social functioning and mental health in children: the influence of chronic illness and intellectual function

Rønsen, Anne Kristin Vurdering som profesjonskompetanse. Refleksjonsbasert utvikling av læreres kompetanse i formativ vurdering

Hoff, Helge Andreas Thinking about Symptoms of Psychopathy in Norway: Content Validation of the Comprehensive Assessment of Psychopathic Personality (CAPP) Model in a Norwegian Setting

Schmid, Marit Therese Executive Functioning in recurrent- and first episode Major Depressive Disorder. Longitudinal studies

Sand, Liv Body Image Distortion and Eating Disturbances in Children and Adolescents

Matanda, Dennis Juma Child physical growth and care practices in Kenya: Evidence from Demographic and Health Surveys

Amugsi, Dickson Abanimi Child care practices, resources for care, and nutritional outcomes in Ghana: Findings from Demographic and Health Surveys

Jakobsen, Hilde The good beating: Social norms supporting men’s partner violence in Tanzania

Sagoe, Dominic Nonmedical anabolic-androgenic steroid use: Prevalence, attitudes, and social perception

Eide, Helene Marie Kjærgård Narrating the relationship between leadership and learning outcomes. A study of public narratives in the Norwegian educational sector.

2015 H Wubs, Annegreet Gera Intimate partner violence among adolescents in South Africa and Tanzania

Hjelmervik, Helene Susanne Sex and sex-hormonal effects on brain organization of fronto-parietal networks

Dahl, Berit Misund The meaning of professional identity in public health nursing

Røykenes, Kari Testangst hos sykepleierstudenter: «Alternativ behandling»

Bless, Josef Johann The smartphone as a research tool in psychology. Assessment of language lateralization and training of auditory attention.

Løvvik, Camilla Margrethe Sigvaldsen Common mental disorders and work participation – the role of return-to-work expectations

Lehmann, Stine Mental Disorders in Foster Children: A Study of Prevalence, Comorbidity, and Risk Factors

Knapstad, Marit Psychological factors in long-term sickness absence: the role of shame and social support. Epidemiological studies based on the Health Assets Project.

2016 V Kvestad, Ingrid Biological risks and neurodevelopment in young North Indian children

Sælør, Knut Tore Hinderløyper, halmstrå og hengende snører. En kvalitativ studie av håp innenfor psykisk helse- og rusfeltet.

Mellingen, Sonja Alkoholbruk, partilfredshet og samlivsstatus. Før, inn i, og etter svangerskapet – korrelater eller konsekvenser?

Thun, Eirunn Shift work: negative consequences and protective factors

Hilt, Line Torbjørnsen The borderlands of educational inclusion. Analyses of inclusion and exclusion processes for minority language students

Havnen, Audun Treatment of obsessive-compulsive disorder and the importance of assessing clinical effectiveness

Slåtten, Hilde Gay-related name-calling among young adolescents. Exploring the importance of the context.

Ree, Eline Staying at work. The role of expectancies and beliefs in health and workplace interventions.

Morken, Frøydis Reading and writing processing in dyslexia

2016 H Løvoll, Helga Synnevåg Inside the outdoor experience. On the distinction between pleasant and interesting feelings and their implication in the motivational process.

Hjeltnes, Aslak Facing social fears: An investigation of mindfulness-based stress reduction for young adults with social anxiety disorder

Øyeflaten, Irene Larsen Long-term sick leave and work rehabilitation. Prognostic factors for return to work.

Henriksen, Roger Ekeberg Social relationships, stress and infection risk in mother and child

Johnsen, Iren «Only a friend» - The bereavement process of young adults who have lost a friend to a traumatic death. A mixed methods study.

Helle, Siri Cannabis use in non-affective psychoses: Relationship to age at onset, cognitive functioning and social cognition

Glambek, Mats Workplace bullying and expulsion in working life. A representative study addressing prospective associations and explanatory conditions.

Oanes, Camilla Jensen Tilbakemelding i terapi. På hvilke måter opplever terapeuter at tilbakemeldingsprosedyrer kan virke inn på terapeutiske praksiser?

Reknes, Iselin Exposure to workplace bullying among nurses: Health outcomes and individual coping

Chimhutu, Victor Results-Based Financing (RBF) in the health sector of a low-income country. From agenda setting to implementation: The case of Tanzania

Ness, Ingunn Johanne The Room of Opportunity. Understanding how knowledge and ideas are constructed in multidisciplinary groups working with developing innovative ideas.

Hollekim, Ragnhild Contemporary discourses on children and parenting in Norway. An empirical study based on two cases.

Doran, Rouven Eco-friendly travelling: The relevance of perceived norms and social comparison

2017 V Katisi, Masego The power of context in health partnerships: Exploring synergy and antagony between external and internal ideologies in implementing Safe Male Circumcision (SMC) for HIV prevention in Botswana
Jamaludin, Nor Lelawati Binti The “why” and “how” of International Students’ Ambassadorship Roles in International Education

Berthelsen, Mona Effects of shift work and psychological and social work factors on mental distress. Studies of onshore/offshore workers and nurses in Norway.

Krane, Vibeke Lærer-elev-relasjoner, elevers psykiske helse og frafall i videregående skole – en eksplorerende studie om samarbeid og den store betydningen av de små ting

Søvik, Margaret Ljosnes Evaluating the implementation of the Empowering Coaching™ program in Norway

Tonheim, Milfrid A troublesome transition: Social reintegration of girl soldiers returning ‘home’

Senneseth, Mette Improving social network support for partners facing spousal cancer while caring for minors. A randomized controlled trial.

Urke, Helga Bjørnøy Child health and child care of very young children in Bolivia, Colombia and Peru.

Bakhturidze, George Public Participation in Tobacco Control Policy-making in Georgia

Fismen, Anne-Siri Adolescent eating habits. Trends and socio-economic status.

2017 H Hagatun, Susanne Internet-based cognitive-behavioural therapy for insomnia. A randomised controlled trial in Norway.

Eichele, Heike Electrophysiological Correlates of Performance Monitoring in Children with Tourette Syndrome. A developmental perspective.

Risan, Ulf Patrick Accommodating trauma in police interviews. An exploration of rapport in investigative interviews of traumatized victims.

Sandhåland, Hilde Safety on board offshore vessels: A study of shipboard factors and situation awareness

Blågestad, Tone Fidje Less pain – better sleep and mood? Interrelatedness of pain, sleep and mood in total hip arthroplasty patients

Kronstad, Morten Frå skulebenk til deadlines. Korleis nettjournalistar og journaliststudentar lærer, og korleis dei utviklar journalistfagleg kunnskap

Vedaa, Øystein Shift work: The importance of sufficient time for rest between shifts.

Steine, Iris Mulders Predictors of symptoms outcomes among adult survivors of sexual abuse: The role of abuse characteristics, cumulative childhood maltreatment, genetic variants, and perceived social support.

Høgheim, Sigve Making math interesting: An experimental study of interventions to encourage interest in mathematics

2018 V Brevik, Erlend Joramo Adult Attention Deficit Hyperactivity Disorder. Beyond the Core Symptoms of the Diagnostic and Statistical Manual of Mental Disorders.

Erevik, Eilin Kristine User-generated alcohol-related content on social media: Determinants and relation to offline alcohol use

Hagen, Egon Cognitive and psychological functioning in patients with substance use disorder; from initial assessment to one-year recovery

Adólfsdóttir, Steinunn Subcomponents of executive functions: Effects of age and brain maturation

Brattabø, Ingfrid Vaksdal Detection of child maltreatment, the role of dental health personnel – A national cross-sectional study among public dental health personnel in Norway

Fylkesnes, Marte Knag Frykt, forhandlinger og deltakelse. Ungdommer og foreldre med etnisk minoritetsbakgrunn i møte med den norske barnevernstjenesten.

Stiegler, Jan Reidar Processing emotions in emotion-focused therapy. Exploring the impact of the two-chair dialogue intervention.

Egelandsdal, Kjetil Clickers and Formative Feedback at University Lectures. Exploring students and teachers’ reception and use of feedback from clicker interventions.

Torjussen, Lars Petter Storm Foreningen av visdom og veltalenhet – utkast til en universitetsdidaktikk gjennom en kritikk og videreføring av Skjervheims pedagogiske filosofi på bakgrunn av Arendt og Foucault. Eller hvorfor menneskelivet er mer som å spille fløyte enn å bygge et hus.

Selvik, Sabreen A childhood at refuges. Children with multiple relocations at refuges for abused women.

2018 H Leino, Tony Mathias Structural game characteristics, game features, financial outcomes and gambling behaviour

Raknes, Solfrid Anxious Adolescents: Prevalence, Correlates, and Preventive Cognitive Behavioural Interventions

Morken, Katharina Teresa Enehaug Mentalization-based treatment of female patients with severe personality disorder and substance use disorder

Braatveit, Kirsten Johanne Intellectual disability among in-patients with substance use disorders

Barua, Padmaja Unequal Interdependencies: Exploring Power and Agency in Domestic Work Relations in Contemporary India

Darkwah, Ernest Caring for “parentless” children. An exploration of work-related experiences of caregivers in children’s homes in Ghana.

Valdersnes, Kjersti Bergheim Safety Climate perceptions in High Reliability Organizations – the role of Psychological Capital

2019 V Kongsgården, Petter Vurderingspraksiser i teknologirike læringsmiljøer. En undersøkelse av læreres vurderingspraksiser i teknologirike læringsmiljøer og implikasjoner på elevenes medvirkning i egen læringsprosess.

Vikene, Kjetil Complexity in Rhythm and Parkinson’s disease: Cognitive and Neuronal Correlates

Heradstveit, Ove Alcohol- and drug use among adolescents. School-related problems, childhood mental health problems, and psychiatric diagnoses.

Riise, Eili Nygard Concentrated exposure and response prevention for obsessive-compulsive disorder in adolescents: the Bergen 4-day treatment

Vik, Alexandra Imaging the Aging Brain: From Morphometry to Functional Connectivity

Krossbakken, Elfrid Personal and Contextual Factors Influencing Gaming Behaviour. Risk Factors and Prevention of Video Game Addiction.

Solholm, Roar Foreldrenes status og rolle i familie- og nærmiljøbaserte intervensjoner for barn med atferdsvansker

Baldomir, Andrea Margarita Children at Risk and Mothering Networks in Buenos Aires, Argentina: Analyses of Socialization and Law-Abiding Practices in Public Early Childhood Intervention.

Samuelsson, Martin Per Education for Deliberative Democracy. Theoretical assumptions and classroom practices.

Visted, Endre Emotion regulation difficulties. The role in onset, maintenance and recurrence of major depressive disorder.

2019 H Nordmo, Morten Sleep and naval performance. The impact of personality and leadership.

Sveinsdottir, Vigdis Supported Employment and preventing Early Disability (SEED)

Dwyer, Gerard Eric New approaches to the use of magnetic resonance spectroscopy for investigating the pathophysiology of auditory-verbal hallucinations

Synnevåg, Ellen Strøm Planning for Public Health. Balancing top-down and bottom-up approaches in Norwegian municipalities.

Kvinge, Øystein Røsseland Presentation in teacher education. A study of student teachers’ transformation and representation of subject content using semiotic technology.

Thorsen, Anders Lillevik The emotional brain in obsessive-compulsive disorder

Eldal, Kari Sikkerheitsnettet som tek imot om eg fell – men som også kan fange meg. Korleis erfarer menneske med psykiske lidingar ei innlegging i psykisk helsevern? Eit samarbeidsbasert forskingsprosjekt mellom forskarar og brukarar.

Svendsen, Julie Lillebostad Self-compassion - Relationship with mindfulness, emotional stress symptoms and psychophysiological flexibility

2020 V Albæk, Ane Ugland Walking children through a minefield. Qualitative studies of professionals’ experiences addressing abuse in child interviews.

Graphic design: Communication Division, UiB / Print: Skipnes Kommunikasjon AS

ISBN: 9788230858684 (print)
ISBN: 9788230846759 (PDF)
uib.no
