Exploring Business Students' views of the use of generative AI in assignment writing: An examination of generative AI use through students' own ethical perspectives
The rise of generative AI, particularly over the past few years, has raised notable issues about its
use. This has been possibly most pronounced in academia, where there has been strong debate on
the potential value of generative AI to augment learning outcomes versus the potential for
academic dishonesty and devalued education. Whilst some papers have looked at students'
perspectives on the use of generative AI, there has been less focus on the ethical perspectives or
frames through which students view the use of generative AI in their tertiary education.
We conducted focus groups and interviews with students enrolled in an Australian university
business school to explore the ethical frames through which they saw the use of generative AI.
Focussing on three specific perspectives (deontology, consequentialism and virtue ethics), it
emerged that no single perspective dominated, with students holding a complex latticework of
ethical perspectives on its use, even within the same individual.
We explore some potential implications for practice that emerged from the data, one of which is
the role of the academic as moral exemplar.
Introduction
There has been much recent attention paid to the rise of generative AI and chatbots in relation to
student assessment (Chan, 2023). A chatbot, put simply, is an electronic tool (typically via software
such as an app) that simulates conversations by replying to words or phrases it recognises. It employs
algorithms designed to comprehend natural language inputs and respond appropriately with either pre-
written or AI-generated responses (Salvagno et al., 2023). ChatGPT learns iteratively and is simple to
use. The user can ask ChatGPT anything from 'What's the best omelette recipe?' through to 'Explain
Mintzberg's 10 managerial roles, using examples' and get a prompt, complete, written response.
And herein lies the potential issue. The exponential development of more refined generative AI
iterative text programs means that the potential to have the program write and provide the answer to a
question, or a complete essay or journal article, in a way that closely reflects a human writer, is now
far more apparent. And this situation will continue to progress. Nietzel (2023), citing
Grand View Research, predicts that generative AI will grow 37.3% year-on-year from 2023 to 2030.
One of the areas where concerns have been expressed is the use of generative AI by students in their
assignments. The issue raises a range of ethical, scholastic and pedagogical questions. One of the most pressing
is what constitutes the ethical use of such programs when the student is submitting an assignment as ‘their
work’? However, whilst this question has been the subject of a number of papers (Chan, 2023), the question of
how students themselves view the use of AI chatbots in their own programs and degrees, particularly in terms of
their ethical perspectives, has been less widely examined to date.
This paper presents preliminary findings of a larger study into the ethical perspectives of various student
cohorts/demographics regarding the use of generative AI. The findings presented here are those of domestic
female undergraduate students enrolled in a business degree at an Australian university. Employing an ethical
lens framework, it examines their responses, views and ethical perspectives on the use of generative AI in higher
education by themselves, their peers, and their lecturers. The study posed two overarching research questions:
• RQ 1: What are current business students' views of the use of generative AI tools such as ChatGPT within their
academic program?
• RQ 2: What, if any, insights and implications might this have for academia?
Literature review
The use of AI in higher education
The issue of generative AI in tertiary education has become a topic of note, both within and external to the
tertiary sector, over the past year in particular. Most commentary around its use falls into two broad areas of
consideration. The first is a concern about issues of academic integrity and the associated concerns of students
earning a degree or qualification via the use of generative AI (Chan & Lee, 2023; Chan & Hu, 2023). Linked to
these concerns is the fear that reliance on generative AI may lead to declines in students' writing and, most
importantly, in their capacity to critically analyse and think (Civil, 2023). The second is the potential for
generative AI to facilitate teaching and learning outcomes for students. For example, Bailey et al. (2021) found
that students felt the use of generative AI was useful in assisting them to generate ideas, to communicate those
ideas effectively and for grammatical support. There is further support for this position: Essel et al.
(2022) found that the use of generative AI as learning support resulted in improved learning
achievement, attitude, motivation and self-efficacy among students.
That generative AI is currently being used by university students is not in question. Welding (2023), in a 2023
survey of 1,000 college students, found that 43% confirmed they had used ChatGPT or a similar generative AI
application in their studies. Of these, 50% stated that they used generative AI applications to directly help them
complete assignments or exams. However, in contrast to this high proportion of use, over half of the students
surveyed also stated that their lecturers had not openly discussed the use of AI tools with them, and 60% stated
that their lecturers or institutions had not specified what constituted ethical or responsible use of generative AI.
Given that over 60% of students in the same survey considered generative AI to constitute the 'new normal' for
work and study, the need to understand the ethical frames through which students view its use is timely.
There have been a number of studies that have focussed on student perceptions and perspectives on generative
AI. One of the broadest is that by Chan and Hu (2023, p. 12), who explored students' perspectives on generative
AI. Among their findings was that students saw a number of potential positives with generative AI, but also
some notable concerns. Of note was the identification by students of ethical issues as one of the challenges
relating to the use of generative AI. This links to the concepts of academic dishonesty (or integrity) and
plagiarism. Numerous studies have indicated that student perceptions of what constitutes plagiarism vary widely
by nationality, awareness and cultural differences, among other factors (see Chan, 2023 for detail). The rise of
generative AI, and the question of where it fits within the academic misconduct framework, has muddied these
already murky waters even further. For example, Welding (2023) explored students' perceptions of the moral use
of generative AI in academic settings: 41% of respondents agreed that it was morally wrong to use AI tools to help
complete assignments and exams, whereas 27% disagreed. In contrast, 38% disagreed that AI tools should be
prohibited, with only 27% agreeing they should be. Finally, 48% agreed that it is possible to use AI in an ethical way to help
complete assignments, with only 21% disagreeing. What therefore emerges is not only disagreement among
faculty and institutions about the use of generative AI, but disagreement and confusion among students
themselves regarding its use in academic settings. This further confounds the very problem academics and
academic institutions are facing in relation to generative AI. Accordingly, what is currently lacking from the
literature is a deeper examination of how students view the use of generative AI through an ethical lens. Beyond
simply considering its use through the perspective of faculty and institutions, we need the insight of
students themselves regarding not only their use of generative AI, but their views on its use: both the potential
benefits and the concerns and ethical issues surrounding it (Fyfe, 2022; Brossi et al., 2022).
This paper is part of a larger study investigating the ethical perspectives university students have regarding their
use of generative AI. Whilst the larger study considers a more complete range of ethical perspectives students
may hold, for this paper we limit the discussion to three normative ethical perspectives: deontology,
consequentialism and virtue ethics.
Deontology
Deontological ethics are duty-based and concerned with actions rather than consequences. It is the act that
matters, and we all have a duty to do the right thing regardless of the consequences it produces. A key figure
within deontology is Immanuel Kant (1724-1804), in particular the first two formulations of his categorical imperative
(Kant, 2003). Categorical imperative 1: The Principle of Universality states: “act only in accordance with that
maxim through which you can at the same time will that it become a universal law” (p. 421). Put simply, if the
actor isn’t willing for the action to be applied to everyone, then the act isn’t ethical. An example here is lying.
Categorical imperative 2: The Principle of Humanity states - “So act that you use humanity, whether in your
own person or in the person of any other, always at the same time as an end, never merely as a means” (p. 429).
This for example is why lying isn’t ethical from a deontological perspective, as the recipient of the lie becomes
simply a means to an end.
Consequentialism
Consequentialism holds that the morality of an action is determined by the consequences of that action. Put simply,
the morality of an individual action is based on the specific outcomes of that action, a form of consequentialism
known as ‘act utilitarianism’. For example, if a doctor allowed one person to die to save six others, then, despite
allowing that person to die, from a consequentialist perspective, the act would be considered as inherently
moral. This notion of the morality of action can be enshrined in rule utilitarianism, which emphasises the
importance of following general rules that have been determined to bring about the best overall consequences.
In essence, consequentialism supports the greatest good for the greatest number: the principle of utility.
However, how this good is assessed (whether hedonistically, and whether in terms of probable or actual
outcomes) is a matter of conjecture.
Virtue ethics
Virtue ethics is the final of the three major approaches that fall under normative ethics. Emphasising virtue or
moral character, within virtue ethics there are no categorical imperatives, no rules or duties (as found in
deontology) and no principle of utility or emphasis on outcomes (consequentialism) (Kraut, 2022). Virtue ethics
places emphasis on cultivating virtuous character traits and making ethical decisions based on the virtues that
lead to human flourishing, promoting a morally good and fulfilling life. Ethical virtues are intermediate between
two states, a 'golden mean'. At one end is excess, at the other, deficiency: for example, cowardice as
deficiency and overconfidence as excess. This intermediate, or mean, is not fixed, and considers individual
circumstances.
It is these three ethical perspectives that provide the frame through which we examine business students’ views
of generative AI, both their own use and that of others, in tertiary settings.
Research design
To allow for an in-depth exploration of the phenomenon under study, the findings presented here employed a
qualitative approach for data collection and analysis. Focus groups, combined with semi-structured interviews
formed the core research design in this initial stage of the study. Focus groups were selected as the primary
source of data collection due to their ability to 'empower' research participants, who then 'become an active part
of the process of analysis' (Kitzinger, 1995, p. 300). The decision to split participants based on demographic
data such as gender and method of study was influenced by a general consensus among researchers that
homogeneity within focus groups assists in capitalising on individuals' shared experiences. Given the sensitive
nature of the topic, participants were also offered the opportunity to participate via individual interview.
Participants were recruited via a non-probability sampling method, informed by a purposive approach and
supplemented by some snowball sampling. In total, two focus groups (five participants in each) and two interviews were
conducted. All participants were female, domestic, undergraduate students currently enrolled in a business
program at an Australian university. Ages ranged from 21 to 38. Focus groups and interviews were all conducted
face-to-face and went for 60-90 minutes each. The interviewer/focus group facilitator was not a lecturer of any
of the students in the study.
Analysis
Data analysis followed a two-stage approach. The first stage was an inductive thematic analysis, using open and
subsequent axial coding of the data (Creswell et al., 2007). The second stage employed a
deductive thematic approach. Utilising the three main ethical approaches as the conceptual framework, this
stage involved a theoretically driven approach to data coding (Boyatzis, 1998). Themes that emerged were
confirmed via the process of constant comparison.
Ethical considerations
Participants in the study were informed that their participation was voluntary, and they had the option to
withdraw their consent to participate in the study at any stage (until publication). Participants were also assured
that all measures would be taken to protect their anonymity and confidentiality throughout the study and across
the presentation of the data. This included assurance that anything disclosed during the sessions would remain
confidential and not subject to future academic integrity scrutiny or penalty. Ethics approval was sought and
obtained from the relevant Human Ethics committee at the university.
Findings
Consequentialism
There were a number of responses students gave that demonstrated a consequentialist ethical frame through
which they were viewing the use of generative AI in assignments. For example, when discussing the use of
generative AI, several 'pain'-inducing outcomes were mentioned. These outcomes included a lack of faith that the
answer would be correct, and feeling stupid if hired (and then fired) for not knowing how to do the job.
Interestingly, getting caught by the university was not a prominent outcome concern.
One clear outcome concern emerged in relation to the accuracy or quality of the information/answer
provided by AI. In some ways this directly reflected act utilitarianism. For example:
Have tried it for a test but it wasn’t specific. I was hoping for an answer
If I knew it [the answer] would be right, I would use it
In these instances, we see a strong act utilitarian ethical frame emerging. A common limitation of utilitarian
ethics is the difficulty of predicting outcomes in certain contexts. In the case of generative AI, the students see
the outcome as unpredictable, or uncertain, and this impacts their behaviour; the 'good' provided by the
outcome is questioned. Our findings suggest that if the accuracy of generative AI tools, and thus students'
confidence in them, increases over time, then improved confidence in a positive, or correct, answer will lead to
an increased willingness among business students to use generative AI.
Interestingly, when asked about the use of generative AI by lecturers to mark their assignments, trust in outcome
was again a clear concern for students. For example, one participant (B) noted that:
If a computer marked it, it’s too black and white. You need the human element. A human can say
– I can see where this is going and give 2/5, AI might give 0/5.
In contrast, however, it was evident that there was already some resentment, or decreased utility, among some
students arising from others' use of generative AI under an act utilitarian approach. In this case the student
expressed a rule utilitarian view of the use of generative AI, discussing how its use by some students creates
negative outcomes for others. For example, one participant (F) noted:
And now, because of this we have to have more invigilated assessments which usually means
exams. I’m a good student but I panic in exams. I get stressed and never do as well.
One concern with consequentialist ethics is its potential for 'ends justify the means' reasoning (i.e., causing harm
to some so that others benefit). Students were cognisant of this in relation to other students' use of generative
AI and the potential for negative outcomes for others. These may include increased invigilated assessments,
adjustment of assessments, or forms of assessment in which they may not do particularly well.
A final area where consequentialist ethical perspectives emerged from the focus groups and interviews was the
use of generative AI to produce a positive outcome given other broader contextual factors the students faced.
For example, the use of generative AI to save time on research and writing assignments was seen as providing a
positive outcome for students given other pressures they faced. One participant (Z) stated that:
In these cases, the act is assessed in terms of its utility. In other words, the longer-term outcome of a job, a
career and getting their degree is the greater good, justifying the use of generative AI if it is functional towards
that outcome.
Virtue ethics
As discussed, virtue ethics emphasises virtue or moral character. The main elements of this approach embedded
in participants' views included the balance between excess and deficiency (the golden mean), as well as the
impact of moral exemplars on the behaviours of the students in relation to generative AI. At one end of the
scale, participants described limited use of generative AI, for example:
I use it for things I know already or things that to me represent the administrative side of uni and I
don't want to do
At the other end of the scale, not studying and relying entirely, or at least heavily, on AI would represent the
vice of excess. Students found this approach to generative AI use to clearly breach a virtue ethics frame, with
one stating:
If I used it and had done the work, then yeah. But someone else who has used it and done
nothing? Well - that’s not right
The mean position between 'using it in an entirely reliant manner' and 'not using it at all' seemed to represent the
point where they perceived the use of generative AI to be morally acceptable. The golden mean considers context and
individual circumstances. Students again reflected a virtue ethics position when advocating for the use of
generative AI for certain groups of students. For example, one participant noted the potential use of generative AI
by students who might have a particular disability:
Some people have dyslexia and stuff. Shits me they can’t use it to get over a potential hurdle
Moral exemplars
Respondents noted an interesting duality in terms of their view of using generative AI in different courses. In
courses where the lecturer provided an educational experience that they perceived as being of quality, and that
created genuine engagement, the desire to use generative AI was diminished or even absent. However, this was not
the case in courses where a quality or engaging educational experience was lacking. Rather than merely being
based on the capacity of the student to follow along with the material, omitting the need for generative AI help,
it was more driven by a feeling of ‘letting’ the lecturer down, and thus feeling negative about the use of such
tools. A desire to do the right thing:
Reason I used it? The course sucked and I was over it. In another course, when the lecturer was
good and engaging, then I never use it as I’d feel I was letting them down.
In such instances, we argue that academics serve as moral exemplars – individuals who inspire and lead by
showing what is possible (Morgenroth et al., 2015). Moral exemplars motivate individuals beyond just moral
reasoning and, as argued by Frey (2010, p. 613), can include other attributes noted by Hursthouse (2006), such as
dispositions, choices, attitudes and interests. Exemplars "stand out through their ability to build motivational
systems" (Frey, 2010, p. 617). Within virtue ethics, "becoming a good person is not a matter of learning or
'applying' principles, but of imitating some models" (Statman, 1997, p. 13). Thus, it can be argued that
good students (not necessarily high-achieving ones) become good students by imitating such models.
Deontology
As with the other two normative ethical perspectives, deontological ethical frames also emerged in the interview
and focus group sessions. For example, responses emerged consistent with Kant’s first categorical imperative.
This was particularly prominent when students were discussing topics such as first-year subjects and the
foundations of a degree, and the use of generative AI in these contexts. One participant (E) stated that:
If people studying accounting, or research methods, don’t know the basics then how can they
progress? It becomes useless
In this instance, perceptions were that if students were using generative AI simply to pass foundation knowledge
that was core to the program, then generative AI failed the first test of categorical imperative one (i.e., if
generalising the action no longer makes any sense because it contradicts itself, then it is wrong to use that
maxim as a basis for action).
For others, however, their perception of their own use of generative AI passed the necessary tests for categorical
imperative one. For example, once the rule was generalised, it still made sense:
When I go to the doctor I see my GP using it to Google – I don’t see the difference
Accordingly, in their view, the use of generative AI also passed the second test for categorical imperative one,
about whether they would live in a world where this was followed by everyone:
When I go to work, the computer does it. We’re living in the real world.
From the perspective of Kant’s second categorical imperative, for the most part participants felt comfortable
using generative AI as a means to an end but were largely hesitant about using it as a mere means to an end. For
example, participants spoke about using it to assist them in their research or even as means to help generate
ideas. For example, one participant (B) stated:
I use it to recommend textbooks that might be useful or a starting point for research
I use it to generate ideas, help spark something when I have a bit of a ‘writers block’
In this respect, students perceived using it as a means to an end as acceptable, as they also viewed themselves as
having upheld their duty, or proper intention, in performing the action. In contrast, participants were, for
the most part, uncomfortable with or against using it to generate an entire piece of work, that is, as a mere means
to an end. Participant (E) reflected this view, noting:
Using it for ideas about papers or topics is one thing, but getting it to write an entire essay, that’s
just wrong
Finally, regardless of their own views of using generative AI, participants were almost unanimously against it
being used by academic staff for marking and generating lecture content. The complete universality test, which
includes the golden rule ('if you aren't willing for the ethical rule you claim to be following to be applied equally to
everyone - including you - then that rule is not a valid moral rule'), was failed by most and passed only by the
few who were against generative AI use. Comments in relation to academic staff not using generative AI were
numerous and included:
Regarding outcomes of students' perspectives on the use of generative AI, our findings suggest that it is
important to remind, or illustrate to, academics that managing or mitigating generative AI use isn't just about
preventing students from using it. It is also about managing it in a way where overall learning outcomes are
enhanced, with students not being punished by the strategies we put in place. Whilst recent times have witnessed
a shift away from assessments such as face-to-face invigilated exams for pedagogical reasons (Cartner & Hallas, 2020),
preventative measures put in place as a reaction to the use of generative AI have seen the return of such
assessment methods.
assessment and course design, focussing on how to maximise the benefits of generative AI, without
compromising core learning outcomes. One example would be to give students a question they then input into
generative AI. Then, the assignment is the student’s critique of the answer in terms of its accuracy and
relevance. In this example, generative AI becomes a tool that enhances the opportunity for greater critical
analysis and thought by the student. Perceptions of outcomes are also important to consider. For example, with
universities increasingly being run as businesses, and students seeing themselves (and being perceived) as
customers, if a perceived outcome of AI use is increased efficiency, this adds another layer of complexity to
current discussions surrounding the increasing costs/fees of tertiary education. One student (R), for example,
made the comment that:
Well if lecturers will use it to be more efficient will there be a reduction in our fees?
Finally, our findings suggest that one of the key ‘soft areas’ where the sector can make an immediate impact in
terms of students’ use of generative AI relates to our finding of the academic as moral exemplar. Frey (2009, p.
216-217) argues “the virtues displayed by individual members contribute to the well-being of the surrounding
moral ecology. As part of the feedback loop, the actions and habits of the moral agent, in this case the academic,
feed back into the actions of the student. Importantly, a moral ecology will either “thrive or suffer depending on
how well it is supported by its members” (Frey, 2009, p. 217). Such an ecology cannot be expected to survive
on its own. Thus, the creation and/or maintenance of an ecology that supports rather than constrains academics
in their ability to produce quality educational experiences becomes an important part of enabling academics to
become, or continue being, exemplars for their students.
According to Statman (1997), education through moral exemplars is more effective than education focused on
principles and obligations, because it is far more concrete; if these exemplars and their virtues are portrayed in an
attractive way - as they surely can be - the motivation to imitate them will be strong. As such, whilst
ethical content and teachings have become a recent focus for universities, enabling a moral ecology where moral
exemplars have the time, space and ability to deliver quality and engaging educational experiences becomes as,
if not more, important than ensuring curriculum-based ethical teachings. It must be noted that this paper details
only responses from female business students at a single university. This is the first part of a more
comprehensive study that will expand the scope to include students across diverse demographic backgrounds,
multiple programs and institutions.
References
Bailey, D., Southam, A., & Costley, J. (2021). Digital storytelling with chatbots: Mapping L2 participation and
perception patterns. Interactive Technology and Smart Education, 18(1), 85-103.
https://doi.org/10.1108/ITSE-08-2020-0170
Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Sage.
Brossi, L., Castillo, A. M., & Cortesi, S. (2022). Student-centred requirements for the ethics of AI in education. In
W. Holmes & K. Porayska-Pomsta (Eds.), The ethics of artificial intelligence in education. Routledge.
https://doi.org/10.4324/9780429329067
Cartner, H., & Hallas, J. (2020). Aligning assessment, technology and multi-literacies. E-Learning and Digital
Media, 17(2), 131-147. https://doi.org/10.1177/2042753019899732
Chan, C. K. Y. (2023). Is AI changing the rules of academic misconduct? An in-depth look at students'
perceptions of "AI-giarism". https://doi.org/10.48550/arxiv.2306.03358
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in
higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
https://doi.org/10.1186/s41239-023-00411-8
Chan, C. K. Y., & Lee, K. K. W. (2023). The AI generation gap: Are Gen Z students more interested in adopting
generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation
teachers? https://doi.org/10.48550/arxiv.2305.02878
Creswell, J., Hanson, W., Plano, V., & Morales, A. (2007). Qualitative research designs: Selection and
implementation. The Counseling Psychologist, 35(2), 236–264. https://doi.org/10.1177/0011000006287390
Essel, H. B., Vlachopoulos, D., Tachie-Menson, A., Johnson, E. E., & Baah, P. K. (2022). The impact of a
virtual teaching assistant (chatbot) on students’ learning in Ghanaian higher education. International Journal
of Educational Technology in Higher Education, 19, 57. https://doi.org/10.1186/s41239-022-00362-6
Frey, W. J. (2010). Teaching virtue: Pedagogical implications of moral psychology. Science and Engineering
Ethics, 16(3), 611–628. https://doi.org/10.1007/s11948-009-9164-z
Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society. 38, 1395–
1405. https://doi.org/10.1007/s00146-022-01397-z
Kant, I. (2003). Critique of pure reason (M. Weigelt, Trans.). Penguin Classics. (Original work published 1781).
Kitzinger, J. (1995). Qualitative research: Introducing focus groups. British Medical Journal, 311(7000), 299–
302. https://doi.org/10.1136/bmj.311.7000.299
Kraut, R. (2022). Aristotle’s ethics. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of
philosophy. Stanford University.
Lee, J., Wu, A.S., & Kulasegaram, K. M. (2021). Artificial intelligence in undergraduate medical education: A
scoping review. Academic Medicine, 96(11S), S62-S70. https://doi.org/10.1097/ACM.0000000000004291
Morgenroth, T., Ryan, M. K., & Peters, K. (2015). The motivational theory of role modeling: How role models
influence role aspirants' goals. Review of General Psychology, 19(4), 465–483.
https://doi.org/10.1037/gpr0000059
Nietzel, M. T. (2023, March 20). More than half of college students believe using ChatGPT to complete
assignments is cheating. Forbes. https://www.forbes.com/sites/michaeltnietzel/2023/03/20/more-than-half-
of-college-students-believe-using-chatgpt-to-complete-assignments-is-cheating/?sh=6e3901f518f9
Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Can artificial intelligence help for scientific writing?
Critical Care, 27, 75. https://doi.org/10.1186/s13054-023-04380-2
Statman, D. (1997). Virtue ethics. Edinburgh University Press. https://doi.org/10.1515/9781474472845
Welding, L. (2023). Half of college students say using AI on schoolwork is cheating or plagiarism. Best
Colleges. https://www.bestcolleges.com/research/college-students-ai-tools-survey/