Experiences of moderation, moderators, and moderating by online users who engage with self-harm and suicide content

Full Title: Experiences of moderation, moderators, and moderating by online users who engage with self-harm and suicide content
1 NIHR Bristol Biomedical Research Centre, University Hospitals Bristol and Weston NHS Foundation Trust and University of Bristol, Bristol, UK
4 Current Role: ESRC Centre for Society and Mental Health, King's College London, UK
* Correspondence to:
Zoë Haime
Canynge Hall
Bristol, UK
Abstract

Online mental health spaces require effective content moderation for safety. Whilst policies acknowledge the need for proactive practices and moderator support, expectations and experiences of internet users engaging with self-harm and suicide content online remain underexplored. This study explored user experiences of moderation, moderators and moderating when engaging online with self-harm/suicide (SH/S) related content.
Participants in the DELVE study were interviewed about their experiences with SH/S content online. N=14 participants were recruited to interview at baseline, with n=8 completing the 3-month follow-up, and n=7 the 6-month follow-up. Participants were also asked to complete daily diaries of their online use between interviews. Thematic analysis, with deductive coding, was used to explore experiences of moderation, moderators and moderating from interview transcripts and diary entries.
Three key themes were identified: 'content reporting behaviour', exploring factors influencing decisions to report SH/S content; experiences of having content moderated; and users' views on moderation approaches, their own experiences of moderating, and insights for future moderation improvements.
This study revealed challenges in moderating SH/S content online, and highlighted the difficulties users experiencing poor mental health can face in keeping themselves safe in online SH/S spaces, showing the need for proactive platform-level strategies. Additionally, whilst the lived experience of moderators was valued by participants, associated risks
emphasised the need for supportive measures. Policymakers and industry leaders should consider these findings when developing approaches to moderation in online mental health spaces.

Keywords:
Author Summary

In today's digital world, ensuring the safety of online mental health spaces is vital. Yet, there's still a lot we don't understand about how people experience moderation, moderators, and moderating in self-harm and suicide online spaces. Our study set out to change that by talking to 14 individuals who engage with this content online. Through interviews and diaries, we learned more about their experiences with platform and online community moderation.

Our findings showed some important things. Firstly, individuals with declining mental health struggled to use tools that might keep them safe, like reporting content. This emphasised the need for effective moderation in online mental health spaces, to prevent harm.
Secondly, inconsistent and unclear moderation processes caused confusion and frustration amongst users who reported content, or had their own content moderated. Thirdly, users valued moderators with lived experience of self-harm and suicide in keeping these spaces safe online. Lastly, users encouraged the involvement of mental health professionals in online moderating teams, suggesting platforms and online communities should provide training and supervision from professionals to their moderation staff. These findings support our recommendations for policymakers and industry leaders to improve moderation in online mental health spaces.
1. Introduction
Content moderation is used to govern what users can post and see in online spaces. Platform-level moderation is applied to content across entire platforms, and can involve automated tools, algorithms, and human moderators. Community-level moderation relies more heavily on users themselves reporting or flagging content that they find inappropriate, or against the rules of specific spaces or communities within platforms. These moderation actions at both platform and community level are intended to protect users from harmful or illegal material, including self-harm and suicide content and discourse around it.
To regulate online community spaces, moderators play a vital role in screening content, identifying problematic users, and enforcing rules [3]. The mechanisms employed, such as removing content and restricting accounts, aim to ensure the availability of high-quality content, whilst limiting the presence of harmful material [4]. However, experiences and outcomes of these moderating actions can vary for online users. For example, when Facebook and Instagram introduced a total ban of graphic self-harm imagery, some users experienced the change as punitive and stigmatising [5]. These tensions are particularly pronounced in online environments focused on mental health. Moderation in these spaces must balance the responsibility of protecting users from triggering content, with the provision of space for
social support and recovery [6]. The effectiveness of current moderation practices, including outright content bans in mental health spaces, has been further questioned. Multiple studies have investigated how users seek out potentially harmful pro-eating disorder content online and evade moderation systems by posting content using no text, or alternative hashtags [7, 8]. This practice is also prevalent amongst self-injury related posts on Instagram, where the use of ambiguous and unrelated hashtags leads to graphic self-harm imagery without proper content warnings [9]. Additionally, research shows that even
where platforms proactively search for and remove harmful content, this can be inefficient, and harmful material can remain accessible to users [10, 11].
There are also concerns regarding the allocation of responsibility for moderating online spaces. Platform-level moderation usually relies on paid human moderators, although these mechanisms are sometimes automated. However, within peer-driven communities, users themselves usually moderate content, often in voluntary roles. While several benefits to the moderator role have been reported, including feelings of altruism, having a sense of purpose [12] and – in the case of lived-experience moderators – receiving validation of their own experiences [13], concerns have also been raised for both community- and platform-level moderators about the impact on their own mental health [6, 12].
A recent study by Spence et al [14] found 40.8% of 213 online platform-level content moderators were exposed to distressing content daily, and that moderators showed a dose-response relationship between exposure and psychological distress and secondary trauma. However, their findings also revealed that, after accounting for work factors in the analysis, including access to supportive colleagues
and feedback about their work, the relationships between exposure and psychological distress and secondary trauma failed to remain significant, indicating that a supportive work environment may mitigate these effects.
Some platforms and sites, such as Google and Facebook, have begun to implement strategies to address the risks to their content moderators, such as training on working with sensitive content, recommended access to both individual and group counselling services, and suggested 'wellness breaks' [15, 16]. However, the effectiveness of these interventions for moderator wellbeing remains unclear. Additionally, whilst large platforms are able to dedicate resources to moderator care, there are fewer viable options available to smaller platforms and volunteer-run communities.
Even within larger platforms, it may also be challenging to ensure uniform care across all moderators, due to varying work environments and individual needs. For instance, diversity in the preferences of moderators was shown by Saha et al [6]. Some moderators desired support from mental health experts and welcomed the idea of having trained professionals within the moderation team. However, other moderators emphasised that these spaces should not provide medical advice to users, and that moderators with lived experience of mental health problems can create the supportive community users are looking for, without additional help. This highlights the differing needs amongst moderators in mental health spaces and emphasises the need for further understanding of individual moderator needs and preferences.
Policy and regulatory frameworks also play a significant role in the expectations and responsibilities of online platforms in moderating content. In the United Kingdom, there has
been a growing emphasis on addressing online harms through the recently enacted Online Safety Bill [17]. Key points within this legislation include the need for platforms to remove self-harm and suicide content, implement measures to minimise user exposure to harmful content, uphold a duty of care towards users, and provide individuals with improved mechanisms for reporting harmful content.
To date, limited research has been conducted exploring experiences of moderation by users who engage with online mental health content or those who moderate it. Notably, existing findings suggest that moderation is essential to mental health online spaces [6] but that current approaches may be ineffective and, in some cases, increase vulnerability to harm for users and moderators. For instance, content posted with limited information in order to avoid automatic removal systems increases the risk to users of unexpectedly encountering potentially triggering content [7], and moderators may experience emotional distress and burnout [13]. A noticeable research gap exists in understanding user perspectives on the moderation of SH/S related content online, particularly user experiences of having their own SH/S related content moderated (reported or removed), reporting others' SH/S content, or seeing others' SH/S content being removed. Understanding the first-hand perspective of users engaging in mental health online spaces is essential for informing effective, user-centred moderation practices.
This study involved thematic analysis [18] of qualitative interview transcripts to gain insights into participant experiences with moderation and moderators in the context of engaging with SH/S content online. By examining these perspectives, this research aimed to deepen our understanding of how moderation practices impact user experiences and consider how these practices might be improved.
2. Results
Fourteen participants took part. The sample had diversity in terms of age and ethnicity, and relatively good representation of gender (Table 1). All participants completed a baseline interview, eight participants completed midpoint and seven completed endpoint interviews (see [19]). Participants returned monthly diaries. Due to participant dropout, 44 diaries were anticipated overall, with a total of 31 (70.5%) returned. All interview transcripts and returned diaries were included in the analysis.
Table 1. Demographic characteristics of participants who took part in the DELVE study.
Demographic Variable Total N (%)
Gender
Female 10 (71.4)
Male 4 (28.6)
Ethnicity
Asian British 2 (14.2)
Black British 1 (7.2)
White British 7 (50.0)
Asian Other 2 (14.2)
Black Other 0 (0.0)
White Other 1 (7.2)
Mixed Race 1 (7.2)
Age
16-17 Years 1 (7.2)
Results are visually depicted in Figure 1 by theme and subtheme. These are explored in detail below.
Several participants described instances where they didn't report SH/S content. One participant described how their non-reporting was influenced by a lack of awareness about how and where to report on certain platforms/sites (IDH). However, participants also shared that they did not consciously consider reporting SH/S content when online (IDC, IDH, IDM,
IDJ), due to their poor mental state. In fact, during challenging times, users described becoming preoccupied with this material or deliberately seeking it out, thereby prioritising engagement with the content over their own safety:

'I think in that moment it didn't really cross my mind [to report content about suicide methods]. I was too focused on the content itself and what it was giving.' (IDJ)
A couple of participants described how decisions to report SH/S content were multifaceted, one stating how the choice to report would change depending upon their mental health state:

'Part of me thinks they should be reported and another part of me thinks, well, freedom of speech and they should stay. So I most probably feel that they should stay, on balance. However, I think that's one of those things that could change and
Participants who recalled reporting content described no hesitation in reporting general online content considered morally wrong, such as racist comments or images. However, decisions to report SH/S content were often more selective. One user (IDD) reflected on the emotional impact of content on a pro-suicide forum affecting their decision: 'Cause like if it does affect me then… I like reduce time on it or report it to one of the moderators for example' (IDD). Another (IDM) described how their lack of knowledge about SH/S content influenced their decision about whether to report it:
'I didn't know much about it [graphic self-harm image] and so I decided maybe to
Considering the implications for users posting the SH material, this participant also expressed reluctance to report images of SH without first understanding the posters' circumstances, and suggested having a conversation with them may be more appropriate:

'I think you would have to talk to them, maybe see where they're at and maybe you can report, or maybe take it to the next step with them [rather] than reporting them and then going to them. I don't think it would be appropriate to [just report] them.' (IDM)
Despite concern for content posters' wellbeing, and attempts not to stigmatise their experience, there was general agreement amongst the other participants that images or videos of SH/S were largely inappropriate and justified reporting, sometimes due to their graphic nature:

'I can get it if someone is saying oh, I'm really suicidal, I won't report that, because maybe they need someone to talk to, if they are posting images on that though, you can't be doing that… And also there are lots of videos of people trying to kill themselves, they're on the internet, that's another thing you can spiral into that, there are sites just full of people dying…I think it was on Twitter, I ended up on a video and then it went to another video and then I ended up on a site and then

'…Ok and how did that make you feel sort of seeing that play out?' (Interviewer)
One participant (IDI) described how reporting served not only as a means of addressing the issue of SH/S content posted online, but also as a way to raise awareness and educate the person responsible for the content: 'it also, hopefully, tells that person that what they were posting [images] might have been sensitive towards someone' (IDI).
One participant reflected on how following predetermined methods for reporting, like using checkboxes, allowed the process to be less biased, suggesting an attempt to detach themselves from the feelings of guilt associated with reporting an online peer:

'What I'll do is I'll click all those [check] boxes and I'll try to desensitise it or make it less personal or not take it personally. So, I try to be as objective as possible about
When asked, those who had reported online content did describe it as a straightforward process in practical terms, with some sites having specific options for recording if the content was related to SH/S. However, one participant (IDE) acknowledged how it would be impractical to report all content that was offensive or inappropriate, and therefore they felt they had to make careful decisions about what they reported and when, although they did not detail how they made those choices, other than to note they were more likely to report users with significant follower counts: 'Yeah, when I see something that I don't like, like self-harm and they've got lots and lots of followers then I do report stuff' (IDE).
Some of the participants who previously had not reported content mentioned during interviews that they would now report SH/S content they came across online, using the platform guidelines to direct them (IDH, IDJ, IDC). It is unclear what motivated this change,
though there was some indication it may be explained by involvement in the research itself. For instance, IDE described increased awareness of content that may be harmful, and being more conscious of whether they should report it:

'My uses have changed massively because it's really [I've] been more aware, but also, I'm more conscious of it. So when you are a user you just look at it and think, 'Okay.' But when you do something like [taking part in this research study] you think, 'Hang on, actually there's another dimension to it and is there harm or should I report?' So you become a social media – I don't know if it's an influencer – but you
Some participants also described the consequences of reporting other users' content. While a couple described content being taken down (IDK, IDI), or the user being 'kicked off' (IDD) a group, others commented on specific site/platform actions, stating Facebook do 'nothing' (IDB) when reporting nuisance pages, or that reporting on Tumblr felt 'completely
IDI acknowledged that although the automated process of reporting using Instagram options made it relatively easy to do, it left no room for nuance in describing the content: 'I don't think there's much of an opportunity to fully explain anything or anything like that' (IDI). Additionally, despite reported SH/S content being removed on Instagram, users were aware that those responsible faced few further consequences:

'[they can] carry on posting as normal anyway. So it just means that they can post it
Several participants (IDB, IDN, IDL, IDK, and IDG) were able to describe their experiences of having content reported online, including content expressing suicidal ideation (IDB, IDK), privately posted images of SH (IDL), and blogs about depression and self-harm (IDN). Participants described receiving differing levels of information from sites when their content had been blocked, with some feeling that the rules felt unclear:

'I have like one or two blocked [blogposts], but I don't know why. I don't know which words [are] the restriction things. I just can't post it out and it remains in the draft. But I think I'm just writing a normal thing, but maybe they have like a key word ban' (IDN).
IDK, who had once posted stating they were 'struggling' on Twitter [X], had this reported by another user and criticised the platform's 'one-size-fits-all' response:

'[It's] a bit pointless to be honest, like suicide hotlines are not going to solve your issues, I think that's the problem… but it's [why they are struggling] about I can't afford this, I can't afford my house, like what am I going to do, so I found it a bit hard but thanks for
IDL, who used Instagram to post images privately, documenting and tracking their SH journey, recalled instances of content being blocked without warning. Removal of content interfered with their ability to monitor their well-being, and disrupted their sense of control:

'…I think it's like under [the] notification page they'll just say, your post has been removed because of a violation. And yeah I think it's quite frustrating because I don't
know what I've written so I can forget what it was about…because I mean even if they said that, I want to remember and recall how I was feeling and the tough times I'd been through. But they don't give you any warning at all, they just take it' (IDL).

However, it seemed the platform was inconsistent with their moderation of these images, leaving the user uncertain what the consequences might be when posting:

'…only sometimes they take it down, sometimes they don't. It's kind of confusing, I don't
Interestingly, one participant who had content blocked on Twitter [X] due to the use of bad language found the situation 'quite funny to be honest' (IDK), perhaps highlighting the range of emotional responses users can have to moderation.
One participant who had no personal experience of their content being reported or blocked by platforms/sites also reflected on the potential emotional impact if it were to happen, expressing they would feel 'disempowered' and as if 'you're not being heard' (IDC).
In some cases, participants chose to engage with SH/S content in spaces regulated by community moderators. These moderators had responsibility for enforcing community rules and guidelines, which were sometimes more specific or stricter than those set by the site/platform (e.g., individual Facebook groups).
Two participants shared experiences of having content removed from these spaces. Both were left feeling confused and frustrated due to inconsistent and unclear moderation policies. For instance, one (IDG) had posted 'what is the point?' on a forum, which led to
moderators removing their post and contacting them with resources of support. The participant expressed feeling '… just like a criminal. Well not a criminal, but like I've really done something wrong.' (IDG). They also seemed surprised to see other content on the site that had not been removed in the same way.
The other participant (IDB) recalled two instances of moderators blocking content in an online private Facebook group that provided SH peer support. Initially, they posted a suicide note on the group, and moderators replaced it with a supportive post, tagging the participant. This action resulted in an influx of messages of support from other users. However, this felt intrusive and overwhelming, and made the participant feel guilty, placing additional pressure on them at an already difficult time:

'it was driving me up the wall, I couldn't, I was like I've made my decision, I know what I'm doing. I don't want you to all tell me to stop doing it, I just want some help with [suicide plans], can you please just stop because you're just making me feel guilty and you're not helping me, you're not helping. Obviously it was nice but in that state I just couldn't take it, I felt it was awful, I didn't like it at all.' (IDB)
In a separate event, the participant had a post removed by moderators when in disagreement with another user about them (IDB) giving mental health advice. The participant described feeling frustrated as a result of this interaction, and that moderation felt one-sided:

"…she's (other user) like, "it's so dangerous, you're going to kill someone", I'm like, so are you, like it says in my post I'm suicidal, is that (users' comment to her) really helpful. And I started to get really wound up, and then the people that were managing the page took my comment off and blocked me from commenting on it
anymore but allowed for her to continue which just drove me insane because I felt
As participants generally perceived that some SH/S content should be accessible online, regulation and moderation of content was viewed as a necessary part of making those spaces safe:

'…there is a place for it [SH/S content] online. There is a place but it needs to come from a very supportive place and it needs to be highly monitored and regulated. That's what I've generally learnt. The safer places to be online and the safer places to deal with it are when it is more moderated and looked after.' (IDB)

'I don't feel that it [online SH/S spaces] should be [taken] away because if it's moderated correctly then it's not doing anybody any particular harm.' (IDD)
Participants, owing to their different encounters with SH/S content online, experienced varying interactions with moderators across platforms. Three participants (IDA, IDB and IDC) who regularly engaged with a self-harm support group hosted on Facebook, mentioned being drawn to their moderation methods, due to their clear, friendly approach, and close monitoring of the group:

'I tend to stick to [self-harm support Facebook group] because I know that it's heavily

'… it's clear from the onset. They're completely honest and they say this is a moderated Facebook page, so you know where you stand…A couple of days ago,
somebody said about direct messaging and… very soon, a moderator came on and just said, 'Can I just remind you?', and he phrased it really nicely.' (IDC).

Additionally, IDB expressed that an advantage of using the self-harm support Facebook group was its rapid content moderation, which reassured them it was safe:

'The moderation like I say this is why I tend to use [self-harm support Facebook group] more than anything else because… it's so heavily moderated and monitored we don't really find it [community argument posts]. And if anything like that does crop up on [self-harm support Facebook group] it's gone within minutes you know.
However, it is important to note that these perspectives may have been subject to bias within the sample. Notably, IDA and IDB, who regularly engaged with the self-harm support Facebook group, also moderated for it in a voluntary capacity, which may have influenced their positive views of its moderation.
Participants also provided insights into the moderation practices of prominent platforms such as Twitter [X] and Facebook. Interestingly, participant experiences seemed to contradict one another. For instance, one participant criticised Facebook for using 'a carpet ban' (IDC) when it came to SH/S content. In contrast, another participant reported that moderation practices on Facebook were relaxed, though it is unclear whether participants were describing platform-level or community-level moderation:

'On Twitter [X], if you've reported… like graphic images or anything like that, it will be blocked, but TikTok and Facebook they're very lenient about stuff.' (IDK).
Participants identified areas of moderation that they would like to see improved across platforms. One common concern for participants was the lack of transparency and consistency in how moderation decisions are made:

'I think in terms of restrictions for social media overall, I personally don't feel they're very open about exactly how they moderate posts. So I think that it's difficult to understand what their thought processes [are] behind blocking posts or restricting
Specifically, for SH/S content, participants emphasised that current platform regulations were too reactive, resulting in punitive measures such as immediate bans or content removal, whereas a more nuanced approach is needed to ensure support is provided to those in distress:

'…they seem to throw the baby out the bath water. They do a formal, catch you all sort of ban process there or suspend here…But where it comes to suicide and self-harm maybe it's a point of that actually this is a serious situation where we need to be looking at this openly… rather than actually someone's put themselves on the line to say 'I'm feeling suicidal.' All of a sudden it's all guns blazing, red lights etcetera. Let me try and… have more safety-conscious rules and policies in place, that's what
IDI echoed the call for a more thoughtful and empathetic approach to moderating online SH/S content, specifically highlighting the portrayal of SH scars. They emphasised a need to recognise scars can become a part of someone's image and identity, and so removing these images may not always be appropriate:
'it's like they've got a one size fits all rule thing where any posts about self-harm, any scars at all, they'll just remove the pictures, whereas I've seen some people talk about how their scars are like recovery scars and just a part of who they are. So I think that improvement is still needed at the moment even just to establish the baseline of what's right and what's wrong to post about.' (IDI)
A more multidimensional approach was encouraged for companies making decisions around SH/S content moderation:

'we need to get the social media firms to get their position right on what content they're allowing to show on their platforms. They need to be real as well and actually work with local, national governments, but they also need to be working with like voluntary and community sectors of people that live with it. And also people with
However, other participants favoured more rigorous moderation, particularly to protect vulnerable users, such as young people, who they acknowledged will seek out SH/S content regardless of restrictions:

'If it was my child, I wouldn't want them accessing that content but a lot of that is because I think that children and the internet are a difficult match anyway because of the way that the internet is designed to keep you hooked and put so much worth
'I feel like if teenagers or young people are gonna go on it [SH/S spaces online] then they need somewhere that will moderate it by professionals like I had when I were a teenager, rather than just being on Facebook or Instagram for example.' (IDD)
One participant, who was exposed to graphic self-harm imagery on a mental health Facebook group, noted that approving posts before they are shared could be an easily implemented safeguard:

'…you actually have to have admin approval before you approve it.' (IDG)
'Before it goes on the group. Now this [mental health Facebook] group doesn't have that and this is the thing that was worrying' (IDG)
Some participants went on to describe the moderator role in online spaces and whether this should be undertaken by trained professionals or by those with lived experience. Two of those participants expressed the importance of moderators with personal experience of SH/S:

'That's the great thing about [Self-Harm support Facebook group is] that it's not run

'I think having people have a lived experience doing the moderation is more
The general belief was that these moderators would have a better understanding of the content posted in the online communities, allowing them to make informed judgements about what should be moderated:
'it's not like you need like a psychiatrist or a counsellor on there or something like that. You know you don't need that you just need somebody that's been accountable for it… you know that's not just some random person.' (IDB)
Despite this, one individual preferred the concept of mental health professionals undertaking moderation roles:

'It definitely needs to have someone who's trained in either like mental health first aid and specifically suicide and self-harm or like a proper professional' (IDJ)
Participants also highlighted the importance of implementing mechanisms to prevent those with lived experience becoming too entrenched in a community's mindset, which may be harmful to them. One suggestion was to have regular changes in moderators as a preventative measure. Another was to provide support to moderators from mental health professionals, to ensure they could handle content they may encounter, and maintain their own wellbeing:

'…because I think you can get just indoctrinated into what you're doing, [with] everybody else on the site [having] the same problem. So whether it be something where moderators are changed regularly I don't know, something like that.' (IDH)
'it feels like you need maybe somebody [to take] a step back. It may be that [Self-Harm Support Group] needs a professional that's going to regularly contact the people that are moderating and the people that are running these things to make
Three participants (IDA, IDB, IDI) had become moderators or administrators on the platforms where they accessed SH/S content. The decision to take on this role was driven by a hope to help others, and ensure rules within the online spaces were adhered to, as well as an altruistic desire to give something back to the communities that had supported them:

'If I can help support that even a little bit to make it a bit safer and keep people safe that are also vulnerable, it's just my little way of saying thank you, I guess, for the support I've got and enabling other people to be able to still continue to get that
These participants reflected on the significant responsibility associated with moderating online SH/S content. IDI shared their personal recognition of the need to disengage from a moderation/administrator role when their own mental health declined. They expressed concern about their ability to effectively address someone else's issues in such circumstances, as well as the potential negative impact of engaging with another user's distress on their own wellbeing:

'I think the other thing is I won't open a DM unless, again, I'm in a mood where I think I'll be able to handle it because I don't want a clash of negativity or anything to end up going wrong and, like you said, it's the whole responsibility thing. So you don't want someone to hear your point of view when you're feeling negative
However, IDB seemed less able to manage their behaviour in this way. They recalled an incident where they encountered a video in the group they moderated that presented a SH method that was novel to them. This resulted in them subsequently acquiring the items needed to carry out this method:
'Obviously, it [SH method post] got taken down from the main group… I'm first line of defence. It's just that I keep getting hit by it at the moment because I'm the one that's moderating it…it was talk about [SH method] and since then, I've bought [SH
IDB, when considering the best approach to moderation, drew upon their own experience of encountering strict moderation methods. They found this experience influenced their gentler approach, recognising how more forceful moderation could potentially isolate vulnerable users:

'Different people do it different ways. I'm normally quite gentle because quite often, especially with the [Self-Harm Support Group] rules, the reason they're breaking the rules is from a really nice place…We have got some people on the other end of the spectrum who say, 'You agreed to the rules when you joined this group. You are breaking the rules. This is your one and only warning.' I've been on the receiving end of that one and my first warning was like that and it made me nearly leave the group
Additionally, IDI emphasised that their approach to moderation involved being open and transparent, a quality they had criticised platforms for lacking in their own moderation practices:

'I think if I thought that a comment would be damaging to someone else, then I'd delete it and then I'd message the person who sent the message to explain why I've
3. Discussion
Findings from this study suggest that although users engaging with SH/S content online favour the use of moderation and moderators in these spaces, the current methods used by platforms and communities may be inadequate to provide a safe and effective user experience.
Many online environments rely on user reporting to ensure moderators can successfully enforce community rules [3]. However, participants in this study revealed an inherent inability to undertake community moderation through reporting SH/S content, particularly during times of poorer mental health. Participants described how these struggles to report content stemmed from a fear of potentially stigmatising other users, and a desire to access the content themselves, despite recognising the potential dangers of the material. This emphasises the complexity of decision-making processes for vulnerable users in online spaces and challenges the traditional content reporting strategies employed by platforms.
Although participants described difficulties in reporting SH/S content, several users did take on moderating roles in these online environments, with largely altruistic intentions. However, they also found their ability to moderate fluctuated alongside their own mental health, a concern recognised in previous research [6, 12]. While one user was able to identify their mental health declines and take action to protect themselves and others by temporarily withdrawing from their moderating/administrative duties, another was unable to safeguard themselves and became at risk of harm whilst undertaking their moderating role.
When considering SH/S content moderation by others, participants in this study described a lack of consistency in how platforms and community moderators responded to SH/S content. Uncertainty around moderation processes and their
results not only led to disappointment, frustration, and a perceived loss of control amongst participants, but also added a speculative element to many participant accounts. Several also criticised 'catch-all' reactive policies for SH/S material, where content could be immediately removed, or users banned without warning. These punitive moderation methods were viewed as non-empathetic, harmful, and potentially stigmatising, consistent with user experiences of Facebook and Instagram's ban of graphic self-harm imagery [5].
Participants largely agreed that lived-experience moderation was important, due to the moderators' understanding of user experiences. However, many also emphasised the risks to moderators' own wellbeing from engaging with content and users in these spaces. Therefore, similar to participants in previous research [6] and recommendations from Facebook [16], some suggested that moderators should receive training and supervision from mental health professionals.
Using data from the DELVE research study [19] presented a valuable opportunity to explore user experiences of moderation, moderators, and moderating in the context of engaging with SH/S content. This study is particularly relevant in the context of ongoing changes in legislation, such as the implementation of the Online Safety Bill [17] in UK law and the Digital Services Act [20] in Europe. Despite this, it is essential to acknowledge certain limitations in our approach. This study aimed to recruit participants who had engaged with SH/S content online. It was evident that participants encountered such content via several different platforms. Although this gave us good diversity, allowing for a broad
understanding of user experiences, it made more nuanced platform-specific insights and implications difficult. Another limitation of this study was that participants who undertook moderator roles were doing so voluntarily at the community level, and therefore we lacked insights from platform-level moderators of SH/S content. Additionally, where participants considered lived experience vs. mental health professional moderators in their accounts, they did not explicitly consider mental health professionals with lived experience of mental illness, potentially overlooking a factor affecting moderator effectiveness. Finally, we also struggled to recruit any participants who publicly posted the most explicitly harmful SH/S content (including graphic images/videos, and methods information) online. Exploring this perspective could offer valuable insights into participant perceptions regarding content reporting and removal, and how they respond to these actions.
Findings from this study should inform moderation practices by online industry leaders, with the following recommendations:
- Platforms and communities should recognise the challenges faced by users of online mental health spaces in reporting SH/S content, and should adopt proactive moderation strategies rather than relying on user reports alone.
- Platforms and communities should consider integrating support from mental health professionals into their moderation teams; at a minimum, moderators should have access to training and supervision from such professionals.
- Platforms and communities should consider the ongoing well-being of moderators. In doing so, they have a responsibility to ensure moderators have the capacity to recognise their own mental health concerns. This includes re-evaluating current
models to ensure adequate time and mechanisms are allocated to moderators to look after their own wellbeing.
- Platforms and communities should develop clear and consistent approaches to moderating SH/S content, prioritising the safety of the overall userbase whilst also considering the wellbeing of individuals posting content. Platforms should allow for nuance in moderator decision-making processes, and prioritise the use of clear language and messaging to users around decisions made. They should also consider provision of support to users whose content is reported or removed.
- Platforms and communities should consider using an open dialogue approach with users in their moderation practices. This will enable users to stay informed of the processes their content or account may be undergoing and ensure transparency from platforms and communities regarding their moderation policies and guidelines.
These considerations may also provide insight to policymakers, such as Ofcom, in their role governing digital industry leaders. Policymakers should recognise the challenges online platforms and communities face in moderating mental health spaces, and advocate for strategies that prioritise user safety. Policymakers should support the initiatives outlined above for industry leaders, promoting a transparent and sensitive approach to moderation in online mental health spaces.
Findings from this study should also encourage further research into user experiences of moderation, moderators, and moderating in online mental health spaces. Exploring the perspectives of platform-level moderators overseeing SH/S content will further our understanding of moderation in these spaces and how moderators can best be supported.
4. Method
Data collected as part of the DELVE study [19] were analysed. Participants were interviewed at three time points (baseline, midpoint, endpoint) over a six-month study period about their engagement with SH/S content online. Interviews were semi-structured, with open questions used to elicit participant-led accounts. In addition, participants were asked to maintain daily research diaries during the intervening periods. Participants were asked to reflect on their experiences of engaging with content, having content blocked or reported, or reporting others' content during their interviews and in their diaries. In the interview, 'moderation' was an additional topic area where participants were asked to share their thoughts on moderators they had encountered online, their experiences of others' content being moderated, and their views on how moderation could be improved.
The study was approved by The University of Bristol Faculty of Health Sciences Ethics Committee (approval no. 117491). All participants provided written informed consent prior to participation.
Participants were aged 16 years or over, English-speaking, and online users who had engaged with self-harm or suicide content. Recruitment methods (outlined in [19]) involved multiple channels such as social media platforms, a mental health app targeting young people, and charity websites and newsletters. Participants responded to study adverts and were assessed for eligibility before being sent the study information sheet. Purposive sampling was used to promote diversity in gender, ethnicity, and age of sampled participants.
Thematic analysis [18] with deductive coding informed by interview questions was used, following three steps:
1. Interviews were transcribed, and researchers read and familiarised themselves with both the participant transcripts and their diary data, to gain overall understanding.
2. ZH began coding the data. Codes were systematically assigned to quotes which captured certain concepts, or ideas about moderation or moderators. Coding was iterative, and codes were refined and revised in consultation with LB.
3. Similar codes were grouped together to form themes representing patterns in concepts or ideas. Themes were then explored narratively in the context of the research aims.
5. Data Availability
Anonymised transcript and questionnaire data will be stored on the University of Bristol's Research Data Service Facility. Researchers will be able to request access to non-identifiable data upon reasonable request. Access will be subject to a data access agreement and following approval from Dr Lucy Biddle and the University of Bristol Data Access Committee.
6. Funding
This study was supported by the National Institute for Health and Care Research Bristol Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. Original data collected as part of the DELVE study was funded by Samaritans, UK.
7. Competing Interests
We declare that there are no competing interests to disclose for any of the authors involved in this work.
8. Author Contributions
Zoë Haime: Project Administration, Investigation, Methodology, Formal Analysis, Writing
(Original Draft), Writing (Review & Editing), Visualisation; Lucy Biddle: Funding Acquisition, Supervision, Writing (Review & Editing), Validation; Laura Kennedy: Investigation, Validation, Writing (Review & Editing); Lydia Grace: Conceptualisation, Writing (Review & Editing).
9. Acknowledgements
We express gratitude to the DELVE participants for their contribution to this work. Appreciation also extends to networks that facilitated recruitment, including TellMi, Samaritans, SMaRteN, McPin Foundation, and MQ Mental Health. We also acknowledge the contributions of our colleagues Jane Derges and Rachel Cohen, for their involvement in the DELVE study's conception. We would also like to acknowledge our funders: National Institute for Health and Care Research Bristol Biomedical Research Centre and Samaritans, UK.

References
2. York, J. C., & Zuckerman, E. (2019). Moderating the public sphere. In: Human rights in the age of platforms. MIT Press.
3. Jhaver, S., Birman, I., Gilbert, E., & Bruckman, A. (2019). Human-machine collaboration for content regulation: The case of Reddit Automoderator. ACM Transactions on Computer-Human Interaction, 26(5), 1–35.
4. Kiesler, S., Kraut, R., & Resnick, P. (2012). Regulating behavior in online communities. In: Building successful online communities: Evidence-based social design. MIT Press.
5. Smith, H., & Cipolli, W. (2022). The Instagram/Facebook ban on graphic self-harm imagery: A sentiment analysis and topic modeling approach. Policy & Internet, 14(1), 170–185.
6. Saha, K., Ernala, S. K., Dutta, S., Sharma, E., & De Choudhury, M. (2020). Understanding moderation in online mental health communities. In: Meiselwitz, G. (ed) Social Computing and Social Media. Participation, User Experience, Consumer Experience, and Applications of Social Computing. HCII 2020. Lecture Notes in Computer Science, vol 12195. Springer, Cham. https://doi.org/10.1007/978-3-030-49576-3_7
7. Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media & Society, 20(12), 4492–4511.
8. Chancellor, S., Pater, J. A., Clear, T., Gilbert, E., & De Choudhury, M. (2016). #thyghgapp: Instagram content moderation and lexical variation in pro-eating disorder communities. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16). Association for Computing Machinery, New York, NY, USA.
https://doi.org/10.1145/2818048.2819963
9. Moreno, M. A., Ton, A., Selkie, E., & Evans, Y. (2016). Secret Society 123: Understanding the language of self-harm on Instagram. Journal of Adolescent Health, 58(1), 78–84.
10. Stoilova, M., Edwards, C., Kostyrka-Allchorne, K., Livingstone, S., & Sonuga-Barke, E. (2021). Adolescents' mental health vulnerabilities and the experience and impact of digital technologies: A multimethod pilot study. London School of Economics and Political Science and King's College London. Accessed online 13/02/2024:
11. Biddle, L., Derges, J., Mars, B., Heron, J., Donovan, J. L., Potokar, J., Piper, M., Wyllie, C., & Gunnell, D. (2016). Suicide and the Internet: Changes in the accessibility of suicide-related information between 2007 and 2014. Journal of Affective Disorders, 190, 370–375.
12. Steiger, M., Bharucha, T. J., Venkatagiri, S., Ridel, M. J., & Lease, M. (2021). The psychological well-being of content moderators: The emotional labor of commercial moderation and avenues for improving support. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 341, 1–14. https://doi.org/10.1145/3411764.3445092
13. Coulson, N. S., & Shaw, R. L. (2013). Nurturing health-related online support groups: Exploring the experiences of patient moderators. Computers in Human Behavior, 29(4), 1695–1701.
14. Spence, R., Bifulco, A., Bradbury, P., Martellozzo, E., & DeMarco, J. (2023). Content moderator mental health, secondary trauma, and well-being. Cyberpsychology, Behavior, and Social Networking. https://doi.org/10.1089/cyber.2023.0298
15. Google (2023). Google Wellness Standards for Sensitive Content Moderation.
16. Halevy, A., Ferrer, C. C., Ma, H., Ozertem, U., Pantel, P., Saeidi, M., Silvestri, F., & Stoyanov, V. (2022). Preserving integrity in online social networks. Communications of the ACM.
17. Online Safety Bill. (2023). Parliament: House of Commons. Bill no. HL Bill 87(Rev). London.
18. Braun, V., & Clarke, V. (2012). Thematic analysis. In: APA handbook of research methods in psychology, Vol. 2: Research designs: Quantitative, qualitative, neuropsychological, and biological. Washington, DC, US: American Psychological Association. p. 57–71.
19. Haime, Z., Kennedy, L., Grace, L., Cohen, R., Derges, J., & Biddle, L. (2023). The journey of engaging with self-harm and suicide content online: A longitudinal qualitative study. JMIR Preprints. https://doi.org/10.2196/preprints.47699
20. [Digital Services Act] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC.