Flaming and Trolling
Claire Hardaker
Abstract: In the last two decades, pragmatic explorations of flaming and troll-
ing in computer-mediated communication have gained momentum. However,
although there is little doubt that “attacks, assaults and contemptuous remarks”
(Jucker and Taavitsainen 2000: 73) have always been commonplace, some
researchers (cf. Moor, Heuvelman and Verleur 2010; Nitin, Bansal and Kanzashi
2011) have recently argued that certain types of conflict, such as flaming and trolling, are particularly native to social media. To this end, this chapter aims to provide
new insights into current pragmatic research into flaming and trolling, including
how these terms are defined and deployed, case studies that illuminate how these
behaviors are accounted for by existing literature, and the current challenges that
face these fields and their development into the future.
1. Introduction
In recent years, the media has pounced on what I have, in previous research, loosely
termed negatively marked online behaviors (NMOB) (Hardaker 2010, 2013, 2015;
Hardaker and McGlashan 2016). NMOB is behavior that, for whatever reason,
others consider to fall outside of the bounds of social decorum for that particular
online interaction. It could involve something as mild as raising a topic that falls
outside of the scope of a given forum, or more seriously, posting crude jokes on a
tribute page set up to honor a deceased person. At the criminal end of the scale, it
might consist of repeatedly sending threats via emails or text messages, or sharing
someone’s personal, private details on public platforms.
The media’s interest in NMOB is unsurprising. The sheer breadth, depth, and
range of unpleasant online behaviors provide an endless source of attention-grab-
bing headlines and exposés. Likewise, the people behind that behavior, whether
characterized as trolls, cyberstalkers, or online predators, make for simplistic,
almost cartoon-like villains that the vast majority of media-consumers can readily
position themselves against.
This fervor of interest in online abuse, though so pervasive now, is a
relatively new phenomenon. The internet could be described as well-established
across numerous countries by the 1990s, yet as late as 2010, antisocial online
behavior was virtually ignored by the news, academia, and even to an extent, the
online platforms that were home to those behaviors themselves. Even at the time
of writing, there is still surprisingly little linguistic research on NMOB, and what
interest we can find tends to be distributed unevenly. We can readily find interest
in topics such as spam (e.g. Barron 2006; Stivale 1997), cyberbullying (e.g. Strom
and Strom 2005; Topçu, Erdur-Baker and Çapa-Aydin 2008), cyberstalking (e.g.
Bocij 2004; Whitty 2004), aggressive video games (e.g. Dill and Dill 1998; Scott
1995; van Schie and Wiegman 1997) and computer-related depression (e.g. Kraut
et al. 1998). Likewise, flaming had arguably arrived by the turn of the millennium,
with interest from fields as diverse as psychology (Collins 1992; Lea et al. 1992),
linguistics (Arendholz 2013; Herring 1994; Jucker and Taavitsainen 2000), media
and cultural studies (Millard 1997; O’Sullivan and Flanagin 2003), sociology (Lee
2005) and information science (Kayany 1998).
Trolling, however, struggled to gain validity as a field of interest until as late
as 2010. For instance, by the end of the 1990s, there had been a handful of articles
on the politics (Tepper 1997) and deceit (Donath 1999) found in online practices.
By 2005 new research was being published about online deception (Placks 2003;
Utz 2005; Zhou et al. 2004), but very little directly about trolling (see, however
Herring et al. 2002). Finally, by 2010, research dealing specifically with troll-
ing (e.g. Hardaker 2010; Shachaf and Hara 2010; Shin 2008) started to appear
in greater numbers. And by 2015, the growth had become exponential, spanning
fields as diverse as Artificial Intelligence (Dlala et al. 2014), computing (Cheng,
Danescu-Niculescu-Mizil and Leskovec 2015; Fichman and Sanfilippo 2015),
communication studies (Brabazon 2012), cultural studies (Phillips 2015), media
(Binns 2011; McCosker 2014), politics (Virkar 2014), psychology (Buckels, Trap-
nell and Paulhus 2014; Maltby et al. 2015), public relations (Weckerle 2013) and
of course, linguistics (Hardaker 2013, 2015; Hardaker and McGlashan 2016; Hop-
kinson 2013).
Why such reluctance to confront what might be viewed as an online epidemic?
One problem may be a lingering sense that computer-mediated communication
(CMC) is somehow not as worthy a subject of investigation as offline interaction,
and some research has compounded this devaluation by dichotomising virtual and
real-life activity (e.g. Baym 1996: 342; Chiou 2006: 547; Strom and Strom 2005:
41). A secondary problem from the academic perspective may be with
[…] the variability in the perceptions of norms and expectations underlying evaluations
of behaviour as polite, impolite, over-polite and so on, and thus inevitably discursive
dispute or argumentativity in relation to evaluations of im/politeness in interaction. Yet
with the exception of work by Locher (2006) and Graham (2007, 2008), there has been
little research on im/politeness in various forms of computer-mediated communication
from this perspective. (Haugh 2010: 8)
These problems are not limited to the analyst. CMC, by its very nature, is both rel-
atively new and changing extremely quickly. Analysts and users alike may strug-
gle to evaluate the appropriacy of online utterances in light of their own norms and
expectations. With this in mind, this chapter moves next into a brief overview of
CMC and the peculiarities that this context affords. The current state of the art of
academic research on flaming and trolling in CMC in general and social media in
particular is discussed in Sections 3 and 4 respectively. Finally, Section 5 looks
towards promising future directions.
2. Computer-mediated communication
As we will see in the coming sections, the current focus of CMC
research is on exploring the diversity, creativity, and meaning of linguistic choices,
the construction of online identities, and the interplay between these two.
Not all the diversity, creativity, and constructions are positive, however. In
fact, almost from its first days, CMC became a fertile ground for misbehavior and
crime, whether in the form of sharing illegal images, finding potential victims, or
just being extremely unpleasant to others from the safety of a distant keyboard. As
both Donath (1999) and Sternberg (2000) observed, as a direct result of this, many
CMC platforms are built and managed with conflict at the forefront of almost
every aspect. The evidence for this can be found in the technology, in the form of
block, ban, or ignore features. They are usually also documented in the
site’s own literature: “An extensive description of killfile techniques in a group’s
FAQ is a kind of virtual scar-tissue, an indication that they have had previous trou-
ble with trolls or flame-wars” (Donath 1999: 48).
In the space of a short chapter, it is impossible to do justice to all the factors
that play into abusive online behavior, so only three major aspects – anonymity,
meaning, and format – are covered below.
As long ago as 380 BC, Plato (2007: 2.359c-2.360d) used the story of the Ring of
Gyges as an insight into the ways in which visibility, invisibility, and anonymity
could encourage a shepherd to murder a king. More recently, Lea and Spears (1991)
observed how anonymity could influence group decision-making processes, whilst
Reicher, Levine and Gordijn (1998) considered the ways that it affected power
relations within groups.
Particularly on very large social networks, little prevents users from carry-
ing out online abuse with fabricated, cloned, or even their own accounts if social
stigma does not concern them (Chester and O’Hara 2009; Phillips 2002; Zarsky
2004). Siegel et al. (1986) elaborate on the ways that anonymity can foster a sense
of impunity, a loss of self-awareness, and a likelihood of acting upon normally
inhibited impulses – an effect known as deindividuation (1986: 161). Psycholog-
ically, interactants may give less consideration to the recipient’s feelings. This,
according to Douglas and McGarty, is manifested in NMOB like flaming and troll-
ing (2001: 399). When we add to this, “the potential for reaching a diverse global
audience, consisting of hundreds of cultures, it is unsurprising that conflict is a
common phenomenon in Usenet” (Baker 2001).
This disjuncture between offline and online communication, however, is not
the only anvil on which conflict is wrought. Even the communication of straight-
forward meanings can be more complex than first appears.
3. Flaming
Before reviewing the literature, it is interesting to look back into the history of the
word flaming, because, though it may sound or feel new, this concept has been
with us for some time.
3.1. Etymology
A quick glance at any reasonably thorough dictionary shows that flame (v.) was
used as early as the 1500s to describe a violent, passionate outburst. As might
be expected of an ancient word that describes one of the earliest elemental con-
cepts, the metaphorical extension of the word flame sits within a constellation of
other metaphorical extensions, all dedicated to conveying the nuances of anger and
provocation. For instance, a fiery, hot-tempered, explosive person might fan the
flames by making heated, inflammatory or incendiary comments, and we might
simmer over an insult, burn with indignation, boil with rage, or even blow our top
like an erupting volcano. In short, high temper has long been associated with high
temperature, and when we turn to the modern era, CMC simply appears to have
co-opted this ancient fire-as-rage metaphor and applied it to the new phenomenon
of hasty, vitriolic and excessive online outbursts.
When we turn back the pages of academic research, however, we find a differ-
ent story. As already mentioned above, unlike trolling, flaming has received more
academic attention (see, for instance, Avgerinakou 2008; Chester 1996; Herring
1994; Kayany 1998; Lea et al. 1992; Millard 1997), and greater interest typically
results in a proliferation of definitions, rather than a coalescing agreement on one
accepted understanding. Despite this greater interest and the long history that this
metaphor has beyond the realms of CMC, though, as recently as fifteen years ago,
the concept was still regarded in a somewhat fuzzy manner. For instance, under the
heading, 20th century adolescents: Sounding and flaming, Jucker and Taavitsainen
(2000) describe flaming thus:
The other institutionalized form of insults is the practice of flaming on the internet. It
appears to be particularly common in news groups, where a large number of partici-
pants can submit email postings under the cover of anonymity. In this context, flaming
is considered to be bad style and is rejected by the code of behavior on the internet, the
so-called netiquette.1 (Jucker and Taavitsainen 2000: 90)
This description really doesn’t tell us what flaming is at all, beyond the fact that
it is considered “bad style”, and the example that Jucker and Taavitsainen (2000) subsequently provide comes from alt.flame, a newsgroup dedicated specifically to flaming. Jucker and Taavitsainen (2000) acknowledge that by the very nature
of such a group, the flamewars carried out there are probably “of an entirely ludic
nature” (Jucker and Taavitsainen 2000: 91). In short, this tells us very little about
what we might think of as genuine flaming. Only a year later, however, Baker
(2001) gives us a more thorough definition:
Antagonistic postings are known as flames (Siegel et al. 1986) and prolonged, escalat-
ing conflicts are often referred to as flame wars. In flame wars flames can give rise to
other flames, involving more and more posters, some who may be angry that the flame
war is taking over the newsgroup. The tone of flames is intentionally aggressive and
numerous methods of attack are used, ranging from intellectualized debate, through
biting sarcasm to scatological abuse. (Baker 2001)
1 This also rather worryingly seems to suggest the existence of only one universal netiquette!
And only a few years later, Johnson, Cooper and Chin (2008) describe flaming as
“the antinormative hostile communication of emotions […] that includes the use
of profanity, insults, and other offensive or hurtful statements” (Johnson, Cooper
and Chin 2008: 419).
Intentionally or otherwise, Chaplin appears to have clicked “Reply All” and sent
this response not only to Katsampoukas, but to all the other 4,000 recipients of the
original email as well. Chaplin’s identity was eventually traced through his ISP,
and as a consequence, Stark Brooks asked him to resign (Atkinson 2011).
Whilst this is only one example, we find, as the literature suggests, language in
Chaplin’s email that could be characterized as “intentionally aggressive” (Baker
2001) due to the “profanity, insults, and other offensive or hurtful statements”
(Johnson, Cooper and Chin 2008: 419).
In another case, an irritated reaction that found its outlet in sarcastic humor
resulted not only in the producer of the content being fired, but also in an extended
legal battle and the involvement of multiple celebrities.
In January 2010, as snow and bad weather closed in around Doncaster, Robin
Hood Airport began to issue alerts about possible delays and closures. One
would-be passenger was trainee accountant Paul Chambers (then 26), who was
planning to fly to Belfast to finally meet face-to-face with Sarah Tonner (@crazycolours). As the weather worsened, Chambers tweeted several times:
(3) Paul Chambers @pauljchambers 06 Jan 2010
@Crazycolours: I was thinking that if it does then I had decided to resort to terrorism
(4) Paul Chambers @pauljchambers 06 Jan 2010
@Crazycolours: That’s the plan! I am sure the pilots will be expecting me to demand a
more exotic location than NI
We no longer have Sarah’s tweets, but we might infer that she had advocated
taking control of the aircraft. Having already mentioned a penchant for terrorism,
Chambers then tweeted his six hundred or so followers with the following:
(5) Paul Chambers @pauljchambers 06 Jan 2010
Crap! Robin Hood airport is closed. You’ve got a week and a bit to get your shit together
otherwise I’m blowing the airport sky high!!
Although the airport manager who found it, the senior airport official who was told about it, and even a police officer who investigated it all considered the tweet a joke rather than a credible threat, a week later four South Yorkshire police officers arrested Chambers at his workplace on suspicion of making a hoax bomb threat.
As a result, Chambers lost his job, and though he defended the tweet as a sarcastic
joke borne of frustration, he was ultimately convicted of “sending a public elec-
tronic message that was grossly offensive or of an indecent, obscene or menacing
character contrary to the Communications Act 2003”, leaving him with a fine and
a lifelong criminal record.
Over the next two and a half years Chambers lodged multiple appeals that
were increasingly vocally backed by celebrities including Stephen Fry. Finally, in
the high court, before the country’s most senior judge, Chambers’ conviction was
overturned – a very late acknowledgement, perhaps, that his ‘threat’ to bomb an
airport was really nothing more than a careless moment of irritated online venting.
If we are to draw out a common theme from both the available data and litera-
ture, then it is that flaming is an over-reaction to some sort of provocation, whether
that is an expletive-laden rant in reply to an unsolicited email or a sarcastic bomb-threat tweeted in a moment of frustration.

It is difficult to see how this definition could be used in such a way that it would not
also capture flaming, trolling, cyberbullying, cyberharassment, and sex offenders
grooming children online, as well as any CMC contribution perceived to be objec-
tionable or distressing for other reasons (e.g. posting a video of animal abuse). The
problem is little better when we consider legislation and policy guidance. From
the UK, for instance, the House of Lords’ Communications Committee published
its first report on social media and criminal offences. In this, trolling is described
as the “intentional disruption of an online forum, by causing offence or starting an
argument” (2014: ch2 § 9c). However, this extremely simplistic definition doesn’t
differentiate between someone disrupting a forum because they have (or think they
have) a genuine grievance and someone doing so simply for the sake of amusement.
Similarly, in guidelines on prosecuting cases involving communications sent
via social media, the Crown Prosecution Service alludes to flaming thus:
Examples of cyberstalking may include:
– Threatening or obscene emails or text messages.
– Spamming (where the offender sends the victim multiple junk emails).
– Live chat harassment or ‘flaming’ (a form of online verbal abuse).
– Leaving improper messages on online forums or message boards.
– Sending electronic viruses.
– Sending unsolicited email.
– Cyber identity theft. (CPS 2016, emphasis mine)
4. Trolling
Just as we began the section on flaming with a brief glance into the history of
the word, so it is insightful to do the same with trolling. We find, however, a less
clear-cut answer.
4.1. Etymology
The current definition of trolling may have derived from one of two distinct routes.
The first captures troll as a noun, and finds its roots in the late fourteenth century.
In Old Norse and Scandinavian mythology, a tröll was a large, strong, nasty crea-
ture that possessed supernatural powers, but that would also turn to stone in the
sunlight (Jakobsson 2006: 1; MacCulloch 1930: 285–286). These ancient myths
persist in folklore tales such as the Norwegian fairytale The Three Billy Goats
Gruff. The second possibility captures troll as a verb, and dates back at least as far
as the 1600s. In this respect, trolling derives from fishing, and involves drawing
baited fishing lines through the water.2 This variant also has longstanding metaphorical extensions that include luring others along with some form of bait, and exhaustively searching for something.

2 This is not to be confused with trawl-fishing, which involves dragging nets.
Currently, users are as happy to invoke terms relating to mythology for the
person (e.g. get back under your bridge, don’t feed the troll) as to fishing for
the act (e.g. biting, baiting, netting, and hooking) and the reality is that we will
probably never know which history was being drawn on when the first person
uttered troll in some semblance of its modern, online sense. Certainly, there is
little enough evidence to give us much hope of deducing it from the earliest online
postings. One matter that is clear, however, is that this term has been far more
heavily influenced by the media than flaming. This is important to note, since
scholars are not impervious to being influenced by widespread, mainstream narra-
tives.
When we look to the media, we find that interest was initially slow, and prior
to 2010, there are only scattered reports on trolling (e.g. Black 2006; Cox 2006;
Moulitsas 2008; Thompson 2009). Where it was discussed, typical definitions
described trolling as the posting of incendiary comments designed to provoke con-
flict: “Hiding behind the pseudonymity of a Web alias, trolls disrupt useful dis-
cussions with ludicrous rants, inane threadjackings, personal insults, and abusive
language” (Naraine 2007: 146). Brandel (2007) adds that “[a] troll is a person who
posts with the intent to insult and provoke others. […] The goal is to disrupt the
normal traffic of a discussion group beyond repair” (Brandel 2007: 32).
Heffernan (2008) highlights the unprovoked nature of trolling, along with
another goal – amusement at another’s expense:
Consider this question from David Hume: “Would any man, who is walking alone, tread
as willingly on another’s gouty toes, whom he has no quarrel with, as on the hard flint
and pavement?” […] Internet trolls regularly tread on gouty toes. They trick vulnerable
people with whom they have no quarrel; they upset those people; they humiliate them;
they break their hearts; they mess with them. They do it for something Hume didn’t
perfectly name: the lulz—the spiteful high. (Heffernan 2008)
The earliest academic attention to the subject matter also largely created defi-
nitions from intuition, the media, and online ephemera like The Troller’s FAQ
(1996). The result is that the term has been, and continues to be, used as an all-encapsulating term. For example, Herring et al. (2002: 372) and Turner et al. (2005)
describe trolling as luring others into frustratingly useless, circular discussion that
is not necessarily overtly argumentative. Donath (1999: 45) and Utz (2005: 50)
suggest that trollers can intentionally disseminate poor advice, thereby provok-
ing corrections from others. Tepper (1997: 41) explains how trolling can define
ingroup/outgroup membership: those who ‘bite’ signal novice, outgroup status,
whilst ingroup members will identify the troller, will not be baited, and may even
mock those who are. Donath (1999) and Dahlberg (2001) suggest that trolling is
a one-sided game of deception played on unwitting others: “The troll attempts to
pass as a legitimate participant, sharing the group’s common interests and con-
cerns” (Donath 1999: 45). Then, once the troll has developed its false identity and
been accepted into the group, they will set about disrupting the forum whilst trying
to conceal their true intent (Dahlberg 2001).
The main point here is that whilst some of these definitions overlap to an
extent, all seem to be describing a collection of symptoms, rather than one coher-
ent behavior or motive that is responsible for those actions. In an effort
to create an empirically informed definition, I analysed 3,727 user discussions of
trolling drawn from an eighty-six million word Usenet corpus and concluded that
trolling is “the deliberate (perceived) use of impoliteness/aggression, deception,
and/or manipulation in CMC to create a context conducive to triggering or antago-
nizing conflict, typically for amusement’s sake” (Hardaker 2013: 79).
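In practical terms, a corpus approach of this kind begins with a simple retrieval step: extract every post in which users themselves mention troll or one of its derivatives, and then read and code those metapragmatic discussions qualitatively. The sketch below illustrates only that retrieval step, on an assumed one-post-per-line file; it is a deliberately simplified stand-in rather than a reconstruction of the actual study.

```python
# A minimal sketch of retrieving metapragmatic mentions of troll* from a
# corpus, in the spirit of (though far simpler than) the Usenet study
# described above. The one-post-per-line file format is assumed.
import re

TROLL = re.compile(r"\btroll(?:s|ed|ing|er|ers)?\b", re.IGNORECASE)


def troll_mentions(corpus_path: str) -> list[str]:
    """Return every post that explicitly mentions troll, trolls, trolling, etc."""
    hits = []
    with open(corpus_path, encoding="utf-8") as f:
        for post in f:
            if TROLL.search(post):
                hits.append(post.strip())
    return hits
```

The retrieved posts would then be read and coded by hand: the definition emerges from how users themselves deploy and negotiate the term, not from the pattern match alone.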
In short, one distinction to be drawn between flaming and trolling, as hinted at
in the previous section, is that whilst flaming may be an over-reaction to a provo-
cation, trolling proactively strives to be a provocation in its own right.
Some cases, however, demand a far greater amount of context and background. For this, we turn to the case of
Brenda Leyland.
In 2007, three-year-old Madeleine McCann disappeared from the holiday resort
of Praia da Luz. Her parents, medical doctors Gerry and Kate McCann, were dining
at a nearby restaurant at the time, whilst Madeleine and her younger siblings slept
in their hotel bedroom. At around 10pm, Kate McCann went to check on her chil-
dren and discovered that Madeleine was gone. Despite extensive searches, she has
never been found. During the early parts of the inquiry, the McCanns were ques-
tioned by police about her disappearance, and media outlets were quick to voice
their suspicions and theories about how the McCanns might have been involved.
Meanwhile, though the platform was only a year old at the time, users of the new
social media site, Twitter, also took an interest in the story. Whilst some invested
hours trading guesses about the truth behind Madeleine’s disappearance, others spent their time sending the McCanns threats of violence, murder, and the abduction of their other children.
Responding to the media was relatively straightforward: the McCanns took
legal action and were ultimately awarded substantial damages, along with front-
page apologies. However, social media is a different and much more difficult plat-
form to regulate, and almost a decade later, havens of like-minded people still
interested in the Madeleine McCann case thrive. This brings us to 2014, and the
sleepy civil parish of Burton Overy in Leicestershire.
Brenda Leyland was educated at a convent school, went to church, enjoyed gar-
dening and photography, and was involved in the annual village scarecrow compe-
tition. Not all was perfect, however. A sixty-three-year-old mother of two sons, she
appears to have been estranged from the eldest, and her marriage had ended in 2001.
According to expert evidence at the inquest, Leyland had a history of attempted
suicide and was receiving both therapy and medication to help her deal with bouts
of severe depression and anxiety. A consultant psychiatrist who had treated Leyland
in the past also stated that she had lifelong unstable emotional personality traits.
On Twitter, Leyland had another identity: @sweepyface. The bio for sweepy-
face’s Twitter account simply read “Researcher”, and after a few dormant years, it
suddenly became active in November 2013. From that point to September 2014,
sweepyface sent roughly 4,600 tweets, and throughout 2014 alone, the number of
people following the account doubled from 93 to 183. The content of sweepyface’s
tweets skewed in a very particular direction: 87 % of them contained the word or
hashtag McCann. Meanwhile, references to the parents by their first names or ini-
tials occurred an average of once every ten tweets.
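Such figures are straightforward relative frequencies: the percentage of tweets matching a search term, and the total number of tweets divided by the number of matches. Purely to make the arithmetic concrete, the sketch below computes both measures over a handful of invented tweets; it does not draw on the actual @sweepyface timeline.

```python
# A sketch of the two descriptive statistics reported above, computed over a
# handful of invented tweets (not the real timeline).
import re

tweets = [
    "another #mccann thread worth reading",
    "nothing new in the case today",
    "more McCann coverage tonight, and Kate interviewed again",
    "the weather here is awful",
]

# Proportion of tweets containing the word or hashtag "McCann".
mccann_hits = sum(bool(re.search(r"#?mccann", t, re.IGNORECASE)) for t in tweets)
print(f"{100 * mccann_hits / len(tweets):.0f}% contain 'McCann'")

# Rate of first-name/initial references, expressed as tweets per mention.
name_hits = sum(bool(re.search(r"\b(?:kate|gerry|km|gm)\b", t, re.IGNORECASE))
                for t in tweets)
print(f"one name reference every {len(tweets) / name_hits:.1f} tweets")
```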
Within the online community of those campaigning about the McCanns,
sweepyface fell into a group that might be crudely titled the anti-McCanns – that
is, users who believe the McCanns to be responsible, to whatever degree, for their
daughter’s disappearance. By contrast, the pro-McCanns, as might be expected,
believe the parents innocent of most or all wrongdoing. Both groups employ the
Rather than responding publicly, Brunt contacted Leyland in private. Three days
later, on Thursday 02nd October, Sky News launched an exposé on “the McCann
trolls”. As well as radio reports and online articles, the investigation consisted of
an eleven-minute video report posted on YouTube.3 A shortened version of this
report was repeated on the main Sky News channel cycle throughout the day. In
that excerpt, Brenda Leyland is doorstepped by a camera crew led by Martin Brunt:
(13) Sky News, 18:12, Thu 02nd Oct 2014: ‘Evil’ Trolls In Hate Campaign Against McCanns:
video (04m 23s)4. Excerpt: 00m 00s to 01m 11s.
Brunt, voiceover: [Shaky footage as the camera person heads towards a car that Ley-
land is exiting.] This woman uses Twitter to attack the parents of
Madeleine McCann. On the internet, she’s anonymous. Not any-
more. [Leyland has been walking their way. As they meet, she looks
somewhat bewildered.]
Brunt: [Some speech obscured by voiceover.] …I’m Martin Brunt from
Sky News.
Leyland: Well I’m just about to go out.
Brunt: Well we’ve caught you. Can we talk to you about your Twitter?
Leyland: No.
Brunt: And your attacks on the McCanns.
Leyland: No.
Brunt: Erm. Why are you attacking them so regularly?
Leyland: Look I’m just going out with a friend. Okay?
Brunt: But why are you using your Twitter account …
Leyland: Excuse me. [Turns and walks back towards her car.]
Brunt: … to attack the McCanns?
Leyland: [Stops and turns back.] I’m entitled to do that Martin.
Brunt: You know you’ve been reported to the police, to Scotland Yard.
[Camera gets in front of Leyland.] They’re considering, er, a whole
file of Twitter accounts [camera gets closer to Leyland] and that …
Leyland: That’s fair enough.
Brunt: … what supporters say is a campaign of abuse against the McCanns.
Leyland: Okay well I’m going out. [Leyland turns and walks back towards
her car. She looks back briefly as if responding to something fur-
ther but the audio has cut to the voiceover at this point.]
Brunt, voiceover: On Twitter, she uses the name sweepyface with a profile picture of
a pet. She tweets many times a day, and mostly about the McCanns.
In one message she spread rumours about the couple’s marriage.
In another, she hoped Madeleine’s parents would suffer forever.
But sweepyface is not the worst. Almost from the day Madeleine
vanished seven years ago, her family have been targeted with vile
messages on the internet.
3 https://www.youtube.com/watch?v=qkAzz8Pwdvc
4 http://news.sky.com/story/1345871/evil-trolls-in-hate-campaign-against-mccanns
This tweet appears in amongst a wide range of other content from various sources,
some containing threats and incitement to violence towards the McCanns. The
identities of those other users are kept anonymous, however, leaving Leyland as
the sole face and identity of “the McCann trolls”. Within hours, other media out-
lets make the connection between sweepyface and Brenda Leyland. Her identity,
age and village are all published, and by the next day, across most of the media,
she is being vilified as a troll.
On Saturday 04th October, two days after the exposé, Leyland is found dead in
a room at the Marriott Hotel, Leicester. An inquest immediately opens, and some
five months later, on 20th March 2015, Coroner Catherine Mason records a verdict
of suicide. As part of the inquest, Detective Sergeant Steven Hutchings of Leices-
tershire Police verifies that none of Leyland’s tweets had amounted to a criminal
offence.
This case exemplifies, to an extreme degree, the problem raised repeatedly
throughout the chapter: namely, that the definitions of terms relating to negatively
marked online behaviors are unclear, inconsistent, and because of this, they can be
badly misapplied. To take the term troll specifically, the societal stigma that goes
with being branded such is serious enough to make an otherwise unknown individ-
ual front page news. However, this very term is also being semantically stretched,
particularly by the media, to such an extent that it is catching cases in which the
accused individual is arguably not a troll at all. In short, there are serious, real-
world consequences to the ways in which we define and apply terms like troll and
these need much more investigation.
The issue of terminology is not the only challenge faced by those researching
trolling. Given that trolling, as a field of enquiry, is virtually brand new, it would
be possible to list almost any aspect as a future challenge. However, there is one
defining element of trolling, when contrasted with flaming, that we find across the
literature: deception.
Literature in the field of im/politeness is currently grappling hard with the
place and scope of intentions in our understandings and productions of socially
(un)acceptable behavior (see, for instance, Arundale 2008; Haugh 2008; Spencer-
Oatey and Xing 2006). But how does this relate to trolling or deception? Brown and
Levinson (1987) noted very early on that someone might express themselves such
that “there is more than one unambiguously attributable intention so that the actor
cannot be held to have committed [them]self to one particular intent” (Brown and
Levinson 1987: 69).
Whilst Brown and Levinson are talking about politeness, this can equally apply
to any form of interaction, from courtroom examination to political interviews to
online trolling. As in the definitions proposed in § 4.1 above, a troll may “attempt
to pass as a legitimate participant” (Donath 1999: 45) and then, “after developing
their false identity and becoming accepted within a group, the troll sets about dis-
rupting proceedings while trying to maintain his or her cover” (Dahlberg 2001).
If indeed a troll chooses a covert strategy, should they be challenged, they may
excuse their behavior as accidental, unwitting, incidental, and so on, whilst deny-
ing any intention to offend. Similarly, just as a troll might disguise their intentions
to deliberately offend, a target might be disingenuous about their interpretations of
that behavior, and pretend to be amused rather than offended as a way to save face.
In summary, the debate involving the role of intentions has received considerable attention both within and even outside of pragmatics,5 but virtually none of it considers deception as part of that process.
Nefarious deception (that is, deception motivated by personal gain or harming
others, as opposed to that motivated by kindness or politeness) reaches beyond
intentions, however. It can also encapsulate the formation of an identity that is
inconsistent with one’s offline self, if done with malicious purposes in mind – an
aspect that is heavily facilitated by the anonymity that CMC can offer. This may
even stretch as far as creating multiple accounts deliberately made to look like
separate people, so that the user can cause greater disruption by the appearance of
strength in numbers (Chester and O’Hara 2009; Phillips 2002; Zarsky 2004). Little has been done in this area from linguistics.

5 See, for instance, Bach (1987), Davis (1998, 2007, 2008), Gibbs (1999, 2001), Green (2007, 2008), Jaszczolt (2005, 2006), Keysar (2007), Recanati (1986), Saul (2001), Searle (1983, 1990), Thompson (2008).
Finally, deception also captures straightforward lies, such as giving advice that
the troll knows to be dangerous or incorrect in the hopes of causing hurt or harm,
or promoting themselves as trustworthy, to gain greater traction within the group.
Whilst there is very little work within linguistics on what we might crudely call ‘narrative deception’ (as opposed to intention deception or identity deception), there is fortunately a wealth of excellent work to be found in psychology,6 which may help move this area forward.
5. Future directions
From almost every angle, research into flaming and trolling is very new. At the
time of writing, the field that these topics sit within – CMC – has been established
for barely thirty-five years (see, for instance, Kaye 1985; Kerr and Hiltz 1982;
Kiesler, Siegel and McGuire 1984). Flaming made its debut some ten years after
that (Lea et al. 1992), whilst trolling took another decade to arrive (Herring et al.
2002). The result, then, is that almost every possible avenue qualifies as a future
direction, but there are some that are worth repeating, or raising afresh.
Perhaps the most pressing issue, as discussed throughout this chapter, is clarity
on what terms like flaming and trolling mean. It is unsurprising for new fields to
go through a period of turmoil as concepts become established and certain con-
sensuses are reached. However, in this particular area of research, a lack of clarity
can have extremely serious, real-world impacts, as in the case of Brenda Leyland.
Clear definitions also become paramount when issuing legal guidance and even
legislation itself. Determining that a behavior is illegal relies heavily
on being able to identify that behavior in the first place, and here, linguists have
much to offer, particularly from the fields of forensic linguistics, corpus linguistics
(e.g. through observation of large-scale patterns), and pragmatics.
Further, despite the now firmly-established field of impoliteness, which includes
seminal work by the likes of Culpeper (1996), Spencer-Oatey (2005), and Bousfield
(2008), there is surprisingly little research into either flaming or trolling that seems
to draw upon this (see however, Graham 2007, 2008; Herring 1994; Locher 2006).
In short, aligning impoliteness research and research into negatively marked
online behaviors – particularly flaming – could benefit both since they have marked
similarities, such as the variability in perceptions of what constitutes impoliteness,
trolling, or flaming in the first place, and the evaluations of degrees of hostility
made by the participants (Graham 2007: 743). Similarly, research into flaming and trolling may be substantially supported by work in psychology, criminology, sociology, and computing.

6 See, for instance, Akehurst et al. (1996), Ekman (1996), Memon, Vrij and Bull (2003), Rubin (2010), Vrij (2000), Vrij et al. (2000).
In short, for those interested in researching negatively marked online behavior,
a wide vista of possibilities is open for exploration. If there remains any lingering
sense that CMC is somehow not as worthy a subject of investigation as offline
interaction, hopefully this chapter has shown that it is not only a serious area, but
also one that should be ventured into carefully, and with a great deal of sensitivity.
Acknowledgments
This work was supported by the Economic and Social Research Council through
two grants: the Twitter Rape Threats and the Discourse of Online Misogyny pro-
ject (grant reference ES/L008874/1) and the ESRC Centre for Corpus Approaches
to Social Science (grant reference ES/K002155/1).
References
Baker, Paul
2001 Moral panic and alternative identity construction in Usenet. Journal of Computer-Mediated Communication 7(1). http://jcmc.indiana.edu/vol7/issue1/baker.html (accessed 08/12/09).
Barron, Anne
2006 Understanding spam: A macro-textual analysis. Journal of Pragmatics 38:
880–904.
Baym, Nancy
1996 Agreements and disagreements in a computer-mediated discussion. Research
on Language and Social Interaction 29(4): 315–345.
Bernstein, Michael S., Andrés Monroy-Hernández, Drew Harry, Paul André, Katrina Pano-
vich and Greg Vargas
2011 4chan and /B/: An analysis of anonymity and ephemerality in a large online
community. Association for the Advancement of Artificial Intelligence: 1–8.
Binns, Amy
2011 Don’t feed the trolls: Managing troublemakers in magazines’ online communi-
ties. Mapping the Magazine 3. http://www.people.vcu.edu/~dgolumbia/classes/
1314.2.spr2014/engl391/resources/Trolls.pdf.
Black, Lisa
2006 It’s a troll’s ‘life’ for some: Online games raise addiction concerns. Chicago
Tribune November 30th: 1.
Bocij, Paul
2004 Cyberstalking: Harassment in the Internet Age and How to Protect Your Fam-
ily. Westport: Praeger.
Bolter, Jay David and Richard Grusin
1998 Remediation: Understanding New Media. Cambridge, MA: The MIT Press.
Bousfield, Derek
2008 Impoliteness in Interaction. Amsterdam/Philadelphia: Benjamins.
Brabazon, Tara
2012 Digital Dialogues and Community 2.0: After Avatars, Trolls and Puppets.
Oxford: Chandos Publishing.
Brandel, Mary
2007 Blog trolls and cyberstalkers: How to beat them. Computerworld May 28: 32.
Brown, Penelope and Stephen C. Levinson
1987 Politeness: Some Universals in Language Use. Cambridge: Cambridge Uni-
versity Press. (Original edition 1978. Reprint 1987.)
Bucholtz, Mary
1999 "Why be normal?”: Language and identity practices in a community of nerd
girls. Language in Society 28(2): 203–223.
Bucholtz, Mary and Kira Hall
2005 Identity and interaction: A sociocultural linguistic approach. Discourse Stud-
ies 7(4–5): 585–614.
Buckels, Erin E., Paul D. Trapnell and Delroy L. Paulhus
2014 Trolls just want to have fun. Personality and Individual Differences 67:
97–102.
Chandler, Daniel
1995 Technological or Media Determinism. http://www.aber.ac.uk/media/
Documents/tecdet/tecdet.html.
December, John
1997 Notes on defining computer-mediated communication. CMC Magazine Janu-
ary. http://www.december.com/cmc/mag/1997/jan/december.html (accessed 19/07/08).
Dill, Karen E. and Jody C. Dill
1998 Video game violence: A review of the empirical literature. Aggression and
Violent Behavior: A Review Journal 3: 407–428.
Dlala, Imen Ouled, Dorra Attiaoui, Arnaud Martin and Boutheina Ben Yaghlane
2014 Trolls identification within an uncertain framework. Proceedings of the 2014
IEEE 26th International Conference on Tools with Artificial Intelligence.
http://arxiv.org/abs/1501.05272: 1011–1015.
Donath, Judith S.
1999 Identity and deception in the virtual community. In: Marc A. Smith and Peter
Kollock (eds.), Communities in Cyberspace, 29–59. London: Routledge.
Douglas, Karen M. and Craig McGarty
2001 Identifiability and self-presentation: Computer-mediated communication and
intergroup interaction. British Journal of Social Psychology 40(3): 399–416.
Ekman, Paul
1996 Why don’t we catch liars? Social Research 63(3): 801–817.
Elmer-Dewitt, Philip
1994 Bards of the Internet. Time July 4th: 66–67.
Ferris, Pixy
1997 What Is Cmc? An overview of scholarly definitions. CMC Magazine January.
http://www.december.com/cmc/mag/1997/jan/ferris.html.
Fichman, Pnina and Madelyn Rose Sanfilippo
2015 The bad boys and girls of Cyberspace: How gender and context impact per-
ception of and reaction to trolling. Social Science Computer Review 33(2):
163–180.
Gibbs, Raymond W.
1999 Intentions in the Experience of Meaning. Cambridge: Cambridge University
Press.
Gibbs, Raymond W.
2001 Intentions as emergent products of social interactions. In: Bertram Malle,
Louis Moses, and Dare Baldwin (eds.), Intentions and Intentionality, 105–
122. Cambridge, MA: MIT Press.
Graham, Sage Lambert
2007 Disagreeing to agree: Conflict, (im)politeness and identity in a computer-me-
diated community. Journal of Pragmatics 39: 742–759.
Graham, Sage Lambert
2008 A manual for (im)politeness?: The impact of the FAQ in an electronic commu-
nity of practice. In: Derek Bousfield and Miriam A. Locher (eds.), Impolite-
ness in Language: Studies on Its Interplay with Power in Theory and Practice,
324–352. Berlin/New York: de Gruyter.
Green, Mitchell
2007 Self-Expression. Oxford: Oxford University Press.
Green, Mitchell
2008 Expression, indication, and showing what’s within. Philosophical Studies 137:
389–398.
Jakobsson, Ármann
2006 The Good, the Bad and the Ugly: Bárðar Saga and Its Giants. Paper read at The
13th International Saga Conference: The Fantastic in Old Norse/Icelandic Liter-
ature, 06th–12th August 2006, at Durham and York. http://opac.regesta-imperii.
de/lang_en/anzeige.php?sammelwerk=The+Fantastic+in+Old+Norse+Icelan-
dic+Literature.+Preprint+Papers.
Jaszczolt, Katarzyna M.
2005 Default Semantics. Foundations of a Compositional Theory of Acts of Commu-
nication. Oxford: Oxford University Press.
Jaszczolt, Katarzyna M.
2006 Meaning merger: Pragmatic inference, defaults, and compositionality. Inter-
cultural Pragmatics 3(2): 195–212.
Johnson, Norman, Randolph Cooper and Wynne Chin
2008 The effect of flaming on computer-mediated negotiations. European Journal
of Information Systems 17(4): 417–434.
Jucker, Andreas H. and Irma Taavitsainen
2000 Diachronic speech acts: Insults from flyting to flaming. Journal of Historical
Pragmatics 1(1): 67–95.
Kayany, Joseph M.
1998 Contexts of uninhibited online behavior: Flaming in social newsgroups on
Usenet. Journal of the American Society for Information Science 49(12):
1135–1141.
Kaye, T.
1985 Computer-Mediated Communication Systems for Distance Education: Report
on a Study Visit to North America September/October 1985. Milton Keynes:
Open University Institute of Educational Technology.
Kerr, Elaine B. and Starr Roxanne Hiltz
1982 Computer-Mediated Communication Systems: Status and Evaluation. New York/
London: Academic Press.
Keysar, Boaz
2007 Communication and miscommunication: The role of egocentric processes.
Intercultural Pragmatics 4(1): 71–84.
Kiesler, Sara, Jane Siegel and Timothy W. McGuire
1984 Social psychological aspects of computer-mediated communication. American
Psychologist 39: 1123–1134.
Kirsh, Elana
2012 Untangling the Web: How Facebook ruined my holiday. Jerusalem Post May
21st: http://www.jpost.com/Opinion/Columnists/Article.aspx?id=270839.
Kraut, Robert, Jolene Galegher, Robert Fish and Barbara Chalfonte
1992 Task requirements and media choice in collaborative writing. Human-Com-
puter Interaction 7: 375–407.
Kraut, Robert, Michael Patterson, Vicki Lundmark, Sara Kiesler, Tridas Mukopadhaya and
William Scherlis
1998 Internet paradox: A social technology that reduces social involvement and psy-
chological well-being. American Psychologist 53(9): 1101–1137.
Kruger, Justin, Nicholas Epley, Jason Parker and Zhi-Wen Ng
2005 Egocentrism over e-mail: Can we communicate as well as we think? Journal
of Personality and Social Psychology 89(6): 925–936.
Panteli, Niki
2002 Richness, power cues and email text. Information and Management 40(2):
75–86.
Phillips, David J.
2002 Negotiating the digital closet: Online pseudonymity and the politics of sexual
identity. Information, Communication and Society 5(3): 406–424.
Phillips, Whitney
2015 This Is Why We Can’t Have Nice Things: Mapping the Relationship between
Online Trolling and Mainstream Culture. Cambridge, MA: The MIT Press.
Placks, Simon James
2003 Interpersonal Deceit and Lie-Detection Using Computer-Mediated Communi-
cation. Durham: University of Durham.
Plato
2007 The Republic. 3rd ed. London: Penguin Classic.
Recanati, Francois
1986 On defining communicative intentions. Mind and Language 1: 213–242.
Reicher, Steve, R. Mark Levine and Ernestine H. Gordijn
1998 More on deindividuation, power relations between groups and the expression
of social identity: Three studies on the effects of visibility to the in-group.
British Journal of Social Psychology 37: 15–40.
Rubin, Victoria L.
2010 On deception and deception detection: Content analysis of computer-medi-
ated stated beliefs. ASIST 2010 October 22nd–27th, 1–10. Pittsburgh, PA. http://
onlinelibrary.wiley.com/doi/10.1002/meet.14504701124/pdf.
Sanderson, David W.
1993 Smileys: Express Yourself Sideways. Sebastopol, CA: O’Reilly and Associates.
Saul, Jennifer
2001 Critical Studies: Wayne A. Davis, Conversational implicature: Intention and
convention in the failure of Gricean theory. Nous 35: 630–641.
Scott, Derek
1995 The effect of video games on feelings of aggression. The Journal of Psychol-
ogy 129: 121–132.
Searle, John R.
1983 Intentionality. Cambridge: Cambridge University Press.
Searle, John R.
1990 Collective intentions and actions. In: Philip R. Cohen, Jerry L. Morgan and
Martha E. Pollack (eds.), Intentions in Communication, 401–415. Cambridge,
MA: Bradford Books.
Shachaf, Pnina and Noriko Hara
2010 Beyond vandalism: Wikipedia trolls. Journal of Information Science 36(3):
357–370.
Shin, Jiwon
2008 Morality and Internet behavior: A study of the Internet troll and its relation
with morality on the Internet. In: Karen McFerrin, Roberta Weber, Roger
Carlsen and Dee Anna Willis (eds.), Proceedings of Society for Information
Technology and Teacher Education International Conference 2008, 2834–
2840. Chesapeake, VA: AACE.
Utz, Sonja
2005 Types of deception and underlying motivation: What people think. Social Sci-
ence Computer Review 23(1): 49–56.
van Schie, Emil G. M. and Oene Wiegman
1997 Children and video games: Leisure activities, aggression, social integration,
and school performance. Journal of Applied Social Psychology 27: 1175–1194.
Vaughan, Jill and Lauren Gawne
2011 I can has language play: Construction of language and identity in Lolspeak.
Australian Linguistics Society Annual Conference December 7th. http://vimeo.
com/33318759.
Vinagre, Margarita
2008 Politeness strategies in collaborative e-mail exchanges. Computers and Edu-
cation 50(3): 1022–1036.
Virkar, Shefali
2014 Trolls just want to have fun: Electronic aggression within the context of e-par-
ticipation and other online political behaviour in the United Kingdom. Inter-
national Journal of E-Politics 5(4): 21–51.
Vrij, Aldert
2000 Detecting Lies and Deceit. The Psychology of Lying and the Implications for
Professional Practice. Chichester: Wiley.
Vrij, Aldert, Katherine Edward, Kim P. Roberts and Ray Bull
2000 Detecting deceit via analysis of verbal and nonverbal behavior. Journal of
Nonverbal Behavior 24(4): 239–263.
Weckerle, Andrea
2013 Civility in the Digital Age: How Companies and People Can Triumph over
Haters, Trolls, Bullies, and Other Jerks. London: Pearson.
Whitty, Monica T.
2004 Cyberstalking. NSW Crime Division: Criminology Research Council.
Wilson, Samuel M. and Leighton C. Peterson
2002 The anthropology of online communities. Annual Review of Anthropology
31: 449–467.
Yates, JoAnne and Wanda J. Orlikowski
2002 Genre systems: Structuring interaction through communicative norms. Jour-
nal of Business Communication 39(1): 13–35.
Zarsky, Tal. Z.
2004 Thinking Outside the Box: Considering Transparency, Anonymity, and Pseu-
donymity as Overall Solutions to the Problems of Information Privacy in the
Internet Society. Unpublished PhD Thesis. New York: Columbia Law School.
Zdenek, Sean
1999 Rising up from the mud: Inscribing gender in software design. Discourse and
Society 10(3): 379–409.
Zhou, Lina, Judee K. Burgoon, Jay F. Nunamaker and Doug Twitchell
2004 Automating linguistics-based cues for detecting deception in text-based asyn-
chronous computer-mediated communications. Group Decision and Negotia-
tion 13(1): 81–106.