Literature Review
A troll is usually defined as somebody who provokes and offends people to make them angry,
who wants to dominate any discussion or who tries to manipulate people’s opinions. The problems
caused by such persons have increased with the diffusion of social media. Therefore, on the one
hand, press bodies and magazines have begun to address the issue and to write articles about the
phenomenon and its related problems while, on the other hand, universities and research centres
have begun to study the features characterizing trolls and to look for solutions for their identification.
This survey aims at introducing the main research dedicated to the description of trolls and to the
study and experimentation of methods for their detection.
With the rise of social media, users have been able to benefit from a simple
and fast way of communicating, capable of connecting individuals who are separated both physically
and temporally. Computer-Mediated Communication (CMC), however, cannot fully replicate the
dynamics of verbal communication, since it relies exclusively on very simply constructed sentences.
This increases the chances of misunderstandings and, consequently, of conflicts. CMC can also provide
various degrees of anonymity, which can induce users to feel a sense of freedom and immunity from
being held accountable for inappropriate behaviours. As a result, there has been a change in the
way of communicating that has enabled users to limit the amount of personal information revealed.
This has led to the development of widespread trolling within CMC environments.
Troll Definition
A troll has been defined as an individual who displays a negative online behaviour [1],
or as a user who initially pretends to be a legitimate participant, but later attempts to disrupt the
community, not in a blatant way, but aiming to attract the maximum number of responses from the other
community members [5]. Trolls are also described as individuals who derive pleasure from annoying
others [2] and, in fact, recent works have discovered that sadism is closely associated with those who
have trolling tendencies [10]. The first references to the use of the word “troll” on the Internet can be
traced back to Usenet, a forum community popular in the eighties. A troll’s goal is to make fun of a
person; if this is analysed as a pragmatic act, it may be divided into three basic components:
(i) a pseudo-intention, (ii) a real intention and (iii) a stimulus.
Hardaker [1], studying the behaviour of some users within a social
media context, found that the act of trolling manifests itself in four interrelated ways:
deception, aggression, disruption and success.
•Deception: within a community like Usenet, as in any other question-and-answer (Q&A) forum,
a troll who wants to have some chance of success must keep his real intent of trolling hidden.
He will attempt to disrupt the group while trying to stay undercover. In fact, it is not possible to
determine with certainty whether someone is causing problems intentionally and to label that
person as a troll, because they may simply be a novice user or a discordant voice. It is perhaps easier
(for a user or for a supervisor) to identify ambiguous behaviours and then to assess whether they
are maintained over time. An example is “pseudo-naïve” behaviour, which occurs when a
troll intentionally disseminates false or inaccurate advice, or pretends to ask for help, to provoke
an emotional response in the other group members [3].
•Aggression: a troll aiming at generating a conflict can use a provocative tone towards other
users. These are malicious or aggressive behaviours undertaken with the sole purpose of annoying or
provoking others, using ridiculous rants, personal insults, offensive language or attempts to hijack
the conversation onto a different topic.
•Disruption: the act of degrading the conversation without necessarily attacking
a specific individual. Behaviour of this type includes sending senseless, irrelevant or repetitive
messages aimed at seeking attention. This has also been referred to as trolling spam, linked to
common spam but separate from it, as it is driven by the intention to provoke negative responses.
•Success: one of the most curious aspects of the problem is that a troll is often acclaimed by users
for his success, both in relation to the quality of his own joke, i.e., for being funny, and for the way
others react to it. In fact, some responses to the provocation, whether angry, shocked or
curious, are regarded as a “bait” to the troll’s joke or, in other words, a demonstration that those
who are responding were unwittingly duped by the pseudo-intent of the troll, without being aware
of the troll’s real goal. The attention of the group is similarly drawn even when the quality of
a troll’s joke is low and everybody can understand his real intent, or when an experienced user
responds to a troll’s message in a manner that prevents him from falling into the prepared
trap, possibly trying to unnerve the troll. So trolling, despite being a nuisance for users, may end
up being the centre of the group’s attention for its real purpose and not for its pseudo-intent.
Therefore, this aspect is related to how the group reacts to the troll, not to the troll’s modalities.
It is clear that trolling is a more complex problem than mere provocative attacks. Although the
concept may seem tied to the meaning of words like rudeness, arrogance, impertinence
and vulgarity, these do not provide an accurate description of the troll’s attitude since, typically,
trolling consists in keeping the real intent of causing problems hidden. In addition, in communities in
which users are less vulnerable, more experienced or emotionally detached, the phenomenon can be
seen as a playful action. As a result of this analysis, Hardaker provides an academic definition:
“A troll is a CMC user who constructs the identity of sincerely wishing to be part of the group
in question, including professing, or conveying pseudo-sincere intentions, but whose real
intention(s) is/are to cause disruption and/or to trigger or exacerbate conflict for the purposes
of their own amusement. Just like malicious impoliteness, trolling can (i) be frustrated if
users correctly interpret an intent to troll, but are not provoked into responding, (ii) be
thwarted, if users correctly interpret an intent to troll, but counter in such a way as to curtail
or neutralize the success of the troll, (iii) fail, if users do not correctly interpret an intent to
troll and are not provoked by the troll, or, (iv) succeed, if users are deceived into believing
the troll’s pseudo-intention(s), and are provoked into responding sincerely. Finally, users can
mock troll. That is, they may undertake what appears to be trolling with the aim of enhancing
or increasing affect, or group cohesion”. [1]
Damage Caused by Trolls
Inexperienced or vulnerable users of Internet communities who trust trolls, become emotionally
involved, or share private information may find trolling particularly painful, distressing and
inexplicable; given the distributed and asynchronous nature of online discussions, this may have
long-term consequences. These practices, although clearly problematic, are common and often
tolerated, in part because libertine values widespread on the Internet consider insulting speech
a manifestation of freedom of expression [4]. In extreme cases, malicious users use CMCs to
commit crimes such as defamation, identity theft or cyberbullying. To counteract this,
some online communities implement identity verification processes and disable the options that allow
simultaneous communication between users [5]. Nevertheless, the propensity to trolling seems to
have recently become widespread, which is alarming many of the most important social networks because,
in extreme cases, it has led some adolescents, like Amanda Todd, to commit suicide [6]. These attacks
are directed not only at individuals, but also at whole communities. For example, a growing
number of tribute pages on Facebook are being targeted, including one in memory of the victims of the
shootings in Cumbria and one dedicated to soldiers who died in the war in Afghanistan [7].
Even when trolling does not come as a direct attack, it can still be a threat, because it can manifest
itself in subtler ways, for example as a means of manipulating others’ opinions. In fact, the rise
of the Internet has allowed companies, organizations and governments to freely disseminate false
rumours, misinform and speculate, and to use other dishonest practices to polarize opinions [8].
It has been shown that a user’s opinion on certain products or on politics can be influenced by
the comments of other users [9]. In this way, gaining popularity is made easier for companies and for
those political parties that make use of reputation management services, i.e., people paid to hijack
opinions on their behalf. Many publications describe how these behaviours have
a strong impact on current events. For example, [10] reports that in China there is an “army” of two
million people, active daily on social networks, who flood citizens’ digital debates with comments and
opinions that steer those discussions towards more acceptable topics preferred by the Beijing government.
The danger of social media abuse in the political sphere, with the aim of eventually affecting
important election and voting events, is raising great concern. One of the events that has recently
attracted wide attention is the foreign interference during the 2016 US Presidential election. In particular,
Russia has been accused by the US Congress of conducting a systematic mass manipulation of public
opinion, using both human operators and software-controlled accounts (i.e., bots). The accusation
was accompanied by a list of 2752 Twitter accounts, allegedly tied to the “Internet Research
Agency” (IRA), described as a “troll farm” based in Russia [6]. This is one of the first large-scale
organizations revealed to be systematically using human operators for political propaganda and deceptive
interference campaigns. The accounts in the list were used to spread politically biased information and
were later deactivated by Twitter. The list provided by the US Congress is being used in a number
of research works aiming at the automatic detection of online trolls, particularly those interfering in
the political sphere [3]. Some studies deal, in particular, with the diffusion of false information
and fake news by trolls, describing the phenomenon as “disinformation warfare” [13]. In this sense,
the problem can also be analysed from the general viewpoint of information quality assessment applied
to social media [7].
However, even without considering these extreme consequences, trolling remains a vexing
problem because, even when undertaken in an innocent way, it hinders the normal course of a
conversation. Indeed, user contributions in the form of posts, comments and votes are essential to
the success of an online community. With such a high number of degrees of freedom of expression,
the exclusion of individuals with unpleasant behaviour, such as trolls, needs to be considered very
carefully, since it can trigger side effects. Nevertheless, it is appropriate to avoid many undesired
disruptions and to maintain clean and focused discussion threads.
Troll Detection Methods
Researchers have considered several directions in order to solve the troll problem.
The first to be taken into consideration was to automatically analyse online content through a
natural language processing (NLP) approach. A rudimentary NLP technique involves counting
the negative words contained in a given post, in order to measure the comment’s degree of hostility.
Another approach is based on subdividing the content of the post into n-grams, i.e., sequences of n
elements included in the text (in this case, words or characters, but also emoticons). These elements are
compared to other well-known n-grams, or they are used to create statistical models aimed at identifying
trolls [29]. More sophisticated methods have been developed along this direction; they try
to spot trolls using sentiment analysis techniques, i.e., by understanding and measuring the sentiment
of the text [10–12]. In fact, by attributing a certain emotion to the words in a sentence, it is possible to
evaluate the predominant sentiment.
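The negative-word counting and n-gram approaches described above can be sketched as follows. This is an illustrative sketch only: the tiny lexicon and the example sentence are placeholder assumptions, not resources used by any of the surveyed works, which rely on curated lexicons and much larger corpora.

```python
from collections import Counter

# Placeholder lexicon for illustration; real systems use curated
# hostility/sentiment word lists with thousands of entries.
NEGATIVE_WORDS = {"idiot", "stupid", "hate", "pathetic", "loser"}

def hostility_score(post: str) -> float:
    """Fraction of tokens in a post that appear in the negative-word lexicon."""
    tokens = post.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?") in NEGATIVE_WORDS)
    return hits / len(tokens)

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Character n-gram counts, usable as features for a statistical model."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

print(hostility_score("you are a stupid loser"))  # 0.4
print(char_ngrams("troll", 3))
```

In practice, the n-gram counts would be fed as feature vectors to a classifier trained on labelled troll and non-troll posts, rather than inspected directly.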
Emotions such as anger or rage are clues for detecting a troll’s comment [11]. According to the
authors of [13], the information acquired from single comments is not enough to perform a correct
analysis; consequently, they integrate methods to verify the consistency of the text with respect
to other comments and their topic. A second research direction involves the Social Network Analysis
(SNA) of communities in order to identify possible trolls [7–9]. Other analyses of user data [11]
are also carried out, in order to identify users with antisocial behaviours within a community.
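The idea of propagating trust and distrust over a signed social graph, as in the SNA direction exemplified by [12], can be illustrated with a highly simplified sketch. This is not the algorithm of [12]: the update rule (each user's reputation becomes the reputation-weighted average of incoming votes) and the toy edge list are assumptions made for illustration.

```python
# Simplified trust/distrust propagation over a signed graph.
# edges: list of (voter, target, sign) triples with sign in {+1, -1}.
def propagate_trust(edges, users, iterations=10):
    rep = {u: 1.0 for u in users}  # start by trusting everyone equally
    for _ in range(iterations):
        new_rep = {}
        for u in users:
            incoming = [(v, s) for (v, t, s) in edges if t == u]
            if incoming:
                # Reputation-weighted average of incoming signed votes.
                new_rep[u] = sum(rep[v] * s for v, s in incoming) / len(incoming)
            else:
                new_rep[u] = rep[u]
        rep = new_rep
    return rep

# Toy example: 'alice' and 'bob' trust each other but distrust 'troll'.
edges = [("alice", "bob", 1), ("bob", "alice", 1),
         ("alice", "troll", -1), ("bob", "troll", -1),
         ("troll", "alice", -1)]
rep = propagate_trust(edges, ["alice", "bob", "troll"])
print(min(rep, key=rep.get))  # the user with the lowest reputation
```

The intuition is that distrust expressed by reputable users drags a suspect account's score down, while distrust expressed by already-discredited accounts carries little or even inverted weight.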
References
2. Mihaylov, T.; Georgiev, G.; Nakov, P. Finding opinion manipulation trolls in news community forums. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, Beijing, China, 30–31 July 2015; pp. 310–314.
3. Badawy, A.; Addawood, A.; Lerman, K.; Ferrara, E. Characterizing the 2016 Russian IRA influence campaign. Soc. Netw. Anal. Min. 2018, 9, 31. [CrossRef]
4. Badawy, A.; Lerman, K.; Ferrara, E. Who falls for online political manipulation? In Proceedings of the Web Conference 2019—Companion of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 162–168.
5. Chun, S.A.; Holowczak, R.; Dharan, K.N.; Wang, R.; Basu, S.; Geller, J. Detecting political bias trolls in Twitter data. In Proceedings of the 15th International Conference on Web Information Systems and Technologies.
6. Kumar, S.; Spezzano, F.; Subrahmanian, V.S. Accurately detecting trolls in Slashdot Zoo via decluttering. In Proceedings of the 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Beijing, China, 17–20 August 2014; pp. 188–195.
12. Ortega, F.J.; Troyano, J.A.; Cruz, F.L.; Vallejo, C.G.; Enríquez, F. Propagation of trust and distrust for the detection of trolls in a social network. Comput. Netw. 2012, 56, 2884–2895. [CrossRef]
13. Seah, C.W.; Chieu, H.L.; Chai, K.M.A.; Teow, L.N.; Yeong, L.W. Troll detection by domain-adapting sentiment analysis. In Proceedings of the 2015 18th IEEE International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015; pp. 792–799.