
Internet Law & Regulation 2023/4

Week 1: Introduction to Internet Law

Introduction

*User generated content is central to this course*

- E-Commerce Directive 2000

- Overtaken by the Online Safety and Media Regulation Act 2022

- European Court of Justice is the highest court – its decisions are binding.

- European Court of Human Rights decisions are not binding, but we have an obligation to pay heed to them and to ensure that no Irish judgments are in contradiction (Sanchez v France)
People:
1. Justine Sacco
− “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”
− Lost her job by the time she landed in South Africa.

2. Eoin McKeogh
− McKeogh v Doe [2012] IEHC 95
− “Dublin Taxi Customers Do Runner – Video Dailymotion”
− Eoin McKeogh was identified from a YouTube video, was abused online, and was afraid it would affect his employment.
− Eoin was incorrectly identified.

3. Sally Bercow
− The Lord McAlpine of West Green v Sally Bercow [2013] EWHC 1342
− “Why is Lord McAlpine trending? *innocent face*”
− Wife of the Speaker of the House of Commons in London. BBC News claimed there was a paedophile ring in the House of Commons without naming anyone.
− “Innocent face” was ruled to mean a wink and a nod, an ironic comment.

4. Mark Savage
- Stood for local council election in Donabate; held very conservative views about homosexuality.
- Reddit thread: “Mark Savage North County Dublin’s Homophobic Candidate”
- The Reddit thread was lifted into Google as the first search result for his name. Mark claimed this was defamation and sued Google. He represented himself and lost. The matter is still in the courts.

Required reading

Byrne v Deane [1937] 1 KB 818

Syllabus

Weeks 2 & 3: The Internet Platforms

• The various types of internet platform

• Current regulation of Platforms: the E-Commerce Directive/ Digital Services Act

• New legislation: Online Safety and Media Regulation Act 2022 (for big corps)

• Case law concerning liability of Platforms for User Generated Content – recent important developments in Australia and New Zealand.

Week 4: Anonymity and Privacy


• Anonymity as a legal right

• Issues provoked by anonymous online users

• Contours of the right to privacy/ balance with freedom of expression

• Privacy and photographs

• Privacy settings on social media – asserting privacy in employment and PI proceedings

Week 5: Speech on the internet

• Right to freedom of expression/ free speech around the world

• Hate speech online

• ECtHR decision in Sanchez v France – if you have a Facebook page, can you be held liable for harmful posts made by third parties? Yes.

• Jurisdictional issues – where can I bring proceedings for online speech?

Week 6: Online defamation (most common action)

• The basics of defamation law

• The principles of online defamation

• Important recent case law

Week 7: Group project review

● Given in week 4

● 3 cases split up between groups and asked to write a case summary

Week 9: Online crimes

• Online Harassment/ stalking

• Harassment, Harmful Communications and Related Offences Act 2020

• Harassment at work/ school

• Other technological crimes


Week 10: Data Protection

• Core principles of data protection law

• The GDPR

• Data processing by online platforms/ data transfers

• The Right to be Forgotten

Week 11: Intellectual Property/ electronic commerce

• Domain names/ passing off

• Copyright/ Trademarks/ Google Ad Words

• Online advertising/ online reviews

• The gig economy

Week 12: Law in practice/ Round-up

• The growing use of internet technology in legal proceedings

• Contempt of court through use of social media

• Summary and discussion

Assessment

Week 7 – Group Project


– 9 groups of 10
– 1 person (at random) does the presentation
– Everyone writes a 200 word note on their contribution.

Week 12 – Assignment
– 1 problem question option
– 2 essay question options

Early Case Law

Shetland Times v Wills & Anor (Edinburgh, 24/10/96)


– First internet law case in the UK/Europe

– Copyright

– Use of hyperlinks

– Two newspapers with websites; one website was taking news from the other’s website and tried to pass it off as its own.

Tansey v Gill [2012] 1 IR 380

– Earliest Irish case

– Website that allowed you to upload an opinion/review of any legal professional.

– Mr Tansey was a solicitor who sued Mr Gill, who ran the website. Gill had had bad experiences with solicitors, took it as a personal vendetta, and uploaded most of the opinions himself.

– Peart J on the general dangers of internet publication:

"Anything can be said publicly about any person, and about any aspect of their life whether private or public, with relative impunity, and anonymously, whereby reputations can be instantly and permanently damaged, and where serious distress and damage may be caused to both the target, children and adults alike, leading in extreme cases to suicide. So serious is the mischief so easily achieved that in my view the Oireachtas should be asked to consider the creation of an appropriate offence under criminal law, with a penalty upon conviction sufficient to act as a real deterrent to the perpetrator. The civil remedies currently available have been recently demonstrated to be an inadequate means of prevention and redress."

Byrne v Deane [1937] 1 KB 818 – operator liability

1. Can the defendants be liable for material posted by third parties?

In this case, the liability of the defendants stemmed from their decision not to remove the defamatory notice despite having the power and authority to do so. Whether they could be liable for material posted by third parties would depend on factors such as their knowledge of the material, their ability to control its presence, and whether they acted to remove it or allowed it to remain. If they knowingly allowed defamatory material to persist on their property, they could potentially be held liable.

2. Do they need to draw attention to it to be held liable for publication?

No, drawing attention to defamatory material is not a prerequisite for liability in defamation cases. In Byrne v Deane, the defendants were held liable for publication because they knowingly allowed the defamatory notice to remain on their property. Their failure to remove it implied consent to its presence, and this constituted participation in its publication. Whether the defendants actively promoted or merely allowed the material to be seen, they were still considered responsible for its continued presence and potential harm to the plaintiff's reputation.

4. The basics of internet law

The Internet

A global collection of networks which connect computers. It is not one physical network, but rather a series of networks which communicate with each other based on various internet protocols. These protocols are referred to as TCP/IP, after the two most important protocols – the Transmission Control Protocol (TCP) and the Internet Protocol (IP). No single authority owns or controls the internet, and no single set of laws governs its content. The internet hosts the world wide web, email, peer-to-peer file sharing and internet telephony.
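
Purely as an illustration of what TCP/IP looks like in practice, the following minimal sketch uses Python's standard socket library to open a TCP connection and request a page over HTTP (example.com is just a placeholder host; the IP layer beneath routes the packets between networks):

    import socket

    # Open a TCP connection to a web server and make a basic HTTP request.
    # TCP provides the reliable byte stream; IP routes the packets beneath it.
    with socket.create_connection(("example.com", 80)) as conn:
        conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := conn.recv(4096):  # read until the server closes the stream
            response += chunk

    print(response.decode(errors="replace")[:200])  # start of the server's reply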

The World Wide Web

Invented in 1989 by Tim Berners-Lee, it is a collection of pages and images that reside on the internet. Beginning with static content such as newspaper pages and brochures, it started offering interactive services such as online banking, video gaming, dating services, graphics editing, home videos, and webmail by the year 2000. These online services are now referred to as Web 2.0. You access the world wide web through a web browser such as Safari or Google Chrome.

Website operators

Private companies or individuals who host content generated primarily by themselves, e.g. news websites, or businesses who wish to promote their own goods and/or services.

Internet intermediaries

Internet service providers (email hosts, domain name registration, internet access providers),

E-commerce platforms (online marketplace, auction sites, payment systems),

Participative networking platforms (social media, video-sharing and blogging platforms)

Internet search engines.

Internet users (the public)

5. The new challenges posed by online speech

– How has user-generated content altered the legal landscape?

The changes brought by 'Web 2.0':


The UGC we will consider takes a variety of forms, including:
- Text and photographs posted on social media.
- Videos uploaded to YouTube.
- Personal opinions expressed on blogs.
- Comments added to sections at the end of news articles.
- Reviews posted in respect of movies, restaurants, medical practices etc.
New challenges posed by online speech:
The internet has changed the way we communicate, work and transact.
- Example of a letter to a newspaper
- A restaurant/ business review

The duality of online speech:

‘In common with other aspects of life, the internet has both a positive and a dark side. On the positive side, it aids free communication; it opens up avenues of knowledge so that it has become a centre of learning in itself; it furthers public debate; and has established the swiftest and most far-reaching form of communication that humanity has known. It is, on the other hand, also thickly populated by fraudsters, pornographers of the worst kind and cranks.’
- Charleton J in EMI Records (Ireland) Ltd and Others v Eircom Ltd [2010] 4 IR 349

1. Anonymity

– Discussed in depth in week 4

– Value of anonymity repeatedly stressed:

“Anonymity is a shield from the tyranny of the majority ... The right to remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will sometimes have unpalatable consequences and, in general, our society accords greater weight to the value of free speech than to the dangers of its misuse.” – US Supreme Court decision in McIntyre v Ohio Elections Commission 514 US 334 (1995)

“This unique feature of [the internet] promises to make public debate in cyberspace ‘less hierarchical and discriminatory’ than in the real world because it disguises status indicators such as race, class, and age.” – Doe v Cahill 884 A.2d 451 (2005)

2. Instantaneous and widespread publication

– Editing functions of traditional media are absent

– Usefulness of injunctions is drastically reduced

– “Genie out of the bottle” – McKeogh v Doe [2012] IEHC 95; Muwema v Facebook [2016] IEHC 519

3. Permanence

– Damage is exacerbated by nature of the internet

“The web makes a lie of the old cliché that today’s newspaper pages are tomorrow’s fish and chip wrapping. Nowadays ... the things you say about yourself in a newspaper are more like tattoos – they can be extremely difficult to get rid of.” – The Guardian, 2008

- This gives rise to "right to be forgotten" applications

4. Jurisdictional issues

– Discussed in depth in week 5

– The global nature of internet publication


– Important cases of C-161/10 eDate Advertising GmbH v X and Martinez v MGN [2011]

6. How has the law adapted to the internet?

“The internet is only a means of communication. It has not rewritten the legal rules of each nation through which it passes. It is not an amorphous extraterrestrial body with an entitlement to norms that run counter to the fundamental principles of human rights.” – Charleton J in EMI Records (Ireland) Ltd and Others v Eircom Ltd [2010] 4 IR 349

– For this reason…

a) The first step is to understand the substantive law;

b) The next step is to consider how it may need to be adapted to take into account the use of the internet.

– Three main issues to consider (NB: all of them):

a) The importance of a properly-functioning internet

b) Is there a legal right to internet access?

c) Do different standards apply to material published on the internet, as opposed to via more traditional media?

The promotion of an efficiently-functioning internet by legislators

• The application of existing legal principles to internet law

• Is there a legal right to internet access?

• What standard should be applied to material published online?

(A). The Importance of a Properly-Functioning Internet

SPEECH
"The internet is a unique democratizing medium unlike anything that has come before. The
advent of the internet dramatically changed the nature of public discourse by allowing more
and diverse people to engage in public debate. Unlike thirty years ago, when "many citizens
were barred from meaningful participation in public discourse by financial or status
inequalities and a relatively small number of powerful speakers could dominate the
marketplace of ideas" the internet now allows anyone with a phone line to become a town
crier with a voice that resonates farther than it could from any soapbox." – Doe v Cahill (884
A.2d 451 (2005))

COMMERCE
"The development of electronic commerce within the information society offers significant
employment opportunities in the Community, particularly in small and medium-sized
enterprises, and will stimulate economic growth and investment in innovation by European
companies, and can also enhance the competitiveness of European industry, provided that
everyone has access to the Internet." – Directive 2000/31/EC "The E-Commerce
Directive", Recital 2

FUNDAMENTAL RIGHTS
"The Internet has now become one of the principal means by which individuals exercise
their right to freedom of expression and information, providing as it does essential tools for
participation in activities and discussions concerning political issues and issues of general
interest" – Ahmet Yildirim v Turkey (APP NO 3111/10, ECTHR)

(B). Is Internet Access a Legal Right?

– Courts have increasingly stressed the importance of internet access, especially to social media:
“Access to Facebook and social media platforms, including the online communities they
make possible, has become increasingly important for the exercise of free speech, freedom
of association and for full participation in democracy’ ... Having the choice to remain ‘offline’
may not be a real choice in the Internet era." – Supreme Court of Canada in Douez v
Facebook, Inc [2017] 1 SCR 751

– Convicted criminals, even sex offenders whose crimes involved use of the internet,
have been held to have a right to internet access:
"Before the creation of the Internet, if a defendant kept books of pictures of child
pornography it would not have occurred to anyone to ban him from possession of all printed
material. The Internet is a modern equivalent.” – English Court of Appeal in R v Smith; R v
Clarke; R v Hall; R v Dodd [2012] 1 WLR 1316. See also R v Parsons; R v Morgan
[2017] EWCA Crim 2163.

– Access is particularly important in respect of political debate:

- See restrictions imposed on Donald Trump in the US Presidential Election
- See also restrictions in Kyrgyzstan leading up to elections in October 2021.
Anyone who claims that online content defames them is allowed by this law to ask an
"authorized administrative body" to order the content’s deletion without reference to a
judge. If the disputed online content is not deleted within 24 hours, the entire web
page or website will be completely blocked.

– Consider also:
- Facebook's defence for not introducing mandatory ID when signing up
- US case law concerning access to websites for the visually-impaired

(C). Standards to be Applied to Material Published Online

– Courts have often drawn a distinction between online speech, and material that is
published in more traditional media:
"I consider that the Court of Appeal focused too narrowly on the disclosures already made
on the internet and did not give due weight to the qualitative difference in intrusiveness and
distress likely to be involved in what is now proposed by way of unrestricted publication by
the English media in hard copy as well as on their own internet sites." – PJS v News Group
Newspapers [2016] UKSC 26.

The English High Court said the following in relation to an application to prevent the
publication of the name of a woman who was said to be having an affair with Fred Goodwin,
a prominent British banker:
“The degree of intrusion into a person’s private life which is caused by internet publications is different from the degree of intrusion caused by print and broadcast media … Once a person’s name appears on a newspaper or other media archive, it may well remain there indefinitely. Names mentioned on social networking sites are less likely to be permanent.” – Goodwin v NGN Ltd [2011] EWHC 1437

– See also:
Recent judgment of the UK Supreme Court which emphasised the free-wheeling element of online discourse and suggested that the average reader does not place as much weight on comments made on social media as they would on statements made in the more thoughtful, reflective forum of traditional media. – Stocker v Stocker [2019] UKSC 17

– However...
In the Canadian case of Baglow v Smith, the Ontario Superior Court of Justice noted the context in which the offending statement was made, affirming itself to be “very mindful that political discourse on weblogs and message boards ... is qualitatively different than political discourse in more “traditional” media like newspapers and television.” However, it held that:

“Implicit in the Defendants' submissions is that based on the rough and tumble nature of
these media platforms there would be little, if anything, that would tend to lower the plaintiff’s
reputation in the eyes of a reasonable reader. However, there is nothing in the law of
defamation to suggest that that is the case.” – Baglow v Smith, Fournier and Fournier
[2015] ONSC 1175

NB: This has not yet been considered by the Irish courts…

– And finally...
– The concept of what constitutes a ‘journalist’ has become more complex in the internet age, as the proliferation of user-generated content and the emergence of online ‘citizen journalists’ have eroded the previously clear line between the media and the general public.
Why is this important?
- A ‘Bona fide member of the press’ is referred to in both s 40 of the Civil Liability and Courts
Act 2004 (family law cases) and s 159 of the Data Protection Act 2018 (reporting of personal
data processed during court proceedings.)

Cornec v Morrice [2012] 1 IR 804


In an application for a blogger to reveal his sources, the court remarked that while Mr Garde
was ‘not a journalist in the strict sense of the term,' it accepted that his activities "fall
squarely within the ‘education of public opinion’ envisaged by Article 40.6.1°. A person who
blogs on an internet site can just as readily constitute an ‘organ of public opinion’ as those
which were more familiar in 1937 and which are mentioned (but only as examples) in Article
40.6.1°."

– See also
– "I conclude that a blogger who regularly disseminates news to a significant body of the
public can be a journalist." – Slater v Blomfeld [2014] NZHC 2221

Week 2: An introduction to internet intermediaries

Overview of Week 2
Week 1 refresher
1) An introduction to internet intermediaries
2) The technology giants
3) Intermediaries as "publishers"
4) Self-regulation by intermediaries
5) Current legislative regulation of internet intermediaries
6) Upcoming legislation

1. Introduction

(A) Background
Harmful material online:

• Harmful online content is a fact of modern life. Gone is the filtering process involved in
traditional media – with social media, publication of content is not only instantaneous, but is
also unfiltered and uncensored.

• Use of the internet, and social media platforms in particular, grows year on year. The Covid pandemic, along with continued free access to almost every major platform, has served to accelerate this change.
• With more users comes more content, and therefore more harmful content. The challenge
is how to deal with this.

How is it proposed to deal with this?

• While technology giants are clamping down on such material more than ever, they clearly
have commercial interests to protect.

• Regulation via pan-European and domestic legislation is gaining increasing momentum.

(B) Who are they and what do they do?


Who are they?

• Most legal proceedings involving the internet feature a combination of three participants in
online communications; the party that creates the content, the party that accesses the
content, and the party which enables the first two to communicate – the intermediary.

• So "intermediaries" is an umbrella term for individuals /organisations which facilitate the


use of the internet - often used interchangeably with "internet platform".

• While some intermediaries simply enable a user to gain access to the internet, other
intermediaries actively facilitate and control the sharing of user-generated content.

Types of intermediary:

Different regulations govern different intermediaries

• Internet service providers (‘ISPs’), which provide services for accessing and using the
internet. These can be subdivided further into those organisations which connect users to
the internet (‘internet access providers’), and those which provide email hosting, website
hosting and domain name registration. Many ISPs perform more than one of these functions.
e.g. Virgin Media for broadband; Google for email, but not in its role as a search engine.

• Internet search engines.

• E-commerce intermediaries, where these platforms do not take title to the goods being
sold, i.e. online marketplaces and auction sites such as Amazon and eBay, and Internet
payment systems (facilitate the public selling items online).

• Networking platforms, which include internet publishing and broadcasting platforms that
do not themselves create or own the content being published or broadcast, to include social
networking platforms, blogging platforms, video sharing websites and instant messaging
platforms.
2. The Technology Giants
Who are the most important intermediaries?

• Search engines – Google

• Social networking platforms – Facebook, LinkedIn

• Video/photo sharing platforms – YouTube, Tik Tok, Snapchat, Instagram

• Microblogging platforms -Twitter

• Host of blogs – Google

• Instant messaging services -WhatsApp

How long have they been around?

2002 LinkedIn (acquired by Microsoft in 2016)

2004 Facebook

2005 YouTube (acquired by Google in 2006)

2006 Twitter

2009 WhatsApp (acquired by Meta in 2014)

2010 Instagram (acquired by Meta in 2012)

2011 Snapchat

2016 Tik Tok (owned by ByteDance)

How big are they? (Turnover in 2022)

Google $280bn

Microsoft $198bn

Facebook $116bn

Twitter $4.4bn

3. Intermediaries as Publishers
(A) Are intermediaries "publishers"?

What is the significance of holding intermediaries to be publishers?

• As “publisher” they would be liable at law for any unlawful content on their platforms.

What is a publisher in law?

• The "publication" of material for the purposes of defamation law can cause confusion in
relation to the concept of a traditional "publisher".

• The traditional understanding is that of a commercial publisher, whose business is issuing material to the public – newspapers, magazines, books etc. This is specifically referred to in the UK Defamation Act 1996. The Irish Defamation Act 2009 refers to a publisher but doesn’t define it.

• At law, a "publisher" is essentially anyone who plays a part in material being brought to the
attention of the public. Depending on their degree of involvement, they may be a "primary" or
"secondary" publisher.

• Intermediaries are viewed as secondary publishers as they don’t generate their own
content but are clearly involved in the process.

What is the legal position of traditional publishers?

• The general position of traditional publishers is that they are considered responsible for any
harmful material which they publish, whether the author is directly employed by them or not.

• For that reason, a newspaper publisher, tv or radio station would be responsible for
material contained in a reader's letter to the paper, or a comment made on air during a
broadcast discussion.


What is the position of intermediaries compared to traditional publishers?

• The position of internet intermediaries – the companies which facilitate users in publishing
content – is somewhat different. Their default position has always been that they are not
publishers in the traditional sense, but are instead simply facilitators who allow individual
users to publish their own content.

• Before a Joint Oireachtas Committee on Justice in October 2019, it was submitted on Google’s behalf that ‘We do not agree that we are publishers … But it is open to legislators to say that we are.’

• Twitter’s director of public policy for Europe stated that ‘We do not consider we exert editorial control. Twitter is a live public service. We have no editorial control over any of the live content.’ (But how come they ban people, then? The basis on which they block/suspend etc. is that they make an editorial decision based on their Codes of Conduct/Practice.) See link.

Has their position been considered in this jurisdiction?

• The closest we have got to a consideration was in Savage v Data Protection Commissioner [2018] IEHC 122, which held that Google should not be considered a publisher in respect of the search results that it produces.

• Judgment was delivered in respect of data protection legislation. The Court's finding was that Google's search engine ‘is an automated process where individual items of information on the internet are collated automatically and facilitate the user searching particular topics or names.’ Whether this would be the same in defamation proceedings would have to wait for another day.

Challenges are mounting to their stated position

• The intermediaries' stated position is starting to be questioned:


A v Google New Zealand Ltd [2012] NZHC 2352
Google v Duffy [2017] SASCFC 130
Trkulja v Google Inc [2018] HCA 25
Defteros v Google LLC [2022] HCA 27

• The Australian Attorney General recently suggested that platforms such as Twitter and
Facebook should be treated as the primary publishers of material which they host, stating his
opinion that "My own view ... is that online platforms, so far as reasonably possible, should
be held to essentially the same standards as other publishers."

• There has been commentary in the US concerning the question of whether platforms such
as Facebook and Twitter should be considered publishers, in the light of their decision to ban
people such as Donald Trump from their networks. Is this the editorial decision of a
publisher?

• Google has asserted its right to "speech" as a publisher as a defence in proceedings in the
US. Court held in e-ventures Worldwide LLC v Google Inc that ‘Google’s actions in
formulating rankings for its search engine and in determining whether certain websites are
contrary to Google’s guidelines and thereby subject to removal are the same as decisions by
a newspaper editor regarding which content to publish, which article belongs on the front
page, and which article is unworthy of publication."

Google has argued for its entitlement to free speech as a publisher, i.e. that its search results are publication (US and Canada). There is a different legislative framework for intermediaries there, which shows they can modify their position when it suits them.
4. Protection of intermediaries at law

Types of proceedings against internet intermediaries:

1. To discover the identity of the person defaming/ harassing you.

2. To compel the platform to block or take down material.

3. To impose legal liability for content on the platform itself.

Why should they get protection?

• The attraction of bringing proceedings against an intermediary must be balanced with their
importance to a properly-functioning internet.

How are they currently regulated?

• Self-regulation: left alone for the last 15 years.

• In EU, via the E-Commerce Directive 2000 (contains broad defences for intermediaries) /
Digital Services Act 2022.

• By domestic Online Harms legislation, currently being rolled out.

Attraction of bringing proceedings for liability against internet intermediaries:

▪ It may be inordinately difficult to identify the author/creator of the content, while the platform
is instantly identifiable.

▪ The author/creator, even if identified, may be outside the jurisdiction of the court, while the
platform is likely to be based in Ireland.

▪ The author/creator is often impecunious (poor), while the platform is a far more attractive
mark for damages.

(A) Self-regulation by platforms

A) How do they currently deal with material?

• All social media platforms have terms and conditions of use to which their users consent when signing up, and which deal with the manner in which the platform will handle material which violates their codes of conduct. A lot of the time their rules reflect the law (i.e. defamation, IP).
Facebook has "Community Standards"
Twitter has "Twitter Rules"
YouTube has "Community Guidelines"
• Some allow for accounts to be suspended after repeated breaches, others only allow for
content to be removed – YouTube operates a "three strikes and you're out" policy.
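
Purely by way of illustration of the mechanics of such a rule, a "three strikes" policy might be sketched in Python as follows (a toy model of the idea, not YouTube's actual system):

    class StrikePolicy:
        """Toy model of a "three strikes and you're out" moderation rule:
        each upheld violation adds a strike; at three, the account is suspended."""

        MAX_STRIKES = 3

        def __init__(self):
            self.strikes = {}  # account id -> strike count

        def record_violation(self, account: str) -> str:
            self.strikes[account] = self.strikes.get(account, 0) + 1
            if self.strikes[account] >= self.MAX_STRIKES:
                return "account suspended"  # third strike: account-level sanction
            return "content removed"        # earlier strikes: content-level sanction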

Does self-regulation work?

• For illegal material, such as child pornography, platforms appear very efficient, and intermediaries have dealt with it effectively to a large extent. For child exploitation material, Facebook claims to remove 99% of content before anyone reports it.

• Meta set up its "Oversight Board" in 2020 to deal with dissatisfaction with its notice and
takedown procedure. An independent body comprised of lawyers, politicians and academics.
The Board is charged with considering appeals from users whose material has been blocked
by Facebook, or whose request for material to be blocked has been refused by Facebook.

• Hotline.ie operated by Internet Service Providers Association of Ireland.

• There is increasing dissatisfaction with the speed with which they respond to notice and take down requests concerning harmful content, and dissatisfaction also with the manner in which they moderate political speech. Automated tools can take months to come back, if at all, and the timescales they operate on are decided by the platforms that created them.


5. Current Legislation

(A) Legislation governing intermediaries

E-Commerce Directive (Directive 2000/31/EC)

• Introduced to promote free flow of electronic information amongst EU Member States.

• Articles 12–15 outlined the defences for intermediaries.

• Identified the importance of intermediaries and sought to provide a shield from unlimited
liability in respect of the information which passed through their networks.

• It was transposed into this jurisdiction by the European Communities (Directive 2000/31/EC) Regulations 2003.

• It has now been superseded by the Digital Services Act 2022, whose provisions are
almost identical in relation to intermediaries.

• The Act applies to legal or natural persons providing an ‘information society service’. Such services are defined as:

"any service normally provided for remuneration, at a distance, by means of electronic equipment for the processing (including digital compression) and storage of data, and at the individual request of a recipient of a service."

(Remuneration here includes the giving up of personal data to a platform so it can use that data to generate targeted advertising.)

The Digital Services Act and Internet Intermediaries

• Chapter 2, which includes Articles 4-8, is the crucial section in respect of user generated
content as it covers the ‘Liability of Providers of Intermediary Services' and describes the
protection that will be afforded to internet intermediaries who offer information society
services.

• There are 3 significant classifications for intermediaries…

Article 4 (formerly E-Commerce Directive Article 12): ‘Mere conduit’

Blanket defence against any potential liability.
Primarily ISPs.

1. Where an information society service is provided that consists of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network, Member States shall ensure that the service provider is not liable for the information transmitted, on condition that the provider:
(a) does not initiate the transmission;
(b) does not select the receiver of the transmission; and
(c) does not select or modify the information contained in the transmission.
...
3. This Article shall not affect the possibility for a court or administrative authority, in accordance with Member States' legal systems, of requiring the service provider to terminate or prevent an infringement.

Discussion:
• Article 4 covers internet intermediaries operating as ‘mere conduits’, and governs their role
in the transmission of information via electronic communication networks – i.e. internet
access providers and email service providers.

• Some internet intermediaries operate web-based email and internet message access
protocol services (IMAP), which retain copies of email messages on a server so that they
can be accessed by subscribers from a computer, tablet or smartphone. Because they do
not delete the emails immediately, such intermediaries would probably be considered to be
hosts (separate classification), rather than mere conduits.

• See EMI v UPC [2010] IEHC 377.

Article 5 (formerly E-Commerce Directive Article 13): ‘Caching’

Primarily search engines.

1. Where an information society service is provided that consists of the transmission in a communication network of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the automatic, intermediate and temporary storage of that information, performed for the sole purpose of making more efficient the information's onward transmission to other recipients of the service upon their request, on condition that:
(a) the provider does not modify the information;
(b) the provider complies with conditions on access to the information;
(c) the provider complies with rules regarding the updating of the information, specified in a manner widely recognised and used by industry;
(d) the provider does not interfere with the lawful use of technology, widely recognised and used by industry, to obtain data on the use of the information; and
(e) the provider acts expeditiously to remove or to disable access to the information it has stored upon obtaining actual knowledge of the fact that the information at the initial source of the transmission has been removed from the network, or access to it has been disabled, or that a court or an administrative authority has ordered such removal or disablement.

2. This Article shall not affect the possibility for a court or administrative authority, in accordance with Member States' legal systems, of requiring the service provider to terminate or prevent an infringement.

Discussion:

• Caching involves the temporary storing of information by a website or search engine so as to speed up the response time when a user makes a request to display a particular page or return a set of search results.
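
As a rough sketch of the mechanism Article 5 contemplates (illustrative only; real intermediaries cache at vastly greater scale), the example below stores pages temporarily to speed up repeat requests. The purge step loosely mirrors condition (e): the stored copy is removed once the provider learns the original has been taken down:

    import time

    class PageCache:
        """Toy cache: keeps fetched pages temporarily so that repeat
        requests are answered from memory rather than the origin server."""

        def __init__(self, ttl_seconds: int = 60):
            self.ttl = ttl_seconds
            self.store = {}  # url -> (stored_at, content)

        def get(self, url: str, fetch) -> str:
            entry = self.store.get(url)
            if entry and time.time() - entry[0] < self.ttl:
                return entry[1]  # cache hit: more efficient onward transmission
            content = fetch(url)  # cache miss: fetch from the origin
            self.store[url] = (time.time(), content)
            return content

        def purge(self, url: str) -> None:
            # Loosely mirrors Article 5(1)(e): remove the stored copy once the
            # information at the initial source has been removed or disabled.
            self.store.pop(url, None)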

• See submissions of Google in Wheat v Alphabet Inc/Google LLC [2018] EWHC 550 (Ch), at para 23.

NB* Article 6 (formerly E-Commerce Directive Article 14): 'Hosting'

Most common, widest application.
Primarily social networks/review pages etc.

1. Where an information society service is provided that consists of the storage of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the information stored at the request of a recipient of the service, on condition that:
(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
(b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information (what do the courts say is expeditious?).
...
3. This Article shall not affect the possibility for a court or administrative authority, in accordance with Member States' legal systems, of requiring the service provider to terminate or prevent an infringement, nor does it affect the possibility for Member States of establishing procedures governing the removal or disabling of access to information.
Discussion:

Subject to certain conditions, and read together with Articles 7 & 8, this Article provides a general defence for intermediaries in respect of user-generated content which is uploaded to their sites. Such content may take any of the following forms:

• Advertisements placed by users which are hosted or indexed by intermediaries such as eBay and Google.

• Content placed on social media platforms which are hosted by social media services such
as Facebook and Twitter.

• User comments which are placed underneath content generated by news websites.

Article 8 (formerly E-Commerce Directive Article 15): 'No general obligation to monitor'

1. Member States shall not impose a general obligation on providers, when providing the
services covered by Articles 4, 5 and 6, to monitor the information which they transmit or
store, nor a general obligation actively to seek facts or circumstances indicating illegal
activity.

Issues arising:

• Reading Articles 6 and 8 together, there is no legal requirement on platforms to screen material which is uploaded. Instead, their duty appears to extend no further than reacting to complaints which have been submitted by other users.

• In reality, the platforms do screen material. This has now been clarified by Article 7*, which provides that "Providers of intermediary services shall not be deemed ineligible for the exemptions from liability referred to in Articles 4, 5 and 6 solely because they, in good faith and in a diligent manner, carry out voluntary own-initiative investigations ..."

• Significantly, the prohibition against an obligation to monitor must be read in the light of certain recent decisions which place heightened requirements on intermediaries such as Google and Facebook:
C-18/18 Eva Glawischnig-Piesczek v Facebook

1) What is "actual knowledge"?


• When has a host has been properly notified of infringing material has been the source of
some debate, as was discussed in NI cases of CG v Facebook [2015] NIQB 11 and J20 v
Facebook [2017] NICA 48

2) What is meant by acting "expeditiously"?


• None of the platforms give a concrete time frame under which notice and take down
requests will be investigated and replied to. The reality is that most platforms take weeks,
sometimes months, to respond to such requests, itself a reflection of the vast amount of
material which is uploaded to their platforms every day.

(B) US Legislation

US law - s.230 Communications Decency Act 1996

Stratton Oakmont v. Prodigy Services Co. (N.Y. Sup. Ct., Nassau County May
24, 1995)

• Dates back to the earliest days of the internet in 1995. The defendant service provider, Prodigy, hosted comments posted anonymously on its bulletin board which accused the plaintiff investment firm and its president of fraudulent behaviour.

• Oakmont sued for defamation, and the New York Supreme Court held Prodigy liable on the
basis that it had positioned itself as a ‘family-orientated service’, and that it was ‘one of the
few bulletin boards in the country to screen all electronic messages for potential
improprieties.’ Having held itself out to monitor the content of its bulletin boards, the Court
held that ‘Prodigy is clearly making decisions as to content, and such decisions constitute
editorial control.’

• This resulted in enactment of s.230, which was upheld in Zeran v America Online, Inc.,
129 F.3d 327 (4th Cir. 1997)

Shield from liability for intermediaries in the US:

• S.230 provides that ‘No provider or user of an interactive computer service shall be treated
as the publisher or speaker of any information provided by another information content
provider.’

• Immunity exists even after notification.

• Provides immunity not just for large internet platforms, but also “users” ie for people who
host comments on their websites/ blogs.

• Exceptions are granted in respect of breaches of criminal law, especially as regards obscene material or child exploitation material, and breaches of intellectual property laws.

6. New Legislation

(A) Background to Online Safety Act 2022

General move towards increased regulation


• The EU Digital Services Act is being mirrored by the UK Online Harms legislation, Australia's Online Safety Act, and India's Information Technology Rules 2021.

Origins of the Online Safety and Media Regulation Act

• Law Reform Commission's 2016 Report on Harmful Communications and Digital Safety:

"As matters currently stand, while it would appear that the non-statutory self-
regulation by social media companies, through their content and conduct policies, has
improved in recent years, this may not be sufficient to address harmful communications
effectively … The Commission ... therefore recommends that an office be established on
statutory basis with dual roles in promoting digital and online safety and overseeing an
efficient and effective take down procedure in relation to harmful digital communications.”

The Act creates a Media Commission (Coimisiún na Meán) which has three main roles:

• Take over the duties of the former BAI (Broadcasting Authority of Ireland), which is dissolved;

• Transpose the Audiovisual Media Services Directive, to cover audio-visual material hosted by internet intermediaries such as YouTube, Tik Tok etc.;

• Create an Online Safety Commission* (unclear if it applies to hosts).

Part 11 of the Act deals with Online Safety, which will fall to be regulated by the Online Safety Commission. The main purpose of the Online Safety Commission is to set up an Online Safety Code under which designated online service providers will be obliged to operate in respect of "Harmful Online Content."

(B) Analysis of the Act

What is harmful content?

Under section 139A, it falls under two main categories (it is a non-exhaustive list)

1. Content which it is a criminal offence to disseminate under Irish or EU law. This includes:
• child sexual abuse material,
• content containing or comprising incitement to violence or hatred,
• public provocation to commit a terrorist offence.

2. Content which is likely to
• intimidate, threaten, humiliate or persecute a person,
• encourage or promote eating disorders,
• encourage or promote self-harm or suicide;
so long as such online content either gives rise to a risk to a person’s life, or to significant harm to their physical or mental health, where such harm is reasonably foreseeable.

What does it not regulate?


• The Act does not apply to defamatory material, or material that breaches privacy, data protection or intellectual property laws.

How will the Online Safety Commission operate?

• Identification of "designated online services" - ie anyone who hosts user-generated


content.

• Creation of Online Safety Code to govern the standards and practices to be observed by
designated online services. For example, time limits by which it must deal with certain types
of material; requirement to provide reports on notice and takedown requests etc.

Powers of the Commission

• Power to conduct investigations and issue a "compliance notice", followed by a "warning notice".

• Power to impose fine of up to 20m euro or 10% of previous year's turnover; power to apply
to High Court to compel service to comply with notice; power to apply to High Court to have
access to service blocked.

This is primarily a "systemic" piece of legislation

• The main purpose of the Online Safety Commission is to set up an Online Safety Code
under which designated online service providers will be obliged to operate.

• Monitoring and reporting obligations will be imposed on these providers, who may be
subject to substantial sanctions if they fail to comply

• The Online Safety Code may lay down guidelines as to how the service providers'
complaints handling procedures operate (including presumably time scales under which they
must operate), but it is unclear how specific the Code will be in this regard.

Direct complaints to the Commission?

• While section 139R provides for individuals to have complaints pursued by the
Commission, the scope of this facility is unclear at the moment.

• Individuals must first have exhausted the platform's own complaints procedure before they can make a complaint to the Commission. Initially, priority will be given to complaints involving material that is harmful to children, as per section 139U.

Section 139S of the Act:


(1) Subject to subsection (2), the Commission may not consider a complaint under this Chapter unless it is satisfied that the following conditions are met:

• the complainant has made a complaint to the provider of the designated online service concerned about the availability of the content on the service;

• a period of more than 2 days has elapsed since the complainant made the complaint to the provider;

• where the provider operates a process in accordance with an online safety code for handling such a complaint, the complainant has taken reasonable steps in that period to have the complaint resolved through that process.
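
Read cumulatively, these conditions amount to a simple gate. The following Python sketch is a simplified, illustrative encoding of s.139S(1); the function name and parameters are invented for the example and are not drawn from the Act:

    from datetime import datetime, timedelta
    from typing import Optional

    def commission_may_consider(complained_to_provider_at: Optional[datetime],
                                now: datetime,
                                reasonable_steps_taken: bool) -> bool:
        """All of the (simplified) s.139S(1) conditions must be met before
        the Commission may consider the complaint."""
        if complained_to_provider_at is None:
            return False  # no complaint was made to the provider itself
        if now - complained_to_provider_at <= timedelta(days=2):
            return False  # fewer than 2 days have elapsed since that complaint
        return reasonable_steps_taken  # steps taken under the platform's own process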

(C) Proposed Legislation


Criminal Justice (Incitement to Violence or Hatred and Hate Offences) Bill 2022

• Intended to repeal the Prohibition of Incitement to Hatred Act 1989

• Has been passed by the Dáil and is currently before the Seanad.

• Has a specific provision intended to regulate internet intermediaries. Hate Offences Bill
2022, section 7(4):
In any proceedings for an offence under this section, it shall be a defence for a body
corporate to prove, as respects the communication of material by the body corporate, that
a) it has reasonable and effective measures in place to prevent the communication
generally of material inciting violence or hatred against a person or a group of persons on
account of their protected characteristics or any of those characteristics,

b) it was complying with the measures referred to in paragraph (a) at the time the
offence concerned was alleged to have been committed, and

c) it did not know and had no reason to suspect at the time the offence concerned was alleged to have been committed that the content of the material concerned was intended or likely to incite violence or hatred against a person or a group of persons on account of their protected characteristics or any of those characteristics.

Week 3: Case law involving internet intermediaries


Overview

1) The general position of internet intermediaries

2) Internet Service Providers

3) Search Engines

4) Online publishers

5) Hosts of discussion boards/blogs


6) Social networking platforms

Google LLC v Defteros [2022] HCA 27


Liability of search engines for the results they produce

Introduction:
The case, decided by the High Court of Australia, holds significant implications in the realm
of internet law and the liability of search engines for defamatory content linked through their
platforms. This case revolves around Mr. Defteros, an Australian lawyer, who alleged that
Google was legally liable for defamatory content related to him found in search results
provided by its search engine. The case raises critical questions about the responsibilities of
search engine operators in connection with the content they display.

Background:
Mr. Defteros, whose legal practice had represented members of the organized crime
community, took legal action against Google. He argued that Google, as a "secondary"
publisher, should be held responsible for not removing links to defamatory content after
being notified of their existence. The defamatory content included photographs of Mr.
Defteros with criminal associates and hyperlinks to articles, including a Wikipedia entry, that
he claimed were defamatory.

Lower Court Decisions:


The Supreme Court of Victoria, in Defteros v Google LLC [2020] VSC 219, rejected Google's
argument that its search results were automatically generated without human intervention,
ruling that Google was not a passive tool but actively facilitated the communication of
content. However, the court acknowledged that Google could have the defense of innocent
dissemination but emphasized that it could not be liable for publication until a reasonable
time after being notified of the defamatory material.

Google appealed, but in Defteros v Google LLC [2021] VSCA 167, the Victorian Supreme
Court of Appeal upheld the lower court's finding. Google contended that it would need to
have incorporated the defamatory material into the search result or encouraged users to
follow the links to be considered a publisher. The Court of Appeal rejected this argument,
deeming the hyperlinks themselves as an "enticement" to the users to click on the snippets
and access defamatory content.

The Decision of Australia's Highest Court:


By a majority decision of 5-2, the High Court of Australia upheld Google's appeal. Central to
this decision was the finding that Google was not a "publisher" of the defamatory content in
The Age's article. The court reviewed the common law criteria for publication, which include
authorship, authorization, assistance, and ratification. The majority reasoned that providing
search results did not constitute an act of "participation" in the communication of defamatory
matter. Crucially, they emphasized the "content-neutral" aspect of hyperlinks, meaning that
the snippets in the search results did not contain defamatory material themselves.
Concurring and Dissenting Opinions:
Justice Gageler expressed the importance of consistency in characterizing hyperlinks across
common law jurisdictions but raised the possibility that a "sponsored link" could render
Google liable as a publisher.

Dissenting judges, particularly Justice Keane and Justice Gordon, argued for a broader
interpretation of "publication" in the internet age. They contended that Google actively
participated in the publication process by assisting users in accessing defamatory content
through its search results. Justice Gordon highlighted Google's commercial interest in
providing links to news articles and the inconsistency in Google's stance on its role in
providing search results.

Conclusion:
The Google v Defteros case clarifies that search engines like Google are not considered
primary publishers of content they index. However, it leaves unresolved questions about the
extent of their liability when defamatory material is accessed through their platforms. The
case underscores the challenges of applying traditional legal principles to the digital age and
the need for international consistency in determining the responsibilities of search engine
operators. Ultimately, this decision has brought attention to the complex legal landscape
surrounding internet intermediaries and the publication of online content.

1. General position of internet intermediaries


Can they be held liable for defamatory/ harassing material which they facilitate?

COMMON TYPES OF PROCEEDINGS AGAINST SOCIAL MEDIA PLATFORMS

1. To discover the identity of the person defaming/ harassing.

2. To compel the platform to block or take down the unlawful material.

3. To impose liability for unlawful behaviour on the platform itself.

ATTRACTION OF SEEKING TO HOLD THEM RESPONSIBLE

1. It may be inordinately difficult to identify the author/creator of the content, while the
platform is instantly identifiable.

2. The author/creator, even if identified, may be outside the jurisdiction of the court, while the
platform is likely to be based in Ireland.

3. The author/creator is often impecunious, while the platform is a far more attractive mark
for damages.

IMPORTANT CONSIDERATIONS:
1. What kind of functions do they perform?
2. Can they be considered to be a primary or secondary publisher?
3. Do they need a defence, and if so what kind of defence is open to them?

2. Introductory case law:

Byrne v Deane [1937] 1 KB 818

Issue: Concept of “secondary” publisher

A potentially defamatory message was left on the notice board of a golf club. It was seen by
the owners of the club, who could have taken it down. By leaving it up, could they be held
liable for defamation?

“If defamatory matter is left on the wall of premises by a person who has the
power to remove the defamatory matter from the wall he can be said to have
published the defamatory matter to the persons who read it.”

Mulvaney v Betfair [2009] IEHC 133

Issue: Dual nature of platform

Plaintiff bookie claimed he was defamed in chatroom on the defendant's website, and
questions were whether a chatroom was an "information society service", and whether
Betfair could avail of Article 14 defence, as gambling platforms are specifically excluded from
the E-Commerce Directive.

Clarke J (as he then was) held that the activities of the website operator could be separated:

“The respective activities are conducted on separate parts of the website with
no connectivity. Likewise, the “activities” concerned are very different... There is, in
my view, no significant nexus between the chatroom activity ... and the betting
activity ... which could reasonably lead to the characterisation of the chatroom as
being part of any betting activity that might be said to take place on the Betfair
website”

Kaschke v Gray, Hilton [2010] EWHC 690 (QB)

Issue: Dual nature of platform

The plaintiff claimed to have been defamed by a blog post made by Defendant 1 on a website operated by Defendant 2. Could Hilton be certain enough of having the hosting defence under the E-Commerce Directive open to him that he could have the case against him struck out? Hilton admitted to moderating certain aspects of the website, though he claimed not to moderate the particular blog posts, but the court held there was sufficient uncertainty not to strike out the proceedings. Referring to Betfair:

“it is not necessarily a bar to entitlement to the protection conferred by Regulation 19 [the UK version of Article 14] that the provider of an information society service consisting of storage of information is also engaged in an activity on the same website which is either not an information society service or if it is which does not consist of the storage of information.”

3. ISP Case Law:

Bunt v Tilley & Ors [2006] EWHC 407

Issue: Can an internet access provider be considered to be a publisher of any type?

Claimant sued for defamation, but brought proceedings against three ISPs as well as the
author of the comments. The ISPs applied for proceedings to be struck out against them.

The Court likened ISPs to the postal service in that “ISPs do not participate in the process of
publication as such, but merely act as facilitators in a similar way to the postal services. They
provide a means of transmitting communications without in any way participating in that
process." Court held that there was no prospect of them being held liable as publishers of
the comments:
“I am also prepared to hold as a matter of law that an ISP which performs no
more than a passive role in facilitating postings on the internet cannot be deemed to
be a publisher at common law … thus they do not need a defence.”

4. Search Engine Case Law:

Liability of search engines (Ireland & UK):

Metropolitan Int. Schools v Designtechnica Corp & Ors [2009] EWHC 1765 (QB)

Issue: Does search engine "publish" the content contained in its snippets?

The plaintiff claimed that the first-named defendant’s website contained defamatory postings. It also sought to bring proceedings against Google for the manner in which material appeared in the search engine snippets which provided links to the allegedly defamatory remarks about the plaintiff. Google objected to being served outside the jurisdiction.

The High Court upheld Google’s objection on the basis that it could not be held to be a
publisher of the material in question. The Court stressed the difference between search
engines and hosts of websites in terms of their ability to edit, and held that Google could not
even be considered to be a publisher after it was notified of the offending material.
“[Google Inc] cannot be characterised as a publisher at common law. It has
not authorised or caused the snippet to appear on the user’s screen in any
meaningful sense. It has merely, by the provision of its search engine, played the role
of a facilitator.”

Savage v Data Protection Commissioner and Google Ireland [2018] IEHC 122

Issue: Does search engine "publish" the content contained in its snippets?

This was a case grounded in data protection rather than defamation, with the Court remarking that different factors may be at play should the claim be one of defamation. Nonetheless, the Court appeared to approve the decision of the English High Court in Metropolitan as regards Google’s neutral status in the publication chain, and its difference from a website publisher, when it stated that the operation of Google’s search engine is:

“an automated process where individual items of information on the internet are collated automatically and facilitate the user searching particular topics or names”

Liability of search engines (Australia):

Google v Duffy [2017] SASCFC 130


Issue: Does search engine "publish" the content contained in its snippets?

Dr Duffy had engaged with online psychics and became embroiled in disputes when some of their predictions proved inaccurate. Details of her engagements with her critics were reported on a consumer interest website, ripoffreport.com, which she claimed defamed her.

Google provided links to that website and incorporated some of the comments into its snippets (see also Savage v DPC), and the autocomplete function gave the result "Janice Duffy Psychic Stalker". When Google failed to remove the links for 18 months after her first complaint, she issued proceedings against them.

The Full Court of the South Australian Supreme Court held that Google should be considered a “secondary publisher” of the material contained in the results produced by its search engine, as it was an “indispensable, proximate step in its publication to the searcher.”

“Google established the algorithm and programmes of its search engine and
made that search engine available to all users of the internet ... Google participated
in the publication of the paragraphs about Dr Duffy produced by its search engine
because it intended its search engine to do what it programmed it to do.”

The search engine provider should not be held liable for any information it publishes before being put on notice of its existence, and would have a defence after being put on notice so long as it acted expeditiously to deal with the offending material. The court also held that it re-published the material to which it provided a hyperlink as it had “incorporated” such material into its search engine snippet:
“Google has republished the Ripoff Reports by abstracting sufficient material to inform the searcher of its contents, by repeating and drawing attention to the defamatory imputation, and by providing instantaneous access to it through the hyperlink. The very purpose of an internet search engine is to encourage browsing and it is designed to achieve that purpose.”

Defteros v Google LLC [2021] VSCA 167, Google LLC v Defteros [2022] HCA 27

Issue: Does search engine "publish" the content contained in its snippets?

Mr. Defteros is an Australian lawyer. In 2004, articles were published which linked him to
organised crime. His proceedings against Google were based on results that were produced
by Google’s search engine which included photographs of him with various members of the
Australian criminal fraternity and provided hyperlinks to articles about him which Mr Defteros
claimed were defamatory.

The Victorian Supreme Court of Appeal upheld an earlier decision of the Supreme Court of
Victoria that Google was not a primary publisher of the material, but that as soon as notice
had been given about the defamatory content, it was incumbent on the company to remove
offending content in an expeditious manner. It considered that it became a secondary
publisher just 7 days after being put on notice. At first instance, the Court had rejected the
suggestion that Google’s systems were entirely automated:
“The Google search engine … is not a passive tool. It is designed by humans
who work for Google to operate in the way it does, and in such a way that identified
objectionable content can be removed, by human intervention.”

In its appeal, Google had focussed on the submission that it would need to have incorporated some of the defamatory material into the search engine result (as was the case in Duffy), and/or “enticed” or “encouraged” the user to follow the hyperlink to the defamatory material. The Court of Appeal rejected this argument, holding that the hyperlinks were an “enticement” to the user to click on the snippets and follow the links to the defamatory article:

“The combination of the search terms, the text of the search result and the
insertion of the URL link filtered the mass of material on the internet and both
directed and encouraged the reader to click on the link for further information. The
fact that there was more information conveyed in Duffy (FC) does not detract from
the conclusion that there was sufficient conduct here to constitute publication.”

NB: This case was appealed to the High Court of Australia, and judgement was given in 2022. See Authority notes. The law is moving in this direction – the old excuse of automatic algorithms is dwindling.

Liability of website operators for UGC:

Tansey v Gill & Ors [2012] 1 IR 380

Issue: Only Irish case to consider potential liability of platform operator and host for UGC

In an application for injunctive relief against the website rateyoursolicitor.com, the Court
considered the damage caused by anonymous online publications. It did not, however,
consider the respective liability of the various defendants, including the first named
defendant who “ran” the website, the second-named defendant who was an “operator” of the
site, and her daughter, the fourth-named defendant, who “answered queries about it”. In
granting the injunction, the Court held that:

“The internet has facilitated an inexpensive, easy, and instantaneous means whereby unscrupulous persons or ill motivated malcontents may give vent to their anger and their perceived grievances against any person, where the allegations are patently untrue, or where no right-thinking person would consider them to be reasonable or justified ... So serious is the mischief so easily achieved that in my view the Oireachtas should be asked to consider the creation of an appropriate offence under criminal law, with a penalty upon conviction sufficient to act as a real deterrent to the perpetrator. The civil remedies currently available have been recently demonstrated to be an inadequate means of prevention and redress.”

Delfi v Estonia (Application no. 64569/09) (2015)

Issue: European Court of Human Rights considers liability of news website for user
comments

The applicant operated a large news website, which had published an article about the
damage being caused to an ice road in Estonia by a ferry company. In its comments section,
the article attracted severe criticism of a member of the ferry company's board.

The domestic courts held Delfi liable for the comments as a publisher rather than an intermediary, noting that it “invited” the comments, and that it had an “economic interest in the publication of comments”. Delfi was, therefore, not acting in a “technical, automatic or passive” manner, even though the comments were not moderated, and were taken down 6 weeks after publication.

It awarded €320 in damages. Delfi applied to the ECtHR on the basis that the decision
breached their article 10 right to freedom of expression.

The Grand Chamber declined to overturn the decision of the domestic courts, stating that

“The Court will thus proceed on the assumption that the Supreme Court’s judgement
must be understood to mean that the subsequent removal of the comments by the applicant
company, without delay after publication, would have sufficed for it to escape liability under
domestic law."

Taking into account the findings that:


- the comments were clearly offensive
- the filtering system in place by Delfi had been ineffective
- that Delfi was a well-funded, professionally-run website, and
- it had encouraged user comments for its own commercial benefit,
the Grand Chamber held that it was acceptable to impose liability on Delfi “if they fail to take
measures to remove clearly unlawful comments without delay, even without notice
from the alleged victim or from third parties.” – Inconsistent with Art 6 DSA.

Fairfax & Ors v Voller [2021] HCA 27

Issue: Australian High Court extends liability of intermediaries for third-party comments

The plaintiff had previously been in a youth detention centre in Australia. When articles were written about his experiences which criticised the centre, and which were published on Facebook pages operated by news organisations, they attracted defamatory comments about Mr Voller. He sued the news organisations, rather than the authors of the comments, and at first instance the court considered the preliminary issue of whether they were the “primary publishers” of these third-party comments. It held that they were, making them liable without the need to have been put on notice about their existence.

The highest court in Australia, the High Court of Australia, recently upheld the decision, pointing (as in Delfi) to the commercial benefits that underpin the news organisations’ choices –

“The appellants chose to operate public Facebook pages in order to engage commercially with that significant segment of the population… The appellants' attempt to portray themselves as passive and unwitting victims of Facebook's functionality has an air of unreality. Having taken action to secure the commercial benefit of the Facebook functionality, the appellants bear the legal consequences.”

It concluded that:

“each appellant intentionally took a platform provided by another entity, Facebook, created and administered a public Facebook page, and posted content on that page. The creation of the public Facebook page, and the posting of content on that page, encouraged and facilitated publication of comments from third parties. The appellants were thereby publishers of the third-party comments.”

Food for thought...

Could the same not be said about the intermediaries themselves – Twitter, Facebook, Google – in relation to the principle of 'encouraging and facilitating' content?

NB: There has been pushback against this position via proposed legislation – the Social Media (Anti-Trolling) Bill 2021 (to be discussed next week)

5. Social Networking Case Law

Sanchez v France (Application no. 45581/15, Grand Chamber, 15 May 2023)

▪ Mr. Sanchez was a French politician who was standing in Nimes, southern France, as a candidate in the Parliamentary elections. He operated a Facebook page which he used as part of his election campaign, and had approximately 1800 Facebook "friends". The page was generally accessible to any Facebook member.

▪ A relatively innocuous comment about his opponent in the election attracted comments
from third parties, two of which amounted to hate speech against Muslim immigrants in
southern France, and one of which referred to the wife of Mr. Sanchez's opponent, who
was of North African descent. One of the comments was removed voluntarily, and Mr.
Sanchez published a post appealing to users to ‘be careful with the content of their
comments,’ but no further comments were removed and the worst of them remained on
the wall for 6 weeks.

▪ Mr. Sanchez was convicted by Nimes Criminal Court as the "producer" of the forum, and
held directly liable for the comments, even though there was a question as to whether he
had explicitly been put on notice of them. Instead, an imputed responsibility was placed on
him on the basis that he had made the Facebook wall public and "encouraged" people to
comment.

▪ He was fined €3,000, a decision which he appealed to the Strasbourg Court.

▪ Both the Chamber and the Grand Chamber upheld the original decision. They compared Mr. Sanchez's position to that of Delfi in respect of the commercial nature of the Facebook page, and pointed to his important position as a politician – "it is crucial for politicians, when expressing themselves in public, to avoid comments that might foster intolerance and… they should also be particularly careful to defend democracy and its principles."

▪ This appears inconsistent with previous jurisprudence which gave enhanced rights of
freedom of expression during political debate. It is also questionable as to whether an
individual operating a Facebook page should be subject to the Delfi principles.

6. Hosts of Discussion Boards Case Law

Godfrey v Demon Internet [2001] QB 201

Issue: Defendant was both an ISP and the host of a discussion forum.

The defendant was an ISP which hosted on its news server a particular Usenet newsgroup, storing postings for about a fortnight. They were not held to be a primary publisher, but having been put on notice about the defamatory content, they failed to delete it (as per Byrne v Deane), and so could not avail of the defence of innocent publication (no reference to the E-Commerce Directive).

“I do not accept Mr Barca's argument that the defendants were merely owners of an
electronic device through which postings were transmitted. The defendants chose to store
soc.culture.thai postings within their computers. Such postings could be accessed on that
newsgroup. The defendants could obliterate and indeed did so about a fortnight after
receipt.”

Davison v Habeeb & Ors [2011] EWHC 3031 (QB)

Issue: Did host of Blogspot.com have actual knowledge of unlawful material?

Plaintiff claimed that the first-named defendant defamed her in a blog hosted by Google, which was the fifth-named defendant. The application was to serve Google outside the jurisdiction. The Court held that it was “at least arguable” that Google was a publisher following notification (as per Byrne v Deane). However, the court set aside the order to serve Google outside the jurisdiction on the basis that it did not have “actual knowledge” of the unlawful nature of the comments:

"My conclusion is that there is no realistic prospect of the claimant


establishing that the notification of her complaint fixed the fifth defendant with actual
knowledge of unlawful activity or information ... The fifth defendant was faced with
conflicting claims from the claimant and the second defendant between which it was
in no position to adjudicate. That is of course not to say that a different conclusion
could not be reached on different facts, such as where … a complaint was sufficiently
precise and well substantiated, and where there was no attempt by the author of the
defamatory material to defend what had been written." – Art 14 defence.
On appeal in Tamiz (below), however, the Court of Appeal held that Google's involvement was far from passive, noting that “it makes the Blogger service available on terms of its own choice and it can readily remove or block access to any blog that does not comply with those terms”.

While it was not a primary publisher of the material, the court concluded that

“if Google Inc allows defamatory material to remain on a Blogger blog after it has
been notified of the presence of that material, it might be inferred to have associated itself
with, or to have made itself responsible for, the continued presence of that material on the
blog and thereby to have become a publisher of the material.”

While it held that the five weeks it took Google to remove the material was excessive, and
that it could be liable for its publication during that time, it still refused the application to serve
it outside the jurisdiction on the basis that no real damage was caused during those 5
weeks, as hardly anyone would have accessed the comments during that period.

Tamiz v Google Inc. [2013] EWCA Civ 68

Issue: English Court of Appeal on liability of blogging platform for UGC

As in Davison v Habeeb, this concerned an attempt to serve Google outside the jurisdiction so as to hold it liable for postings by a third party on blogspot.com which allegedly defamed the Plaintiff. Five weeks after receiving a complaint from Mr. Tamiz about the posting, Google forwarded the complaint to the blogger, who removed the content 2 days later. The plaintiff sought to make Google liable for publication for that 5-week period.
At first instance, the High Court refused the application and, following the authority of Bunt v Tilley, held that Google was simply a passive facilitator in the publication process, and was “not required to take any positive step, technically, in the process of continuing the accessibility of the offending material, whether it has been notified of a complainant’s objection or not.” Eady J. compared Google Inc to the owner of a wall upon which graffiti had been written:
been written:

“It is no doubt often true that the owner of a wall which has been festooned,
overnight, with defamatory graffiti could acquire scaffolding and have it all deleted
with whitewash. That is not necessarily to say, however, that the unfortunate owner
must, unless and until this has been accomplished, be classified as a publisher.”

7. Social Networking Platforms Case Law

Liability of Facebook for UGC:

C-18/18 Eva Glawischnig-Piesczek v Facebook Ireland Ltd [2019] (directly applicable)

Issue: The prohibition against monitoring under Article 15.

Ms Glawischnig-Piesczek, an Austrian politician, was the subject of a defamatory post on a user’s Facebook page. After Facebook refused to delete the post, the applicant obtained an injunction from the local court compelling them to do so. The injunction not only compelled them to take the material down, but also to block its further re-appearance, to include "equivalent" material.

After a series of domestic appeals, the matter went to the CJEU, which was asked to decide whether Facebook could be obliged to prevent the further uploading of material that was not only identical to the original, defamatory post, but also material which had a similar, “equivalent” meaning.

It may be noted that in earlier proceedings which were brought in respect of harassment before the High Court of Northern Ireland, the court refused to grant a similar injunction in XY v Facebook Ireland Ltd [2012] NIQB 96 (QBD (NI)), holding that to oblige Facebook to block any re-uploading of identical material would place a ‘disproportionate burden’ on the social networking service. It remains to be seen how the CJEU’s decision is interpreted in respect of any future applications for an online platform to not only block a specific item of content from re-appearing on its platform, but for equivalent content to be blocked as well.

The CJEU held that it was not inconsistent with Article 15 of the E-Commerce Directive (prohibition against monitoring) to compel Facebook to prevent the future uploading of not only identical material, but also “equivalent content”, so long as the injunction did not require an “independent assessment” of the material, ie manual supervision by staff. It noted that the Austrian court had already found the material to be defamatory and ordered it to be taken down, and that:
“although Article 15(1) prohibits Member States from imposing on host
providers a general obligation to monitor information which they transmit or store, or
a general obligation actively to seek facts or circumstances indicating illegal activity,
as is clear from recital 47 of that directive, such a prohibition does not concern the
monitoring obligations ‘in a specific case’.

The Court also held that such an obligation could have world-wide effect, as the E-Commerce Directive "does not preclude those injunction measures from producing effects worldwide." It left it to domestic courts, however, to ensure that any such orders were consistent with international law.

CG v Facebook [2015] NIQB 11, [2016] NICA 54


J20 v Facebook [2017] NICA 48

Issue: What is 'actual knowledge' of unlawful content?

In CG, a convicted sex offender sued for breach of privacy arising out of a series of
Facebook pages, one of which was entitled “Keeping our kids safe from Predators”. They
featured threats of violence against him, and he issued proceedings against not only the
creators of the page, but also Facebook under the tort of misuse of private information.

In its defence, Facebook claimed that it was not given actual knowledge of the content, a claim which was rejected at first instance by the court, which held that due to the “considerable resources” Facebook had at its disposal, it could easily have made itself aware of the subject matter of the complaint, and suggested that Facebook's demands would require a complaint to be spelled out with “inappropriate precision.”

This particular finding was overturned on appeal, though the court did uphold Facebook's position as a publisher for a short, 2-week period in relation to one of the Facebook pages. In relation to a similar issue in J20, which concerned allegations made against the claimant that he was a loyalist bigot and an informant, the court of appeal likened the position of Facebook to that of Google in Tamiz v Google, or the golf club committee in Byrne v Deane, in that they could be taken to have participated in the publication of defamatory material if they failed to remove it having been put on notice of its existence. Again, however, it found a deficiency in the manner in which Facebook was notified, and held it liable only for a short period.

It did, however, consider that a 2-week delay in taking it down after a court order was not "acting expeditiously", and awarded £500 in damages against it.

See also YouTube (C-628/18) and Cyando (C-683/19) for what constitutes "actual
knowledge" for an intermediary. These joined cases considered in June 2021 the extent to
which an intermediary such as YouTube has to do its own investigation into whether material
is unlawful in seeking to avail of the Article 14 defence.
Week 4: Anonymity and Privacy on the Internet
Overview of week 4

1) Review of week 3

2) Anonymity
• The general right
• Can you bring proceedings anonymously/ super-injunctions?
• The position in Ireland

3) Issues provoked by anonymous online users


• How to identify anonymous users via platforms/ court orders

4) Privacy
• Contours of the right to privacy
• Balance with freedom of expression

5) Specific issues about online privacy


• Privacy and photographs
• Privacy in the workplace
• Privacy settings on social media

1. General Position
Introduction to Anonymity

- The general attitude towards anonymity:

“Anonymity is a shield from the tyranny of the majority … It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation — and their ideas from suppression — at the hand of an intolerant society. The right to remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will sometimes have unpalatable consequences and, in general, our society accords greater weight to the value of free speech than to the dangers of its misuse.” – US Supreme Court decision in McIntyre v Ohio Election Commission 514 US 334 (1995)

‘Anonymity has long been a means of avoiding reprisals or unwanted attention. As such, it is capable of promoting the free flow of ideas and information in an important manner, including, notably, on the Internet. At the same time, the Court does not lose sight of the ease, scope and speed of the dissemination of information on the Internet, and the persistence of the information once disclosed, which may considerably aggravate the effects of unlawful speech on the Internet compared to traditional media...’ – Grand Chamber of ECtHR in Delfi v Estonia (Application no. 64569/09) (2015)
- On what basis do internet users operate anonymously?

1. It is not provided for either by Statute or as a fundamental right. On the other hand, there is no law that states that you must identify yourself when using the internet, uploading content, postings, comments etc.

2. Social media platforms not only allow users to operate anonymously but also to sign up and use their platform without providing verified identification. When signing up to their platform, Twitter and Facebook only require an email address or a telephone number from the user wishing to join. Requiring verified identification would drastically reduce the number of sign-ups, and the use and earning potential of these platforms.

3. Is there a legal right to retain online anonymity?

The general proposition that bloggers have a specific right to anonymity was considered by the English High Court in The Author Of A Blog v Times Newspapers Limited [2009] EWHC 1358 (QB). The court was faced with an application by the plaintiff – a police officer – who wished to prevent the defendant newspaper from unmasking his identity. The Plaintiff submitted the general proposition that ‘there is a public interest in preserving the anonymity of bloggers.’ This was rejected by the court, which said: “...It is in my judgement a significantly further step to argue, if others are able to deduce their identity, that they should be restrained by law from revealing it.”

In K.U. v Finland (App no. 2872/02, 2 December 2008), the ECtHR considered an application from a 12-year-old Finnish boy whose identity had been used on an internet dating site. Finnish law allowed intermediaries to refuse to disclose the identity of their users "to protect their right to anonymous expression." The Court held that the boy's right to privacy had been breached, and that the guarantee of anonymity for internet users contained in articles 8 and 10 ECHR “could not be absolute and should yield on occasion to other legitimate imperatives, such as the prevention ... of crime or the protection of the rights and freedoms of others”.

2. Bringing Proceedings Anonymously

Right to anonymity for a plaintiff

- Why would the victim of online behaviour want to remain anonymous?

1. They may be the victim of harassment/revenge porn

In AB v Bragg Communications Inc [2012] 2 SCR 567, the Canadian Supreme Court allowed the victim of cyberbullying to bring her proceedings anonymously, citing the importance of both her age (she was 15) and the need to encourage victims to bring proceedings without "the risk of further harm from public disclosure".
2. They may be the victim of a defamation/a cyber-attack

In the Australian case of X v Twitter Inc [2017] NSWSC 1300, the Supreme Court of New South Wales granted anonymity to a company about whom fake Twitter accounts had been set up in an attempt to damage their reputation. The Court held that "If that were not so, the protection that the plaintiff seeks in relation to its private and confidential information might be undone".

3. What is the law about bringing proceedings anonymously?

• The Civil Procedure Rules in the UK allow the court a general discretion to conceal the identity of any party, or witness, ‘if it considers non-disclosure necessary in order to protect the interests of that party or witness’.

• A series of high-profile cases in the UK involving "super injunctions" around 10 years ago obliged courts to face the reality of trying to hide the identity of parties in the age of social media – see PJS v News Group Newspapers [2016] UKSC 26.

• The courts in Australia and New Zealand have likewise shown themselves more willing
to allow victims to bring proceedings anonymously - X v Twitter Inc [2017] NSWSC
1300.

• Situation in this jurisdiction, however, is very different...

3. Anonymity in Ireland
Right to anonymity for a plaintiff in Ireland? No

• Irish courts have traditionally been reluctant to allow someone to institute proceedings anonymously. This is based on the constitutional right to have justice administered in public under Article 34.1.

"Justice shall be administered in courts established by law by judges


appointed in the manner provided by this Constitution, and, save in such special and
limited cases as may be prescribed by law, shall be administered in public."

• Aside from the requirement to hold certain proceedings in camera, the Courts have
interpreted this constitutional provision as requiring the identification of parties.

Roe v The Blood Transfusion Board [1996] 3 IR 67


• The Plaintiff had contracted hepatitis C through a blood transfusion with the Defendant and
sought to bring proceedings under an assumed name so as to protect her privacy.

In refusing the application, Laffoy J held that "the public disclosure of the true identities of parties to civil litigation is essential if justice is to be administered in public." But are they the same thing? Would the principles have been lost if she had been anonymous? Would we have gotten less from the case?
McKeogh v Doe [2012] IEHC 95
• Plaintiff sought orders from the Court which would prevent six national newspapers from
either publishing details of the video clip which purported to show him evading a taxi fare, or
from revealing his identity.

• It was submitted on his behalf that retaining his anonymity was the only way for the Plaintiff
to be granted an effective remedy, as the publication of his name in the media would perform
the very mischief he was attempting to prevent. The Court rejected the application, stating
that “the right to have justice administered in public far exceeds any right to privacy,
confidentiality or a good name.”

4. Identifying Anonymous Defendants

How to unmask anonymous users

Summary:
• The benefits of anonymity, in terms of fostering positive discussion, enabling
whistleblowing and facilitating freedom of expression, are self-evident.

• The downside of facilitating anonymous posting of content is that it encourages unlawful content.

• In respect of direct legal liability for such material, online platforms are provided with robust protection for the material they host by Article 6 of the Digital Services Act (Art. 14 of the E-Commerce Directive), and by section 27 of the Defamation Act 2009.

First step:
• Ask the internet intermediary being used (ie Facebook, Twitter etc) to identify the
anonymous user.

• Platforms are traditionally very protective about the right of their users to operate
anonymously. Twitter recently stated that: “Pseudonymity has been a vital tool for speaking
out in oppressive regimes, it is no less critical in democratic societies.
“Pseudonymity may be used to explore your identity, to find support as
victims of crimes, or to highlight issues faced by vulnerable communities.”

• General position is that they will refuse to release any information about their users without
a court order, citing issues concerning data protection and confidentiality.

• Another issue raised by such platforms when requested to identify users is whether they can be sure the material is actually unlawful. In Muwema v Facebook Ireland Ltd [2017] IEHC 69, for example, Facebook said that they are 'not in a position' to arbitrate as to whether the material a user has uploaded is unlawful.

The Australian initiative:

Background:
• Decision in Fairfax v Voller [2021] HCA 27 (Fairfax held responsible for comments made
by 3rd party users).

• In November 2021, prime minister Scott Morrison articulated the frustration of many victims of anonymous online trolling when he declared that social media has become a “coward’s palace ... where (anonymous) people can say the most foul and offensive things to people and do so with impunity.” – the platform is the root cause.

• The draft Social Media (Anti-Trolling) Bill 2021, which followed almost immediately, is an
attempt to codify some of the Prime Minister’s suggestions.

Social Media (Anti-Trolling) Bill 2021

▪ The Bill proposes to make social media platforms liable for comments posted by users of
social media, as well as comments posted by third parties on that user’s page, by
considering the platforms to be the publishers of such comments. It further proposes to
remove the defence of innocent dissemination which had previously been available to them.

▪ This apparently onerous provision is tempered by a new defence which is available to social media platforms, which requires them to have a “Complaints Scheme” in place, and to follow that Scheme upon receiving a complaint from an injured party:

a) Upon receipt of a notice alleging that a person has been defamed, the social
media platform must contact the author of the comment within 72 hours to inform them;

b) If the author of the comment consents, the comment will be removed by the social
media service;

c) If the author of the comment does not consent to it being removed, the social
media service must provide the complainant with the author’s name, address and email
address, subject to the author consenting to such information being disclosed;

d) If the author does not consent to their name and address being disclosed, the
complainant can apply to the court for an “End-user information disclosure order”, obliging
the social media service to give the complainant contact details for the author of the
defamatory comment. (this appears to mirror the Norwich Pharmacal procedure utilised in
this jurisdiction.)

This doesn’t work if platforms allow people to sign up anonymously – it is based on the platform knowing their identity. It compels social media platforms to know who their users are. Makes sense. There are similar proposals in the UK; nothing like this in Ireland.

How to unmask anonymous users – Norwich Pharmacal procedure:

An action whose aim is purely to obtain information in respect of a proposed defendant is not
provided for either by legislation or the Court Rules – instead, it is provided for by the
inherent jurisdiction of the High Court. The relief was established in Norwich Pharmacal v
Customs and Excise Commissioners [1974] AC 133, a decision which has given the
application its name, in which Lord Reid gave the definitive statement of the principle:

‘…if through no fault of his own a person gets mixed up in the tortious acts of
others so as to facilitate their wrong-doing he may incur no personal liability but he
comes under a duty to assist the person who has been wronged by giving him full
information and disclosing the identity of the wrongdoers.’

The granting of a Norwich Pharmacal order is an entirely discretionary relief and the courts
have formulated a test that must be met before such an order will be granted. The test is as
follows:

1) There is a reasonable basis to allege that a wrong has been committed.

2) The disclosure of documents or information from the third party is needed to enable action against the wrongdoer.

3) The respondent is sufficiently involved in the wrongdoing so as to have facilitated it, even if innocently, and is in a position to provide the information.

4) The order is necessary in the interests of justice on the facts of the case.

In the majority of cases, the internet intermediary does not oppose the application, and the
order will be granted.

Issues with the procedure:

• It is only provided for in the High Court, so is expensive. Original draft of Harassment,
Harmful Communications and Related Offences Bill had a provision to bring such an
application to the Circuit Court, but this was dropped before the enactment. This is again
being recommended in the reform of the Defamation Act 2009, which is currently undergoing
consultation.

Salesian College v Facebook [2021] IEHC 287

Background:
Boys set up a WhatsApp group and shared teacher info (and some student info) in the form of banter/memes. Someone complained. The school wanted to “discipline” the boys. Members of the group were operating anonymously, and the school applied to discover their identity. The Court considered the wider issues at play (1. the fact that they were school children; 2. whether a party is only entitled to unmask an anonymous user in order to pursue them for unlawful conduct; 3. the general rights to freedom of expression and privacy). The school didn’t intend to sue.

The High Court proposed to refer three questions to the Court of Justice in respect of such
applications:

1) Do the rights conferred under Article 7, Article 8 and Article 11 of the Charter of
Fundamental Rights of the European Union imply a right, in principle, to post material
anonymously on the internet (subject always to any countervailing objective of public
interest)?
– First time anyone asked the CJEU if there is a right to anonymity.

2) What is the threshold to be met under the General Data Protection Regulation
and/or the Charter before the provider of a social media platform can be compelled to
disclose, to a third party, information which would identify an otherwise anonymous account
user?

3) Is there any necessity for a national court to attempt to put the affected party on
notice of an application which seeks to identify the operators of an otherwise anonymous
user account?
– No provision for this in Norwich Pharmacal orders.

• The information obtained may not even reveal the identity of the anonymous user. In
Parcel Connect v Twitter [2020] IEHC 279 for example, Twitter stated that it "has nothing to
say as to what the information should be and does not warrant that such information as it
has will be sufficient to allow the plaintiffs to establish the true identity of the owner and
operator of the account...”

• There is a lack of clarity as to who pays for the costs of the application.

5. General Right to Privacy


Origin of the right to Privacy:

- The right to privacy does not come from Statute. In Ireland, it is an unenumerated right
under Article 40.3.1 of the Constitution:

"The State guarantees in its laws to respect, and, as far as practicable, by its laws to
defend and vindicate the personal rights of the citizen."

Kennedy v Ireland [1987] 1 IR 587

“The dignity and freedom of an individual in a democratic society cannot be ensured if his communications of a private nature, be they written or telephonic, are deliberately, consciously and unjustifiably intruded upon and interfered with.”

It is not an unqualified right, with Hamilton P. stressing that it was subject to the
constitutional rights of others and the preservation of public order, morality and the common
good.

Right has been re-iterated on several occasions in the Superior Courts:


• Herrity v Sunday Newspapers [2008] IEHC 249
• Hickey v Sunday Newspapers [2011] 1 IR 228*
• Nolan v Sunday Newspapers [2019] IECA 141
6. Balancing Privacy w/ Freedom of Expression
Origin of the right to Freedom of Expression:

- It is also a Constitutional right but is specifically provided for under Article 40.6.1:

"The State guarantees liberty for the exercise of the following rights, subject to public
order and morality: The Right of the citizens to express freely their convictions and opinions"

It is not an unqualified right, with the same Article providing the restriction:

"The publication or utterance of seditious or indecent matter is an offence which shall be punishable in accordance with law."

Both rights are protected by the EU Charter of Fundamental Rights and the European Convention on Human Rights:

The Charter:
• Article 7 provides for "respect for private and family life"
• Article 11 provides that "Everyone has the right to freedom of expression. This right shall
include freedom to hold opinions and to receive and impart information and ideas without
interference by public authority and regardless of frontiers.

The Convention:
• Article 8 provides protection for "private and family life, his home and his correspondence"
• Article 10 provides that "Everyone has the right to freedom of expression. This right shall
include freedom to hold opinions and to receive and impart information and ideas without
interference by public authority and regardless of frontiers.

7. Specific Issues Concerning Privacy


Specific internet law issues pertaining to privacy:
1) Privacy and photography
2) Privacy at work
3) Privacy as a right v social media 'privacy' settings

1) Privacy and Photography:


Much of the case law surrounding breach of privacy has involved the publication of photographs to which the person did not consent. As photographs can be uploaded and shared so easily online, this is a particularly pertinent consideration for internet law.

A) The right to photograph someone:

When a person objects to their photograph being 'taken', what they are in reality objecting to is its subsequent publication to the public, very often through the news media. They are, however, two separate rights. There is technically no right to stop someone taking your photo -

"The famous and even the not so famous who go out in public must accept that they may be photographed without their consent, just as they may be observed by others without their consent." - Campbell v MGN [2004] UKHL 22.

B) When your right to privacy is infringed:

In respect of privacy and the publication of a photograph, Lord Hoffman held in Campbell v MGN that the fact that we cannot avoid being photographed “does not mean that anyone who takes or obtains such photographs can publish them to the world at large."

“The widespread publication of a photograph of someone which reveals him to be in a situation of humiliation or severe embarrassment, even if taken in a public place, may be an infringement of the privacy of his personal information. Likewise, the publication of a photograph taken by intrusion into a private place (for example, by a long distance lens) may in itself be such an infringement, even if there is nothing embarrassing about the picture itself.”

See also:
– John v Associated Newspapers [2006] EMLR 27
– Hickey v Sunday Newspapers Ltd [2011] 1 IR 228

2) Privacy at work
This engages with two main issues:

a) Your employer's right to monitor your internet usage

b) The degree to which your 'private' online correspondence is truly private

"There appears .. to be no reason of principle why this understanding of the notion of


‘private life’ should be taken to exclude activities of a professional or business nature since it
is, after all, in the course of their working lives that the majority of people have a significant, if
not the greatest, opportunity of developing relationships with the outside world.”- Niemietz v
Germany (ECtHR, Application No 13710/88) 16 December 1992.

A) Employer's right to monitor an employee's internet usage in the workplace:

Barbulescu v Romania (ECtHR, App No 61496/08, 5 September 2017)

▪ The employee created and operated a Yahoo! Messenger account on behalf of his employer. The company circulated an information notice to its staff, which stipulated that staff must not use company time for personal internet use. When accused of using the Yahoo! service for personal reasons, the claimant denied doing so. He was then shown a 45-page transcript of the monitored communications from the company account, as well as a number of messages that were sent on his personal Yahoo! Messenger account, and was dismissed.
▪ Mr Bărbulescu alleged that his Article 8 right to privacy had been infringed. The lower Chamber held that the monitoring had been reasonable. On appeal, the Grand Chamber overturned the decision, finding that the domestic Romanian court had failed to consider:

- whether Mr Bărbulescu was on notice that his online usage might be monitored, or

- whether the scope of such monitoring would include the content of his
communications, rather than simply recording the flow.*

• Central to the Grand Chamber's decision was its finding that a total ban on private
correspondence in the workplace was impermissible, that "an employer’s instructions cannot
reduce private social life in the workplace to zero."

• The Grand Chamber stressed that monitoring of employees by the employer was not
illegal, holding that the latter retained ‘...a right to engage in monitoring, including the
corresponding disciplinary powers, in order to ensure the smooth running of the company’.

B) Is your 'private' correspondence truly private?

There is a recurring difficulty when an employer seeks to make use in disciplinary proceedings of material that has been shared in closed Facebook or WhatsApp Groups, or behind a user's privacy settings.

A Sales Assistant v A Grocery Retailer (ADJ-00011302, October 2018) - penalisation case

▪ The complainant had been part of a picket outside the respondent's store, and was
penalised, inter alia, for her participation in the private Facebook page set up by 43 fellow
strikers. The respondents became aware of comments amongst the group which were in
breach of the company’s social media policy and used this as a reason for dismissing the
complainant. The complainant insisted that these were private communications between
union members, and not the business of the respondent.

▪ The Adjudicating Officer rejected the submission that this group could be considered
private with the following observations:

‘... [a]s a group with 43 members posting to a Facebook page, there is no prospect
that the information could be contained in the group. While the members may have aspired
to privacy, in reality the information was posted on the world-wide web ... It seems to me
naïve to think that any postings on Facebook are private.’ (only takes 1 person to break the
chain and make the info public)

Crisp v Apple (ET/1500 258/11 2011) - unfair dismissal case

• English employment case in which the complainant had been dismissed for comments published by third parties, and comments he made himself, on his personal Facebook page about the quality of Apple's products. He claimed that those comments were private and could only be viewed by his Facebook friends. The Tribunal disagreed:
"The nature of Facebook, and the internet generally, is that comments by one person
can very easily be forwarded on to others. The claimant had no control over how his
comments might be copied and passed on."

3) Privacy as a right v social media 'privacy' settings

The issue of privacy settings on social media, and the degree to which they can prevent content hidden behind those settings being used in court, has not yet been considered by a court in this jurisdiction.

Martin & Ors v Gabriele Giambrone P/A Giambrone & Law [2013] NIQB 48.

• Northern Ireland case concerning application for a Mareva injunction. Defendant objected
to evidence obtained from his Facebook account which suggested that he would "leave them
with nothing". Court refused to disallow the evidence, which appears to have been a private
conversation:

"I should say that anyone who uses Facebook does so at his or her peril. There is no
guarantee that any comments posted to be viewed by friends will only be seen by those
friends. Furthermore, it is difficult to see how information can remain confidential if a
Facebook user shares it with all his friends and yet no control is placed on the further
dissemination of that information by those friends."

Evidence in personal Injuries proceedings

• Evidence, particularly from Facebook that contradicts a plaintiff's case in personal injuries
proceedings, is regularly introduced in the Irish courts, without any consideration of how it
was obtained, or whether it was hidden behind privacy settings.

• The question arises as to whether you can compel a plaintiff to produce relevant
information from behind these settings via a discovery application. No consideration in this
jurisdiction, but Canadian courts have considered both sides of the argument...

Leduc v Roman [2009] Supreme Court of Ontario


Discovery sought even though the public profile revealed only the name and photograph of the plaintiff.

Granted. Court held that: ‘To permit a party claiming very substantial damages for loss of
enjoyment of life to hide behind self-set privacy controls on a website … risks depriving the
opposite party of access to material that may be relevant to ensuring a fair trial’

Stewart v Kempster [2012] Supreme Court of Ontario


Discovery refused. The Court compared the application to one that might have been made in pre-internet times: ‘It is unimaginable that a defendant would have demanded that a plaintiff disclose copies of all personal letters written since the accident, in the hope that there might be some information contained therein relevant to the plaintiff's claim. The shocking intrusiveness of such a request is obvious.’

INTERNET LAW:

Week 5: Online Speech (Part 1) NB!


DEFAMATION WILL BE ON EXAM

Overview of Week 5

Recap of Week 4 (anon/privacy)

1) Freedom of Expression
• The general right
• Restrictions to the right

2) Speech in the internet age

3) Harmful Speech
• What is it?

4) Hate Speech
• The general position
• The law in Ireland

5) Jurisdiction** exam Q
• Where can you bring proceedings for harmful speech online?

1. Freedom of Expression

General Position
The right to Freedom of Expression:

- It is a Constitutional right, specifically provided for under Article 40.6.1.

Article 40.6.1° of the Constitution:

"The State guarantees liberty for the exercise of the following rights, subject to public order and morality: The Right of the citizens to express freely their convictions and opinions"

It is not an unqualified right, with the same Article providing the restriction:
"The publication or utterance of seditious or indecent matter is an offence
which shall be punishable in accordance with law."
Protected by the EU Charter of Fundamental Rights and the European Convention on Human Rights:

The Charter:

• Article 11 provides that:

"1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.
2. The freedom and pluralism of the media shall be respected."

The Convention:

• Article 10 provides for the right under similar terms, but with explicit restrictions:

"The exercise of these freedoms, since it carries with it duties and


responsibilities, may be subject to such formalities, conditions, restrictions or
penalties as are prescribed by law and are necessary in a democratic society..."

Case law

Handyside v United Kingdom (1976) 1 EHRR 737** – scope (v. broad)

“Freedom of expression constitutes one of the essential foundations of [a democratic] society, one of the basic conditions for its progress and for the development of every man. Subject to paragraph 2 of Article 10, it is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no "democratic society."

R v Central Independent Television plc [1994] Fam 192

"Newspapers are sometimes irresponsible and their motives in a market


economy cannot be expected to be unalloyed by considerations of commercial
advantage. Publication may cause needless pain, distress and damage to individuals
or harm to other aspects of the public interest. But a freedom which is restricted to
what judges think to be responsible or in the public interest is no freedom. Freedom
means the right to publish things which government and judges, however well
motivated, think should not be published. It means the right to say things which ‘right-
thinking people’ regard as dangerous or irresponsible."

Fennelly J in Mahon v Post Publications [2007] 3 IR 375

“The right of freedom of expression extends the same protection to worthless, prurient and meretricious publication as it does to worthy, serious and socially valuable works. The undoubted fact that news media frequently and implausibly invoke the public interest to cloak worthless and even offensive material does not affect the principle.”

Balance w/ the Right to Privacy

When the right to freedom of expression is sought to be curtailed by a person's right to privacy:

In Axel Springer AG v Germany [2012] ECHR 45, the ECtHR held that when the right to freedom of expression is being balanced against the right to respect for private life, the relevant criteria to be considered are:
(neither right trumps the other; in every case they are balanced on the circumstances of that particular case and nothing more)

1) Contribution of the material to a debate of general interest
2) How well known is the person concerned and what is the subject of the report?
3) Prior conduct of the person concerned
4) Method of obtaining the information and its veracity
5) Content, form and consequences of the publication
6) Severity of the sanction imposed

Is there increased scrutiny when speech is made online?

The particular importance of the press in imparting information is well documented, and the
same importance has been placed upon the dissemination of news via the internet, with the
ECtHR stating in Times Newspapers v United Kingdom [2009] (Applications 3002/03 and
23676/03) that:

“In light of its accessibility and its capacity to store and communicate vast
amounts of information, the Internet plays an important role in enhancing the public's
access to news and facilitating the dissemination of information generally.”

The ECtHR in Editorial Board of Pravoye Delo & Anor v Ukraine (Application no. 33014/05) was even more explicit, suggesting that speech disseminated via the internet should be subjected to a heightened degree of scrutiny as compared to that in the traditional media:
“It is true that the Internet is an information and communication tool particularly distinct from the printed media, especially as regards the capacity to store and transmit information. The electronic network, serving billions of users worldwide, is not and potentially will never be subject to the same regulations and control. The risk of harm posed by content and communications on the Internet to the exercise and enjoyment of human rights and freedoms, particularly the right to respect for private life, is certainly higher than that posed by the press."
The Right to Internet Access

The right not only to speak, but also to receive information:

The right to freedom of expression can be curtailed in certain circumstances. The question of
whether this extends to a denial of access to the internet has been considered by the courts
in a variety of situations.

Accessibility of the internet:

The wholesale blocking of a hosting facility was considered by the ECtHR in Ahmet Yildirim v Turkey (app no. 3111/10, 18 December 2012), in circumstances where the Turkish courts ordered the blocking of Google Sites in its entirety. This measure was intended to combat the making available of content on a third party's website which was perceived to be insulting to the founder of the Turkish republic, but it had the collateral effect of blocking the applicant's own, unrelated website. While it was not a wholesale ban on internet access, the Court held that:

"the fact that the effects of the restriction in issue were limited does not
diminish its significance, especially since the Internet has now become one of the
principal means by which individuals exercise their right to freedom of expression
and information, providing as it does essential tools for participation in activities and
discussions concerning political issues and issues of general interest.”

The court found that while the generic blocking of all Google Sites by the Turkish authorities was a violation of the right to freedom of expression, it may be permissible to restrict internet access without falling foul of the Convention, the Court stressing that “a legal framework is required, ensuring both tight control over the scope of bans and effective judicial review to prevent any abuse of power.”

2. Speech in the Internet Age

New issues concerning online speech (1):

The use of social media in particular has created new issues in respect of what exactly
constitutes 'speech' in the internet age. The brevity required by the use of Twitter, and the
fast-paced nature of social media exchanges, has resulted in a dramatic increase in
abbreviations and acronyms: LOL, DM, TBH, FOMO....

New issues as regards what constitutes speech:

• Providing a hyperlink to someone else's speech.

• Re-posting or re-tweeting comments. English comedian Alan Davies agreed to pay £15k compensation to Lord McAlpine for having re-tweeted a tweet of Sally Bercow.

• 'Liking' someone else's post. In Bolton v Stoltenberg [2018] NSWSC 1518 , the court held
that "No authority was drawn to my attention which establishes that clicking the “like” button
on a Facebook page constitutes a level of endorsement of the publication to render the
person liable as a publisher. I do not regard “liking” a Facebook post, of itself, as analogous
to conduct of the kind described (as) “drawing the attention of another to defamatory
words..."

• Can an emoticon or emoji be defamatory? In The Lord McAlpine of West Green v Sally Bercow [2013] EWHC 1342, the Court considered that the words "innocent face" (rather than the emoticon) should be read as an insincere or ironic response to whether or not she knew the answer to the question.

In Burrows v Houda [2020] NSWDC 485, a court in New South Wales considered a dispute
between two solicitors, one of whom had been the subject of potential disciplinary action.
When the other solicitor was asked on Twitter about any developments about the
proceedings, he responded with a "zipper-mouth" and a "clock ticking" emoji. The judge agreed that the emoji, in context, was reasonably capable of conveying that Ms Burrows had
"not merely been the subject of a referral [by the judge], but also a result adverse to her",
and that her "time was up."

PS. In South West Terminal Ltd. v Achter Land & Cattle Ltd., 2023 SKKB 116, a "thumbs
up" emoji was considered to create a binding contract...

Can a blogger attract the increased protection offered by the freedom of expression
afforded to journalists?

The fact that an author ‘self-publishes’ online may not, of itself, be considered to lessen the
value of their output. This stance was adopted by the High Court in Cornec v Morrice [2012]
1 IR 804, in which an application was made for a blogger to reveal his sources for a series of
articles which the defendant claimed violated a non-disparagement agreement that she had
with the plaintiff. Hogan J remarked that while the blogger in question, Mr Garde, was "not a
journalist in the strict sense of the term," he accepted that:

"Mr. Garde’s activities fall squarely within the ‘education of public opinion’
envisaged by Article 40.6.1°. A person who blogs on an internet site can just as
readily constitute an ‘organ of public opinion’ as those which were more familiar in
1937 and which are mentioned (but only as examples) in Article 40.6.1°, namely, the
radio, the press and the cinema. Since Mr. Garde’s activities fall squarely within the
education of public opinion, there is a high constitutional value in ensuring that his
right to voice these views in relation to the actions of religious cults is protected..."

The ECtHR came to a similar conclusion in Magyar Helsinki Bizottsag v Hungary (App.
No. 18030/11, 8 November 2016), when considering the press as a “public watchdog”,
holding that:
“The Court would also note that given the important role played by the
Internet in enhancing the public's access to news and facilitating the dissemination of
information … the function of bloggers and popular users of the social media may be
also assimilated to that of ‘public watchdogs’ in so far as the protection afforded by
Article 10 is concerned.”

The New Zealand High Court considered an identical issue in Slater v Blomfield [2014] NZHC 2221, and held that a blogger could avail of the benefits of being a ‘journalist’ in certain circumstances: "I conclude that a blogger who regularly disseminates news to a significant body of the public can be a journalist … The blog must have a purpose of disseminating news. Some regular commitment to the publishing of news must exist before a blog is a news medium."

3. Harmful Speech

Introduction

What is harmful speech?

While everyone has, in general terms, an idea of what constitutes “harm”, legislation has tended to avoid attempting to provide definitions of the term but has instead sought to provide non-exhaustive lists of examples of harmful content.

Online harm falls into three broad categories:

• Financial harm includes such activities as fraud, hacking, selling of counterfeit goods and other intellectual property violations. (Considered in cybercrime and IP lectures)

• Harm generally against society includes criminal activities such as the possession or distribution of child pornography, dissemination of hate speech, promotion of self-harming or suicide, and contempt of court. (considered here, as well as in cybercrime and harassment lectures)

• Harm directed at an individual focuses on the reputational, emotional or psychological harm that is caused to individuals when they are defamed, harassed or bullied via the internet. (considered here and in harassment lectures)

Online Safety and Media Regulation Act 2022

Under section 139A, harmful content falls under two main categories (it is a non-exhaustive
list)

1. Content which it is a criminal offence to disseminate under Irish or EU law. This includes:
• child sexual abuse material,
• content containing or comprising incitement to violence or hatred,
• public provocation to commit a terrorist offence.
2. Content which is likely to
• intimidate, threaten, humiliate or persecute a person,
• encourage or promote eating disorders,
• encourage or promote self-harm or suicide;
so long as such online content either gives rise to a risk to a person’s life, or significant
harm to their physical or mental health, where such harm is reasonably foreseeable.

Who Decides What Speech is Harmful?

Consequences for those who post harmful speech:

High profile people have lost their jobs following an outcry on social media about posts they
made:

Danny Baker, May 2019


When the Duchess of Sussex, Meghan Markle, gave birth, BBC presenter Danny Baker tweeted an image of a couple holding hands with a chimpanzee dressed in clothes with the caption: "Royal Baby leaves hospital". The BBC said Baker's tweet "goes against the values we as a station aim to embody".

Alastair Stewart, January 2020

The ITV news presenter was dismissed after having an exchange with a black political lobbyist, whom he referred to in a tweet, in a quote from Measure for Measure, as "an angry ape." His employers commented that it was just one of several tweets over which Mr. Stewart had previously been warned.

Bernard O'Byrne, July 2021


Basketball Ireland chief executive, and former chief executive of the FAI, Bernard O'Byrne was obliged to step down from his job in July 2021. During the Euro 2020 Finals, following the awarding of a questionable penalty in Sterling's favour, O'Byrne tweeted the following about England footballer Raheem Sterling - "black dives matter."

Apologising for the tweet, O'Byrne described it as "an extremely ill judged attempt at
humour."

4. Hate Speech

Introduction

What is hate speech?

Hate speech is generally considered to describe various forms of communication – be they verbal or written – which have at their root the incitement and/or promotion of hatred or violence towards a person or defined section of society. (Sanchez differs: rather than hatred/violence, it was sufficient to incite exclusion or disapproval – a lesser test.) The traditional targets for such speech are those people who are defined by their race, religion, sex or sexual orientation (or other protected categories).

The Cambridge dictionary defines it as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". Contrast the Handyside right to offend, shock or disturb – the balance is hard to find. Broadly: Handyside-protected speech PLUS a protected characteristic = hate speech.

The regulation of hate speech has always involved a delicate balancing act, given the
strong protection offered to speech in general, particularly by EU legal instruments and
institutions. Freedom of expression is specifically protected, see also Handyside v UK.

EU initiatives:

The Digital Services Act will be the first overhaul of the E-Commerce Directive since it was published 21 years ago. Large online platforms are required to have stricter monitoring and reporting procedures in place before they can avail of the exemptions formerly provided by articles 12-14 of the Directive. This requirement is likely to apply in respect of certain types of material, such as hate speech, almost immediately, and platforms will face large fines for failing to comply.

It is envisaged that the Online Safety and Media Regulation Act, when its provisions are
finalised, will do likewise.

Potential difficulties with legislation:

In France, the government didn't wait for the publication of the Act and passed its own legislation in May 2020, which required tech platforms to remove hateful comments within 24 hours of their being flagged by users. Terrorist content and child pornography, furthermore, had to be removed within one hour of being flagged (no reasonable time to act). Platforms could face fines of up to €1.25 million in the event they failed to follow the regulations. A few weeks later, the French Constitutional Council ruled these measures to be an unconstitutional infringement of the right to freedom of expression.

Hate Speech at the ECtHR

Hate speech in EU Member States:

Many states have their own individual take on what constitutes hate speech. Aside from the protected grounds which form the basis of anti-hatred legislation in Ireland, Holocaust denial is treated with particular contempt by France and Germany, in both of which it is a criminal offence.

Beizaras & Levickas v Lithuania (app no. 41288/15, January 2020)


The ECtHR again stressed the requirement for contracting states to have robust measures
in place to combat hate speech. The two male applicants had posted a photograph of
themselves on Facebook, kissing each other. This drew a series of offensive, homophobic
comments, variously describing the participants as "faggots", who should be "shot",
"castrated" or "sent to a gas chamber."

When requested by the applicants to prosecute, the domestic Lithuanian authorities refused to do so, describing the speech as "amoral" and "improper", rather than constituting hate speech.
speech. The Strasbourg court upheld the complaint under articles 8 and 14, and also
criticised the domestic authorities for failing to appreciate the impact that the placing of this
hate speech on Facebook could have on a victim (widely publicised – harm exacerbated).
The Court drew a telling comparison between the "bigoted attitude" of the users who posted
the comments and the "same discriminatory state of mind" of the authorities for refusing to
prosecute them.

Sanchez v France (app no. 45581/15)

▪ An application by a French right-wing politician in respect of a fine imposed on him by the French domestic courts in relation to hate speech which had been placed by third parties on a Facebook page which he operated.

▪ The ECtHR upheld the domestic courts' decision, holding that the applicant had chosen to
make his Facebook "wall" publicly available, that he should have known his page would
attract controversial commentary, and that a particular piece of material had been left on his
page for 3 months despite him being "aware" that unlawful comments had been posted.

NB. Mr. Sanchez was never actually informed of the particular speech by any complainant.

NNB. Hatred under French law includes inciting "hostility" – lower threshold. Some cases
have given more leeway to politicians in the run up to elections while this case narrows it.

The position in Ireland

While the Department of Justice and Equality has launched a public consultation programme with a view to updating the legislation in the area of hate speech, it does not provide a definition as to what constitutes such speech. The main piece of legislation to govern hate speech in this jurisdiction remains the Prohibition of Incitement to Hatred Act 1989, now more than three decades old.

Section 2(1) describes the offence of ‘Actions likely to stir up hatred’, and provides that it is
an offence:
a) to publish or distribute written material,
b) to use words, behave or display written material—
i) in any place other than inside a private residence, or
ii) inside a private residence so that the words, behaviour or material are
heard or seen by persons outside the residence, or
c) to distribute, show or play a recording of visual images or sounds, if the written
material, words, behaviour, visual images or sounds, as the case may be, are threatening,
abusive or insulting and are intended or, having regard to all the circumstances, are likely to
stir up hatred. Objective test – no need for intent.

"Hatred" is defined as "hatred against a group of persons in the State or elsewhere on


account of their race, colour, nationality, religion, ethnic or national origins, membership of
the travelling community or sexual orientation...” (This reflects the discrimination grounds
under the Equal Status Act 2000). Plan to include gender in revised version.

There is no specific reference to online hate speech in the Act, which is unsurprising given
its vintage, but it is sufficiently technologically-neutral that it could be applied to online
speech.

The term “publish” is defined as meaning “to publish to the public or a section of the public.”

This would appear to rule out private communications with individuals via direct messaging,
a fact which would explain why the Act was not used in the February 2021 case in Tralee
District Court involving racial slurs that were sent to well-known English footballer Ian Wright.
(Discussed in Harassment lecture)

Case law in Ireland:

There is one documented case in this jurisdiction in which a person was tried for hate speech under the 1989 Act. It occurred in September 2011 in Killarney District Court.

A local man, Patrick Kissane, admitted to setting up a Facebook Group page entitled
‘Promote The Use of Knacker Babies as Bait’. The Group suggested that traveller children
could be used at feeding times in a zoo. The page had the lowest possible privacy settings,
so the content was viewable by any internet user. 644 people joined before it was taken
down by Facebook.

The decision is surprising for two reasons:

1. The court's reliance on Mr Kissane's lack of intent, given the fact that the relevant
provision of the 1989 Act allows for the offence to be committed where stirring up hatred is
intended or likely, meaning that intent is not a prerequisite for the offence to be committed.

2. The fact that no hatred seems to have been provoked by the specific Facebook page
would not seem to be grounds for finding that no offence had been committed – likelihood of
it having been stirred up is sufficient, and it is difficult to understand how a Facebook page
named in such terms would be unlikely to stir up hatred.

Mr Kissane was prosecuted under the 1989 Act, and while the court considered the man's behaviour to be ‘obnoxious, revolting and disgusting’, it held that there was a reasonable doubt as to whether the defendant intended to incite hatred towards members of the travelling community, that no evidence was put forward of such hatred having been incited, and that his actions should be considered a ‘once-off’ event.

Comparison to recent UK cases:


On a Facebook Group Page which was critical of Celtic Football Club manager Neil Lennon,
entitled ‘Neil Lennon Should be Banned’, Stephen Birrell made sectarian comments about
Catholics. His posts included "Hope they (Celtic fans) all die. Simple. Catholic scumbags ha
ha" and "Proud to hate Fenian tattie farmers." Mr Birrell was given an eight-month prison
sentence.

Recent UK case of R v Davison [2020] EWCA Crim 665.


The defendant had published various racist remarks on his Instagram account directed
toward members of the Muslim faith, which were accompanied by a photograph of him
holding a shotgun. The Court of Appeal upheld a sentence of four years' imprisonment.

Proposed legislation in Ireland:

Criminal Justice (Incitement to Violence or Hatred and Hate Offences) Bill 2022 is
currently before An Seanad. It proposes to update the 1989 Act by simplifying its provisions and
enhancing its sentencing options. It retains the necessity that the hatred be aimed at one of
the "protected characteristics."

Section 7 of the Bill provides that:


A person shall be guilty of an offence under this section if—
(a) the person—
(i) communicates material to the public or a section of the public, or
(ii) behaves in a public place in a manner,
that is likely to incite violence or hatred against a person or a group of persons on account of their protected characteristics or any of those characteristics, and
(b) the person does so with intent to incite violence or hatred against such a person or group of persons on account of those characteristics or any of those characteristics, or being reckless as to whether such violence or hatred is thereby incited.

Potential liability of corporate bodies (eg. social media platforms): (difficult to establish – it exists as a threat, but in reality the only ones prosecuted will be the authors)

It shall be a defence if:


• the body has in place reasonable and effective measures to prevent dissemination
of communications inciting hatred generally,
• was complying with those measures at the time, and
• was unaware and had no reason to suspect that this particular content was
intended or likely to incite hatred

The Online Safety and Media Regulation Act establishes the Media Commission, which
will take over from the previous Broadcasting Authority of Ireland and, as well as inheriting
the BAI's existing role, will also be tasked with dealing with harmful online content, including
regulation of how the technology giants deal with such material.
5. Jurisdiction

Introduction

Where can you bring proceedings for harmful speech online?

The Brussels Recast Regulation 1215/2012 (Brussels 1 Regulation Recast) seeks to provide parties with relative certainty as to where they may sue or be sued, as well as reducing the risk of irreconcilable judgments across different Member States.

General position under Article 4:


“Subject to this Regulation, persons domiciled in a Member State shall,
whatever their nationality, be sued in the courts of that Member State”

This applies regardless of where the plaintiff is domiciled, so long as the defendant is
domiciled in an EU Member State.

• Article 6(1) provides that if the defendant is not domiciled in an EU Member State,
then jurisdiction will be determined by the law of that non-Member State.

Special jurisdiction:

The Rules of special jurisdiction are provided for by Articles 7, 8 and 9 of the Brussels 1
Regulation Recast, which are available in order that jurisdiction may, in certain cases, be
determined by a close connection between the court and the action, and to facilitate the
sound administration of justice.

Articles 7(1) and 7(2):

A person domiciled in a Member State may be sued in another Member State:


(1) (a) in matters relating to a contract, in the courts for the place of performance of
the obligation in question;
(2) in matters relating to tort, delict or quasi-delict, in the courts for the place where
the harmful event occurred or may occur.

Where does the "harmful event occur"?

In respect of tort under Article 7(2), the CJEU has held that the concept of where the
“harmful event occurred” covers both the place where the event giving rise to the damage
occurred, and the place where the damage itself occurred or may occur.

If a party chooses to bring proceedings in the jurisdiction where damage has been caused,
they may only recover for the damage which occurred within that jurisdiction.

Jurisdiction in the Internet Age

What happens when the material is online?


The question of jurisdiction is given added complexity in the internet age. This is particularly
true in respect of personality rights involved in defamation and privacy actions, where the
importance of publication, in terms of degree and location, raises issues in respect of
jurisdiction.

As publication on the internet essentially takes place simultaneously in multiple jurisdictions worldwide, the CJEU has carved out specific rules regarding jurisdiction for proceedings involving damage to personality rights on the internet.
involving damage to personality rights on the internet.

Traditional position: C-68/93 Shevill & Ors v Presse Alliance

It was held that the victim of a libel by a newspaper article distributed in several EU States
may bring an action for damages against the publisher either:

a) before the courts of the Contracting State of the place where the publisher of the
defamatory publication is established, which have jurisdiction to award damages for all the
harm caused by the defamation, or

b) before the courts of each Contracting State in which the publication was
distributed and where the victim claims to have suffered injury to his reputation, which have
jurisdiction to rule solely in respect of the harm caused in the State of the court seised.

Online position: C-509/09 eDate Advertising v X and C-161/10 Martinez & Anor v MGN (2011)

The court described three options for a plaintiff in respect of where to bring proceedings:

• In respect of all the damage caused, either

a) before the courts of the Member State in which the publisher of that content is
established, or
b) before the courts of the Member State in which the centre of his interests is based;

• In respect of the damage caused in the territory of the Member State of the court seised,
before the courts of each Member State in the territory of which content placed online is or
has been accessible (sufficient to be ABLE to read, doesn’t have to have been accessed).

Irish Case Law

The right of Irish courts to accept jurisdiction for publications made via the internet has been
the subject of controversy, as a series of decisions have raised questions about the correct
interpretation of eDate & Martinez. In Ireland, for a claim of defamation you must prove
publication and that it has been accessed by more than 1 other person – contradiction with
eDate)

Coleman v MGN [2012] IESC 20


• An appeal from the 2007 decision of the High Court to accept jurisdiction, in circumstances where the plaintiff, from Mayo, was pictured in the Daily Mirror newspaper in the UK in 2003 under the heading "Yob War – Boozy: Lads on a typical night out in Britain." Proceedings were amended for the purposes of including online publication in the Supreme Court appeal.

• This publication related primarily to the Daily Mirror website, though evidence that this
website existed in 2003 was never shown in court. In allowing the appeal, Denham CJ held
that:

"There is a need for evidence of publication to establish the tort of defamation. There is no evidence before the Court that the Daily Mirror was published online in 2003.
There is no evidence that the daily edition of the Daily Mirror was on the world wide web in
2003. Thirdly, there is no evidence of any hits on any such website in this jurisdiction. These
are fatal flaws in the plaintiff's case."

CSI Manufacturing v Dun & Bradstreet [2013] IEHC 547

The plaintiff company was based in Ballymount, Dublin, and sought a declaration under s. 28
of the 2009 Act that a credit rating report published online by the defendant in the UK was
defamatory. The report was available only to subscribers of the Dun & Bradstreet service,
which required payment to access the information hidden behind a paywall.

The court refused jurisdiction, on the basis that "Although the information in the present case
was technically “accessible” in Ireland due to the fact that the respondent company has Irish
clients, it must still be proven that it has been accessed in this jurisdiction to show
publication within s.28 of the Act of 2009. This has not been made out by the applicant."

The court further held that "Looking at the European jurisprudence as outlined in Martinez
and Shevill the court will only proceed to apply the centre of interest test after publication is
made out. The Supreme Court examining the same jurisprudence in Coleman could not infer
publication from a subscription site where the information was not readily accessible in this
jurisdiction. Furthermore no evidence of publication in Ireland has been made out."

Robbins v Buzzfeed [2021] IEHC 433

The jurisdiction rule as set down in eDate, as well as the decisions in Coleman and CSI
Manufacturing, were recently examined in Robbins v Buzzfeed.

The plaintiff – a US citizen – sought to institute proceedings in Ireland in respect of a statement placed online in the US by a US company. The question raised by the defendant
was whether the plaintiff had to establish publication of the article in this jurisdiction so as to
grant the Irish courts jurisdiction.

Evidence was adduced that 13,000 people had accessed the story in this jurisdiction, so the
court had no difficulty in holding that there had been publication in Ireland.

The Court correctly stated that such evidence was not required under eDate, as the mere
"accessibility" of the information online would suffice. In doing so, it commented that there
was a difference between what is required to establish jurisdiction, and what is required to
actually ground a case for defamation:

"It is not in dispute that an essential element of the tort of defamation is publication and there
is no dispute that, in order to succeed in its claim at trial, the plaintiff must prove publication
as an essential element of the tort. That is not, however, to say that, with regard to the rules
derived from Brussels Recast, in particular Article 7 (2) as interpreted by the CJEU in
Martinez, a plaintiff must also prove the fact of publication or access (as opposed to or,
rather, in addition to, accessibility of the online material) in the context of asserting
jurisdiction pursuant to Article 7 (2)."

In relation to CSI, however, Heslin J also commented that "the decision in CSI
Manufacturing is not, in my view, authority for the proposition that, whereas Martinez gives
jurisdiction where material is accessible, one also has to establish that the material was in
fact accessed.

Rather, Kearns P. noted the correct test ... and dealing with a subscriber–only site where the
only evidence of access was from one subscriber in Northern Ireland, Kearns P. found on
the facts of the case before him that the material was not accessible, not having been
accessed."

Week 6: Online Speech (Part 2)


Overview of week 6

1) Review of Week 5

2) The basics of defamation
• The necessary ingredients

3) Online defamation
• Meaning on the internet
• The nature and extent of publication

4) The remedies
• The traditional principles
• Assessing damages for online defamation

5) The Review of the Defamation Act 2009

6) Online defamation case law

1. Defamation: The Basics


● A civil cause of action, governed by the Defamation Act 2009. Section 6(2) provides
that:
"The tort of defamation consists of the publication, by any means, of a
defamatory statement concerning a person to one or more than one person (other
than the first-mentioned person)…."

A plaintiff will therefore need to establish:


a) publication to at least one person other than the plaintiff. Generally speaking,
proof of publication will need to be adduced, though it will be inferred in certain
circumstances.

b) a defamatory meaning to the statement, namely "a statement which tends to injure a person's reputation in the eyes of reasonable members of society."

c) identification of the plaintiff in the statement.

Some issues:

▪ Defamation is "actionable per se" - ie you do NOT need to adduce proof of damage. Once
you establish that a defamatory statement has been published, it is up to the defendant to
raise one of the available defences to escape liability.

▪ What is a "reasonable member of society"? The court will consider what is the 'ordinary
reasonable reader' of the statement, bearing in mind the medium through which it is
published. In Reynolds v Malocco [1999] ILRM 289, the court held that the expression "gay
bachelor" did not connote a person who was ‘lively, cheerful, vivacious, light-hearted, fond of
pleasure and gaiety’. Instead, it clearly suggested that the plaintiff was homosexual - "one
would have to be a resident of the moon not to be aware of this."

▪ The fact that the 2009 Act refers to publication "by any means" means that publication via the internet is covered by the Act.

The defences:

A defendant has various defences open to them, including:

• Statute of limitations - proceedings will generally need to be instituted within 12 months of the material being published. For publication via the internet, the period starts on the date the statement "is first capable of being viewed through that medium..."

• Truth - is the statement "substantially" true?

• Privilege – absolute or qualified

• Fair and reasonable publication on a matter of public interest

• Honest opinion – is your opinion honestly held?


• Innocent publication – under s.27, very important in relation to internet intermediaries
Section 27 of the Defamation Act 2009 (Art 14 E-Commerce Dir.)

1) It shall be a defence (to be known as the “defence of innocent publication”) to a defamation action for the defendant to prove that—
a) he or she was not the author, editor or publisher of the statement to which
the action relates,

b) he or she took reasonable care in relation to its publication, and

c) he or she did not know, and had no reason to believe, that what he or she
did caused or contributed to the publication of a statement that would give rise to a
cause of action in defamation.

2) A person shall not, for the purposes of this section, be considered to be the author,
editor or publisher of a statement if—

a) in relation to printed material containing the statement, he or she was responsible for the printing, production, distribution or selling only of the printed
material,

b) in relation to a film or sound recording containing the statement, he or she was responsible for the processing, copying, distribution, exhibition or selling only of
the film or sound recording,

c) in relation to any electronic medium on which the statement is recorded or stored, he or she was responsible for the processing, copying, distribution or selling
only of the electronic medium or was responsible for the operation or provision only
of any equipment, system or service by means of which the statement would be
capable of being retrieved, copied, distributed or made available.

2. Online Defamation: Specific Issues

Publication online:

Even though the internet has technically a world-wide reach consisting of billions of users,
some evidence of publication will need to be established.

In Coleman v MGN [2012] 2 ILRM 81, the Supreme Court held that:

"There is a need for evidence of publication to establish the tort of defamation. There
is no evidence before the Court that the Daily Mirror was published online in 2003. There is
no evidence that the daily edition of the Daily Mirror was on the world wide web in
2003. Thirdly, there is no evidence of any hits on any such site in this jurisdiction. These are
fatal flaws in the plaintiff's case."
The placing of a statement on the internet will not of itself be sufficient to ground a case in defamation.

In Dow Jones v Gutnick [2002] HCA 56, the High Court of Australia held that:

“It is only when the material is in comprehensible form that the damage to reputation is done and it is damage to reputation which is the principal focus of defamation, not any quality of the defendant's conduct. In the case of material on the World Wide Web, it is not available in comprehensible form until downloaded on to the computer of a person who has used a web browser to pull the material from the web server. It is where that person downloads the material that the damage to reputation may be done.” – no longer good law; it must only be shown that the material has been viewed.

In Ryanair v Fleming [2016] IECA 265, the Court of Appeal held that to establish publication "it would be for the claimant to prove that the material in question was accessed or downloaded." (Is 'downloading' proof of publication?) – the accessing of the information completes publication.

Factors that will go to proof of publication online:

• Google Analytics figures – little evidence of their use in the courts so far.

• Number of followers/ friends that the defendant has on Twitter/ Facebook – inference will
be drawn.

• Number of retweets/ reposts/ likes of a statement – in the recent Australian case of Goldberg v Voigt [2020] NSWDC 174, the court held that:

"In my view, as a matter of common sense, someone who “likes” a post is


likely to have read it, or they would not be “liking” anything at all." See also Foster v
Jassen [2021] NIQB 56 (considered later)

Who other than the author can be held liable for publishing the material?

• Is a retweet republishing of defamatory material?
Touched on in the UK in McAlpine v Bercow [2013] EWHC 1342.

• Can a "like" on Twitter or Facebook be considered to be republication? No – yet to be


considered in Europe/Ire/UK
In Bolton v Stoltenberg [2018] NSWSC 1518, the Australian court held that “Liking”
a post is not … the same as hyperlinking a defamatory article..."

See also Melike v Turkey (ECtHR, 35686/19), in which the ECtHR held that "the
applicant was not the individual who had created and published the content on the social
network in question; her action had been limited to clicking on the “Like” button below that
content. Adding a “Like” to online content could not be considered to carry the same weight
as sharing the content on social networks, in that a “Like” merely expresses sympathy for the
content published, and not an active desire to disseminate it. Further, the authorities had not
alleged that the content in question had reached a very large audience on the social media
concerned."

• Is a hyperlink to defamatory material considered publication of that material? – No, unless you actively encourage someone to follow the link or you incorporate some defamatory material in the link.
No, according to the ECtHR in Magyar Jeti ZRT v Hungary (App No 11257/16), unless it incorporates and/or promotes the defamatory material. No, according to the Australian High Court in the recent Google LLC v Defteros [2022] HCA 27.

• Can a company operating a Facebook page be liable for users' comments? – Yes, for commercial/business accounts.
Yes, according to the High Court of Australia in its recent decision in Fairfax & Ors v Voller [2021] HCA 27.

• If Voller is good law, can comments left on someone's personal Facebook profile, or in
reply to their tweets, attract liability for the operator of that account?

• This involves a consideration of whether the material was "republished" by the original author, or "repeated" by the commenter – ie the link between your post and the defamer's comment.

• See also the ECtHR in Sanchez v France – extends liability to individuals whose use is
commercial/political.

Meaning:

The general rule about how to interpret the meaning of a statement was approved by the
Court of Appeal in Gilchrist v Sunday Newspapers [2017] IECA 191. This includes the
factors that:

• The governing principle is reasonableness.

• The hypothetical reasonable reader is not naïve but he is not unduly suspicious. He
can read between the lines...

• Over-elaborate analysis is best avoided.

• The article must be read as a whole, and any “bane and antidote” taken together –
cannot sue just on the basis of a headline for example as it may be ironic or clickbait etc.

• The hypothetical reader is taken to be representative of those who would read the
publication in question.

Should statements made via the internet be treated differently by the courts?

When deciding on the natural and ordinary meaning of a statement, the nature of
the audience, and the medium through which it is communicated will be factors to consider.
Monroe v Hopkins [2017] EWHC 433 (considered later)
"It is very important when assessing the medium of a Tweet not to be over-
analytical … People tend to scroll through messages relatively quickly. Largely, the
meaning that an ordinary reasonable reader will receive from a Tweet is likely to be
more impressionistic than … from a newspaper article."

Acumen Law v Nguyen [2018] BCSC 961

Defendant posted a negative review stating "I spent nearly $2000 for [lawyer] to lose a case
for me that they seemed they didn't put any effort into. Anywhere else would be moore
helpful. worstest lawyer. would not recommend..." The court held that it would not be taken
seriously by ordinary readers, due partly to the poor grammar.

See also Stocker v Stocker [2019] UKSC 17 (considered later).

3. Defamation Remedies
Remedies for victims of online defamation:

1. Damages:
General principles in respect of how damages for defamation will be assessed were recently
explained by the High Court of Northern Ireland in Foster v Jessen [2021] NIQB 56.

The purpose of damages is described as follows:

"The award of general damages in defamation proceedings is intended to


serve the following 3 functions, namely:

(i) To act as a consolation to the plaintiff for the distress the plaintiff suffers
from the publication of the statement;

(ii) To repair loss to the plaintiff’s reputation; and

(iii) As a vindication for the plaintiff’s reputation."

Vindication is an aspect of the award so that if the allegations should re-emerge … the
plaintiff "must be able to point to a sum awarded by a jury sufficient to convince a bystander
of the baselessness of the charge."

Factors that will be taken into account in respect of the quantum of damages include:

• The gravity and extent of publication of the statement

• The effect on the plaintiff, to include any aggravating behaviour by the defendant
• Actual financial loss or expenditure by the plaintiff (special damages)
• Mitigating factors such as a quick take-down of the publication, an apology and retraction
by the defendant.

Examples of awards:

The Court may consider the extent of initial publication, and the difficulty in controlling its further dissemination. The court in Cairns v Modi [2012] EWHC 756 awarded the plaintiff £90,000 for a
tweet seen only by 65 people, approving the dicta of Lord Atkin in Ley v Hamilton that it was

"impossible to track the scandal, to know what quarters the poison may
reach..." – i.e was it discussed elsewhere? Did the 65 people tell others?

Inconsistency exists, however, as to whether defamation on social media will attract greater damages than in traditional media:

AB v Facebook [2013] NIQB 14, plaintiff awarded £35,000 as libel on Facebook page was
equated with "the main page of a leading newspaper or a popular television programme".

Monroe v Hopkins [2017] EWHC 433, the plaintiff was awarded £24,000 for two tweets
which suggested she supported the vandalising of war memorials.

Monir v Wood [2018] EWHC 3525, tweet which labelled the plaintiff a sex offender attracted
£40,000, though the court admitted it would have been £250,000 had it appeared in a
national newspaper.

Foster v Jessen [2021] NIQB 56, the High Court of Northern Ireland awarded £125,000 for
a seriously defamatory post about the First Minister. (considered later)

In this jurisdiction, there have been two substantial awards made by Circuit Courts for cases
involving defamatory statements published on Facebook.

▪ In June 2016, a county Monaghan man was ordered to pay €75,000 in damages to the plaintiff after being found to have defamed him on Facebook by suggesting that he had been responsible for the National Regional Game Council “going broke.”

▪ A substantial award was also made in November 2017 in Roscommon Circuit Court, where
the former secretary of a darts society was found to have been defamed by comments,
posted on the “Darts in Ireland” Facebook page, which suggested that he was responsible for the
disappearance of money from the Roscommon County Darts society in the 1980s and
1990s. The Court found that the Plaintiff had been the victim of “a particularly nasty
defamation” with “pretty devastating effects for him” and awarded him €60,000 in damages.

2. Injunctions:
The most pressing requirement is usually to have the material removed or blocked. The first stage would be to seek the voluntary taking down of the material by the platform.

The test for granting an injunction for defamation has usually been a strict one, requiring proof
that the defendant "had no defence that was reasonably likely to succeed". Codified by s. 33
of the 2009 Act, which allows for the prohibition of publication of defamatory material (even
before it is published but courts are reluctant to do so unless it is crystal clear that it is
defamatory and the D cannot win).

Section 33 of the 2009 Act:

The High Court, or where a defamation action has been brought, the court in which it was
brought, may, upon the application of the plaintiff, make an order prohibiting the publication
or further publication of the statement in respect of which the application was made if in its
opinion—

a) the statement is defamatory, and

b) the defendant has no defence to the action that is reasonably likely to succeed.

This section was recently considered by the High Court in LIDL v Irish Farmers
Association [2021] IEHC 381.

Cases involving applications for injunctions:

• In Tansey v Gill [2012] 1 IR 380, Peart J in the High Court suggested a less strenuous test
when someone has been defamed on the internet.

• This appears to have been rejected by Barrett J recently in Philpott v Irish Examiner
[2018] 3 IR 565, who stressed the material must be defamatory (2009 position).

• In Muwema v Facebook [2016] IEHC 519, Binchy J held that a plaintiff would almost never be able to obtain injunctive relief against a social media platform, as the latter would always have the defence of innocent publication available to them. (Not being the author/editor/publisher is only one of the three limbs of the test; the defendant must also establish that reasonable care was taken, and that they did not know, and had no reason to believe, that they contributed to a defamatory publication.)

4. Review of the Defamation Act 2009


In March 2022, the Department of Justice published its Report into the Review of the 2009
Act. Its principal recommendations include:

▪ The requirement for proving online publication should be clarified, and the definitions of, potential liability of, and defences open to, “authors”, “editors” and “publishers” should be clarified. This is clearly aimed at rectifying the much-complained-of issue with the s.27 defence of Innocent Publication in the 2009 Act, which provides a defence without defining any of those terms, instead relying on examples of what they are not (publisher = traditional form).

▪ The Report recommends that the s.27 defence should be explicitly extended to “operators
of websites”. It points out that “such a defence already exists in England and Wales”, a clear
reference to the s.5 defence under the UK Defamation Act 2013. There is, unfortunately, no
reference in the Report to the requirement specified in the UK Act and Australian Bill, and it
thus appears to miss the point that at the heart of this Complaints Mechanism is a
requirement for anonymous users to be ultimately identifiable. No injunction if innocent
publication defence available for social media platforms.

▪ The Report recommends an amendment to the manner in which an applicant can apply to
have defamatory material removed from the internet. It is unclear, however, exactly what the
Report envisages in suggesting a “faster mechanism” to deal with applications to take down
material.

The problem with s.33 is not that it is slow – the problem is that the requirement to show that
the Defendant has no defence that is reasonably likely to succeed, in an application against
an internet intermediary, is a hugely problematic one.

▪ A most welcome recommendation of the Report in respect of online defamation is that a statutory power be created to grant Norwich Pharmacal relief, and that this be extended to the Circuit Court. No reference is made, however, to the costs of such applications.

▪ Despite various oblique references to anonymity, no specific recommendations of any sort are made in respect of how to deal with online anonymity, such as compelling social media platforms to have verifiable information about their users’ identity. This is most disappointing.

5. Online Defamation: Recent Case Law


Meaning, and extent of publication, of a statement published on Twitter: Monroe v
Hopkins [2017] EWHC 433

Twitter spat between activist and journalist.

The plaintiff was a food blogger and political activist, whose father was in the military, and a frequent user of Twitter to voice her opposition to Conservative party politics. The defendant was the journalist Katie Hopkins.

The day after the 2015 General Election in the UK, there was a protest in central London at
which a war memorial was vandalised. A political commentator, @PennyRed, voiced her
support for those who defaced the monument, which led Katie Hopkins to aim critical tweets
in her direction. Hopkins then turned her attention to Jack Monroe with the following tweet:

"@MsJackMonroe scrawled on any memorials recently? Vandalised the


memory of those who fought for your freedom. Grandma got any more medals?"
When Ms. Monroe objected to the tweet pointing out her brother and father were both in
the army, Hopkins doubled down, referring to Monroe as "social anthrax"

The court performed an extensive analysis of the way that Twitter operates, the extent of
publication through the use of Twitter analytics, and the issue of what the natural and
ordinary meaning of the tweets should be, and noted that:

• Twitter is a "conversational medium";

• it would be wrong to engage in elaborate analysis of a 140-character tweet - an impressionistic approach is much more fitting and appropriate to the medium;

• This impressionistic approach must take account of the whole tweet and the context in
which the ordinary reasonable reader would read that tweet. That context includes (a)
matters of ordinary general knowledge; and (b) matters that were put before that reader via
Twitter.

Notwithstanding this, the court rejected the submission that normal principles should be set
aside when considering the "wild west" of social media, and that the statement should be
taken less seriously simply because it was made via Twitter. It concluded that the ordinary
reader of Twitter would hold the tweets to be defamatory.

Application for injunction to have material taken down from Facebook: Muwema v Facebook [2016] IEHC 519

The plaintiff was a Ugandan lawyer who claimed to have been defamed by comments
posted on a Facebook page by a user identified only as 'TVO'. He sought a Norwich
Pharmacal order to identify the user of that account (which was granted at first instance)
and injunctive relief compelling Facebook to take down the posts, pursuant to s.33 of the
2009 Act.

The Court considered s.33 (injunction app.) in conjunction with s.27 (the defence of innocent
publication) to decide under what circumstances Facebook may be obliged to remove the
material. Having concluded that Mr. Muwema needed to establish that Facebook ("the
defendant") had no defence that was likely to succeed, the Court decided that, pursuant to
s.27, Facebook had an absolute defence as it was not the "publisher" of the material. (There
should be a separate test for intermediaries and publishers).

He furthermore stated as a general proposition that a plaintiff would never be able to get
relief against an intermediary such as Facebook, as the latter would always succeed
under the defence offered by s.27. He based this finding on the following:

"On the face of it the defendant has available to it a statutory defence to the
proceedings issued against it. It was submitted on behalf of the plaintiff that the
defendant cannot avail of the defence of innocent publication because it was made
aware by the plaintiff of the defamatory material and declined to take it down from its
platform. But the criteria for eligibility for the defence are not drawn in that way..." –
not correct, that is exactly how it is drawn.
This decision appears to be incorrect in law. The Court appears to have been satisfied that
sub section (a) of section 27, of itself, is enough to provide for that defence of innocent
publication. But the Act says that subsections (b) and (c) must also be satisfied, and in this
case Facebook would clearly appear to have been on notice that the material they were
hosting "would give rise to a cause of action in defamation."

Meaning of a statement published on Facebook: Stocker v Stocker [2019] UKSC 17

“He tried to strangle me.” What would those words convey to the ordinary reasonable reader of a Facebook post?

The parties were a formerly married couple who had gone through an acrimonious divorce.
Mr. Stocker brought defamation proceedings against his ex-wife in respect of comments
made by the latter on the Facebook page of his new girlfriend, Ms. Bligh, which alleged that
he had “tried to strangle” her. In the High Court and Court of Appeal, the court held that this
meant he had tried to kill her, which was considered defamatory of him.

When Mrs. Stocker appealed to the Supreme Court, the court criticised the over-literal
interpretation adopted by the lower courts:

“"The imperative is to ascertain how a typical reader would interpret the


message. That search should reflect the circumstance that this is a casual medium; it
is in the nature of conversation rather than carefully chosen expression, and it is pre-
eminently one in which the reader reads and passes on."

The Court considered that the trial judge performed an overly analytical approach to arriving
at the meaning of the statement, holding that “he failed to conduct a realistic exploration of
how the ordinary reader of the post would have understood it. Readers of Facebook posts
do not subject them to close analysis."

The court also considered the issue of publication, with the defendant claiming that it was the
operator of the Facebook profile (her ex-husband's new partner) who published the
statement, or alternatively re-published it. The Court of Appeal disagreed:

“I do not accept that the publications with which this case was concerned were
republications … the posting of the Comments on Ms Bligh's Facebook Wall was in
reality no different in substance or in principle to the putting up of a notice on a
conventional notice board, accessible to third parties."

How to quantify an award for defamation: Foster v Jessen [2021] NIQB 56

The plaintiff was the First Minister for Northern Ireland in or around Christmas 2019. During
the week or so leading up to Christmas 2019, Mrs Foster was engaged in intense
negotiations with a view to the re-establishment of the Northern Ireland Executive.

Around this time, a number of anonymous tweets were posted alleging that Mrs Foster had
been discovered having an affair with one of her Close Protection Officers and that her
marriage had broken down. On the evening of 23 December, the defendant Christian
Jessen, a doctor and presenter of Channel 4's 'Embarrassing Bodies', posted the following tweet:

“Rumours are bouncing around that the DUPs Arlene Foster has been busted
having an affair. Isn’t she the ‘sanctity of marriage’ preaching woman? It always comes to
bite them in the arse in the end. Rather satisfying for us gay boys who she made feel even
shittier….”

The defendant had 311,000 followers on Twitter, and the tweet was retweeted 517 times and liked
approximately 3,500 times. Solicitors for the plaintiff posted a message on his Twitter
account the following day, demanding that the tweet be removed. Ultimately, it was not taken
down until 7 January 2020.

The court held that:

"The duration of publication of the libel on the defendant’s Twitter account,


the large numbers who became aware of the content of the defamatory statement
and the significant number who repeated it are important factors to take into account
when determining the appropriate award."

In considering the proportionality of the award, the Court compared it to the NI equivalent of
the Book of Quantum, which estimates the loss of an eye to be between £80-140,000, and
amputation between £125-250,000. The Court settled on an award of £125,000 plus costs.

Damages for defamation via Instagram: Aslani v Sobierajska [2021] EWHC 2127 (QB)

The plaintiff was a plastic surgeon based in Marbella with a particular specialisation in a
procedure known as a "BBL" (Brazilian butt lift). He was a respected member of the American Institute of Plastic Surgeons, and had performed surgeries c. 14,000 times.

He had performed the BBL surgery on several occasions on the defendant, a social media influencer, who then posted defamatory comments about the surgeon on Instagram. At the time of the postings, she had approx. 53,000 followers.

The claim related to four postings on Instagram, the first being a video which claimed:

"He botched my body, he left me disfigured ... I have serious issues with my
head, with my image, with my self-image because of him and he is bringing my name
up to pages trying to pinpoint them on to me. He keeps talking about me to his
patients, he brings up my name to his patients … this is not normal ... I have never
ever seen a professional surgeon act like this."

The Court noted that social media platforms, especially Instagram with its focus on images, form an important part of the Claimant's advertising of his services. In line with his customer base, a significant proportion of his Instagram followers will be from the UK. The Court also noted that:
▪ approximately 15 people reported the publications to the Claimant;

▪ that the Defendant has been in contact with prospective clients of the Claimant;

▪ that there has been a decrease in the Claimant's bookings;

▪ that 66 people interacted with the review on RealSelf, a website used by those
contemplating surgery and specific surgeons.

▪ the Claimant named six people who did not proceed with surgery, with a
contributing factor being the Defendant's campaign against the Claimant.

▪ the nature of the Defendant's followers is also very relevant. The Defendant had
developed a reputation as an advocate of BBLs and accordingly followers were likely to have
an active interest in this type of surgery.

The Court awarded £40,000 in damages to the Plaintiff.
