Internet Law Notes
Introduction
Required reading
Syllabus
• New legislation: Online Safety and Media Regulation Act 2022 (aimed at large corporations)
• Case law concerning liability of platforms for user-generated content – recent proceedings
• ECtHR decision in Sanchez v France – if you have a Facebook page, can you be held liable? (given in week 4)
• The GDPR
Assessment
Week 12 – Assignment
– 1 problem question option
– 2 essay question options
– Copyright
– Use of hyperlinks
– Two newspapers with websites, one website was taking news from the other's
– Mr Tansey was a barrister who sued Mr Gill, who ran the website rateyoursolicitor.com. Gill had had bad experiences with solicitors, pursued the site as a personal vendetta, and uploaded most of the opinions himself.
"Anything can be said publicly about any person, and about any aspect of
their life whether private or public, with relative impunity, and anonymously, whereby
reputations can be instantly and permanently damaged, and where serious distress
and damage may be caused to both the target, children and adults alike, leading in
offence under criminal law, with a penalty upon conviction sufficient to act as a real
deterrent to the perpetrator. The civil remedies currently available have been recently
In this case (Byrne v Deane [1937] 1 KB 818), the liability of the defendants stemmed from their decision not to remove the defamatory notice despite having the power and authority to do so. Whether they could be liable for material posted by third parties would depend on factors such as their knowledge of the material, their ability to control its presence, and whether they acted to remove it or not.
Drawing attention to defamatory material is not a prerequisite for liability in defamation cases. In Byrne v Deane, the defendants were held liable for publication because they knowingly allowed the defamatory notice to remain on their property. Their failure to remove it implied consent to its presence, and this constituted participation in its publication. Whether the defendants actively promoted or merely allowed the material to be seen, they were still considered responsible for its continued presence and potential harm to the plaintiff's reputation.
The Internet
A global collection of networks which connect computers. It is not one physical network, but rather a series of networks which communicate with each other based on various internet protocols. These protocols are referred to as TCP/IP, which refers to the two most important protocols – the Transmission Control Protocol (TCP) and the Internet Protocol (IP). No single authority owns or controls the internet, and no single set of laws governs its content. The internet hosts the world wide web, email, peer-to-peer file sharing and internet telephony.
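By way of illustration only (this is not from the course materials): the short Python sketch below opens a TCP connection to a placeholder host, showing the division of labour described above – IP routes the packets, while TCP provides the reliable, ordered stream that applications actually use.

import socket

# A minimal sketch of the transport layer in action. "example.com" is a
# placeholder host; port 80 is the conventional HTTP port.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # IP has routed our packets to the host; TCP now maintains a reliable,
    # ordered byte stream between the two endpoints.
    local_addr = conn.getsockname()  # our own (IP address, port) pair
    peer_addr = conn.getpeername()   # the server's (IP address, port) pair
    print(f"TCP stream established: {local_addr} -> {peer_addr}")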
The World Wide Web
Launched in 1989 by Tim Berners-Lee, it is a collection of pages and images that reside on the internet. Beginning with static content such as newspaper pages and brochures, it had started offering interactive services such as online banking, video gaming, dating services, graphics editing, home videos, and webmail by the year 2000. These online services are now referred to as Web 2.0. You access the world wide web through a web browser such as Google Chrome or Mozilla Firefox.
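Purely as an illustrative sketch (the URL is a stand-in), the few lines of Python below do what a browser does behind the scenes: issue an HTTP request over TCP/IP and read back the page's HTML.

from urllib.request import urlopen

# A minimal sketch of a browser's core job: fetch a page over HTTP.
# "http://example.com/" is a stand-in URL, not a course reference.
with urlopen("http://example.com/", timeout=5) as response:
    html = response.read().decode("utf-8", errors="replace")
    print(response.status)  # e.g. 200 when the page is served successfully
    print(html[:200])       # the first few characters of the page's HTML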
Website operators
e.g. news websites, businesses who wish to promote their own goods and/or services
Internet intermediaries
Internet service providers (email hosts, domain name registration, internet access providers), e-commerce intermediaries and networking platforms
1. Anonymity
“Anonymity is a shield from the tyranny of the majority ... The right to remain anonymous may be abused when it shields fraudulent conduct. But political speech by its nature will sometimes have unpalatable consequences and, in general, our society accords greater weight to the value of free speech than to the dangers of its misuse.” – US Supreme Court, McIntyre v Ohio Elections Commission (1995)
“… [the internet is less] ‘hierarchical and discriminatory’ than in the real world because it disguises status indicators such as race, class, and age.” – Doe v Cahill (884 A.2d 451 (2005))
– “Genie out of the bottle” – McKeogh v Doe [2012] IEHC 95; Muwema v Facebook Ireland Ltd [2017] IEHC 69
3. Permanence
“The web makes a lie of the old cliché that today’s newspaper pages are tomorrow’s fish and chip wrapping. Nowadays ... the things you say about yourself in a newspaper are more like tattoos – they can be extremely difficult to get rid of.” – The Guardian, 2008
4. Jurisdictional issues
5. The internet has changed the way we communicate, work and transact.
‘In common with other aspects of life, the internet has both a positive and a dark side. On the positive side, it aids free communication; it opens up avenues of knowledge so that it has become a centre of learning in itself; it furthers public debate; and has established the swiftest and most far-reaching form of communication that humanity has known. It is, on the other hand, also thickly populated by fraudsters, pornographers of the worst kind and cranks.’
- Charleton J in EMI Records (Ireland) Ltd and Others v Eircom Ltd [2010] 4 IR 349
6. How has the law adapted to the internet?
“The internet is only a means of communication. It has not rewritten the legal rules of each nation through which it passes … [there is no] entitlement to norms that run counter to the fundamental principles of human rights.” – Charleton J in EMI Records (Ireland) Ltd and Others v Eircom Ltd [2010] 4 IR 349
b) The next step is to consider how the law may need to be adapted to take into account the particular characteristics of the internet.
SPEECH
"The internet is a unique democratizing medium unlike anything that has come before. The
advent of the internet dramatically changed the nature of public discourse by allowing more
and diverse people to engage in public debate. Unlike thirty years ago, when "many citizens
were barred from meaningful participation in public discourse by financial or status
inequalities and a relatively small number of powerful speakers could dominate the
marketplace of ideas" the internet now allows anyone with a phone line to become a town
crier with a voice that resonates farther than it could from any soapbox." – Doe v Cahill (884
A.2d 451 (2005))
COMMERCE
"The development of electronic commerce within the information society offers significant
employment opportunities in the Community, particularly in small and medium-sized
enterprises, and will stimulate economic growth and investment in innovation by European
companies, and can also enhance the competitiveness of European industry, provided that
everyone has access to the Internet." – Directive 2000/31/EC "The E-Commerce
Directive", Recital 2
FUNDAMENTAL RIGHTS
"The Internet has now become one of the principal means by which individuals exercise
their right to freedom of expression and information, providing as it does essential tools for
participation in activities and discussions concerning political issues and issues of general
interest" – Ahmet Yildirim v Turkey (APP NO 3111/10, ECTHR)
– Convicted criminals, even sex offenders whose crimes involved use of the internet,
have been held to have a right to internet access:
"Before the creation of the Internet, if a defendant kept books of pictures of child
pornography it would not have occurred to anyone to ban him from possession of all printed
material. The Internet is a modern equivalent.” – English Court of Appeal in R v Smith; R v
Clarke; R v Hall; R v Dodd [2012] 1 WLR 1316. See also R v Parsons; R v Morgan
[2017] EWCA Crim 2163.
– Consider also:
- Facebook's defence for not introducing mandatory ID when signing up
- US case law concerning access to websites for the visually-impaired
– Courts have often drawn a distinction between online speech, and material that is
published in more traditional media:
"I consider that the Court of Appeal focused too narrowly on the disclosures already made
on the internet and did not give due weight to the qualitative difference in intrusiveness and
distress likely to be involved in what is now proposed by way of unrestricted publication by
the English media in hard copy as well as on their own internet sites." – PJS v News Group
Newspapers [2016] UKSC 26.
The English High Court said the following in relation to an application to prevent the
publication of the name of a woman who was said to be having an affair with Fred Goodwin,
a prominent British banker:
“The degree of intrusion into a person’s private life which is caused by internet publications is different from the degree of intrusion caused by print and broadcast media … Once a person’s name appears on a newspaper or other media archive, it may well remain there indefinitely. Names mentioned on social networking sites are less likely to be permanent.” – Goodwin v NGN Ltd [2011] EWHC 1437
– See also:
Recent judgement of the UK Supreme Court which emphasised the free-wheeling nature of online discourse and suggested that the average reader does not place as much weight on comments made on social media as they would on statements made in the more thoughtful, reflective forum of traditional media. – Stocker v Stocker [2019] UKSC 17
– However...
In the Canadian case of Baglow v Smith, the Ontario Superior Court of Justice noted the context in which the offending statement was made, affirming itself to be “very mindful that political discourse on weblogs and message boards ... is qualitatively different than political discourse in more “traditional” media like newspapers and television.” However, it held that:
“Implicit in the Defendants' submissions is that based on the rough and tumble nature of
these media platforms there would be little, if anything, that would tend to lower the plaintiff’s
reputation in the eyes of a reasonable reader. However, there is nothing in the law of
defamation to suggest that that is the case.” – Baglow v Smith, Fournier and Fournier
[2015] ONSC 1175
NB: This has not yet been considered by the Irish courts…
– And finally...
– The concept of what constitutes a ‘journalist’ has become more complex in the internet age, as the proliferation of user-generated content and the emergence of online ‘citizen journalists’ have eroded the previously clear line between the media and the general public.
Why is this important?
- A ‘Bona fide member of the press’ is referred to in both s 40 of the Civil Liability and Courts
Act 2004 (family law cases) and s 159 of the Data Protection Act 2018 (reporting of personal
data processed during court proceedings.)
– See also
– "I conclude that a blogger who regularly disseminates news to a significant body of the
public can be a journalist." – Slater v Blomfeld [2014] NZHC 2221
Overview of Week 2
Week 1 refresher
1) The introduction to internet intermediaries
2) The Technology giants
3) Intermediaries as "publishers"
4) Self-regulation by intermediaries
5) Current legislative regulation of internet intermediaries
6) Upcoming legislation
1. Introduction
(A) Background
Harmful material online:
• Harmful online content is a fact of modern life. Gone is the filtering process involved in
traditional media – with social media, publication of content is not only instantaneous, but is
also unfiltered and uncensored.
• Use of the internet, and social media platforms in particular, grows year on year. The Covid pandemic, along with continued free access to almost every major platform, has served to accelerate this change.
• With more users comes more content, and therefore more harmful content. The challenge
is how to deal with this.
• While technology giants are clamping down on such material more than ever, they clearly
have commercial interests to protect.
• Most legal proceedings involving the internet feature a combination of three participants in online communications: the party that creates the content, the party that accesses the content, and the party which enables the first two to communicate – the intermediary.
• While some intermediaries simply enable a user to gain access to the internet, other
intermediaries actively facilitate and control the sharing of user-generated content.
Types of intermediary:
• Internet service providers (‘ISPs’), which provide services for accessing and using the
internet. These can be subdivided further into those organisations which connect users to
the internet (‘internet access providers’), and those which provide email hosting, website
hosting and domain name registration. Many ISPs perform more than one of these functions.
e.g. Virgin Media for broadband access; Google as an email host (but not in its capacity as a search engine)
• E-commerce intermediaries, where these platforms do not take title to the goods being
sold, i.e. online marketplaces and auction sites such as Amazon and eBay, and Internet
payment systems (facilitate the public selling items online).
• Networking platforms, which include internet publishing and broadcasting platforms that
do not themselves create or own the content being published or broadcast, to include social
networking platforms, blogging platforms, video sharing websites and instant messaging
platforms.
2. The Technology Giants
Who are the most important intermediaries?
Launch dates:
2004 Facebook
2005 YouTube (acquired by Google in 2006)
2006 Twitter
2011 Snapchat
Annual revenues (approximate):
Google $280bn
Microsoft $198bn
Facebook $116bn
Twitter $4.4bn
3. Intermediaries as Publishers
(A) Are intermediaries "publishers"?
• As “publisher” they would be liable at law for any unlawful content on their platforms.
• The "publication" of material for the purposes of defamation law can cause confusion in
relation to the concept of a traditional "publisher".
• At law, a "publisher" is essentially anyone who plays a part in material being brought to the
attention of the public. Depending on their degree of involvement, they may be a "primary" or
"secondary" publisher.
• Intermediaries are viewed as secondary publishers as they don’t generate their own
content but are clearly involved in the process.
• The general position of traditional publishers is that they are considered responsible for any
harmful material which they publish, whether the author is directly employed by them or not.
• For that reason, a newspaper publisher, tv or radio station would be responsible for
material contained in a reader's letter to the paper, or a comment made on air during a
broadcast discussion.
1. As "publisher" they would be liable at law for any unlawful content on their
platforms.
• The position of internet intermediaries – the companies which facilitate users in publishing
content – is somewhat different. Their default position has always been that they are not
publishers in the traditional sense, but are instead simply facilitators who allow individual
users to publish their own content.
• In Savage v Data Protection Commissioner and Google Ireland [2018] IEHC 122, judgement was delivered in respect of data protection legislation. The Court's finding was that Google's search engine ‘is an automated process where individual items of information on the internet are collated automatically and facilitate the user searching particular topics or names.’ Whether this would be the same in defamation proceedings would have to wait for another day.
• The Australian Attorney General recently suggested that platforms such as Twitter and
Facebook should be treated as the primary publishers of material which they host, stating his
opinion that "My own view ... is that online platforms, so far as reasonably possible, should
be held to essentially the same standards as other publishers."
• There has been commentary in the US concerning the question of whether platforms such
as Facebook and Twitter should be considered publishers, in the light of their decision to ban
people such as Donald Trump from their networks. Is this the editorial decision of a
publisher?
• Google has asserted its right to "speech" as a publisher as a defence in proceedings in the
US. Court held in e-ventures Worldwide LLC v Google Inc that ‘Google’s actions in
formulating rankings for its search engine and in determining whether certain websites are
contrary to Google’s guidelines and thereby subject to removal are the same as decisions by
a newspaper editor regarding which content to publish, which article belongs on the front
page, and which article is unworthy of publication."
Google has argued in the US and Canada that it is entitled to free speech as a publisher and that its search results constitute publication. A different legislative framework applies to intermediaries there. This shows that platforms can modify their position when it suits them.
4. Protection of intermediaries at law
• The attraction of bringing proceedings against an intermediary must be balanced with their
importance to a properly-functioning internet.
• In the EU, protection is provided via the E-Commerce Directive 2000 (which contains broad defences for intermediaries) and now the Digital Services Act 2022.
▪ It may be inordinately difficult to identify the author/creator of the content, while the platform
is instantly identifiable.
▪ The author/creator, even if identified, may be outside the jurisdiction of the court, while the
platform is likely to be based in Ireland.
▪ The author/creator is often impecunious (poor), while the platform is a far more attractive
mark for damages.
• All social media platforms have terms and conditions of use to which their users consent when signing up, and which deal with the manner in which the platform will handle material which violates its codes of conduct. A lot of the time their rules reflect the law (e.g. defamation, IP).
Facebook has "Community Standards"
Twitter has "Twitter Rules"
YouTube has "Community Guidelines"
• Some allow for accounts to be suspended after repeated breaches, others only allow for
content to be removed – YouTube operates a "three strikes and you're out" policy.
• For illegal material, such as child pornography, platforms appear very efficient, and intermediaries have clamped down on such material to a large extent. For child exploitation, Facebook claims to remove 99% of content before anyone reports it.
• Meta set up its "Oversight Board" in 2020 to deal with dissatisfaction with its notice and
takedown procedure. An independent body comprised of lawyers, politicians and academics.
The Board is charged with considering appeals from users whose material has been blocked
by Facebook, or whose request for material to be blocked has been refused by Facebook.
• There is increasing dissatisfaction with the speed with which platforms respond to notice and takedown requests concerning harmful content, and with the manner in which they moderate political speech. Automated tools can take months to respond, if they respond at all, and the timescales on which they operate are decided by the platforms themselves.
5. Current Legislation
• The E-Commerce Directive identified the importance of intermediaries and sought to provide a shield from unlimited liability in respect of the information which passed through their networks.
• It has now been superseded by the Digital Services Act 2022, whose provisions are
almost identical in relation to intermediaries.
• The Act applies to legal or natural persons providing an ‘information society service’. Such services are defined as follows (remuneration includes the giving up of personal data to a platform so that it can use that data to generate targeted advertising):
"any service normally provided for remuneration, at a distance, by
means of electronic equipment for the processing (including digital
compression) and storage of data, and at the individual request of a recipient
of a service."
• Chapter 2, which includes Articles 4-8, is the crucial section in respect of user generated
content as it covers the ‘Liability of Providers of Intermediary Services' and describes the
protection that will be afforded to internet intermediaries who offer information society
services.
Discussion:
• Article 4 covers internet intermediaries operating as ‘mere conduits’, and governs their role
in the transmission of information via electronic communication networks – i.e. internet
access providers and email service providers.
• Some internet intermediaries operate web-based email and Internet Message Access Protocol (IMAP) services, which retain copies of email messages on a server so that they can be accessed by subscribers from a computer, tablet or smartphone. Because they do not delete the emails immediately, such intermediaries would probably be considered to be hosts (a separate classification), rather than mere conduits.
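As a minimal sketch of why such services look like hosting (the host name and credentials below are invented placeholders): an IMAP client merely views messages that remain stored on the provider's server.

import imaplib

# Messages stay on the provider's server and are listed/fetched on demand -
# the retention that may make the provider a "host" rather than a conduit.
# "imap.example.com" and the credentials are placeholders, not real values.
with imaplib.IMAP4_SSL("imap.example.com") as mailbox:
    mailbox.login("user@example.com", "app-password")
    mailbox.select("INBOX", readonly=True)      # open the server-side mailbox
    status, data = mailbox.search(None, "ALL")  # IDs of messages held remotely
    print(f"{len(data[0].split())} messages remain stored on the server")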
2. This Article shall not affect the possibility for a court or administrative authority, in
accordance with Member States' legal systems, of requiring the service provider to terminate
or prevent an infringement.
Discussion:
Subject to certain conditions, and read together with Articles 7 & 8, this Article provides a general defence for intermediaries in respect of user-generated content which is uploaded to their sites. Such content may take any of the following forms:
• Content placed on social media platforms which are hosted by social media services such
as Facebook and Twitter.
• User comments which are placed underneath content generated by news websites.
1. Member States shall not impose a general obligation on providers, when providing the
services covered by Articles 4, 5 and 6, to monitor the information which they transmit or
store, nor a general obligation actively to seek facts or circumstances indicating illegal
activity.
Issues arising:
• In reality, the platforms do screen material. This has now been clarified by Article 7, which provides that "Providers of intermediary services shall not be deemed ineligible for the exemptions from liability referred to in Articles 4, 5 and 6 solely because they, in good faith and in a diligent manner, carry out voluntary own-initiative investigations …"
• Significantly, the prohibition against an obligation to monitor must be read in the light of certain recent decisions which place heightened requirements on intermediaries such as Google and Facebook:
C-18/18 Eva Glawischnig-Piesczek v Facebook
(B) US Legislation
Stratton Oakmont v. Prodigy Services Co. (N.Y. Sup. Ct., Nassau County May
24, 1995)
• Dates back to the earliest days of the internet in 1995. The defendant service provider, Prodigy, hosted anonymously-posted comments on its bulletin board which accused the plaintiff investment firm and its president of fraudulent behaviour.
• Stratton Oakmont sued for defamation, and the New York Supreme Court held Prodigy liable on the basis that it had positioned itself as a ‘family-orientated service’, and that it was ‘one of the few bulletin boards in the country to screen all electronic messages for potential improprieties.’ Having held itself out to monitor the content of its bulletin boards, the Court held that ‘Prodigy is clearly making decisions as to content, and such decisions constitute editorial control.’
• This resulted in the enactment of s.230 of the Communications Decency Act 1996, which was upheld in Zeran v America Online, Inc., 129 F.3d 327 (4th Cir. 1997)
• S.230 provides that ‘No provider or user of an interactive computer service shall be treated
as the publisher or speaker of any information provided by another information content
provider.’
• Provides immunity not just for large internet platforms, but also “users” ie for people who
host comments on their websites/ blogs.
6. New Legislation
• Law Reform Commission's 2016 Report on Harmful Communications and Digital Safety:
"As matters currently stand, while it would appear that the non-statutory self-
regulation by social media companies, through their content and conduct policies, has
improved in recent years, this may not be sufficient to address harmful communications
effectively … The Commission ... therefore recommends that an office be established on a statutory basis with dual roles in promoting digital and online safety and overseeing an
efficient and effective take down procedure in relation to harmful digital communications.”
The Act creates a Media Commission (Coimisiún na Meán) which has three main roles:
• Take over the regulation of broadcasting from the Broadcasting Authority of Ireland.
• Transpose the Audiovisual Media Services Directive, to cover audio-visual material hosted by internet intermediaries such as YouTube, TikTok etc.
• Create an Online Safety Commission (unclear if it applies to hosts).
Part 11 of the Act deals with Online Safety, which will fall to be regulated by the
Online Safety Commission. The main purpose of the Online Safety Commission is to set
up an Online Safety Code under which designated online service providers will be obliged to
operate in respect of "Harmful Online Content."
Under section 139A, ‘harmful online content’ falls under two main categories (it is a non-exhaustive list)
• Creation of Online Safety Code to govern the standards and practices to be observed by
designated online services. For example, time limits by which it must deal with certain types
of material; requirement to provide reports on notice and takedown requests etc.
• Power to impose a fine of up to €20 million or 10% of the previous year's turnover; power to apply to the High Court to compel a service to comply with a notice; power to apply to the High Court to have access to a service blocked.
• Monitoring and reporting obligations will be imposed on these providers, who may be
subject to substantial sanctions if they fail to comply
• The Online Safety Code may lay down guidelines as to how the service providers'
complaints handling procedures operate (including presumably time scales under which they
must operate), but it is unclear how specific the Code will be in this regard.
• While section 139R provides for individuals to have complaints pursued by the
Commission, the scope of this facility is unclear at the moment.
• Individuals must first have exhausted the platform's own complaints procedure before they can make a complaint to the Commission. Initially, priority will be given to complaints involving material that is harmful to children, as per section 139U. The preconditions for a complaint are that:
• the complainant has made a complaint to the provider of the designated online service
concerned about the availability of the content on the service;
• a period of more than 2 days has elapsed since the complainant made the complaint to the
provider;
• where the provider operates a process in accordance with an online safety code for
handling such a complaint, the complainant has taken reasonable steps in that period to
have the complaint resolved through that process.
• The Hate Offences Bill 2022 contains a specific provision intended to regulate internet intermediaries – section 7(4):
In any proceedings for an offence under this section, it shall be a defence for a body
corporate to prove, as respects the communication of material by the body corporate, that
a) it has reasonable and effective measures in place to prevent the communication
generally of material inciting violence or hatred against a person or a group of persons on
account of their protected characteristics or any of those characteristics,
b) it was complying with the measures referred to in paragraph (a) at the time the
offence concerned was alleged to have been committed, and
c) it did not know and had no reason to suspect at the time the offence concerned was alleged to have been committed that the content of the material concerned was intended or likely to incite violence or hatred against a person or a group of persons on account of their protected characteristics or any of those characteristics.
3) Search Engines
4) Online publishers
Introduction:
The case, decided by the High Court of Australia, holds significant implications in the realm
of internet law and the liability of search engines for defamatory content linked through their
platforms. This case revolves around Mr. Defteros, an Australian lawyer, who alleged that
Google was legally liable for defamatory content related to him found in search results
provided by its search engine. The case raises critical questions about the responsibilities of
search engine operators in connection with the content they display.
Background:
Mr. Defteros, whose legal practice had represented members of the organized crime
community, took legal action against Google. He argued that Google, as a "secondary"
publisher, should be held responsible for not removing links to defamatory content after
being notified of their existence. The defamatory content included photographs of Mr.
Defteros with criminal associates and hyperlinks to articles, including a Wikipedia entry, that
he claimed were defamatory.
Google appealed, but in Defteros v Google LLC [2021] VSCA 167, the Victorian Supreme
Court of Appeal upheld the lower court's finding. Google contended that it would need to
have incorporated the defamatory material into the search result or encouraged users to
follow the links to be considered a publisher. The Court of Appeal rejected this argument,
deeming the hyperlinks themselves as an "enticement" to the users to click on the snippets
and access defamatory content.
On further appeal, the High Court of Australia held by majority that Google was not a publisher of the material to which its hyperlinks led. The dissenting judges, particularly Justice Keane and Justice Gordon, argued for a broader interpretation of "publication" in the internet age.
participated in the publication process by assisting users in accessing defamatory content
through its search results. Justice Gordon highlighted Google's commercial interest in
providing links to news articles and the inconsistency in Google's stance on its role in
providing search results.
Conclusion:
The Google v Defteros case clarifies that search engines like Google are not considered
primary publishers of content they index. However, it leaves unresolved questions about the
extent of their liability when defamatory material is accessed through their platforms. The
case underscores the challenges of applying traditional legal principles to the digital age and
the need for international consistency in determining the responsibilities of search engine
operators. Ultimately, this decision has brought attention to the complex legal landscape
surrounding internet intermediaries and the publication of online content.
1. It may be inordinately difficult to identify the author/creator of the content, while the
platform is instantly identifiable.
2. The author/creator, even if identified, may be outside the jurisdiction of the court, while the
platform is likely to be based in Ireland.
3. The author/creator is often impecunious, while the platform is a far more attractive mark
for damages.
IMPORTANT CONSIDERATIONS:
1. What kind of functions do they perform?
2. Can they be considered to be a primary or secondary publisher?
3. Do they need a defence, and if so what kind of defence is open to them?
A potentially defamatory message was left on the notice board of a golf club. It was seen by the owners of the club, who could have taken it down. By leaving it up, could they be held liable for defamation? (Byrne v Deane [1937] 1 KB 818)
“If defamatory matter is left on the wall of premises by a person who has the
power to remove the defamatory matter from the wall he can be said to have
published the defamatory matter to the persons who read it.”
Mulvaney v The Sporting Exchange Ltd (t/a Betfair) [2009] IEHC 133
The plaintiff bookie claimed he was defamed in a chatroom on the defendant's website, and the questions were whether a chatroom was an "information society service", and whether Betfair could avail of the Article 14 defence, as gambling platforms are specifically excluded from the E-Commerce Directive.
Clarke J (as he was then) held that activities of the website operator could be separated:
“The respective activities are conducted on separate parts of the website with
no connectivity. Likewise, the “activities” concerned are very different... There is, in
my view, no significant nexus between the chatroom activity ... and the betting
activity ... which could reasonably lead to the characterisation of the chatroom as
being part of any betting activity that might be said to take place on the Betfair
website”
Kaschke v Gray & Hilton [2010] EWHC 690 (QB)
The plaintiff claimed to have been defamed by a blog post made by Defendant 1 on a website operated by Defendant 2, Mr Hilton. Could Hilton be certain enough that the hosting defence under the E-Commerce Directive was open to him that he could have the case against him struck out? Hilton admitted to moderating certain aspects of the website, though he claimed not to moderate the particular blog posts, but the court held that there was sufficient uncertainty not to strike out the proceedings, referring to Betfair.
Bunt v Tilley [2006] EWHC 407 (QB)
The claimant sued for defamation, but brought proceedings against three ISPs as well as the author of the comments. The ISPs applied for the proceedings against them to be struck out. The Court likened ISPs to the postal service in that “ISPs do not participate in the process of publication as such, but merely act as facilitators in a similar way to the postal services. They provide a means of transmitting communications without in any way participating in that process." The Court held that there was no prospect of them being held liable as publishers of the comments:
“I am also prepared to hold as a matter of law that an ISP which performs no
more than a passive role in facilitating postings on the internet cannot be deemed to
be a publisher at common law … thus they do not need a defence.”
3. Internet Intermediaries: Search Engines (1)
Metropolitan Int. Schools v Designtechnica Corp & Ors [2009] EWHC 1765 (QB)
Issue: Does search engine "publish" the content contained in its snippets?
The plaintiff claimed that the first-named defendant's website contained defamatory postings. It also sought to bring proceedings against Google for the manner in which material appeared in the search engine snippets which provided links to the allegedly defamatory remarks about the plaintiff. Google objected to being served outside the jurisdiction.
The High Court upheld Google’s objection on the basis that it could not be held to be a
publisher of the material in question. The Court stressed the difference between search
engines and hosts of websites in terms of their ability to edit, and held that Google could not
even be considered to be a publisher after it was notified of the offending material.
“[Google Inc] cannot be characterised as a publisher at common law. It has
not authorised or caused the snippet to appear on the user’s screen in any
meaningful sense. It has merely, by the provision of its search engine, played the role
of a facilitator.”
Savage v Data Protection Commissioner and Google Ireland [2018] IEHC 122
Issue: Does search engine "publish" the content contained in its snippets?
This was a case grounded in data protection rather than defamation, with the Court remarking that different factors may be at play should the claim be one of defamation. Nonetheless, the Court appeared to approve the decision of the English High Court in Metropolitan as regards Google's neutral status in the publication chain, and its difference from a website publisher, when it stated that the operation of Google's search engine is ‘an automated process where individual items of information on the internet are collated automatically and facilitate the user searching particular topics or names.’
Duffy v Google Inc [2015] SASC 170; Google Inc v Duffy [2017] SASCFC 130
Issue: Does search engine "publish" the content contained in its snippets?
Ms Duffy was an online psychic who was criticised when some of her predictions were inaccurate. Details of the engagements with her critics were reported on a consumer interest website, ripoffreport.com, which she claimed defamed her.
Google provided links to that website and incorporated some of the comments into its snippets (see also Savage v DPC), and the autocomplete function gave the result "Janice Duffy Psychic Stalker". When Google failed to remove the links for 18 months after her first complaint, she issued proceedings against them.
The Supreme Court of South Australia held that Google should be considered a “secondary publisher” of the material contained in the results produced by its search engine, as it was an “indispensable, proximate step in its publication to the searcher.”
“Google established the algorithm and programmes of its search engine and
made that search engine available to all users of the internet ... Google participated
in the publication of the paragraphs about Dr Duffy produced by its search engine
because it intended its search engine to do what it programmed it to do.”
The search engine provider should not be held liable for any information it publishes before being put on notice of its existence, and would have a defence after being put on notice so long as it acted expeditiously to deal with the offending material. The court also held that it re-published the material to which it provided a hyperlink, as it had “incorporated” such material into its search engine snippet:
“Google has republished the Ripoff Reports by abstracting sufficient material to inform the searcher of its contents, by repeating and drawing attention to the defamatory imputation, and by providing instantaneous access to it through the hyperlink. The very purpose of an internet search engine is to encourage browsing and it is designed to achieve that purpose.”
Defteros v Google LLC [2021] VSCA 167, Google LLC v Defteros [2022] HCA 27
Issue: Does search engine "publish" the content contained in its snippets?
Mr. Defteros is an Australian lawyer. In 2004, articles were published which linked him to
organised crime. His proceedings against Google were based on results that were produced
by Google’s search engine which included photographs of him with various members of the
Australian criminal fraternity and provided hyperlinks to articles about him which Mr Defteros
claimed were defamatory.
The Victorian Supreme Court of Appeal upheld an earlier decision of the Supreme Court of
Victoria that Google was not a primary publisher of the material, but that as soon as notice
had been given about the defamatory content, it was incumbent on the company to remove
offending content in an expeditious manner. It considered that it became a secondary
publisher just 7 days after being put on notice. At first instance, the Court had rejected the
suggestion that Google’s systems were entirely automated:
“The Google search engine … is not a passive tool. It is designed by humans
who work for Google to operate in the way it does, and in such a way that identified
objectionable content can be removed, by human intervention.”
In its appeal, Google had focussed on the submission that it would need to have incorporated some of the defamatory material into the search engine result (as was the case in Duffy), and/or “enticed” or “encouraged” the user to follow the hyperlink to the defamatory material. The Court of Appeal rejected this argument, holding that the hyperlinks were an “enticement” to the user to click on the snippets and follow the links to the defamatory article:
“The combination of the search terms, the text of the search result and the
insertion of the URL link filtered the mass of material on the internet and both
directed and encouraged the reader to click on the link for further information. The
fact that there was more information conveyed in Duffy (FC) does not detract from
the conclusion that there was sufficient conduct here to constitute publication.”
NB: This case was appealed to the High Court of Australia, and judgement was given in 2022. See Authority notes. The law is moving in this direction – the old excuse of automatic algorithms is dwindling.
Tansey v Gill [2012] IEHC 42
Issue: Only Irish case to consider the potential liability of a platform operator and host for UGC
In an application for injunctive relief against the website rateyoursolicitor.com, the Court considered the damage caused by anonymous online publications. It did not, however, consider the respective liability of the various defendants, including the first-named defendant who “ran” the website, the second-named defendant who was an “operator” of the site, and her daughter, the fourth-named defendant, who “answered queries about it”. The Court granted the injunction (see the passage quoted in the Introduction above).
Delfi AS v Estonia (App no 64569/09, ECtHR Grand Chamber)
Issue: European Court of Human Rights considers liability of news website for user comments
The applicant operated a large news website, which had published an article about the damage being caused to an ice road in Estonia by a ferry company. In its comments section, the article attracted severe criticism of a member of the ferry company's board.
The domestic courts held Delfi liable for the comments as a publisher rather than an intermediary, noting that it “invited” the comments, and that it had an “economic interest in the publication of comments”. It was, therefore, not acting in a “technical, automatic or passive” manner, even though the comments were not moderated, and were taken down 6 weeks after publication.
The domestic courts awarded €320 in damages. Delfi applied to the ECtHR on the basis that the decision breached its article 10 right to freedom of expression.
The Grand Chamber declined to overturn the decision of the domestic courts, stating that
“The Court will thus proceed on the assumption that the Supreme Court’s judgement
must be understood to mean that the subsequent removal of the comments by the applicant
company, without delay after publication, would have sufficed for it to escape liability under
domestic law."
Fairfax Media Publications Pty Ltd v Voller [2021] HCA 27
Issue: Australian High Court extends liability of intermediaries for third-party comments
The plaintiff had previously been in a youth detention centre in Australia. When articles were written about his experiences which criticised the centre, and which were published on Facebook pages operated by news organisations, they attracted defamatory comments about Mr Voller. He sued the news organisations, rather than the authors of the comments, and at first instance the court considered the preliminary issue of whether they were the “primary publishers” of these third-party comments. It held that they were, making them liable without the need to have been put on notice about their existence.
The highest court in Australia, the High Court of Australia, upheld the decision, pointing (as in Delfi) to the commercial benefits that underpinned the news organisations' choices. It concluded that they were publishers of the third-party comments.
NB: There has been a push back from this position via proposed legislation – the
Social Media (Anti-Trolling) Bill 2021 (to be discussed next week)
Sanchez v France (App no 45581/15, ECtHR)
▪ Mr. Sanchez, a French politician, posted on his Facebook wall a relatively innocuous comment about his opponent in the election, which attracted comments from third parties, two of which amounted to hate speech against Muslim immigrants in southern France, and one of which referred to the wife of Mr. Sanchez's opponent, who was of North African descent. One of the comments was removed voluntarily, and Mr. Sanchez published a post appealing to users to ‘be careful with the content of their comments,’ but no further comments were removed and the worst of them remained on the wall for 6 weeks.
▪ Mr. Sanchez was convicted by Nimes Criminal Court as the "producer" of the forum, and
held directly liable for the comments, even though there was a question as to whether he
had explicitly been put on notice of them. Instead, an imputed responsibility was placed on
him on the basis that he had made the Facebook wall public and "encouraged" people to
comment.
▪ Both the Chamber and the Grand Chamber of the ECtHR upheld the original decision. They compared Mr. Sanchez's position to that of Delfi in respect of the commercial nature of the Facebook page, and pointed to his important position as a politician – "it is crucial for politicians, when expressing themselves in public, to avoid comments that might foster intolerance and… they should also be particularly careful to defend democracy and its principles."
▪ This appears inconsistent with previous jurisprudence which gave enhanced rights of
freedom of expression during political debate. It is also questionable as to whether an
individual operating a Facebook page should be subject to the Delfi principles.
Godfrey v Demon Internet Ltd [2001] QB 201
Issue: Defendant was both an ISP and the host of a discussion forum.
The defendant was an ISP which hosted on its news server a particular Usenet newsgroup, storing postings for about a fortnight. It was not held to be a primary publisher, but having been put on notice about the defamatory content, it failed to delete it (as per Byrne v Deane), and so could not avail of the defence of innocent publication. (No reference was made to the E-Commerce Directive.)
“I do not accept Mr Barca's argument that the defendants were merely owners of an
electronic device through which postings were transmitted. The defendants chose to store
soc.culture.thai postings within their computers. Such postings could be accessed on that
newsgroup. The defendants could obliterate and indeed did so about a fortnight after
receipt.”
Davison v Habeeb [2011] EWHC 3031 (QB)
The plaintiff claimed that the first-named defendant defamed her in a blog hosted by Google, which was the fifth-named defendant. The application was to serve Google outside the jurisdiction. The Court held that it was “at least arguable” that Google was a publisher following notification (as per Byrne v Deane). However, the court set aside the order to serve Google outside the jurisdiction on the basis that it did not have “actual knowledge” of the unlawful nature of the comments.
Tamiz v Google Inc [2013] EWCA Civ 68
Although it held that Google was not a primary publisher of the material, the Court of Appeal concluded that
“if Google Inc allows defamatory material to remain on a Blogger blog after it has been notified of the presence of that material, it might be inferred to have associated itself with, or to have made itself responsible for, the continued presence of that material on the blog and thereby to have become a publisher of the material.”
While it held that the five weeks it took Google to remove the material was excessive, and
that it could be liable for its publication during that time, it still refused the application to serve
it outside the jurisdiction on the basis that no real damage was caused during those 5
weeks, as hardly anyone would have accessed the comments during that period.
Tamiz v Google Inc [2012] EWHC 449 (QB) (first instance)
As in Davison v Habeeb, this concerned an attempt to serve Google outside the jurisdiction so as to hold it liable for postings by a third party on blogspot.com which allegedly defamed the plaintiff. Five weeks after receiving a complaint from Mr. Tamiz about the posting, Google forwarded the complaint to the blogger, who removed the content 2 days later. The plaintiff sought to make Google liable for publication for that 5-week period.
At first instance, the High Court refused the application and, following the authority of Bunt v Tilley, held that Google was simply a passive facilitator in the publication process, and was “not required to take any positive step, technically, in the process of continuing the accessibility of the offending material, whether it has been notified of a complainant's objection or not.” Eady J. compared Google Inc to the owner of a wall upon which graffiti had been written:
“It is no doubt often true that the owner of a wall which has been festooned,
overnight, with defamatory graffiti could acquire scaffolding and have it all deleted
with whitewash. That is not necessarily to say, however, that the unfortunate owner
must, unless and until this has been accomplished, be classified as a publisher.”
C18/18 Eva Glawischnig Piesczek v Facebook Ireland Ltd [2019] (directly applicable)
After a series of domestic appeals, the matter went to the CJEU, which was asked to decide whether Facebook could be obliged to prevent the further uploading of material that was not only identical to the original, defamatory post, but also material which had a similar, “equivalent” meaning.
It may be noted that in earlier proceedings which were brought in respect of harassment
before the High Court of Northern Ireland, the court refused to grant a similar injunction in
XY v Facebook Ireland Ltd [2012] NIQB 96 (QBD (NI)) holding that to oblige Facebook to
block any re-uploading of identical material would place a ‘disproportionate burden’ on the
social networking service. It awaits to be seen how the CJEU’s decision is interpreted in
respect of any future applications for an online platform to not only block a specific item of
content from re-appearing on its platform, but for equivalent content to be blocked as well.
The CJEU held that it was not inconsistent with Article 15 of the E-Commerce Directive (prohibition against monitoring) to compel Facebook to prevent the future uploading of not only identical material, but also “equivalent content”, so long as the injunction did not require an “independent assessment” of the material, i.e. manual supervision by staff. It noted that the Austrian court had already found the material to be defamatory and ordered it to be taken down, and that:
“although Article 15(1) prohibits Member States from imposing on host
providers a general obligation to monitor information which they transmit or store, or
a general obligation actively to seek facts or circumstances indicating illegal activity,
as is clear from recital 47 of that directive, such a prohibition does not concern the
monitoring obligations ‘in a specific case’.
The Court also held that such an obligation could have worldwide effect, as the E-Commerce Directive "does not preclude those injunction measures from producing effects worldwide." It left it up to domestic courts, however, to ensure that any such orders were consistent with international law.
In CG, a convicted sex offender sued for breach of privacy arising out of a series of
Facebook pages, one of which was entitled “Keeping our kids safe from Predators”. They
featured threats of violence against him, and he issued proceedings against not only the
creators of the page, but also Facebook under the tort of misuse of private information.
In its defence, Facebook claimed that it was not given actual knowledge about the content,
which was rejected at first instance by the court which held that due to the “considerable
resources” it had at its disposal, it could easily have made itself aware of the subject matter
of the complaint, and suggested that Facebook's demands would require a complaint to be
spelled out with “inappropriate precision.”
This particular finding was overturned on appeal, though the court did uphold Facebook's position as a publisher for a short, 2-week period in relation to one of the Facebook pages. In relation to a similar issue in J20, which concerned allegations made against the claimant that he was a loyalist bigot and an informant, the Court of Appeal likened the position of Facebook to that of Google in Tamiz v Google, or the golf club committee in Byrne v Deane, in that they could be taken to have participated in the publication of defamatory material if they failed to remove it having been put on notice of its existence. Again, however, it found a deficiency in the manner in which Facebook was notified, and held it liable only for a short period.
It did, however, consider that a 2-week delay in taking it down after a court order was not "acting expeditiously", and awarded £500 in damages against it.
See also YouTube (C-628/18) and Cyando (C-683/19) for what constitutes "actual
knowledge" for an intermediary. These joined cases considered in June 2021 the extent to
which an intermediary such as YouTube has to do its own investigation into whether material
is unlawful in seeking to avail of the Article 14 defence.
Week 4: Anonymity and Privacy on the Internet
Overview of week 4
1) Review of week 3
2) Anonymity
• The general right
• Can you bring proceedings anonymously/ super-injunctions?
• The position in Ireland
4) Privacy
• Contours of the right to privacy
• Balance with freedom of expression
1. General Position
Introduction to Anonymity
2. Social media platforms not only allow users to operate anonymously but also to sign up and use their platform without providing verified identification. When signing up, Twitter and Facebook only require an email address or a telephone number from the user wishing to join. Requiring verified identification would drastically reduce the number of sign-ups, and the use and earning potential of these platforms.
The general proposition that bloggers have a specific right to anonymity was considered by the English High Court in The Author Of A Blog v Times Newspapers Limited [2009] EWHC 1358 (QB). The court was faced with an application by the plaintiff – a police officer – who wished to prevent the defendant newspaper from unmasking his identity. The Plaintiff submitted the general proposition that ‘there is a public interest in preserving the anonymity of bloggers.’ This was rejected by the court, which said: “... It is in my judgement a significantly further step to argue, if others are able to deduce their identity, that they should be restrained by law from revealing it.”
In K.U. v Finland (App no. 2872/02, 2 December 2008), the ECtHR considered an application from a 12-year-old Finnish boy whose identity had been used on an internet dating site. Finnish law allowed intermediaries to refuse to disclose the identity of their users "to protect their right to anonymous expression." The Court held that the boy's right to privacy had been breached, and that the guarantees afforded to internet users under articles 8 and 10 ECHR “could not be absolute and should yield on occasion to other legitimate imperatives, such as the prevention ... of crime or the protection of the rights and freedoms of others”.
In AB v Bragg Communications Inc [2012] 2 SCR 567, the Canadian Supreme Court allowed the victim of cyberbullying to bring her proceedings anonymously, citing the importance of both her age (she was 15) and the need to encourage victims to bring proceedings without "the risk of further harm from public disclosure".
2. They may be the victim of defamation or a cyber-attack
In the Australian case of X v Twitter Inc [2017] NSWSC 1300, the Supreme Court of New South Wales granted anonymity to a company about whom fake Twitter accounts had been set up in an attempt to damage their reputation. The Court held that "If that were not so, the protection that the plaintiff seeks in relation to its private and confidential information might be undone."
• The Civil Procedure Rules in the UK allow the court a general discretion to conceal
the identity of any party, or witness, ‘if it considers non-disclosure necessary in order to
protect the interests of that party or witness."
• The courts in Australia and New Zealand have likewise shown themselves more willing
to allow victims to bring proceedings anonymously - X v Twitter Inc [2017] NSWSC
1300.
3. Anonymity in Ireland
Right to anonymity for a plaintiff in Ireland? No
• Irish courts have traditionally been reluctant to allow for someone to institute proceedings
anonymously. This is based on the constitutional right to have justice administered in public
under Article 34.1.
• Aside from the requirement to hold certain proceedings in camera, the Courts have
interpreted this constitutional provision as requiring the identification of parties.
In refusing the application in Roe v Blood Transfusion Service Board [1996] 3 IR 67, Laffoy J held that "the public disclosure of the true identities of parties to civil litigation is essential if justice is to be administered in public." Questions: Are they the same thing? Would the principles have been lost if she had been anonymous? Would we have gotten less from the case?
McKeogh v Doe [2012] IEHC 95
• Plaintiff sought orders from the Court which would prevent six national newspapers from
either publishing details of the video clip which purported to show him evading a taxi fare, or
from revealing his identity.
• It was submitted on his behalf that retaining his anonymity was the only way for the Plaintiff to be granted an effective remedy, as the publication of his name in the media would bring about the very mischief he was attempting to prevent. The Court rejected the application, stating that “the right to have justice administered in public far exceeds any right to privacy, confidentiality or a good name.”
Summary:
• The benefits of anonymity, in terms of fostering positive discussion, enabling
whistleblowing and facilitating freedom of expression, are self-evident.
• In respect of direct legal liability for such material, online platforms are afforded robust protection for the material they host by Article 6 of the Digital Services Act (formerly Article 14 of the E-Commerce Directive), and by section 27 of the Defamation Act 2009.
First step:
• Ask the internet intermediary being used (ie Facebook, Twitter etc) to identify the
anonymous user.
• Platforms are traditionally very protective of the right of their users to operate anonymously. Twitter recently stated that: “Pseudonymity has been a vital tool for speaking out in oppressive regimes; it is no less critical in democratic societies. Pseudonymity may be used to explore your identity, to find support as victims of crimes, or to highlight issues faced by vulnerable communities.”
• General position is that they will refuse to release any information about their users without
a court order, citing issues concerning data protection and confidentiality.
• Another position taken by such platforms when requested to identify users relates to whether or not they are sure the material is actually unlawful. In Muwema v Facebook Ireland Ltd [2017] IEHC 69, for example, Facebook said that it was 'not in a position' to arbitrate as to whether the material a user has uploaded is unlawful.
Background:
• Decision in Fairfax v Voller [2021] HCA 27 (Fairfax held responsible for comments made
by 3rd party users).
• In November 2021, prime minister Scott Morrison articulated the frustration of many victims
of anonymous online trolling when he declared that social media has become a “coward’s
palace … where (anonymous) people can say the most foul and offensive things to people
and do so with impunity.” – i.e. the platform is the root cause.
• The draft Social Media (Anti-Trolling) Bill 2021, which followed almost immediately, is an
attempt to codify some of the Prime Minister’s suggestions.
▪ The Bill proposes to make social media platforms liable for comments posted by users of
social media, as well as comments posted by third parties on that user’s page, by
considering the platforms to be the publishers of such comments. It further proposes to
remove the defence of innocent dissemination which had previously been available to them.
a) Upon receipt of a notice alleging that a person has been defamed, the social
media platform must contact the author of the comment within 72 hours to inform them;
b) If the author of the comment consents, the comment will be removed by the social
media service;
c) If the author of the comment does not consent to it being removed, the social
media service must provide the complainant with the author’s name, address and email
address, subject to the author consenting to such information being disclosed;
d) If the author does not consent to their name and address being disclosed, the
complainant can apply to the court for an “End-user information disclosure order”, obliging
the social media service to give the complainant contact details for the author of the
defamatory comment. (this appears to mirror the Norwich Pharmacal procedure utilised in
this jurisdiction.)
An action whose aim is purely to obtain information in respect of a proposed defendant is not
provided for either by legislation or the Court Rules – instead, it is provided for by the
inherent jurisdiction of the High Court. The relief was established in Norwich Pharmacal v
Customs and Excise Commissioners [1974] AC 133, a decision which has given the
application its name, in which Lord Reid gave the definitive statement of the principle:
‘…if through no fault of his own a person gets mixed up in the tortious acts of
others so as to facilitate their wrong-doing he may incur no personal liability but he
comes under a duty to assist the person who has been wronged by giving him full
information and disclosing the identity of the wrongdoers.’
The granting of a Norwich Pharmacal order is an entirely discretionary relief, and the courts
have formulated a test that must be met before such an order will be granted. Broadly, the
test is as follows:
1) A wrong must arguably have been committed against the applicant by an unknown wrongdoer;
2) The respondent must be mixed up in the wrongdoing, however innocently;
3) The respondent must be able to provide the information necessary to identify the wrongdoer;
4) The order is necessary in the interests of justice on the facts of the case.
In the majority of cases, the internet intermediary does not oppose the application, and the
order will be granted.
• It is only provided for in the High Court, so is expensive. The original draft of the
Harassment, Harmful Communications and Related Offences Bill had a provision allowing such an
application to be brought in the Circuit Court, but this was dropped before enactment. This is again
being recommended in the reform of the Defamation Act 2009, which is currently undergoing
consultation.
Background: Board of Management of Salesian Secondary College (Celbridge) v Facebook Ireland [2021] IEHC 287
Boys set up a WhatsApp group and shared teacher info (and some student info) in the form of
banter/memes. Someone complained, and the school wanted to “discipline” the boys. Members of
the group were operating anonymously, and an application was made to discover their
identities. The Court considered the wider issues at play (1. the identity of school children;
2. an applicant is only entitled to discover an anonymous identity in order to pursue unlawful
conduct; 3. the general rights to freedom of expression and privacy). The school didn't
intend to sue.
The High Court proposed to refer three questions to the Court of Justice in respect of such
applications:
1) Do the rights conferred under Article 7, Article 8 and Article 11 of the Charter of
Fundamental Rights of the European Union imply a right, in principle, to post material
anonymously on the internet (subject always to any countervailing objective of public
interest)?
– The first time anyone has asked the Court of Justice whether there is a right to anonymity.
2) What is the threshold to be met under the General Data Protection Regulation
and/or the Charter before the provider of a social media platform can be compelled to
disclose, to a third party, information which would identify an otherwise anonymous account
user?
3) Is there any necessity for a national court to attempt to put the affected party on
notice of an application which seeks to identify the operators of an otherwise anonymous
user account?
– No provision for this in Norwich Pharmacal orders.
• The information obtained may not even reveal the identity of the anonymous user. In
Parcel Connect v Twitter [2020] IEHC 279 for example, Twitter stated that it "has nothing to
say as to what the information should be and does not warrant that such information as it
has will be sufficient to allow the plaintiffs to establish the true identity of the owner and
operator of the account...”
• There is a lack of clarity as to who pays for the costs of the application.
- The right to privacy does not come from Statute. In Ireland, it is an unenumerated right
under Article 40.3.1 of the Constitution:
"The State guarantees in its laws to respect, and, as far as practicable, by its laws to
defend and vindicate the personal rights of the citizen."
It is not an unqualified right, with Hamilton P. (in Kennedy v Ireland [1987] IR 587)
stressing that it was subject to the constitutional rights of others and the preservation of
public order, morality and the common good.
- Freedom of expression is also a constitutional right, specifically provided for under Article 40.6.1°:
"The State guarantees liberty for the exercise of the following rights, subject to public
order and morality: The Right of the citizens to express freely their convictions and opinions"
It is not an unqualified right, with the same Article providing the restriction: "The
publication or utterance of seditious or indecent matter is an offence which shall be
punishable in accordance with law."
Both rights are protected by the EU Charter of Fundamental Rights and the European
Convention on Human Rights:
The Charter:
• Article 7 provides for "respect for private and family life"
• Article 11 provides that "Everyone has the right to freedom of expression. This right shall
include freedom to hold opinions and to receive and impart information and ideas without
interference by public authority and regardless of frontiers."
The Convention:
• Article 8 provides protection for "private and family life, his home and his correspondence"
• Article 10 provides that "Everyone has the right to freedom of expression. This right shall
include freedom to hold opinions and to receive and impart information and ideas without
interference by public authority and regardless of frontiers."
See also:
– John v Associated Newspapers [2006] EMLR 27
– Hickey v Sunday Newspapers Ltd [2011] 1 IR 228
2) Privacy at work
This engages with two main issues: the monitoring of employees' communications by their
employer, and the use of employees' social media activity in disciplinary proceedings.
Bărbulescu v Romania (ECtHR, Grand Chamber, 2017):
▪ The employee created and operated a Yahoo! messenger account on behalf of his
employer. The company circulated an information notice to its staff, which stipulated that
staff must not use company time for personal internet use. When accused of using the
Yahoo! service for personal reasons, the claimant denied doing so. He was then shown a
45-page transcript of the monitored communications from the company account, as well as a
number of messages that were sent on his personal Yahoo Messenger account and was
dismissed.
▪ Mr Bărbulescu alleged that his Article 8 right to privacy had been infringed. At first
instance, the Chamber held that the monitoring had been reasonable. On appeal, the Grand
Chamber overturned the decision, finding that the domestic Romanian court had failed to consider:
- whether Mr Bărbulescu was on notice that his online usage might be monitored, or
- whether the scope of such monitoring would include the content of his
communications, rather than simply recording the flow.
• Central to the Grand Chamber's decision was its finding that a total ban on private
correspondence in the workplace was impermissible, that "an employer’s instructions cannot
reduce private social life in the workplace to zero."
• The Grand Chamber stressed that monitoring of employees by the employer was not
illegal, holding that the latter retained ‘...a right to engage in monitoring, including the
corresponding disciplinary powers, in order to ensure the smooth running of the company’.
▪ The complainant had been part of a picket outside the respondent's store, and was
penalised, inter alia, for her participation in the private Facebook page set up by 43 fellow
strikers. The respondents became aware of comments amongst the group which were in
breach of the company’s social media policy and used this as a reason for dismissing the
complainant. The complainant insisted that these were private communications between
union members, and not the business of the respondent.
▪ The Adjudicating Officer rejected the submission that this group could be considered
private with the following observations:
‘... [a]s a group with 43 members posting to a Facebook page, there is no prospect
that the information could be contained in the group. While the members may have aspired
to privacy, in reality the information was posted on the world-wide web ... It seems to me
naïve to think that any postings on Facebook are private.’ (only takes 1 person to break the
chain and make the info public)
• English employment case in which the complainant had been dismissed for comments
published by third parties, and comments he made himself, on his personal FB page about
the quality of Apple's products. He claimed that those comments were private and could only
be viewed by his FB friends. The Tribunal disagreed:
"The nature of Facebook, and the internet generally, is that comments by one person
can very easily be forwarded on to others. The claimant had no control over how his
comments might be copied and passed on."
The issue of privacy settings on social media and the degree to which they can prevent
content hidden behind those settings being used in court has not been considered by a court
in this jurisdiction…
Martin & Ors v Gabriele Giambrone P/A Giambrone & Law [2013] NIQB 48.
• Northern Ireland case concerning application for a Mareva injunction. Defendant objected
to evidence obtained from his Facebook account which suggested that he would "leave them
with nothing". Court refused to disallow the evidence, which appears to have been a private
conversation:
"I should say that anyone who uses Facebook does so at his or her peril. There is no
guarantee that any comments posted to be viewed by friends will only be seen by those
friends. Furthermore, it is difficult to see how information can remain confidential if a
Facebook user shares it with all his friends and yet no control is placed on the further
dissemination of that information by those friends."
• Evidence, particularly from Facebook that contradicts a plaintiff's case in personal injuries
proceedings, is regularly introduced in the Irish courts, without any consideration of how it
was obtained, or whether it was hidden behind privacy settings.
• The question arises as to whether you can compel a plaintiff to produce relevant
information from behind these settings via a discovery application. No consideration in this
jurisdiction, but Canadian courts have considered both sides of the argument...
Granted. Court held that: ‘To permit a party claiming very substantial damages for loss of
enjoyment of life to hide behind self-set privacy controls on a website … risks depriving the
opposite party of access to material that may be relevant to ensuring a fair trial’
INTERNET LAW:
Overview of Week 5
1) Freedom of Expression
• The general right
• Restrictions to the right
3) Harmful Speech
• What is it?
4) Hate Speech
• The general position
• The law in Ireland
5) Jurisdiction (** exam Q)
• Where can you bring proceedings for harmful speech online?
1. Freedom of Expression
General Position
The right to Freedom of Expression is provided for under Article 40.6.1° of the Constitution
(set out above).
It is not an unqualified right, with the same Article providing the restriction:
"The publication or utterance of seditious or indecent matter is an offence
which shall be punishable in accordance with law."
Protected by the EU Charter of Fundamental Rights and the European Convention on Human Rights:
The Charter:
• Article 11 provides for the right in the terms set out above.
The Convention:
• Article 10 provides for the right under similar terms, but with explicit restrictions:
Case law
In Axel Springer AG v Germany [2012] ECHR 45, the ECtHR held that when the right to
freedom of expression is being balanced against the right to respect for private life, the
relevant criteria to be considered are:
• the contribution made by the publication to a debate of general interest;
• how well known the person concerned is, and the subject of the report;
• the prior conduct of the person concerned;
• the method of obtaining the information, and its veracity;
• the content, form and consequences of the publication; and
• the severity of the sanction imposed.
(one right does not trump the other here; instead, in every case they are balanced on the
circumstances of that particular case and nothing more)
The particular importance of the press in imparting information is well documented, and the
same importance has been placed upon the dissemination of news via the internet, with the
ECtHR stating in Times Newspapers v United Kingdom [2009] (Applications 3002/03 and
23676/03) that:
“In light of its accessibility and its capacity to store and communicate vast
amounts of information, the Internet plays an important role in enhancing the public's
access to news and facilitating the dissemination of information generally.”
The ECtHR in Editorial Board of Pravoye Delo & Anor v Ukraine (Application no.
33014/05) was even more explicit, suggesting that speech disseminated via the internet may
be subjected to a heightened degree of scrutiny as compared to that in the traditional media.
The right to freedom of expression can be curtailed in certain circumstances. The question of
whether this extends to a denial of access to the internet has been considered by the courts
in a variety of situations.
The wholesale blocking of an online facility was considered by the ECtHR in Ahmet
Yildirim v Turkey (app no. 3111/10, 18 December 2012), in circumstances where a Turkish
court had ordered the blocking of Google Sites in its entirety in order to prevent access to
a single website, hosted on that service, whose content was perceived to be insulting to the
founder of the Turkish republic. The applicant's own website, also hosted on Google Sites,
was blocked as a result. While this was not a wholesale ban on internet access, the Court
held that:
"the fact that the effects of the restriction in issue were limited does not
diminish its significance, especially since the Internet has now become one of the
principal means by which individuals exercise their right to freedom of expression
and information, providing as it does essential tools for participation in activities and
discussions concerning political issues and issues of general interest.”
The court found that while the generic blocking of all Google Sites by the Turkish courts
was a violation of the right to freedom of expression, it may be permissible to restrict
internet access without falling foul of the Convention, stressing however that “a legal
framework is required, ensuring both tight control over the scope of bans and effective
judicial review to prevent any abuse of power.”
The use of social media in particular has created new issues in respect of what exactly
constitutes 'speech' in the internet age. The brevity required by the use of Twitter, and the
fast-paced nature of social media exchanges, has resulted in a dramatic increase in
abbreviations and acronyms: LOL, DM, TBH, FOMO....
• 'Liking' someone else's post. In Bolton v Stoltenberg [2018] NSWSC 1518 , the court held
that "No authority was drawn to my attention which establishes that clicking the “like” button
on a Facebook page constitutes a level of endorsement of the publication to render the
person liable as a publisher. I do not regard “liking” a Facebook post, of itself, as analogous
to conduct of the kind described (as) “drawing the attention of another to defamatory
words..."
• Can an emoticon or emoji be defamatory? In The Lord McAlpine of West Green v Sally
Bercow [2013] EWHC 1342, the Court considered that the words "innocent face" (typed out,
rather than the emoticon itself) should be read as an insincere or ironic response to whether
or not she knew the answer to a question.
In Burrows v Houda [2020] NSWDC 485, a court in New South Wales considered a dispute
between two solicitors, one of whom had been the subject of potential disciplinary action.
When the other solicitor was asked on Twitter about any developments about the
proceedings, he responded with a "zipper-mouth" and a "clock ticking" emoji. The judge
agreed that the emoji, in context, was reasonably capable of conveying that Ms Burrows had
"not merely been the subject of a referral [by the judge], but also a result adverse to her",
and that her "time was up."
PS. In South West Terminal Ltd. v Achter Land & Cattle Ltd., 2023 SKKB 116, a "thumbs
up" emoji was considered to create a binding contract...
Can a blogger attract the increased protection offered by the freedom of expression
afforded to journalists?
The fact that an author ‘self-publishes’ online may not, of itself, be considered to lessen the
value of their output. This stance was adopted by the High Court in Cornec v Morrice [2012]
1 IR 804, in which an application was made for a blogger to reveal his sources for a series of
articles which the defendant claimed violated a non-disparagement agreement that she had
with the plaintiff. Hogan J remarked that while the blogger in question, Mr Garde, was "not a
journalist in the strict sense of the term," he accepted that:
"Mr. Garde’s activities fall squarely within the ‘education of public opinion’
envisaged by Article 40.6.1°. A person who blogs on an internet site can just as
readily constitute an ‘organ of public opinion’ as those which were more familiar in
1937 and which are mentioned (but only as examples) in Article 40.6.1°, namely, the
radio, the press and the cinema. Since Mr. Garde’s activities fall squarely within the
education of public opinion, there is a high constitutional value in ensuring that his
right to voice these views in relation to the actions of religious cults is protected..."
The ECtHR came to a similar conclusion in Magyar Helsinki Bizottsag v Hungary (App.
No. 18030/11, 8 November 2016), when considering the press as a “public watchdog”,
holding that:
“The Court would also note that given the important role played by the
Internet in enhancing the public's access to news and facilitating the dissemination of
information … the function of bloggers and popular users of the social media may be
also assimilated to that of ‘public watchdogs’ in so far as the protection afforded by
Article 10 is concerned.”
The New Zealand High Court considered an identical issue in Slater v Blomfield [2014]
NZHC 2221, and held that a blogger could avail of the benefits of being a ‘journalist’ in certain
circumstances: "I conclude that a blogger who regularly disseminates news to a significant
body of the public can be a journalist … The blog must have a purpose of disseminating
news. Some regular commitment to the publishing of news must exist before a blog is a
news medium."
3. Harmful Speech
Introduction
While everyone has, in general terms, an idea of what constitutes “harm”, legislation
has tended to avoid attempting to define the term, and has instead sought to provide
non-exhaustive lists of examples of harmful content.
Under section 139A of the Broadcasting Act 2009 (inserted by the Online Safety and Media
Regulation Act 2022), harmful content falls under two main categories (the list is non-exhaustive):
1. Content which it is a criminal offence to disseminate under Irish or EU law. This includes:
• child sexual abuse material,
• content containing or comprising incitement to violence or hatred,
• public provocation to commit a terrorist offence.
2. Content which is likely to:
• intimidate, threaten, humiliate or persecute a person,
• encourage or promote eating disorders,
• encourage or promote self-harm or suicide;
so long as such online content either gives rise to a risk to a person’s life, or significant
harm to their physical or mental health, where such harm is reasonably foreseeable.
High profile people have lost their jobs following an outcry on social media about posts they
made. In one such case, apologising for the offending tweet, O'Byrne described it as "an
extremely ill-judged attempt at humour."
4. Hate Speech
Introduction
The Cambridge dictionary defines it as "public speech that expresses hate or encourages
violence towards a person or group based on something such as race, religion, sex, or
sexual orientation". Contrast the Handyside right to offend, shock or disturb – the balance is
hard to find. Broadly: Handyside speech PLUS a protected characteristic = hate speech.
The regulation of hate speech has always involved a delicate balancing act, given the
strong protection offered to speech in general, particularly by EU legal instruments and
institutions. Freedom of expression is specifically protected, see also Handyside v UK.
EU initiatives:
The Digital Services Act will be the first overhaul of the E-Commerce Directive since the
latter was published 21 years ago. Large online platforms are required to operate stricter
monitoring and reporting procedures in order to avail of the exemptions formerly provided by
Articles 12-14 of the Directive. Certain types of material, such as hate speech, will likely
have to be removed almost immediately, and platforms will face large fines for failing to do so.
It is envisaged that the Online Safety and Media Regulation Act, when its provisions are
finalised, will do likewise.
In France, the government did not wait for the publication of the Act, passing its own
legislation in May 2020 which required tech platforms to remove hateful comments within 24
hours of their being flagged by users. Terrorist content and child pornography, furthermore,
had to be removed within one hour of being flagged (no reasonable time to act). Platforms
faced fines of up to €1.25 million for failing to follow the regulations. A few weeks later,
the French Constitutional Council ruled these measures to be an unconstitutional
infringement of the right to freedom of expression.
Hate Speech at the ECtHR
Many states have their own individual take on what constitutes hate speech. Aside from the
protected grounds which form the basis of anti-hatred legislation in Ireland, holocaust denial
is treated with particular contempt by France and Germany. In the latter, it is a criminal
offence.
▪ In Sanchez v France, the ECtHR upheld the domestic courts' decision, holding that the applicant had chosen to
make his Facebook "wall" publicly available, that he should have known his page would
attract controversial commentary, and that a particular piece of material had been left on his
page for 3 months despite him being "aware" that unlawful comments had been posted.
NB. Mr. Sanchez was never actually informed of the particular speech by any complainant.
NNB. Hatred under French law includes inciting "hostility" – lower threshold. Some cases
have given more leeway to politicians in the run up to elections while this case narrows it.
While the Department of Justice and Equality has launched a public consultation programme
with a view to updating the legislation in the area of hate speech, it does not provide a
definition as to what constitutes such speech. The main piece of legislation to govern hate
speech in this jurisdiction remains the 32-year-old Prohibition of Incitement to Hatred Act
1989.
Section 2(1) describes the offence of ‘Actions likely to stir up hatred’, and provides that it is
an offence:
a) to publish or distribute written material,
b) to use words, behave or display written material—
i) in any place other than inside a private residence, or
ii) inside a private residence so that the words, behaviour or material are
heard or seen by persons outside the residence, or
c) to distribute, show or play a recording of visual images or sounds, if the written
material, words, behaviour, visual images or sounds, as the case may be, are threatening,
abusive or insulting and are intended or, having regard to all the circumstances, are likely to
stir up hatred. Objective test – no need for intent.
There is no specific reference to online hate speech in the Act, which is unsurprising given
its vintage, but it is sufficiently technologically-neutral that it could be applied to online
speech.
The term “publish” is defined as meaning “to publish to the public or a section of the public.”
This would appear to rule out private communications with individuals via direct messaging,
a fact which would explain why the Act was not used in the February 2021 case in Tralee
District Court involving racial slurs that were sent to well-known English footballer Ian Wright.
(Discussed in Harassment lecture)
There is one documented case in this jurisdiction in which a person was tried for hate
speech under the 1989 Act. It occurred in September 2011 in Killarney District Court.
A local man, Patrick Kissane, admitted to setting up a Facebook Group page entitled
‘Promote The Use of Knacker Babies as Bait’. The Group suggested that traveller children
could be used at feeding times in a zoo. The page had the lowest possible privacy settings,
so the content was viewable by any internet user. 644 people joined before it was taken
down by Facebook.
Mr Kissane was prosecuted under the 1989 Act, and while the court considered the man's
behaviour to be ‘obnoxious, revolting and disgusting’, it held that there was a reasonable
doubt as to whether the defendant intended to incite hatred towards members of the
travelling community, that no evidence was put forward of such hatred having been incited,
and that his actions should be considered to be a ‘once-off’ event.
Two aspects of the decision are open to criticism:
1. The court's reliance on Mr Kissane's lack of intent, given that the relevant provision of
the 1989 Act allows for the offence to be committed where stirring up hatred is intended or
likely, meaning that intent is not a prerequisite for the offence to be committed.
2. The fact that no hatred seems to have been provoked by the specific Facebook page
would not seem to be grounds for finding that no offence had been committed – the likelihood
of hatred having been stirred up is sufficient, and it is difficult to understand how a
Facebook page named in such terms would be unlikely to stir up hatred.
The Criminal Justice (Incitement to Violence or Hatred and Hate Offences) Bill 2022 is
currently before An Seanad. It proposes to update the 1989 Act by simplifying its provisions
and enhancing its sentencing options. It retains the requirement that the hatred be aimed at
one of the "protected characteristics."
Potential liability of corporate bodies (eg. social media platforms): (difficult to establish –
it exists as a threat, but in reality the only ones prosecuted will be the authors)
The Online Safety and Media Regulation Act establishes the Media Commission, which
will take over from the previous Broadcasting Authority of Ireland and, as well as inheriting
the BAI's existing role, will also be tasked with dealing with harmful online content, including
regulation of how the technology giants deal with such material.
5. Jurisdiction
Introduction
• The general rule, under Article 4 of the Brussels I Regulation Recast, is that persons
domiciled in a Member State must be sued in the courts of that Member State. This applies
regardless of where the plaintiff is domiciled, so long as the defendant is domiciled in an
EU Member State.
• Article 6(1) provides that if the defendant is not domiciled in an EU Member State,
then the jurisdiction of the courts of each Member State will be determined by the law of
that Member State.
Special jurisdiction:
The Rules of special jurisdiction are provided for by Articles 7, 8 and 9 of the Brussels 1
Regulation Recast, which are available in order that jurisdiction may, in certain cases, be
determined by a close connection between the court and the action, and to facilitate the
sound administration of justice.
In respect of tort under Article 7(2), the CJEU has held that the concept of where the
“harmful event occurred” covers both the place where the event giving rise to the damage
occurred, and the place where the damage itself occurred or may occur.
If a party chooses to bring proceedings in the jurisdiction where damage has been caused,
they may only recover for the damage which occurred within that jurisdiction.
In Shevill v Presse Alliance (Case C-68/93), it was held that the victim of a libel by a
newspaper article distributed in several EU States may bring an action for damages against
the publisher either:
a) before the courts of the Contracting State of the place where the publisher of the
defamatory publication is established, which have jurisdiction to award damages for all the
harm caused by the defamation, or
b) before the courts of each Contracting State in which the publication was
distributed and where the victim claims to have suffered injury to his reputation, which have
jurisdiction to rule solely in respect of the harm caused in the State of the court seised.
Online position: Joined Cases C-509/09 and C-161/10, eDate Advertising v X and Martinez & Anor v MGN (2011)
The court described three options for a plaintiff in respect of where to bring proceedings:
a) before the courts of the Member State in which the publisher of that content is
established, or
b) before the courts of the Member State in which the centre of his interests is based;
c) in respect of the damage caused in the territory of the Member State of the court seised,
before the courts of each Member State in the territory of which content placed online is or
has been accessible (it is sufficient that the content was ABLE to be read; it need not
actually have been accessed).
The right of Irish courts to accept jurisdiction for publications made via the internet has been
the subject of controversy, as a series of decisions have raised questions about the correct
interpretation of eDate & Martinez. (In Ireland, for a claim of defamation you must prove
publication, ie that the material has been accessed by at least one person other than the
plaintiff – a contradiction with eDate.)
Coleman v MGN [2012] 2 ILRM 81:
• The publication related primarily to the Daily Mirror website, though evidence that this
website existed in 2003 was never shown in court. In allowing the appeal, Denham CJ held
that there was no evidence of publication in this jurisdiction (the passage is quoted below).
In CSI Manufacturing v Dun & Bradstreet, the plaintiff company was based in Ballymount,
Dublin, and sought a declaration under s. 28 of the 2009 Act that a credit rating report
published online by the defendant in the UK was defamatory. The report was available only to
subscribers of the Dun & Bradstreet service, which required payment to access the information
behind a paywall.
The court refused jurisdiction, on the basis that "Although the information in the present case
was technically “accessible” in Ireland due to the fact that the respondent company has Irish
clients, it must still be proven that it has been accessed in this jurisdiction to show
publication within s.28 of the Act of 2009. This has not been made out by the applicant."
The court further held that "Looking at the European jurisprudence as outlined in Martinez
and Shevill the court will only proceed to apply the centre of interest test after publication is
made out. The Supreme Court examining the same jurisprudence in Coleman could not infer
publication from a subscription site where the information was not readily accessible in this
jurisdiction. Furthermore no evidence of publication in Ireland has been made out."
The jurisdiction rule as set down in eDate, as well as the decisions in Coleman and CSI
Manufacturing, were recently examined in Robbins v Buzzfeed.
Evidence was adduced that 13,000 people had accessed the story in this jurisdiction, so the
court had no difficulty in holding that there had been publication in Ireland.
The Court correctly stated that such evidence was not required under eDate, as the mere
"accessibility" of the information online would suffice. In doing so, it commented that there
was a difference between what is required to establish jurisdiction, and what is required to
actually ground a case for defamation:
"It is not in dispute that an essential element of the tort of defamation is publication and there
is no dispute that, in order to succeed in its claim at trial, the plaintiff must prove publication
as an essential element of the tort. That is not, however, to say that, with regard to the rules
derived from Brussels Recast, in particular Article 7 (2) as interpreted by the CJEU in
Martinez, a plaintiff must also prove the fact of publication or access (as opposed to or,
rather, in addition to, accessibility of the online material) in the context of asserting
jurisdiction pursuant to Article 7 (2)."
In relation to CSI, however, Heslin J also commented that "the decision in CSI
Manufacturing is not, in my view, authority for the proposition that, whereas Martinez gives
jurisdiction where material is accessible, one also has to establish that the material was in
fact accessed.
Rather, Kearns P. noted the correct test ... and dealing with a subscriber–only site where the
only evidence of access was from one subscriber in Northern Ireland, Kearns P. found on
the facts of the case before him that the material was not accessible, not having been
accessed."
3) Online defamation
• Meaning on the internet
• The nature and extent of publication
4) The remedies
• The traditional principles
• Assessing damages for online defamation
Some issues:
▪ Defamation is "actionable per se" - ie you do NOT need to adduce proof of damage. Once
you establish that a defamatory statement has been published, it is up to the defendant to
raise one of the available defences to escape liability.
▪ What is a "reasonable member of society"? The court will consider what is the 'ordinary
reasonable reader' of the statement, bearing in mind the medium through which it is
published. In Reynolds v Malocco [1999] ILRM 289, the court held that the expression "gay
bachelor" did not connote a person who was ‘lively, cheerful, vivacious, light-hearted, fond of
pleasure and Gaiety’. Instead, it clearly suggested that the plaintiff was homosexual - "one
would have to be a resident of the moon not to be aware of this."
▪ The fact that the 2009 Act refers to publication "by any means" means that
publication via the internet is covered by the Act.
The defences:
Section 27 provides the defence of innocent publication:
1) It shall be a defence for a person to prove that—
a) he or she was not the author, editor or publisher of the statement to which the
action relates,
b) he or she took reasonable care in relation to its publication, and
c) he or she did not know, and had no reason to believe, that what he or she
did caused or contributed to the publication of a statement that would give rise to a
cause of action in defamation.
2) A person shall not, for the purposes of this section, be considered to be the author,
editor or publisher of a statement if—
Publication online:
Even though the internet technically has a world-wide reach consisting of billions of users,
some evidence of publication will need to be established.
In Coleman v MGN [2012] 2 ILRM 81, the Supreme Court held that:
"There is a need for evidence of publication to establish the tort of defamation. There
is no evidence before the Court that the Daily Mirror was published online in 2003. There is
no evidence that the daily edition of the Daily Mirror was on the world wide web in
2003. Thirdly, there is no evidence of any hits on any such site in this jurisdiction. These are
fatal flaws in the plaintiff's case."
The placing of a statement on the internet will not, of itself, be sufficient to ground a case
in defamation.
In Dow Jones v Gutnick [2002] HCA 56, the High Court of Australia held that:
“It is only when the material is in comprehensible form that the damage to
reputation is done and it is damage to reputation which is the principal focus of
defamation, not any quality of the defendant's conduct. In the case of material on the
World Wide Web, it is not available in comprehensible form until downloaded on to
the computer of a person who has used a web browser to pull the material from the
web server. It is where that person downloads the material that the damage to
reputation may be done.” – no longer good law; one must only show that the material
has been viewed.
In Ryanair v Fleming [2016] IECA 265, the Court of Appeal held that to establish publication
"it would be for the claimant to prove that the material in question was accessed or
downloaded." (Is 'downloading' proof of publication?) – the accessing of the information
completes publication.
• Google Analytics figures – little evidence of their use in the courts so far.
• Number of followers/ friends that the defendant has on Twitter/ Facebook – inference will
be drawn.
Who other than the author can be held liable for publishing the material?
See also Melike v Turkey (ECtHR, 35686/19), in which the ECtHR held that "the
applicant was not the individual who had created and published the content on the social
network in question; her action had been limited to clicking on the “Like” button below that
content. Adding a “Like” to online content could not be considered to carry the same weight
as sharing the content on social networks, in that a “Like” merely expresses sympathy for the
content published, and not an active desire to disseminate it. Further, the authorities had not
alleged that the content in question had reached a very large audience on the social media
concerned."
• Can a company operating a Facebook page be liable for users' comments? – Yes, for
commercial/business accounts.
Yes, according to the High Court of Australia in its recent decision in Fairfax & Ors v
Voller [2021] HCA 27.
• If Voller is good law, can comments left on someone's personal Facebook profile, or in
reply to their tweets, attract liability for the operator of that account?
• This involves a consideration of whether the material was "republished" by the original
author, or "repeated" by the commenter. Link between your post and the defamer’s
comment.
• See also the ECtHR in Sanchez v France – extends liability to individuals whose use is
commercial/political.
Meaning:
The general rule about how to interpret the meaning of a statement was approved by the
Court of Appeal in Gilchrist v Sunday Newspapers [2017] IECA 191. This includes the
factors that:
• The hypothetical reasonable reader is not naïve but he is not unduly suspicious. He
can read between the lines...
• The article must be read as a whole, and any “bane and antidote” taken together –
cannot sue just on the basis of a headline for example as it may be ironic or clickbait etc.
• The hypothetical reader is taken to be representative of those who would read the
publication in question.
Should statements made via the internet be treated differently by the courts?
When deciding on the natural and ordinary meaning of a statement, the nature of
the audience, and the medium through which it is communicated will be factors to consider.
Monroe v Hopkins [2017] EWHC 433 (considered later)
"It is very important when assessing the medium of a Tweet not to be over-
analytical … People tend to scroll through messages relatively quickly. Largely, the
meaning that an ordinary reasonable reader will receive from a Tweet is likely to be
more impressionistic than … from a newspaper article."
Defendant posted a negative review stating "I spent nearly $2000 for [lawyer] to lose a case
for me that they seemed they didn't put any effort into. Anywhere else would be moore
helpful. worstest lawyer. would not recommend..." The court held that it would not be taken
seriously by ordinary readers, due partly to the poor grammar.
3. Defamation Remedies
Remedies for victims of online defamation:
1. Damages:
General principles in respect of how damages for defamation will be assessed were recently
explained by the High Court of Northern Ireland in Foster v Jessen [2021] NIQB 56.
(i) To act as a consolation to the plaintiff for the distress the plaintiff suffers
from the publication of the statement;
(ii) To compensate for the damage to the plaintiff's reputation; and
(iii) To vindicate the plaintiff's good name.
Vindication is an aspect of the award, so that if the allegations should re-emerge … the
plaintiff "must be able to point to a sum awarded by a jury sufficient to convince a bystander
of the baselessness of the charge."
Factors that will be taken into account in respect of the quantum of damages include:
• The effect on the plaintiff, to include any aggravating behaviour by the defendant
• Actual financial loss or expenditure by the plaintiff (special damages)
• Mitigating factors such as a quick take-down of the publication, an apology and retraction
by the defendant.
Examples of awards:
The Court may consider the extent of the initial publication, and the difficulty in controlling
its further dissemination. The court in Cairns v Modi [2012] EWHC 756 awarded the plaintiff
£90,000 for a
tweet seen only by 65 people, approving the dicta of Lord Atkin in Ley v Hamilton that it was
"impossible to track the scandal, to know what quarters the poison may
reach..." – i.e was it discussed elsewhere? Did the 65 people tell others?
AB v Facebook [2013] NIQB 14, plaintiff awarded £35,000 as libel on Facebook page was
equated with "the main page of a leading newspaper or a popular television programme".
Monroe v Hopkins [2017] EWHC 433, the plaintiff was awarded £24,000 for two tweets
which suggested she supported the vandalising of war memorials.
Monir v Wood [2018] EWHC 3525, tweet which labelled the plaintiff a sex offender attracted
£40,000, though the court admitted it would have been £250,000 had it appeared in a
national newspaper.
Foster v Jessen [2021] NIQB 56, the High Court of Northern Ireland awarded £125,000 for
a seriously defamatory post about the First Minister. (considered later)
In this jurisdiction, there have been two substantial awards made by Circuit Courts for cases
involving defamatory statements published on Facebook.
▪ In June 2016, a county Monaghan man was ordered to pay €75,000 in damages to the
Plaintiff after being found guilty of having defamed him on Facebook by suggesting that he
had been responsible for the National Regional Game Council “going broke.”
▪ A substantial award was also made in November 2017 in Roscommon Circuit Court, where
the former secretary of a darts society was found to have been defamed by comments,
posted on the “Darts in Ireland” Facebook page, which suggested that he was responsible for the
disappearance of money from the Roscommon County Darts society in the 1980s and
1990s. The Court found that the Plaintiff had been the victim of “a particularly nasty
defamation” with “pretty devastating effects for him” and awarded him €60,000 in damages.
2. Injunctions:
The most pressing requirement is usually to have the material removed or blocked. The
first stage would be to seek the voluntary taking down of the material by the platform.
The test for granting an injunction for defamation has usually been a strict one, requiring proof
that the defendant "had no defence that was reasonably likely to succeed". Codified by s. 33
of the 2009 Act, which allows for the prohibition of publication of defamatory material (even
before it is published but courts are reluctant to do so unless it is crystal clear that it is
defamatory and the D cannot win).
The High Court, or where a defamation action has been brought, the court in which it was
brought, may, upon the application of the plaintiff, make an order prohibiting the publication
or further publication of the statement in respect of which the application was made if in its
opinion—
a) the statement is defamatory, and
b) the defendant has no defence to the action that is reasonably likely to succeed.
This section was recently considered by the High Court in LIDL v Irish Farmers
Association [2021] IEHC 381.
• In Tansey v Gill [2012] 1 IR 380, Peart J in the High Court suggested a less strenuous test
when someone has been defamed on the internet.
• This appears to have been rejected by Barrett J recently in Philpott v Irish Examiner
[2018] 3 IR 565, who stressed the material must be defamatory (2009 position).
• In Muwema v Facebook [2016] IEHC 519, Binchy J held that a plaintiff would almost
never be able to obtain injunctive relief against a social media platform, as the latter would
always have the defence of innocent publication available to them. (Being the
author/editor/publisher is only one of the three limbs of the test; the defendant must also
establish that reasonable care was taken in relation to the publication, and that they did not
know, and had no reason to believe, that what they did contributed to a defamatory publication.)
▪ The requirement for proving online publication should be clarified, as should the definitions
of, the potential liability of, and the defences open to, “authors”, “editors” and “publishers”.
This is clearly aimed at rectifying the much-complained-of issue with the s.27 defence
of Innocent Publication in the 2009 Act, which provides a defence without defining any of
those terms, and instead relying on examples of what they are not (publisher = traditional
form).
▪ The Report recommends that the s.27 defence should be explicitly extended to “operators
of websites”. It points out that “such a defence already exists in England and Wales”, a clear
reference to the s.5 defence under the UK Defamation Act 2013. There is, unfortunately, no
reference in the Report to the requirement specified in the UK Act and Australian Bill, and it
thus appears to miss the point that at the heart of this Complaints Mechanism is a
requirement for anonymous users to be ultimately identifiable. No injunction if innocent
publication defence available for social media platforms.
▪ The Report recommends an amendment to the manner in which an applicant can apply to
have defamatory material removed from the internet. It is unclear, however, exactly what the
Report envisages in suggesting a “faster mechanism” to deal with applications to take down
material.
The problem with s.33 is not that it is slow – the problem is that the requirement to show that
the Defendant has no defence that is reasonably likely to succeed, in an application against
an internet intermediary, is a hugely problematic one.
The plaintiff was a food blogger and political activist, whose father was in the military, and
a frequent user of Twitter to voice her opposition to Conservative party politics. The
defendant was the journalist Katie Hopkins.
The day after the 2015 General Election in the UK, there was a protest in central London at
which a war memorial was vandalised. A political commentator, @PennyRed, voiced her
support for those who defaced the monument, which led Katie Hopkins to aim critical tweets
in her direction. Hopkins then turned her attention to Jack Monroe with the following tweet:
“@MsJackMonroe scrawled on any memorials recently? Vandalised the memory of those
who fought for your freedom. Grandma got any more medals?”
The court performed an extensive analysis of the way that Twitter operates, the extent of
publication through the use of Twitter analytics, and the issue of what the natural and
ordinary meaning of the tweets should be, and noted that:
• This impressionistic approach must take account of the whole tweet and the context in
which the ordinary reasonable reader would read that tweet. That context includes (a)
matters of ordinary general knowledge; and (b) matters that were put before that reader via
Twitter.
Notwithstanding this, the court rejected the submission that normal principles should be set
aside when considering the "wild west" of social media, and that the statement should be
taken less seriously simply because it was made via Twitter. It concluded that the ordinary
reader of Twitter would hold the tweets to be defamatory.
Application for injunction to have material taken down from Facebook: Muwema v
Facebook [2016] IEHC 519
The plaintiff was a Ugandan lawyer who claimed to have been defamed by comments
posted on a Facebook page by a user identified only as 'TVO'. He sought a Norwich
Pharmacal order to identify the user of that account (which was granted at first instance)
and injunctive relief compelling Facebook to take down the posts, pursuant to s.33 of the
2009 Act.
The Court considered s.33 (injunction app.) in conjunction with s.27 (the defence of innocent
publication) to decide under what circumstances Facebook may be obliged to remove the
material. Having concluded that Mr. Muwema needed to establish that Facebook ("the
defendant") had no defence that was likely to succeed, the Court decided that, pursuant to
s.27, Facebook had an absolute defence as it was not the "publisher" of the material. (There
should be a separate test for intermediaries and publishers).
He furthermore stated, as a general proposition, that a plaintiff would never be able to get
relief against an intermediary such as Facebook, as the latter would always succeed
under the defence offered by s.27. He based this finding on the following:
"On the face of it the defendant has available to it a statutory defence to the
proceedings issued against it. It was submitted on behalf of the plaintiff that the
defendant cannot avail of the defence of innocent publication because it was made
aware by the plaintiff of the defamatory material and declined to take it down from its
platform. But the criteria for eligibility for the defence are not drawn in that way..." –
not correct, that is exactly how it is drawn.
This decision appears to be incorrect in law. The Court appears to have been satisfied that
subsection (a) of section 27, of itself, is enough to provide for the defence of innocent
publication. But the Act says that subsections (b) and (c) must also be satisfied, and in this
case Facebook would clearly appear to have been on notice that the material they were
hosting "would give rise to a cause of action in defamation."
Meaning of words on Facebook: Stocker v Stocker [2019] UKSC 17
“He tried to strangle me.” What would those words convey to the ordinary
reasonable reader of a Facebook post?
The parties were a formerly married couple who had gone through an acrimonious divorce.
Mr. Stocker brought defamation proceedings against his ex-wife in respect of comments
made by the latter on the Facebook page of his new girlfriend, Ms. Bligh, which alleged that
he had “tried to strangle” her. In the High Court and Court of Appeal, the court held that this
meant he had tried to kill her, which was considered defamatory of him.
When Mrs. Stocker appealed to the Supreme Court, the court criticised the over-literal
interpretation adopted by the lower courts:
The Court considered that the trial judge performed an overly analytical approach to arriving
at the meaning of the statement, holding that “he failed to conduct a realistic exploration of
how the ordinary reader of the post would have understood it. Readers of Facebook posts
do not subject them to close analysis."
The Court also considered the issue of publication, with the defendant claiming that it was
the operator of the Facebook profile (her ex-husband's new partner) who published the
statement, or alternatively re-published it. The Court of Appeal disagreed:
“I do not accept that the publications with which this case was concerned were
republications … the posting of the Comments on Ms Bligh's Facebook Wall was in
reality no different in substance or in principle to the putting up of a notice on a
conventional notice board, accessible to third parties."
Damages for a defamatory tweet: Foster v Jessen [2021] NIQB 56
The plaintiff was the First Minister for Northern Ireland in or around Christmas 2019. During
the week or so leading up to Christmas 2019, Mrs Foster was engaged in intense
negotiations with a view to the re-establishment of the Northern Ireland Executive.
Around this time, a number of anonymous tweets were posted alleging that Mrs Foster had
been discovered having an affair with one of her Close Protection Officers and that her
marriage had broken down. On the evening of 23 December, the defendant Christian
Jessen, doctor and presenter of Ch4 'Embarrassing Bodies' posted the following tweet:
“Rumours are bouncing around that the DUPs Arlene Foster has been busted
having an affair. Isn’t she the ‘sanctity of marriage’ preaching woman? It always comes to
bite them in the arse in the end. Rather satisfying for us gay boys who she made feel even
shittier….”
The defendant had 311,000 followers on Twitter, and the tweet was retweeted 517 times and liked
approximately 3,500 times. Solicitors for the plaintiff posted a message on his Twitter
account the following day, demanding that the tweet be removed. Ultimately, it was not taken
down until 7 January 2020.
In considering the proportionality of the award, the Court compared it to the NI equivalent of
the Book of Quantum, which estimates the loss of an eye to be between £80-140,000, and
amputation between £125-250,000. The Court settled on an award of £125,000 plus costs.
Damages for defamation via Instagram: Aslani v Sobierajska [2021] EWHC 2127 (QB)
The plaintiff was a plastic surgeon based in Marbella with a particular specialisation in a
procedure known as a "BBL". He was a respected member of the American Institute of
Plastic Surgeons, and had performed the surgery c. 14,000 times.
He had performed the BBL surgery on several occasions on the defendant, a social media
influencer, who then posted defamatory comments about the surgeon on Instagram. At the
time of the postings, she had approx. 53,000 followers.
The claim related to four postings on Instagram, the first being a video which claimed:
"He botched my body, he left me disfigured ... I have serious issues with my
head, with my image, with my self-image because of him and he is bringing my name
up to pages trying to pinpoint them on to me. He keeps talking about me to his
patients, he brings up my name to his patients … this is not normal ... I have never
ever seen a professional surgeon act like this."
The Court noted that social media platforms, especially Instagram with its focus on images,
form an important part of the Claimant's advertising of his services. In line with his customer
base, a significant proportion of his Instagram followers will be from the UK. The Court also
noted that:
▪ approximately 15 people notified the Claimant of the publications;
▪ that the Defendant has been in contact with prospective clients of the Claimant;
▪ that 66 people interacted with the review on RealSelf, a website used by those
contemplating surgery and specific surgeons.
▪ the Claimant named six people who did not proceed with surgery, with a
contributing factor being the Defendant's campaign against the Claimant;
▪ the nature of the Defendant's followers is also very relevant: the Defendant had
developed a reputation as an advocate of BBLs, and accordingly her followers were likely to
have an active interest in this type of surgery.