The Legal Challenges of the Fourth Industrial Revolution. Edited by Dário Moura Vicente, Sofia de Vasconcelos Casimiro and Chen Chen
The Legal Challenges of the Fourth Industrial Revolution
The European Union's Digital Strategy
Law, Governance and Technology Series
Volume 57
Series Editors
Pompeu Casanovas, UAB, Institute of Law and Technology UAB, Barcelona, Spain
Giovanni Sartor, University of Bologna and European University Institute of
Florence, Florence, Italy
The Law, Governance and Technology Series is intended to attract manuscripts
arising from an interdisciplinary approach to law, artificial intelligence and
information technologies. The idea is to bridge the gap between research in IT law
and IT applications for lawyers by developing a unifying techno-legal perspective.
The series welcomes proposals with a fairly specific focus on problems or
projects that will lead to innovative research charting the course for new
interdisciplinary developments in law, legal theory, and law and society research,
as well as in computer technologies, artificial intelligence and cognitive sciences.
In broad strokes, manuscripts for this series may be located mainly in the fields of
Internet law (data protection, intellectual property, Internet rights, etc.),
computational models of legal content and legal reasoning, legal information
retrieval, electronic data discovery, collaborative tools (e.g. online dispute
resolution platforms), metadata and XML technologies (for Semantic Web services),
technologies in courtrooms and judicial offices (e-court), technologies for
governments and administrations (e-government), legal multimedia, and legal
electronic institutions (multi-agent systems and artificial societies).
Dário Moura Vicente •
Sofia de Vasconcelos Casimiro • Chen Chen
Editors
Chen Chen
Centre for Research in Private Law – CIDP
University of Lisbon
Lisbon, Portugal
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by
similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book has its genesis in the conference on The Legal Challenges of the Fourth
Industrial Revolution: The European Union’s Digital Strategy, which took place at
the Law School of the University of Lisbon on 5 and 6 May 2022.
The conference was the third of a series of events included in the Lisbon
University Research Centre for Private Law research project on Private Law in the
Digital Era and was ultimately intended, like its predecessors, to give rise to a
collective work containing the studies presented at that conference.
The editors would like to thank all guest speakers, of six different nationalities,
for having accepted their invitation, which allowed the conference to have a truly
international character, as intended from the outset.
A special word of thanks is also due to the Research Centre’s staff for their help
with the logistics of the event and to Springer for having accepted the publication of
the present volume.
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Dário Moura Vicente, Sofia de Vasconcelos Casimiro, and Chen Chen
Part I Content
The Legal Challenges of the Fourth Industrial Revolution: Copyright
in the Digital Single Market: Between New Uses of Protected Content
and Fairness Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Silvia Scalzini
Due Diligence Obligations and Liability of Intermediary Services:
The Proposal for the EU Digital Services Act . . . . . . . . . . . . . . . . . . . . . 29
Pedro de Miguel Asensio
Legal Challenges Posed by the Modern-Day Transportation Services.
A Brief Overview from the Private Law Perspective . . . . . . . . . . . . . . . . 47
António B. Rodrigues
The Regulation of Content Moderation . . . . . . . . . . . . . . . . . . . . . . . . . 63
Federico Galli, Andrea Loreggia, and Giovanni Sartor
Part II Economy
The European Way to Regulate Big Tech: The EU’s Digital Markets
Act . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Alexandre de Streel and Peter Alexiadis
eCommerce and EU Consumers’ Rights . . . . . . . . . . . . . . . . . . . . . . 125
Elsa Dias Oliveira
Online Platforms and Taxes in the EU: A Compatible Match? . . . . . . . . 143
Paula Rosado Pereira
Part IV People
Data Protection Litigation System Under the GDPR . . . . . . . . . . . . . . . 235
António Barreto Menezes Cordeiro
R2D: The Right to Disconnect from Work . . . . . . . . . . . . . . . . . . . . . . . 249
Isabel Vieira Borges
Is There a Need for an EU Catalogue of Fundamental Digital
Rights? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Ana Maria Guerra Martins
Countering Terrorism Propaganda Online Through TERREG
and DSA: A Battlefield or a Breath of Hope for Our Fundamental
Human Rights? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Eugénie Coche
AI and Fundamental Rights: The People, the Conversations, and the
Governance Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Roger Brownsword
Editors and Contributors
Dário Moura Vicente is a Full Professor at the Faculty of Law of the University of
Lisbon, Portugal, where he has taught, over the past 30 years, Private International
Law, Comparative Law, Civil Law, Information Society Law and Intellectual
Property Law. He is also the Chairman of the Portuguese Society for Intellectual
Property Law (Associação Portuguesa de Direito Intelectual) and a member of the
Lisbon Centre for Research in Private Law (CIDP) (Centro de Investigação de
Direito Privado), whose research line on Private Law in the Digital Era he
coordinates.
Chen Chen is a Guest Lecturer at the Faculty of Law of the University of Lisbon,
Portugal, a lawyer at Vieira de Almeida & Associados—Sociedade de Advogados
SP RL (VdA) and a researcher at the Lisbon Centre for Research in Private Law
(CIDP) (Centro de Investigação de Direito Privado).
The book’s analysis starts with copyright in the Digital Single Market. In addition
to focusing on certain aspects of the Directive (EU) 2019/790 on copyright and
related rights in the Digital Single Market, two particular topics are addressed in this
respect: the questioned legitimacy of new uses of protected works in the digital
environment; and a different distribution of the value created along the value chain
related to copyright, in order to achieve a better remuneration of creative content.
The analysis of the main challenges posed in relation to content continues in Part I
of the book with the issues raised by the fight against unlawful online content, as
well as the protection of lawful online content. This topic is covered by Regulation
(EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022
on a Single Market for Digital Services and amending Directive 2000/31/EC (‘Dig-
ital Services Act’), the implications of which for the evolution of European Union
law on the liability of intermediary service providers are explored in detail.
This Act aims to update the rules that define the duties and liabilities of digital
service providers, in particular of online platforms. It applies to social networks,
collaborative economy services, search engines, online advertising, cloud services,
Wi-Fi hotspots, web hosting, messaging services, application shops, services based
on artificial intelligence, content delivery networks and domain name services. It
will replace Articles 12 to 15 of the Directive 2000/31/EC of the European Parlia-
ment and of the Council of 8 June 2000 on certain legal aspects of information
society services, in particular electronic commerce, in the Internal Market (‘e-
commerce Directive’) on the liability of intermediary service providers and the
national regulations implementing it.
The consequences of adopting these rules in a regulation rather than in a directive
are analysed in this part of the book. A comparison is also drawn between the
territorial scope of application of the new European instrument and the current
situation under the e-commerce Directive. A targeting, or direction of activities,
criterion is used to define the spatial scope of application of the Digital Services Act.
This may thus be applied to service providers not established in the European Union.
In this sense, the Digital Services Act can be said to apply extraterritorially, although
this application is mitigated by the requirement of a substantial link of the situation in
question with the European Union for the Regulation to apply. One thus observes a
major change of perspective in comparison with the e-commerce Directive: while the
European Union's main objective with that Directive was to harmonise the
e-commerce regime in the internal market, the new Regulation projects the
European Union’s regime externally, as it is outside its territory that the main
economic operators affected by the Regulation are located. The trend towards the
extraterritorial application of European Union law, which had already manifested
itself in Regulation (EU) 2016/679 of the European Parliament and of the Council, of
27 April 2016, on the protection of natural persons with regard to the processing of
personal data and on the free movement of such data, and repealing Directive 95/46/
EC (General Data Protection Regulation or ‘GDPR’), thus reveals itself once again.
The amendments and additions made to the rules formerly contained in Articles
12 to 15 of the e-commerce Directive are subsequently discussed in the book, as well
as the implications for the evolving landscape of service providers of the
Tax policy and its implementation in the European Union’s digital market are a
difficult topic, but one too important to be ignored when identifying the
challenges of the fourth industrial revolution. This is therefore the next topic
addressed in the book. The taxation of online platforms in the European Union is
hard to achieve, as is explained in the contribution dedicated to this topic. Some of
the most relevant proposals to solve this problem, submitted in the context of
initiatives of the G20, the Organisation for Economic Cooperation and Development
(OECD), and the European Union, are described in the book, with a particular focus
on the Base Erosion and Profit Shifting (‘BEPS’) Project, on the European Union
Directive Proposals regarding significant digital presence and digital service tax, and
also on the BEPS 2.0 Project.
The last topic addressed in Part II of the book concerns the importance of digital
advertising and its regulation. The role of the new pro-competitive regime for digital
markets in shaping the regulatory approach to the phenomenon of digital advertising
is explained, as well as the importance of establishing better communication and
cooperation between regulators and central governments on this matter.
It is however not only concerns about content and the economy that are at stake in
the fourth industrial revolution. Without security, networks cannot function, data
cannot flow, and organisations, States, and people cannot trust technology. Part III
of the book is therefore devoted to security as the third central legal challenge of this
phenomenon.
In 2020, the Commission and the High Representative of the European Union for
Foreign Affairs and Security Policy presented a new European Union Strategy for
Cybersecurity (‘2020 EU Cybersecurity Strategy’), aimed at bolstering Europe's
collective resilience against cyber threats and at helping to ensure that all citizens
and businesses can fully benefit from trustworthy and reliable services and digital
tools. This is obviously of the utmost importance, given the exponential increase in
cybercrime that occurred during the pandemic. As the strategy makes clear, the malicious
targeting of critical infrastructure is a major global risk, and concerns about security
are a major disincentive to using online services. The European Union has only
recently started becoming fully aware of the relevance of cyber threats. Improving
security in the digital environment is therefore essential for people to trust, use, and
benefit from innovation, connectivity, and automation, and for safeguarding funda-
mental rights and freedoms, including the rights to privacy and to the protection of
personal data, and the freedom of expression and information.
This Part of the book thus begins with an analysis of the 2020 EU Cybersecurity
Strategy, the major contributions of which are examined here. The first of those
contributions lies in the Strategy’s general framework, which has evolved from the
narrow field of security to the broader context of digitalisation. Its second major
contribution consists in the introduction of the concept of European technological
sovereignty. A third contribution, from a functional perspective, results from the fact
that cybersecurity is conceived as a horizontal or cross-cutting policy. The Strategy’s
fourth contribution is the improved balance between technical and non-technical
issues, as well as the greater attention given to certain basic technological questions.
A fifth contribution lies in the evolution from an essentially declarative or descriptive
Silvia Scalzini
This chapter is the text of the speech presented at the International Conference “The
Legal Challenges of the Fourth Industrial Revolution: The European Union’s Digital Strategy”,
organised by the CIDP of the University of Lisbon on 5–6 May 2022. The author wishes to thank
the organisers and the participants of the conference, as well as Prof. Luigi Mansani, Prof.
Eleonora Rosati and Dr. Andrea Giulia Monteleone, for the insightful discussions on the topic.
S. Scalzini (✉)
University of Parma, Parma, Italy
e-mail: [email protected]
reforms in order to ensure internal and external consistency of EU copyright and the
related rights system within the digital single market.
Historically, the evolution of copyright law has been closely related to industrial
revolutions and technological change, starting from the invention of the printing press.1
The focal point of the latest industrial revolution2 is the digitization of information.
Indeed, through the scanning of books, libraries have become digital archives;
music is recorded and fixed in a digital format that renders it decomposable and
recompilable; while with 3D printers, three-dimensional physical objects can be
scanned, and transformed into digital representations, with the possibility of rapid
re-materialisation—with identical or modified characteristics. New technologies
also allow interactive creative processes and further opportunities to reproduce and
disseminate works protected by copyright.
To sum up, rapid technological developments continue to transform the way
works and other subject matters are created, produced, distributed and exploited.3 In
consequence, novel practices, players, and business models, as well as new conflicts
of interest, have arisen. This has created new challenges for the interpretation and
application of copyright law, in two particular respects: (i) the questioned legitimacy
of new uses of protected works in the digital environment; and (ii) a different
distribution of the value created along the value chain related to copyright, thereby
creating the need to rethink the functions of copyright as a tool to create markets for
protected works and to direct revenues towards the production side of original
content.
The digitization process has led to an evolution of the market dynamics and of
techniques for the production of goods and services. Such developments have
prompted the EU institutions (in dialogue with Member States) to devise a complex
new industrial policy, aimed also at fostering the efficient circulation of digital
content and services, with a view to promoting economic development and interna-
tional competitiveness within a framework that can guarantee the effective protec-
tion of fundamental rights. Copyright law is also part of this strategy, in order to
render copyright rules fit for the digital age, with a particular focus on the flow of
content protected by copyright in the digital single market.
Due to the evolution of the entire legal framework, it is essential not to look at
copyright law in isolation but to discuss its correlation and interaction with the
other elements of such a complex legal patchwork. Moreover, although EU law is
1. See Olivieri and Scalzini (2018), passim; Geiger (2017), p. 73.
2. See Schwab (2017); Floridi (2017).
3. Jütte (2017); Ghidini (2018); Sganga (2018); De Vasconcelos Casimiro (2016).
clearly driving this process in accordance with its values and principles, national
law (and its enforcement) is also highly relevant, especially in the attempt not to
fragment the market. Even more relevant is the practical application of the rules by
stakeholders, including in technological terms.
Accordingly, after discussing the impact of the fourth industrial revolution on
copyright law, this chapter proceeds to frame the modernisation of copyright law
within the EU Digital Market Strategy, by focusing on selected aspects of the
Directive 2019/790 (CDSMD). This instrument is largely driven by the aim to foster
an efficient flow of content within the digital single market, by also allowing a fair
distribution of the value generated by digital uses. In particular, there are three main
legal answers given by the CDSMD to some pressing challenges of the fourth
industrial revolution: (a) text and data mining exceptions and limitations, provided
for by art. 3 and 4 CDSMD, in order to face the interface of copyright law with free
flow of digital data; (b) a press publishers’ related right for online uses of press
publications, provided for by art. 15 CDSMD, in the attempt to resolve the B2B
“value gap” problem between content producers and platforms; (c) the new Liability
Regime for Online Content-Sharing Services Providers provided for by Art.
17 CDSMD, in order to manage the collision of a triangle of rights and interests
within the online flow of content, namely among rightholders, platforms and
end-users. While discussing the strengths and weaknesses of such solutions, this
chapter concludes with an overview of some further legal challenges, which may
require further legislative or interpretative reforms.
Over the past thirty years, copyright reform in Europe has rested on “two pillars:
harmonization at the EU level and modernization at the EU (but also national)
level”.4
The growing economic and social importance of copyright (and of intellectual
property in general) for the implementation of the internal market and for strength-
ening European competitiveness has facilitated the evolution of a multilevel
European Union regulatory framework in the field of copyright. Since 1991, several
Directives and a number of Regulations have been issued with the aim not only of
eliminating national divergences but also of taking the opportunity to regulate new
forms of creation and use of intellectual works and other subject matters, as well as
new phenomena of transnational circulation of content resulting from the rapid
technological progress.5
4. Rosati (2021), §1.
5. Ramalho (2021), p. 3; Synodinou (2021), p. 39, on the desirability of unification of EU Copyright Law.
At the same time, the harmonisation of copyright and related rights, intended to
contribute to the functioning of the internal market and to stimulate innovation,
creativity, investment and the production of new content, has over the years been
implemented with the goal to “provide for a high level of protection for rightholders,
facilitate the clearance of rights, and create a framework in which the exploitation of
works and other protected subject matter can take place”.6 Moreover, that legal
framework has also contributed “to the Union's objective of respecting and promot-
ing cultural diversity, while at the same time bringing European common cultural
heritage to the fore”.7
The construction of EU copyright law has not, however, followed a linear path.8
Tensions have frequently arisen due to the balancing of interests underlying EU
measures, the necessary compromises in the legislative processes and the margin of
freedom and autonomy of the Member States.
Moreover, the Court of Justice of the European Union has played a crucial role in the
interpretation of EU copyright law,9 (i) by providing responses to legal dilemmas
posed by new technologies (e.g. regarding the scope of the right of communication
to the public, and the interpretation of exceptions and limitations to copyright),
(ii) by interpreting the fragmented acquis communautaire and harmonising de facto
certain basic concepts and notions (e.g. the notion of work or the notion of origi-
nality) and finally (iii) by taking into account the balancing of fundamental rights in
its reasoning.10
Due to the transformations brought by rapid technological developments and the
emergence of new conflicts of interest, there was an urgent need to modernise the
copyright framework, in order to guarantee the good functioning of, and fairness in,
the marketplaces for copyright-protected works and other protected subject matter,
as well as to overcome interpretative uncertainties caused by technological
advancement.
In this context, in 2015 the Commission unveiled its Digital Single Market
(DSM) Strategy,11 whose main priority was “the creation of digital single market
for consumers and businesses” and which included—among others—a number of
initiatives for reforming and further harmonising copyright law.
Following the DSM Strategy, on 14 September 2016 the Proposal for a Directive
on copyright in the Digital Single Market was issued by the European Commission,
6. Such rationales have been indicated by Recital 2 CDSMD.
7. See Recital 2 CDSMD.
8. Bently and Radauer (2014); Pila and Ohly (2013); Geiger (2013).
9. Rosati (2019).
10. See CJEU C-275/06 Productores de Musica de Espana v. Telefonica de Espana SAU; C-70/10 Scarlet Extended SA v. Société belge des auteurs, compositeurs et éditeurs SCRL; C-360/10 Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v. Netlog NV; C-469/17 Funke Medien NRW GmbH v. Bundesrepublik Deutschland; C-516/17 Spiegel Online GmbH v. Volker Beck; C-476/17 Pelham GmbH and Others v. Ralf Hütter and Florian Schneider-Esleben.
11. Commission Communication of 9 December 2015 entitled ‘Towards a modern, more European copyright framework’, DSM Strategy.
The CDSMD contains provisions that may be divided into three main groups.14
The first group of measures is “aimed to adapt exceptions and limitations to the
digital and cross-border environment” (Title II), in order to overcome interpretative
uncertainties as regards the lawfulness of certain (unauthorized) uses, including
cross-border uses, of works and other subject matters in the digital environment.
Indeed, the CDSMD introduces exceptions or limitations (E&L) for (i) uses of text
and data mining technologies (Art. 3 and Art. 4); (ii) the use of works and other subject
matter in digital and cross-border teaching activities (Article 5); and (iii) the
preservation of cultural heritage (Article 6). Unlike previous Directives (and especially
Directive 2001/29/EC), the introduction of such E&L is mandatory for Member
States and, according to Art. 7 CDSMD, E&L are not overridable by contract (except
for Art. 4). Therefore, any contractual provision contrary to the exceptions is
unenforceable (for example, it is not possible to exclude by contract text and data
mining of protected content for research purposes).
The second group of provisions refers to a broad set of measures, which are aimed
overall to “improve licensing practices and ensure wider access to content” (Title
III). To this end, the CDSMD provides for (i) a framework for the use of out-of-
12. Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.
13. Recital 3 CDSMD.
14. For comments on the Directive see Rosati (2021), Quintais (2020), Dusollier (2020).
First of all, the exceptions introduced by Art. 3 and 4 CDSMD are highly relevant as
they clarify the application of copyright rules to the activities of extracting text and
data from the digital versions of protected content, such as texts, images or videos.
Indeed, the notion of “text and data mining” is defined as “any automated
analytical technique aimed at analysing text and data in digital form in order to
generate information which includes but is not limited to patterns, trends and
correlations”.16
15. See Kretschmer et al. (2016).
16. Article 2(2) CDSMD.
Such activities are very common in many scientific and commercial fields17 and
they are useful for the development of big data analysis and artificial intelligence
systems (AI), for which the availability of large amounts of data is crucial.
Creations made by AI (or with the help of AI) also benefit from prior TDM.18
Before the introduction of TDM E&L by the CDSMD, there was considerable
legal uncertainty regarding the lawfulness of such activities without the authorisation
of the rightholders. Such uncertainties concerned the interpretation of the scope of
exclusive rights as regards the computational use of data forming the
digital version of the protected works and subject matters, as well as the application
of the available E&L. Member States’ approaches were also highly fragmented.19
Although individual or aggregated digital data are not (always) protectable
subject matters,20 TDM activities at some stages may interfere with the economic
rights conferred by copyright or sui generis database rights, in particular, “the
reproduction of works or other subject matter, the extraction of contents from a
database or both which occur for example when the data are normalised in the
process of text and data mining”.21
In order to overcome such uncertainties and to create incentives for the
development of innovation and the free flow of data in the digital single market,
the CDSMD introduced two different E&L. First, Article 3 CDSMD provides for a mandatory
“purpose specific exception” to the exclusive right of reproduction and to the right to
prevent extraction from a database for TDM for scientific research carried out by
universities and other research organisations, as well as cultural heritage institutions.
Accordingly, it permits reproductions and extractions of text or data for scientific
research purposes from “lawfully accessible” works, materials or databases. The rule
also allows the storage of copies or other materials used in the TDM activity for
preservation and for enabling the verification of research results. In addition,
rightholders shall be allowed to apply measures in order to maintain the security and
integrity of networks and databases, provided that such measures “should remain
proportionate to the risks involved”22 without undermining “the effective application
of the exception”.23 In order to comply with these rules, best practices between the
beneficiaries of the exception and the rightholders are to be “encouraged” by
Member States. However, the Directive does not specify the nature of such
“practices” or the binding or non-binding nature of their adoption.
17. They are often carried out, for instance, on user-generated content, such as tweets, messages and photos.
18. Mansani (2019), p. 3 ff.
19. Geiger et al. (2018), p. 818.
20. Caspers and Guibault (2016), p. 1; Ducato and Strowel (2019).
21. Recital 8 CDSMD. Rosati (2018), p. 4; Triaille et al. (2014); Montagnani and Aime (2017), p. 380.
22. See Recital 16 CDSMD.
23. See Recital 16 CDSMD.
24. Art. 4(3) CDSMD.
25. See Recital 18 CDSMD.
26. See also Hugenholtz (2019).
27. Rosati (2021), p. 89; Ducato and Strowel (2019).
28. Geiger et al. (2018); Strowel and Ducato (2021), p. 299 ff.; Senftleben (2017); European Copyright Society, General Opinion on the EU Copyright Reform Package, https://europeancopyrightsocietydotorg.files.wordpress.com/2015/12/ecs-opinion-on-eu-copyright-reform-def.pdf.
29. See Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015); Authors Guild v. HathiTrust, 755 F.3d 87 (2d Cir. 2014); Sobel (2017), p. 49.
As a general observation, I have the impression that the new exceptions, despite
the mandatory nature of Art. 3, leave extensive scope for the private autonomy of the
rightholders.30
The main critical issue of these provisions, however, is the requirement of
“lawful access” to the material used as a pre-condition for the application of the
TDM E&Ls. Accordingly, while one may presume the free utilization of data
contained in databases, works or other materials whose access is open or permitted
by licenses (unless there is an express reservation for purposes unrelated to scientific
research), such exceptions do not apply to content and works not made accessible by
the rightholders. This represents the main limitation of the framework from the
perspective of the free flow and reuse of digital data.
While a cautious approach might be justified in an environment that is still in fieri,
there are already many views31 suggesting the reform of TDM E&Ls, especially to
allow effective and widespread use of TDM techniques on large amounts of data and
materials, thereby reducing transaction costs for the reuse of data.
All in all, while TDM activities essentially refer to the technical operations
performed by algorithms to extract and process data, any reproduction or elaboration
of original works resulting in texts, videos, or images can always be assessed ex post
as copyright infringements, thereby triggering copyright enforcement.
In my view, at least for scientific works, the lawful access requirement might also
be mitigated by other bodies of law, external to copyright law, such as the new
Open Data and reuse of public sector information Directive32 that explicitly requires
Member States to support the availability and the reuse of research data. The
potential interference of these two bodies of law, in order to lower the barriers for
data reuse, should therefore require a more careful attention by scholars and national
institutions.
In conclusion, even if Art. 3 and 4 CDSMD might have improved legal certainty
about new computational uses of protected works and other subject matters, there are
still further steps required in order to build a consistent framework for the free flow
of data in the digital single market.
30
For an analysis of this profile see Scalzini (2019). The concrete functioning of the exception,
once implemented in national legal systems, and the related exercise of the opt-out may
create very different scenarios, also due to the broad spectrum of very different rightholders.
31
Some scholars call for “a consistent international baseline that resolves the tensions between
copyright and text and data mining practices”, also in light of transnational uses of data. See in
particular Flynn et al. (2020), p. 393.
32
Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on
open data and the re-use of public sector information (recast).
18 S. Scalzini
Turning to Art. 15 and Art. 17, they are the most tangled and debated provisions
within the entire Directive, as they are aimed at regulating the liability of platforms
for the uses of protected content, with a strong impact on the online flow of content.
As for Art. 15, it has introduced a new press publishers’ right, designed as a
two-year related right, exclusive in nature, which covers the online reproduction and
making available of press publications by online service providers. The intended
goal of the provision is to facilitate the control and the licensing of press content in
the digital environment and to rebalance the bargaining power of press publishers
vis-à-vis digital platforms, supporting at the same time the sustainability of the press
industry. Indeed the discussion around the introduction (and the harmonization) of
such a right at the EU level finds its roots in a widespread commercial crisis of
traditional press publishers and news organizations in the digital environment,
which—in turn—poses a concern for the freedom, the diversity and the quality of
the press. Within this broad context, traditional press operators began to complain
about the free riding of their press publications online by digital content aggregators
and search engine services, whose activity is based on the reuse of such content
(usually) for profit. As I wrote in a recent paper, “this is not only a tale about
copyright and related rights”,33 as this conflict involves competitive dynamics
between different categories of business operators and therefore the discussion
involves also competition law considerations.
Accordingly, many of the concerns that have been raised involve the choice of the tool,
namely a related right.34 Indeed, the introduction of a further layer of exclusive rights
affects competing rights and interests, such as freedom of information and the
freedom to conduct a business, also considering that other legal
systems have explored solutions of a different nature.35
Since the entry into force of the Directive, attention has shifted to the ways
forward to limit the concerns raised by the new right, while at the same time
(possibly) enhancing the fairness and the well-functioning of copyright-focused
markets.
Indeed, beside the concerns about the underlying justification of the new right, there
are several concerns regarding its design and effects that have been (and should
further be) considered within the national implementation processes and the subsequent
application and interpretation of the rules. In my view there are three main issues.
First of all, the definition and the scope of the subject matter covered by the right
are rather blurred. The right covers “press publications”,36 where the only threshold
33
Scalzini (2021).
34
Rosati (2016), p. 569; Kretschmer et al. (2016); Geiger et al. (2017), p. 202; Hilty and Moscon
(2017); Ramalho (2017), p. 71; van Eechoud (2017).
35
See Colangelo (2021), p. 133.
36
Defined by Article 2(4) CDSMD, with some exclusions.
set forth by the Directive is the exclusion from the scope of the right of “individual
words or very short extracts of a press publication” (Article 15(1)). However, the
definition of “very short extracts of a press publication” is not provided by the
Directive, which37 merely clarifies that such a notion should be interpreted
sufficiently strictly so as “not to affect the effectiveness of the rights”, thus paving the
way for a risk of overprotection, legal uncertainty and national fragmentation.
Then, a further issue concerns the risk of a double layering of rights, i.e. the risk
of overlap with other forms of protection of works and other subject matter embodied
in a press publication, potentially giving rise to uncertainty in negotiations and
clearances as well as to possible abuses.
The final point concerns the workability of this solution in incentivising
licensing mechanisms, with special regard to the scenarios that lie ahead. To
give an example, in France, the refusal by a platform with market power to negotiate
remuneration after the national implementation of Art. 15 triggered the
intervention of the French Competition Authority,38 while, on the other hand, the
general application of the right may disproportionately harm smaller players, by
increasing transaction costs and raising the market entry barriers. Moreover, in order
to fulfil the goal of the right, Member States ended up adopting their own solutions,
thus jeopardising the harmonisation process. This is the case in Italy, which has
implemented the rule based on a fair compensation scheme and a sort of “assisted
negotiation” in the event of a failure to reach an agreement among online service
providers and press publishers, where an independent Authority is to provide the
criteria to ensure the fairness of the compensation39 (which is a very difficult task,
also considering the differences in terms of business models, dimensions and
practices concerned).
Ultimately, the future directions and the effects of the introduction of this right at
the EU level remain uncertain and could bring new interpretative challenges before
national courts as well as, probably, also new requests for preliminary rulings before
the Court of Justice of the European Union.
37
See recital 58 CDSMD.
38
See French Competition Authority, Autorité de la concurrence, decision of 9 April 2020, n. 20-
MC-01 ‘relative à des demandes de mesures conservatoires présentées par le Syndicat des éditeurs
de la presse magazine, l'Alliance de la presse d'information générale e.a. et l’Agence France-
Presse’, https://www.autoritedelaconcurrence.fr/fr/decision/relative-des-demandes-de-mesures-con
servatoires-presentees-par-le-syndicat-des-editeurs-de (last accessed 25 July 2020).
39
See the new Art. 43 bis of the Italian copyright law (L. 633/1941).
Art. 17 CDSMD, too, is driven by a fairness rationale: it was introduced to foster the
development of fair licensing markets between rightholders and online content-
sharing service providers, given that such services, such as social networks
and other platforms, have gradually become a main source of access to a large
amount of copyright-protected content online for end-users.40
Accordingly, Article 17 of the CDSMD has created a new liability regime for
OCSSPs,41 by clarifying that such services perform an act of communication to the
public or an act of making available to the public (and are therefore directly accountable)
when they give the public access to copyright-protected works or other protected
subject matter uploaded by their users. In such cases, indeed, the provider cannot benefit
from the safe harbour established by Article 14(1) of Directive 2000/31/EC
(E-commerce Directive).42
OCSSPs must therefore obtain prior authorisation from the rightholders, for
example by concluding a licence agreement, in order to communicate those works
to the public or make them available to the public (Art. 17(2)). The Directive sets the
obligation for the online content-sharing service providers to get the authorization of
the rightholders to perform their own activities (when they give the public access to
copyright-protected subject matter uploaded by their users) and also extends the scope
of this authorization so as to cover the activities of users (when they are not acting on
a commercial basis or where their activity does not generate significant revenues).
Rightholders in any case are not obliged to grant the authorisation and they can also
decide the scope of such license, according to the principle of freedom of contract.
The absence of the authorization triggers the application of the specific liability
regime under Article 17, by holding the provider directly liable for the conduct of its
users unless it demonstrates three cumulative conditions, namely: (a) “to have made
best efforts to obtain an authorization”, and (b) to have “made, in accordance with
high industry standards of professional diligence, best efforts to ensure the
unavailability of specific works and other subject matter for which the rightholders
have provided the service providers with the relevant and necessary information”;
40
See International Federation of the Phonographic Industry, Global Music Report 2021—Annual
State of the Industry (2021), p. 40.
41
I.e., according to Art. 2(6) CDSMD, providers “of an information society service of which the
main or one of the main purposes is to store and give the public access to a large amount of
copyright-protected works or other protected subject matter uploaded by its users, which it
organises and promotes for profit-making purposes”.
42
Providers which do not meet the criteria set out in that provision remain subject to the general
liability regime, as interpreted by the CJEU. See CJEU, Judgment of 22 June 2021, YouTube and
Cyando, C-682/18 and C-683/18, EU:C:2021:503.
and in any event (c) to have “acted expeditiously, upon receiving a sufficiently
substantiated notice from the rightholders, to disable access to, or to remove from
their websites, the notified works or other subject matter, and made best efforts to
prevent their future uploads in accordance with point (b)”.43
The liability regime is further specified and supplemented in Article 17(5) to
(10) CDSMD. In particular, Art. 17(7) of the Directive states that “the cooperation
between online content-sharing service providers and rightholders is not to result in
the prevention of the availability of works or other protected subject matter uploaded
by users, which do not infringe copyright and related rights, including where such
works or other protected subject matter are covered by an exception or limitation”.44
Art. 17 is a complex provision, and it is clear that it is the outcome of several
compromises that make its implementation difficult.45 The core challenges of Article
17 may be identified from a twofold perspective: (i) on the one hand, the
implementation and the interpretation of this provision both need to be
consistent with the goal of European harmonisation and the objective of fostering the
functioning of a (fairer) digital single market; (ii) on the other hand, the major
challenge focuses on how to implement and to apply the special regime set forth
by Art. 17 in order to strike a fair balance of interests and to respect fundamental
rights within the triangle of positions described above, namely among providers,
users of their services and rightholders.
From the first perspective, despite the length of the provision (10 paragraphs and
about 10 related recitals), the wording of the rule gives rise to many varied interpre-
tations. An example is provided by the need for a common understanding of the
concept of “best efforts” as referred to, in particular, in Art. 17(4), which has even
received divergent linguistic translations (e.g. “massimi sforzi”, “meilleurs efforts”).46 The
debate is focused on how to interpret this clause so as to require providers to
undertake reasonable efforts, proportionate to the goal to be achieved and which
could in this regard constitute standards and best practices. The fear is that a lack of
uniform interpretation of such concepts and the related legal uncertainty for the
economic operators on how to fulfil the best efforts obligation will culminate in a de
facto requirement of extensive monitoring and search activities of content uploaded
online by users, thus leading to overblocking.
This issue brings us to the second perspective from which to look at the challenges
of Art. 17 mentioned above, namely the balance of rights underlying the
special regime set forth by Art. 17.
As discussed above, the goal of such provision is to foster a licensing market
between rightholders and platforms to fill the so-called “value gap” and to allow a
redistribution to the rightholders of the value generated by the exploitation and uses
43
See Art. 17 (4) CDSMD.
44
For a comment of this provision see Rendas (2022), p. 54.
45
See Quintais et al. (2019); Geiger and Jütte (2021), p. 517; Rosati (2021); Husovec and
Quintais (2021).
46
Rosati (2021), Art. 17, §3.4; See also Commission Article 17 Guidelines.
of copyright protected content on the online platforms. The very challenge is how to
implement this market (and also fairness) rationale in a way that is consistent with
the fundamental balance of interests underlying EU and national copyright protec-
tion, namely protection of IP (rightholders), the freedom to conduct a business (from
the standpoint of the digital services) and the freedom of expression and information
(from the standpoint of the users who lawfully use the content under an E&L).
Indeed, Art. 17 may trigger the implementation of certain mechanisms that
essentially leave the online flow of content, and the exercise of freedom of expression
by users, to a private ordering regime (i.e. the cooperation of rightholders
and providers, with the help of technology).
The compliance of Art. 17 with fundamental rights has been the object of the
highly relevant decision of the Court of Justice of the European Union of 26 April
2022 in the case C-401/19.47 In this case, the Republic of Poland asked the Court to
annul Article 17(4), point (b), and point (c), in fine, of Directive (EU) 2019/790 or in
the alternative, to annul Article 17 of that Directive in its entirety, claiming the
“infringement of the right to freedom of expression and information, guaranteed in
Article 11 of the Charter”.48 According to Poland, Article 17(4) de facto imposes
preventive monitoring measures on online content-sharing service providers, with-
out providing adequate safeguards to ensure the respect of the right to freedom of
expression and information.
The Court considered the partial annulment inadmissible and dismissed the
action, thus saving the existence of Art. 17 CDSMD, but it provided certain
guidelines for a balanced interpretation of the provision, which may be useful also
beyond the realm of copyright.
In particular, the Court recognised that the specific liability regime entails a
limitation on the exercise of the right to freedom of expression and information of
the users.49 Indeed, as also stated by the Advocate General,50 in order to be able to
carry out the review of user uploaded content prior to their dissemination, required
by Art. 17(4), online content-sharing service providers tend to rely on the use of “automatic
recognition and filtering tools”, thus restricting “an important means of disseminat-
ing online content”.51
However, this obligation “has been accompanied by appropriate safeguards by
the EU legislature in order to ensure, in accordance with Article 52(1) of the Charter,
respect for the right to freedom of expression and information of the users of those
services, guaranteed by Article 11 of the Charter, and a fair balance between that
47
CJEU, Judgment of the Court (Grand Chamber) of 26 April 2022, Republic of Poland v European
Parliament and Council of the European Union, Case C-401/19, ECLI:EU:C:2022:297.
48
§22 ss.
49
C-401/19, § 39-58.
50
See Opinion of Advocate General Saugmandsgaard Øe delivered on 15 July 2021, ECLI:EU:
C:2021:613.
51
C-401/19,§ 54 and 55.
right, on the one hand, and the right to intellectual property, protected by Article
17(2) of the Charter, on the other”.52
Beyond the outcome of the decision (which confirmed the work of the European
legislators), the Court in its reasoning underlined a number of points that will lead the
future discussion on the implementation, compliance and application of Art. 17, also
at the national level. Indeed, the Court expressed certain considerations in order to
focus the attention on the need to effectively respect freedom of expression and
information. I will try to underline (and interpret) some of the most important
passages of the ruling.
Firstly, the Court admitted the use of ex-ante filtering systems but clarified that,
unlike Art. 17(4), Art. 17(7) prescribes an “obligation of results”,53 in requiring that
the cooperation between rightholders and OCSSPs must avoid preventing the avail-
ability of non-infringing user-uploaded content, including in cases covered by an
exception or limitation to those rights. Moreover, the Court clearly stressed that the
safeguards contained in Art. 17(7) and (9) prevent providers from taking measures
which “would affect the essence of that fundamental right of users who share content
on their platforms which does not infringe copyright and related rights”.54 In
particular, the use of filtering systems “unable to adequately distinguish between
unlawful content and lawful content”,55 resulting in the blocking of lawful communications,
is incompatible with the right to freedom of expression and information.
Essentially, as underlined by the scholars who first commented on this decision,
“filtering measures must perform well”.56 The decision raises new issues on how
to implement, design and control automated upload systems and filters capable of
distinguishing unlawful content with precision, and on the need to take this into
account in the forthcoming AI Act and in the interpretation of the provisions of the
Digital Services Act (DSA).57
It is therefore crucial to have a (legal) common understanding of what constitutes
lawful content. In general terms, content is lawful when its use is covered by an E&L to
copyright and related rights, when the work is in the public domain, or when its use has
been duly authorized by rightholders. And in this regard I would like to point out
another important passage, where the Court expressly stated that in order to ensure
that users “receive uniform protection across the European Union in each Member
State users should be authorised to upload and make available content generated by
themselves for the specific purposes of quotation, criticism, review, caricature,
parody or pastiche”.58 This is a significant statement in that it underlines the
52
C-401/19, §98.
53
C-401/19, §78.
54
C-401/19, §80.
55
C-401/19, §86.
56
Husovec (2022).
57
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on
a Single Market For Digital Services and amending Directive 2000/31/EC.
58
C-401/19, §87.
To conclude on certain future perspectives, in the short run, the multi-level dialogue
(among national legislators and courts and the EU institutions) will define and shape
the evolution of the copyright and related rights system and—more generally—the
interplay of copyright law with the new emerging digital law patchwork, by
interpreting and applying the new rules discussed above. In the long run, new challenges
are appearing on the horizon, such as the application of copyright law to works
created by AI systems, the use of blockchain to manage copyright, or the
interpretation of the law applicable to the creation, flow and use of protected
works in the metaverse.
59
On this point see Borghi (2021), p. 263.
60
C-401/19, §66 and 91.
61
CJEU, judgment of 22 June 2021, YouTube and Cyando, C-682/18 and C-683/18, EU:C:2021:
503.
Copyright protection in the digital single market must accordingly allow the
system the necessary flexibility to adequately resolve conflicts of interests through
a delicate balancing exercise aimed at satisfying the functions of the exclusive rights
in question, as well as to respect fundamental rights and promote a sustainable
system of innovation and cultural progress.
References
Bently L, Radauer A (2014) European intellectual property law: what lies ahead. Paper for the
directorate general for Internal Policies of the EU Parliament
Borghi M (2021) Exceptions as users’ rights? In: Rosati E (ed) Routledge handbook of EU
copyright law. Routledge, London, pp 263–280
Caspers M, Guibault L (2016) A right to ‘read’ for machines: assessing a black-box analysis
exception for data mining. Comput Sci 53:1–5
Colangelo G (2021) Enforcing copyright through antitrust? The strange case of news publishers
against digital platforms. J Antitrust Enforcement 10:133–161
De Vasconcelos Casimiro S (2016) The convergence of mass media and Copyright: times of
change. Intellectual Property Magazine 1
Ducato R, Strowel A (2019) Limitations to text and data mining and consumer empowerment:
making the case for a right to “Machine Legibility”. IIC - Int Rev Intellect Prop Compet Law 50:
649–684
Dusollier S (2020) The 2019 Directive on copyright in the digital single market: some progress, a
few bad choices, and an overall failed ambition. Common Mark Law Rev 57:979–1030
Floridi L (2017) La quarta rivoluzione: come l’infosfera sta trasformando il mondo. Milano,
Raffaello Cortina Editore
Flynn S, Geiger C, Quintais JP et al (2020) Implementing user rights for research in the field of
artificial intelligence: a call for international action. Eur Intellect Prop Rev 42:393–398
Geiger C (ed) (2013) Constructing European intellectual property. Achievements and perspectives.
Edward Elgar, Cheltenham
Geiger C (2017) Copyright as an access right, securing cultural participation through the protection
of creators’ interests. In: Giblin R, Weatherall K (eds) What if we could reimagine copyright?
ANU Press, Canberra, pp 73–109
Geiger C, Jütte BJ (2021) Platform Liability Under Art. 17 of the copyright in the digital single
market directive, automated filtering and fundamental rights: an impossible match. GRUR Int
70:517–543
Geiger C, Bulayenko O, Frosio GF (2017) The introduction of a neighbouring right for press
publisher at EU level: the unneeded (and unwanted) reform. EIPR 39:202–210
Geiger C, Frosio G, Bulayenko O (2018) Text and data mining in the proposed copyright reform:
making EU ready for an age of big data? IIC. Int Rev Ind Prop Copyright Law 49:814–844
Ghidini G (2018) Rethinking intellectual property: balancing conflicts of interest in the constitu-
tional paradigm. Edward Elgar, Cheltenham
Hilty RM, Moscon V (eds) (2017) Modernisation of the EU copyright rules. Position statement of
the Max Planck Institute for Innovation and Competition. Max Planck Institute for
Innovation and Competition Research Paper No. 17-12. Available at https://pure.mpg.de
Hugenholtz B (2019) The New Copyright Directive: Text and Data Mining (Articles 3 and 4).
Kluwer Copyright Blog, 24 July 2019. http://copyrightblog.kluweriplaw.com. Accessed
21 Dec 2022
Husovec M, Quintais JP (2021) How to License Article 17? Exploring the implementation options
for the new EU rules on content-sharing platforms. GRUR Int, 4/2021. https://doi.org/10.2139/
ssrn.3463011
Husovec M (2022) Internet filters do not infringe freedom of expression if they work well. But
will they?, Euractiv, 2 May 2022. https://www.euractiv.com. Accessed 21 Dec 2022
Jütte BJ (2017) Reconstructing European Copyright Law for the digital single market: between old
paradigms and digital challenges. Nomos, Baden-Baden
Kretschmer M, Dusollier S, Geiger C, Hugenholtz PB (2016) The European Commission’s public
consultation on the role of publishers in the copyright value chain: a response by the European
Copyright Society. Eur Intellect Prop Rev 38:591–595
Mansani L (2019) Le eccezioni per estrazione di testo e dati, didattica e conservazione del
patrimonio culturale. AIDA 28:3–21
Montagnani ML, Aime G (2017) Il text and data mining e il diritto d’autore. AIDA 26:376–394
Olivieri G, Scalzini S (2018) La proprietà intellettuale. In: Caporale C, De Martin JC, Marchis V
et al (eds) Le sfide della scienza, “Europa”. Istituto della Enciclopedia Italiana Treccani, pp
598–606
Pila J, Ohly A (eds) (2013) The Europeanization of intellectual property law. Towards a European
legal methodology. Oxford University Press, Oxford
Quintais JP (2020) The new copyright in the digital single market directive: a critical look. Eur
Intellect Prop Rev 42:28–41
Quintais JP, Frosio G, van Gompel S, Hugenholtz PB, Husovec M, Jütte BJ, Senftleben M (2019)
Safeguarding User Freedoms in Implementing Article 17 of the Copyright in the Digital Single
Market Directive: Recommendations From European Academics (November 11, 2019). Avail-
able at SSRN: https://ssrn.com/abstract=3484968 or https://doi.org/10.2139/ssrn.3484968
Ramalho A (2017) Beyond the cover story – an enquiry into the EU competence to introduce a right
for publishers. IIC 48:71–98
Ramalho A (2021) The competence and rationale of EU copyright harmonisation. In: Rosati E
(ed) Routledge handbook of EU copyright law. Routledge, London, pp 3–18
Rendas T (2022) Are copyright-permitted uses ‘Exceptions’, ‘Limitations’ or ‘User Rights’? the
special case of Article 17 CDSM Directive. J Intellect Prop Law Pract 17:54–64
Rosati E (2016) Neighbouring rights for publishers: are national and (possible) EU initiatives
lawful? IIC 47:569–594
Rosati E (2018) The exception for text and data mining (TDM) in the proposed Directive on
Copyright in the digital single market – Technical aspects. https://www.europarl.europa.eu
Rosati E (2019) Copyright and the Court of Justice of the European Union. Oxford University
Press, Oxford
Rosati E (2021) Copyright in the digital single market article-by-article commentary to the pro-
visions of Directive 2019/790. Oxford University Press, Oxford
Scalzini S (2019) L’estrazione di dati e di testo per finalità commerciali dai contenuti degli utenti.
Algoritmi, proprietà intellettuale e autonomia negoziale. In Analisi Giuridica dell’Economia 1:
395–423
Scalzini S (2021) The new related right for press publishers: what way forward? In: Rosati E
(ed) Routledge handbook of EU copyright law. Routledge, London, pp 101–119
Schwab K (2017) The fourth industrial revolution. World Economic Forum
Sganga C (2018) Propertizing European Copyright. History, challenges and opportunities. Edward
Elgar, Cheltenham
Sobel BL (2017) Artificial intelligence’s fair use crisis. Colum J Law Arts 41:45–97
Senftleben M (2017) EU copyright Reform and Startups – Shedding the Light on Potential Threats
in the Political Black Box
Strowel A, Ducato R (2021) Artificial Intelligence and text and data mining: a copyright carol. In:
Rosati E (ed) Routledge handbook of EU copyright law. Routledge, London, pp 299–316
Synodinou T-E (2021) The desirability of unification of EU copyright law. In: Rosati E
(ed) Routledge handbook of EU copyright law. Routledge, London, pp 39–60
Triaille J-P, de Meeûs d’Argenteuil J, de Francquen A (2014) Study on the legal framework on text
and data mining (TDM). European Commission, Directorate-General for the Internal Market
and Services, Publications Office. Available at https://data.europa.eu/doi/10.2780/1475.
Accessed 21 Dec 2022
van Eechoud M (2017) A publisher’s intellectual property right: implications for freedom of
expression, authors and open content policies. OpenForum Europe, 32. http://www.
openforumeurope.org
Silvia Scalzini is Assistant Professor in Commercial Law at University of Parma, Italy (silvia.
[email protected]); Scientific Coordinator of the Master in Competition and Innovation Law at
Luiss Guido Carli University (Rome).
Due Diligence Obligations and Liability
of Intermediary Services: The Proposal
for the EU Digital Services Act
Abstract With a view to assessing the significance of the so-called Digital Services
Act package and its implications for the evolution of EU law on the liability of
intermediary service providers, this contribution focuses on several issues. First, the
consequences of the change in instrument and the incorporation of the rules in a
Regulation are addressed, including the comparison between the territorial scope of
application of the new instrument and the current situation under Directive 2000/31/
EC. Second, the amendments and additions made in the proposed Regulation to
Articles 12 to 15 of the Directive are discussed. Third, the implications in the
evolving landscape of service providers of the traditional reliance of EU law on
the neutral position of the intermediary as a requirement to be eligible to benefit from
the liability exemptions, are considered. Finally, an overview of the new due
diligence obligations laid down in the Proposal is provided, in order to determine
their potential relevance in the application of the horizontal liability exemptions to
private claims against intermediaries, apart from the new envisaged public enforce-
ment measures.
1 Introduction
The so-called Digital Services Act package proposed by the Commission aims to
update the regulatory framework regarding providers of digital services, and online
platforms in particular, with a view to building a safer digital space and establishing
a level playing field. The package consists of two main legislative initiatives: the
Proposal for a Regulation on a Single Market for Digital Services (Draft Digital
Services Act or DDSA)1 and the Proposal for a Regulation on contestable and fair
markets in the digital sector (Draft Digital Markets Act or DDMA).2
The DDSA envisages the modernisation of the regime applicable to providers of
intermediary services as a category within the broader and well-established EU
concept of information society services providers. The DDSA amends the
e-commerce Directive.3 Since 2000 the Directive has provided the general legal
framework on the provision of information society services, with a focus on ensuring
the free movement of such services within the internal market. However, the
adoption of the new instrument will not lead to the abandonment of the rules of
the Directive on intermediary service providers as interpreted by the CJEU. On the
contrary, the DDSA is constructed on the basis that the same rules on liability of
intermediary service providers will continue to apply, although their content will
become part of the new Regulation to foster harmonisation. The only amendment to
the E-commerce Directive contemplated by the DDSA is the deletion of Articles
12 to 15. Those provisions will become part of the DDSA since they are reproduced
with some minor changes in Articles 3, 4, 5 and 7 of the new instrument.
The DDSA is intended to complement other instruments of EU law that regulate
certain aspects of the provision of intermediary services. Such instruments are
deemed “lex specialis” with respect to the general framework laid down by the
DDSA, and hence the DDSA does not affect their application to the issues they
fully address. Relevant EU acts in this regard are Regulation (EU) 2019/
1150 on business users of online intermediation services,4 Regulation (EU) 2021/
784 on terrorist content online,5 certain Union instruments on consumer protection,
such as the Unfair Commercial Practices Directive6 and Directive 2011/83/EU on
1. Proposal for a Regulation of the European Parliament and of the Council on a Single Market for
Digital Services (Digital Services Act) and amending Directive 2000/31/EC, 15.12.2020, COM
(2020) 825 final.
2. Proposal for a Regulation of the European Parliament and of the Council on contestable and fair
markets in the digital sector (Digital Markets Act), 15.12.2020, COM(2020) 842 final.
3. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain
legal aspects of information society services, in particular electronic commerce, in the Internal
Market (‘Directive on electronic commerce’), OJ L 178, 17.07.2000, 1.
4. Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on
promoting fairness and transparency for business users of online intermediation services, OJ L
186, 11.7.2019, 57.
5. Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on
addressing the dissemination of terrorist content online, OJ L 172, 17.5.2021, 79.
6. Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning
unfair business-to-consumer commercial practices in the internal market.
Due Diligence Obligations and Liability of Intermediary Services:. . . 31
7. Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on
consumer rights.
8. Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019
amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of
the European Parliament and of the Council as regards the better enforcement and modernisation of
Union consumer protection rules.
9. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation),
OJ L 119, 4.5.2016, 1.
10. Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the
coordination of certain provisions laid down by law, regulation or administrative action in Member
States concerning the provision of audiovisual media services, OJ L 95, 15.4.2010, 1, as amended.
11. Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018
amending Directive 2010/13/EU on the coordination of certain provisions laid down by law,
regulation or administrative action in Member States concerning the provision of audiovisual
media services (Audiovisual Media Services Directive) in view of changing market realities, OJ
L 303, 28.11.2018, 69.
32 P. de Miguel Asensio
who provide intermediary services that fall within the scope of application of the
new instrument benefit from the horizontal liability exemption.
In addition to confirming the previous horizontal rules on liability of intermediary
service providers, Chapter II of the DDSA contains certain new provisions
supplementing them. Moreover, Chapter III of the DDSA lays down detailed due
diligence obligations which are applicable to intermediary service providers. Such
an evolution may be critical to prevent the misuse of the digital services concerned
for unlawful activities, to improve the procedures by which service providers react
against illegal content online, and to increase transparency in order to safeguard the
fundamental rights of Internet users.
Another remarkable feature of the DDSA is that it introduces new public enforce-
ment mechanisms to supervise compliance by service providers, in particular online
platforms, with the due diligence obligations imposed on them. In that regard
Chapter IV of the DDSA deals with the creation of new national competent author-
ities, the specific powers granted to them as well as with the coordinated supervisory
system at Union level.
Beyond the significance of those provisions with regard to the public enforcement
of the due diligence obligations laid down by the DDSA, the issue arises as to what
extent the breach of the new sets of obligations may become a relevant factor when
assessing whether an intermediary service provider benefits from the horizontal
liability exemptions and cannot be held liable in relation to unlawful content
provided by the users of the service. It is noteworthy that the obligations laid
down in Chapter III are intended to ensure that providers operate responsibly and
diligently and adapt to the societal risks that their services pose.
The DDMA is based on a different approach and aims to supplement existing EU
competition rules by providing new tools to address unfairness in dependency
relationships between very large platforms and their business users. This instrument
focuses on the core platform services of a small group of very large providers that
qualify as gatekeepers. It is not concerned with obligations regarding unlawful
content nor with the liability of online intermediaries. The DDMA establishes
specific ex ante obligations for a small group of large digital platform service
providers in order to address the economic imbalances and possible unfair practices
of platform markets.
In order to assess the significance of the so-called Digital Services Act package
and its implications for the evolution of EU law on the liability of intermediary
service providers, this contribution focuses on several issues. First, the consequences
of the change in instrument and the incorporation of the rules in a Regulation are
addressed, including the comparison between the territorial scope of application of
the DDSA and the current situation under the E-commerce Directive (Sect. 2, infra).
Second, the amendments and additions made in the DDSA to Articles 12 to 15 of the
Directive are discussed (Sect. 3). Third, the implications in the evolving landscape of
service providers of the traditional reliance of EU law on the neutral position of the
intermediary as a requirement to be eligible to benefit from the liability exemptions,
are considered (Sect. 4). Fourth, a brief overview of the new due diligence obliga-
tions laid down in the DDSA is provided, in order to determine their potential
12. Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015
laying down a procedure for the provision of information in the field of technical regulations and of
rules on Information Society services, OJ L 241, 17.9.2015, 1.
content behind the hyperlink, which has been made available by the initial publisher
on the linked website. From the perspective of Article 10 of the ECHR13 (freedom of
expression), the mere posting of hyperlinks cannot be equated with the dissemina-
tion of unlawful information, automatically entailing liability for the content itself.
With regard to its territorial scope of application, the DDSA is based on a
different approach than the e-commerce Directive. Pursuant to Article 1(3), the
DDSA applies to intermediary services provided to recipients of the service that
have their place of establishment or residence in the Union, irrespective of the place
of establishment of the providers of those services. This is consistent with the fact
that its main objective is to set out uniform rules for a safe, predictable and trusted
online environment (Article 1(2)(b)). Ensuring a level playing field within the
internal market requires the application of the DDSA also to providers established
outside the EU. By contrast, the main objective of the e-commerce Directive is to
ensure the free movement of information society services between the Member
States. Only providers established in a Member State benefit from the internal
market rule, which is one of its main features. Under the e-commerce Directive,
each Member State must ensure that the services provided by a service provider
established on its territory comply with the relevant provisions applicable in the Member
State in question, including those on liability of intermediaries, which fall within the
so-called ‘coordinated field’. Within the coordinated field, Member States may not
restrict the freedom to provide information society services from another Member
State (Articles 2(h) and 3). Hence, providers of such services should not be made
subject to stricter requirements than those provided for by the substantive law
applicable in the Member State in which that service provider is established, without
prejudice to the derogations laid down in Article 3(4).
The DDSA does not change the application within the framework of the
e-commerce Directive of the country of origin criterion, although the increased
harmonisation resulting from the incorporation of the rules in a Regulation will
decisively erode the practical significance of this criterion with respect to the liability
exemptions of intermediaries. Furthermore, the DDSA rules concerning the liability
of intermediary services are intended to be applied irrespective of the place of
establishment of the service providers. The mere reference in Article 1(3) DDSA
to the provision of services to recipients whose place of establishment or residence is
in the EU, without further clarification, appears to be insufficient. Recital 7 of the
DDSA is more precise and states that its rules should apply to providers of interme-
diary services irrespective of their place of establishment, ‘in so far as they provide
services in the Union, as evidenced by a substantial connection to the Union’.
According to Article 2(d), which defines the expression ‘to offer services in the
Union’, such a substantial connection is deemed to exist with providers that enable
persons in one or more Member States to use their services, ‘where the service
provider has an establishment in the Union or, in its absence, on the basis of the
13. European Convention on Human Rights (ECHR), 1950, 119 UNTS 99.
14. Regulation (EU) No 1215/2012 of the European Parliament and of the Council of 12 December
2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial
matters, OJ L 351, 20.12.2012, 1.
15. Regulation (EC) No 593/2008 of the European Parliament and of the Council of 17 June 2008 on
the law applicable to contractual obligations (Rome I), OJ L 177, 4.7.2008, 6.
16. Regulation (EC) No 864/2007 of the European Parliament and of the Council of 11 July 2007 on
the law applicable to non-contractual obligations (Rome II), OJ L 199, 31.7.2007, 40.
17. De Miguel Asensio (2020), paras 2.96–2.99.
18. Wilman (2021), p. 334.
19. Opinion of Advocate General Campos Sánchez-Bordona delivered on 28 November 2019, Coty
Germany, C-567/18, EU:C:2019:1031, para 82.
20. CJEU Judgment of 3 October 2019, Glawischnig-Piesczek, C-18/18, EU:C:2019:821.
scope of the order must be specified, which must be limited to what is strictly
necessary to achieve its objective. According to Recital 31, the authority issuing
the order should balance its objective and legal basis with the rights and interests of
all third parties that may be affected by the order, in particular their fundamental
rights under the Charter. In the case of orders that may have effects beyond the
territory of the Member State of the authority concerned, a prior assessment is
required as to whether the information at issue is likely to constitute illegal content
in other Member States concerned, taking into account the relevant rules of Union
law or international law and the interests of international comity.
It should be recalled that in Glawischnig-Piesczek, the CJEU concluded that the
e-commerce Directive establishes no territorial limitation on the scope of the mea-
sures which Member States are entitled to adopt and, hence, it does not preclude the
adoption of injunctions producing effects which extend worldwide (paras 49–50).
However, this does not imply that the Directive itself provides a legal basis for the
adoption of measures with such scope, nor that the adoption of such measures can
take place without respecting certain conditions and taking into account other
considerations. With regard to the adoption of measures with such scope, the
Court of Justice merely established that it is “necessary to ensure that EU rules in
that area are consistent with the rules applicable at international level” (para 51), and
stated that Member States should ensure that their measures producing effects
worldwide take due account of the relevant internationally applicable rules (para
52), without providing further concrete details.
A prerequisite for the adoption of content removal measures having worldwide
effects is that the court or authority to which they are requested has unlimited
international jurisdiction.21 However, it is not sufficient for the court hearing the
case to have jurisdiction without territorial restriction, since the content and scope of
the measures it adopts will be conditioned by substantive questions. As noted by the
Advocate General in Glawischnig-Piesczek, a Member State court may be prevented
from adjudicating on the worldwide removal of online content not because of a lack
of jurisdiction but because of an issue of substance.22
The possible adoption of measures of global scope—such as the worldwide
removal or blocking of content by an intermediary service provider—based on a
single law makes it advisable for the competent court to adopt ‘an approach of
self-limitation’, especially in matters, such as those involving personality rights, in
which the balancing of the fundamental rights involved results in significant
divergences even between systems that are relatively close.23 Such ‘an approach of
self-limitation’, ‘in the interest of international comity’, will normally result in the adoption of
21. CJEU Judgment of 17 October 2017, Bolagsupplysningen, C-194/16, EU:C:2017:766, para 48.
22. Opinion of Advocate General Szpunar delivered on 4 June 2019, Glawischnig-Piesczek, C-18/18,
EU:C:2019:458, para 86.
23. Opinion Glawischnig-Piesczek, para 100.
measures that prevent access to the unlawful content concerned within the EU or
certain of its Member States, typically by means of geolocation measures, but
without ordering its removal on a worldwide basis.
This approach is consistent with the result reached by the Court of Justice in the
field of personal data protection in Google (Territorial scope of de-referencing).24
Restraint in the adoption of such measures is fully consistent with the coexistence of
a plurality of territorially-based legal systems that regulate Internet activities in a
context of absence of global standards regarding the legality of the contents dissem-
inated and the activities carried out on the Internet.25
A potential shortcoming of the DDSA with regard to the liability exemptions has to
do with the absence of additional clarification of a key element, namely the concept
of intermediary service provider. The DDSA is intended to establish rules on the
provision of intermediary services and Article 2(f) provides a definition of ‘inter-
mediary service’. However, as previously noted, when defining this category, the
DDSA merely states that an intermediary service means a ‘mere conduit’ service
(in the sense of Article 12 of the e-commerce Directive and Article 3 DDSA); a
‘caching’ service (in the sense of Article 13 of the e-commerce Directive and Article
4 of the DDSA); and a ‘hosting’ service (in the sense of Article 14 of the e-commerce
Directive and Article 5 of the DDSA). As a novelty, Article 2(h) of the DDSA
provides a definition of ‘online platform’, as a subcategory of hosting service
provider which, ‘at the request of a recipient of the service, stores and disseminates
to the public information, unless that activity is a minor and purely ancillary feature
of another service and, for objective and technical reasons cannot be used without
that other service. . .’.
The characterisation of a service as ‘intermediary’ is decisive in order for its
provider to benefit from the liability exemption, so that it decisively influences the
success of any liability claims against service providers such as platforms.
According to the initial case-law of the Court of Justice in this area, for a service
to qualify as ‘intermediary’ in order to fall within the scope of Article 14 of the
Directive (Article 5 of the DDSA), it is essential that the provider confines itself to
providing the service ‘neutrally by a merely technical and automatic processing of the
data provided by its customers’, and this is not the case where the service provider
‘plays an active role of such a kind as to give it knowledge of, or control over’ the
24. CJEU Judgment of 24 September 2019, Google (Territorial scope of de-referencing), C-507/17,
EU:C:2019:772, para 73.
25. Kuschel (2020), p. 419.
data provided by its customers.26 As an example, it may be recalled that, regarding
the offers for sale of goods uploaded to eBay by its customer-sellers, the Court stated that
‘(w)here. . . the operator has provided assistance which entails, in particular,
optimising the presentation of the offers for sale in question or promoting those
offers, it must be considered not to have taken a neutral position between the
customer-seller concerned and potential buyers but to have played an active role
of such a kind as to give it knowledge of, or control over, the data relating to those
offers for sale’. Therefore, the Court concluded that regarding such content the
service provider cannot rely on the exemption from liability referred to in Article
14(1) of the e-commerce Directive (Article 5 of the DDSA).27
In the light of the above, certain practices of social networks and other online
platforms in the sense of Article 2(h) of the DDSA would merit a detailed exami-
nation in order to establish whether their activity is indeed that of a hosting service
provider for the purposes of Article 5 of the DDSA. For example, this may be the
case with respect to content that has been posted by users but is recommended by the
platform itself, based on the operation of its own algorithms, as being of special
interest to a user with the aim of capturing their attention and retaining them on the
website in question. Even though the platform does not interfere with the content
of the information uploaded by the user of the service, the algorithms and the activity
of certain platforms usually determine the extent to which particular content is
accessed by users, precisely as a result of an intervention by the platform typically
aimed at promoting its own business by encouraging access to some of its content.
The selection made in the platform’s own interest thus determines the user’s access
to those specific contents and not others. In such circumstances, a
more elaborate analysis of the operation of the platform may be necessary to
determine with respect to which specific contents its intervention has not been
merely passive and, therefore, may not benefit from the exemption from liability.
In YouTube and Cyando,28 the CJEU repeated, with regard to video-sharing and
file-sharing services, that to be exempted under Article 14 of the e-commerce
Directive from liability arising out of the dissemination of content which users
illegally communicate to the public via the platform, it is necessary to establish that
the role played by that operator is neutral and that its conduct is merely technical,
automatic and passive. Additionally, the CJEU acknowledged that the fact that the
platform provider implements technological measures aimed at detecting illegal
content uploaded to its platform, does not mean that, by doing so, that operator
plays an active role (para 109). As already noted, Article 6 of the DDSA expressly
incorporates the same approach.
26. CJEU Judgment of 23 March 2010, Google France and Google, C-236/08 to C-238/08,
EU:C:2010:159, paras 112, 114 and 120; and CJEU Judgment of 12 July 2011, L’Oréal, C-324/09,
EU:C:2011:474, paras 112–113.
27. CJEU Judgment of 12 July 2011, L’Oréal, C-324/09, EU:C:2011:474, para 116.
28. CJEU Judgment of 22 June 2021, YouTube and Cyando, C-682/18 and C-683/18, EU:C:2021:503,
para 106, referring to Judgment of 12 July 2011, L’Oréal, C-324/09, EU:C:2011:474, para 113.
Chapter III of the DDSA lays down asymmetric due diligence obligations on
intermediary service providers.29 Although its content is influenced by the 2018
Commission Recommendation on illegal content,30 the imposition of detailed due
diligence obligations on intermediary service providers amounts to a very significant
change in comparison to the previous regime. The division of Chapter III into five
sections is due to the asymmetrical application of the obligations it imposes on the
four categories of addressees of such obligations.
Section 1 is applicable to all providers of intermediary services and includes the
obligations to establish a single point of contact to facilitate direct communication
with authorities (Article 10), to designate a legal representative in the Union for
providers not established in any Member State, but subject to the DDSA (Article 11),
to set out in their terms and conditions any restrictions that they may impose on the
use of their services and to act responsibly in applying those restrictions (Article 12);
and transparency obligations concerning the removal and the disabling of informa-
tion (Article 13).
Providers of hosting services are subject additionally to the obligations
established in section 2. Such obligations include, in particular: to put in place
mechanisms to allow third parties to notify the presence of alleged illegal content
(Article 14); and to inform recipients of the service of the removal of information
provided by them and provide a clear and specific statement of reasons for that
decision (Article 15).
The additional obligations applicable to online platforms that are not micro- or
small enterprises are laid down in section 3. Such obligations include: to provide an
internal complaint-handling system in respect of decisions taken regarding illegal
content or information incompatible with their terms and conditions (Article 17);
out-of-court dispute settlement to resolve any dispute with users (Article 18);31
priority treatment regarding notices submitted by trusted flaggers (Article 19);
measures against misuse of platforms (Article 20); to inform competent authorities
of information giving rise to a suspicion of serious criminal offences (Article 21); to
29. For an overview of the due diligence obligations under the DDSA, see Savin (2021), sect. 3.
30. Commission Recommendation on measures to effectively tackle illegal content online, 1.3.2018
(C(2018) 1177 final).
31. For a critical assessment, see Wimmers (2021).
receive, store, make reasonable efforts to assess the reliability of, and publish
information on, the traders using their services where the platforms allow consumers
to conclude contracts with those traders (Article 22); and transparency requirements
in respect of online advertising (Article 24).
Finally, section 4 establishes additional obligations applicable to very large online
platforms (as defined by Article 25) to manage systemic risks. Such obligations
include: to conduct risk assessments on the systemic risks relating to their services
(Article 26); to take reasonable and effective measures aimed at mitigating those
risks (Article 27); to submit themselves to external and independent audits (Article
28); and to comply with requirements concerning recommender systems and online
advertising (Articles 29 and 30).
Chapter IV of the DDSA contains provisions regarding its implementation and
enforcement. It establishes new mechanisms, such as the designation of national
competent bodies for the application of the Regulation which are granted specific
powers of investigation and enforcement in respect of providers of intermediary
services under the jurisdiction of the respective Member State. The DDSA envisages
public enforcement measures, such as the power to order the cessation of infringe-
ments and to impose remedies to bring the infringement effectively to an end; the
power to impose fines for failure to comply with the Regulation; the power to impose
a periodic penalty payment to ensure that an infringement is terminated; the power to
adopt interim measures to avoid the risk of serious harm; and the possibility to
request the competent judicial authority to order the temporary restriction of
access of recipients of the service concerned (Articles 41 and 42). Furthermore,
section 3 of Chapter IV establishes specific provisions on the supervision, investi-
gation, enforcement and monitoring of very large online platforms.
Although the DDSA establishes only public enforcement mechanisms,32 the due
diligence obligations in Chapter III should also become a relevant factor in the
context of private enforcement, such as in civil liability claims brought against
providers of intermediary services. The due diligence obligations in Chapter III of
the Regulation result in an elaborate system of obligations depending on the level of
risk, compliance with which should in the future decisively influence the application
of the provisions laying down the conditions under which intermediary service
providers, and particularly providers of hosting services, are exempt from liability
for the third-party information they transmit and store.
Under Article 14 of the e-commerce Directive (Article 5 of the DDSA), in order
to establish whether the hosting service provider benefits from the exemption from
liability, it is necessary to assess whether it has acted with due diligence and
deployed reasonable means to mitigate the risks created by their services. The
proliferation of business models based on the dissemination of content uploaded
(free of charge) by users requires an analysis of the specific circumstances in order to
assess whether the hosting service provider that invokes its lack of knowledge and
control over the content can benefit from the exemption from liability. In line with
32. Buiten (2021), pp. 377–378.
the Delfi judgment and the subsequent case-law of the ECtHR in related cases,33 in
order to determine the possible exemption of service providers from liability for
content posted by third parties and to reach a proper balance between the funda-
mental rights involved, the level of diligence with which the operator acted must be
assessed, taking into consideration the risks to the rights of potential victims
generated by the services it provides and the measures adopted to address those
risks. The Delfi judgment highlighted how the increase in services that allow content
to be disseminated anonymously is a highly relevant risk factor, while at the same
time making it particularly difficult for the victims to hold the authors of the harmful
comments accountable.
The service provider’s compliance with the due diligence obligations laid down
in Chapter III of the DDSA should therefore be relevant in the context of
non-contractual liability claims against intermediaries. Determining whether the
provider of hosting services has acted diligently to benefit from the liability
exemption requires, among other elements, assessing the risks of dissemination of
illegal content arising from the business model freely chosen by the provider
concerned.34 As stated in Recital 48 of the e-commerce Directive, the liability
exemptions do not affect the possibility of requiring providers of hosting services
‘to apply duties of care, which can reasonably be expected from them and which are
specified by. . . law, in order to detect and prevent certain types of illegal activities’.
In this sense, despite the significant differences between Article 5 of the DDSA
(Article 14 of the e-commerce Directive) and Article 17 of Directive (EU) 2019/790
on copyright and related rights in the Digital Single Market,35 there are situations—
depending on circumstances such as the risk of dissemination of illegal content
through the services offered—in which, in order for the hosting service provider to
benefit from the limitation of liability, it may be justified to require certain measures
similar to some of those provided for in Article 17, including the use of tools for
automatic recognition and filtering of illegal content. Such tools in the context of
Article 5 of the DDSA may also be necessary for the protection of fundamental
rights (other than intellectual property rights), so as to ensure a fair balance between
them and the right to freedom of expression and information and other rights that
may be involved, such as the right to freedom to conduct a business.36
33. See case Delfi AS v. Estonia, no. 64569/09, Judgments of 10 October 2013 and 16 June 2015
(Grand Chamber); case Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary,
no. 22947/13, Judgment of 2 February 2016; case Pihl v. Sweden, no. 74742/14, Decision of
9 March 2017; case Magyar Jeti Zrt v. Hungary, no. 11257/16, Judgment of 4 December 2018;
and case Sánchez v. France, no. 45581/15, Judgment of 2 September 2021.
34. De Miguel Asensio (2022), paras 2.365–2.379.
35. CJEU Judgment of 26 April 2022, Poland / Parliament and Council, C-401/19, EU:C:2022:297,
para 50.
36. See, by analogy, CJEU Judgment of 26 April 2022, Poland / Parliament and Council, C-401/19,
EU:C:2022:297, para 83. For an overview of the fundamental rights of users and platforms to be
balanced, see Frosio and Geiger (2022), section II.B.
This is a conclusion consistent with the fact that the previous case-law of the
Court of Justice cited in its judgment of 26 April 2022, Poland / Parliament and
Council, to endorse the obligations and guarantees set out in Article 17 of Directive
(EU) 2019/790, is essentially concerned with the application of Articles 12 to 15 of
the e-commerce Directive.37 Such an approach is also consistent with the judgment(s)
of the ECtHR in the well-known Delfi case. Indeed, this ECtHR judgment is cited
by the CJEU in paragraph 74 of the judgment Poland / Parliament and Council, to
justify that the formulation in open terms—even, arguably, taking into account the
specific context of Delfi, as open as those of Article 5 of the DDSA (Article 14 of the
e-commerce Directive)—of limitations on the exercise of the right to freedom of
expression and information under Article 17 of Directive (EU) 2019/790 satisfies the
requirement that the limitation be provided for by law.
Failure to comply with due diligence obligations applicable to the provider under
Chapter III of the DDSA may be decisive in determining that its conduct has not
been diligent for the purposes of claiming that it has no actual knowledge of illegal
content or is unaware of facts or circumstances from which the illegal content is
apparent, in order to benefit from the liability exemption under Article 5 of the
DDSA. Some obligations seem particularly relevant in this regard, such as the
obligation to put in place mechanisms to allow third parties to notify the presence
of alleged illegal content (Article 14 of the DDSA), and the obligation to take
reasonable and effective measures aimed at mitigating certain risks relating to the
functioning and use of their services (Article 27). In fact, apart from the specific
obligations imposed on very large platforms under Article 27, the duty to diligently
take reasonable measures aimed at mitigating the risks posed by their services seems
a pertinent factor to be taken into consideration with respect to all intermediaries for
the purposes of the liability exemptions under Chapter II.
References
Buiten MC (2021) The digital services act: from intermediary liability to platform regulation.
JIPITEC 12:361
De Miguel Asensio P (2020) Conflict of laws and the internet. Edward Elgar, Cheltenham
De Miguel Asensio P (2022) Derecho Privado de Internet, 6th edn. Civitas Thomson-Reuters,
Navarra
Frosio G, Geiger C (2022) Taking fundamental rights seriously in the Digital Services Act's platform liability regime. Available at https://ssrn.com. Accessed 21 Oct 2022
Kuschel L (2020) Zur inhaltlichen und räumlichen Reichweite von Anordnungen gegenüber
Hosting-Providern. IPRax 40:419–425
37 See Judgments of 16 February 2012, SABAM, C-360/10, EU:C:2012:85; 27 March 2014, UPC Telekabel Wien, C-314/12, EU:C:2014:192; 3 October 2019, Glawischnig-Piesczek, C-18/18, EU:C:2019:821; and 22 June 2021, YouTube and Cyando, C-682/18 and C-683/18, EU:C:2021:503.
46 P. de Miguel Asensio
Savin A (2021) The EU Digital Services Act: towards a more responsible internet. CBS LAW Research Paper No. 21-04. Available at https://ssrn.com. Accessed 21 Oct 2022
Wilman F (2021) The EU's system of knowledge-based liability. JIPITEC 12:317
Wimmers J (2021) The out-of-court dispute settlement mechanism in the digital services act – a
disservice to its own goals. JIPITEC 12:381
Pedro de Miguel Asensio is Chair Professor of Private International Law at the Complutense
University of Madrid and Director of its postgraduate programme on IT Law. Professor De Miguel
Asensio has authored numerous works on Private International law, International Business law and
IT law, including Conflict of Laws and the Internet (Edward Elgar, 2020), and is the co-editor of
Encyclopedia of Private International Law (4 vols, Edward Elgar, 2017, and winner of the
American Society of International Law Certificate of Merit). He is a consultant at Allen & Overy
LLP (Madrid) and Co-rapporteur of the Committee on Intellectual Property and Private International Law of the International Law Association. He was also a member of the European Max-Planck
Group on Conflict of Laws in Intellectual Property and one of the drafters of the CLIP Principles
and Commentary (Oxford University Press, 2013). He is a member of the International Scientific
Advisory Board (Fachbeirat) of the Max Planck Institute for International, European and Regulatory Procedural Law and of the Scientific Council of the European Association of Private
International Law.
Legal Challenges Posed by the Modern-Day
Transportation Services. A Brief Overview
from the Private Law Perspective
António B. Rodrigues
A. B. Rodrigues (✉)
Faculdade de Direito da Universidade de Lisboa, Lisboa, Portugal
e-mail: [email protected]
The initial omission of the legal framework required to assess the validity and requirements of this type of transportation was not intentional. The transportation contract itself is absent from the civil codes of various European jurisdictions, which rely instead on the existing regulation in commercial law. Moreover, when a professional provides this service for non-professional use (i.e. to a consumer), consumer law applies in addition to the general rules of private law.
In broad terms, this kind of passenger transportation is characterised by two main obligations: the obligation to carry and the client's corresponding obligation to pay the price, or fare. The innovation comes from a new subject added to this legal relationship: the operator of the electronic platform (such as the Uber company). In the context of the (re)adjusted trilateral relationship (between customer, driver, and the new platform operator), and in the majority of the models devised, this operator functions, first and foremost, as a business intermediary.
1 Regarding the concept of sharing economy, see, with references, Belk (2014), pp. 1595–1600.
2 See Makela et al. (2018), pp. 1–14; and Solmecke and Lengersdorf (2015), p. 493.
3 About the platform economy, see Ayata (2021), pp. 7–33.
Legal Challenges Posed by the Modern-Day Transportation Services. A. . . 49
The platform operator’s role is clear: it meets the passenger’s transportation need in the initial phase by connecting them with the current offers. It should be noted that the pre-existing service is always provided by a third party (the driver). Nonetheless, the operator’s role appears to go beyond mere intermediation, in that it has a decisive influence on the transportation itself. Among other things, the operator ensures a particular driver’s availability and controls the quality of the service. Fundamentally, it is also the platform operator whom the client pays for the journey carried out by the driver. There is no direct exchange of money between the driver and the client, which is otherwise a fundamental aspect of the established contractual framework.
In the European Union (EU), the intervention of the electronic platform called into question the classification of the business itself as one of transportation. Several TNCs initially sought, at all costs, to avoid being classified as such. The reason is understandable in the context of the EU legislation in force: the transportation of people and goods is subject to economic constraints under the internal law of each EU Member State. Moreover, classification as a transportation business meant that TNC activity would have to comply with a legal framework not entirely harmonised between the several countries.
This legal framework was suspected (as was later established) to be restrictive of their activity. The reason was clear: promoting a model competing with the pre-existing one (serving public transportation, including taxis) required compliance with several legal requirements and constraints. That framework, as was publicly admitted, was not the most conducive to business. In other words, by not being qualified as a transportation service, the activity of the TNCs could take refuge in the desired quality of an information society service,4 thereby avoiding the not-so-clear internal rules of each EU Member State.
If the arguments advanced by the TNCs could prevail, they could act freely and
outside national regulation, albeit with minimal compliance with the EU Treaties
(see articles 56(1) and 58(1) of the Treaty on the Functioning of the European Union;
4 On the concept, see Article 2(a) of Directive 2000/31/EC, referred to in full in the following footnote, which refers in turn to the services mentioned in Article 1(2) of Directive 98/34/EC, as amended by Directive 98/48/EC of the European Parliament and of the Council of 20 July 1998 and since repealed by Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on information society services. See Moura Vicente (2005), pp. 407–435.
50 A. B. Rodrigues
TFEU), as well as the Internal Market Services Directive and the E-Commerce
Directive.5
The claim was not entirely unfounded. It corresponded to the first judgment of the Court of Justice of the European Union (CJEU) in a famous case concerning the sale of contact lenses. In that case (C-108/09, Ker-Optika), the online sale of the medical product in question was classified as an information society service. For this reason, it was allowed to take place in establishments not specialised in medical instruments, contrary to what domestic Hungarian legislation required. The same understanding was confirmed by the same Court in 2019 regarding the operation of platforms for short-term accommodation (C-390/18, Airbnb Ireland UC, et al.). These companies were also classified as providing a mere information society service.
The CJEU did not, however, take the same view of the activities of TNCs. The paradigm shift stems from a fundamental decision of the CJEU of 20 December 2017,6 in itself a significant fact.
In short, a professional association of taxi drivers in Barcelona (Asociación Profesional Elite Taxi) filed a claim against Uber Systems Spain SL before the Barcelona Commercial Court (Juzgado de lo Mercantil no. 3 de Barcelona). The Spanish association claimed that the activities carried on by that company constituted unfair competition under domestic law, breaching the strict requirements for access to the profession of taxi driver (cf. Ley 19/2003 del Taxi and the Reglamento Metropolitano del Taxi). In turn, Uber argued that it provided merely a dematerialised intermediation service (via an electronic application) between non-professional drivers (who even use their own vehicles) and end-users (the clients). The Spanish Court referred the classification of the service to the CJEU under the preliminary ruling procedure (Article 267 TFEU).
The CJEU recognised, essentially based on (1) the role of the platform operator in
the creation of the offer (non-existent otherwise as the drivers are not able to provide
that service alone) and (2) its decisive and shaping influence on the conditions of the
service, that the overall service provided by TNCs incorporates both elements of an
information society service and a transportation one, the latter being, however, the
main one. Other arguments can be put forward, notably those contained in the
Opinion of Advocate General Maciej Szpunar regarding this case (C-434/15),
who said: “Uber actually does much more than match supply to demand: it created
5 See Directive 2006/123/EC of the European Parliament and of the Council of 12 December 2006 on services in the internal market (OJ 2006 L 376, p. 36; cf. the Directive's Recital 21 and Articles 2(2)(d) and 9(1)) and Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (OJ 2000 L 178, p. 1), respectively.
6 Proc. C-434/15, Asociación Profesional Elite Taxi v. Uber Systems Spain SL. The decision is well documented, for instance by Pérez (2018), pp. 53–94.
the supply itself. It also lays down rules concerning the essential characteristics of
the supply and organises how it works”.7
By considering this service as one in the field of transport (Article 58(1) TFEU), the Court determined that it is for the Member States to regulate internally the conditions under which this service is to be provided.
Other decisions followed, notably in France and Portugal.
In France, the previous judgment was upheld in a famous case (C-320/16; Uber
France SAS). It concerned the admissibility of a criminal penalty for anyone who
“organises a system connecting customers and persons providing road passenger
transportation services for consideration with vehicles having less than ten seats,
without having a qualification for that purpose”.
In Portugal, the CJEU maintained its approach when classifying the activity of the Portuguese airline TAP Air Portugal in the proceedings concerning the indirect reprivatisation of its share capital (C-563/17, Associação Peço a Palavra, et al.). The CJEU considered that the activity carried out by TAP did not constitute an information society service but simply a ‘service in the field of transport’.
There have been other attempts to bring the matter before the CJEU again, which proved unsuccessful. This was the case in Belgium (such as the request of the Nederlandstalige rechtbank van koophandel Brussel, rejected by Order of the Court of 27 October 2016; C-526/15, Uber Belgium BVBA vs Taxi Radio Bruxellois NV) and in Germany (see the initial request of the Bundesgerichtshof, later withdrawn, decided by the President of the CJEU on 12 April 2018; C-371/17, Uber BV vs Richard Leipold).
The CJEU judgment prompted a wave of reform of the domestic laws of various EU Member States. The need to adjust the existing legal frameworks to a new reality has resulted in a reformist impetus that cuts across several legal systems.
4.1 France
France pioneered the modernisation of its legal framework. To this end, it benefited
from a Law that allowed it to frame the activity of “chauffeur-driven tourist vehicles”
(CTVs) from the outset. These included the Law on the development and
7 See the Opinion of Advocate General Szpunar delivered on 11 May 2017, § 43, available at: https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:62015CC0434. On this decision, see Hacker (2018), pp. 80–96, and Sousa Ferro (2019), p. 69.
4.2 Germany
The German courts have several times been asked to rule on the lawfulness of Uber's activity, regarding its compliance (or lack thereof) with the Law on the Carriage of Persons (Personenbeförderungsgesetz; PBefG), as well as in matters relating to unfair competition (Gesetz gegen den unlauteren Wettbewerb; UWG). The German administrative courts in Hamburg and Berlin decided that the activity developed by UberPop did not meet the legal requirements for its pursuit.8
Recently, the PBefG was amended by the Act of 16 April 2021 (Gesetz zur Modernisierung des Personenbeförderungsrechts). With this reform, among other aspects, two new paragraphs (1a and 3) were added to the provision on the scope of application of the PBefG.9
4.3 Italy
8 In this regard, see the decisions of the administrative courts of first instance (Verwaltungsgericht) of Hamburg of 27 August 2014 (proc. no. 5 E 3534/14) and of Berlin of 16 September 2014 (proc. no. 11 L 353.14). The higher courts in administrative matters (Oberverwaltungsgericht, OVG) upheld both judgments (see, in this regard, the OVG decision of 24 September 2014 and the OVG Berlin-Brandenburg decision of 10 April 2015). See Wimmer and Weiß (2015), pp. 80–85; König (2020), pp. 11–35.
9 A 2015 ban in Geneva on Uber exercising its activity stands out; Kim Sommer (2015), pp. 116–118. On the matter, see also Ludwigs (2017), p. 1646. From a public law perspective, see also Ingold (2014), p. 3334.
4.4 Spain
In 2013, Spain reframed the activity of car rental with driver (by Law 9/2013 of 4 July, which amended the Ley de Ordenación de los Transportes Terrestres; LOTT): the activity was qualified as one of passenger transport, a requirement of authorisation prior to the start of the activity was created, and a numerus clausus was introduced for access to the profession. Moreover, in 2014, in a controversial decision, the Madrid Commercial Court (Juzgado de lo Mercantil no. 2 de Madrid) suspended, as a precautionary measure, the activity of one of the companies deployed throughout the country. This ban would last until March 2016. Currently, the resumption of transportation through so-called vehículos con conductor (VTC) is manifest, apart from some cities, such as Barcelona. However, in 2015 a new legislative amendment appeared (Real Decreto 1057/2015, of 20 November), this time regarding the regulation implementing the LOTT (Real Decreto 1211/1990, of 28 September), greatly restricting access to and exercise of TNC activity.
4.5 Portugal
Until the approval of Law no. 45/2018, of 10 August, TNC activity in Portugal was not subject to any minimum requirements, without prejudice to those voluntarily
10 See the decision of the Court of Milan of 2 July 2015, N.R.G. 35445/2015 + 36491/2015, and that of the Court of Rome of 7 April 2017, R.G. 25857/2017.
11 See Di Amato (2016), pp. 177–190, and Caruso (2018), pp. 223–264.
assumed by the companies.12 The unfair competition with the taxi sector was
manifest and formed the grounds for the direct recourse to the Courts in the first
instance.
Initially, the Portuguese National Association of Road Transport in Light Motor Vehicles filed an injunction against Uber, Inc., seeking to terminate its activity in Portugal. The association claimed that there was a manifest inequity in its exercise, given the restrictions imposed solely on the taxi sector. The court granted the injunction without a prior hearing of the defendant; subsequently, on 25 June 2015, the same first-instance court of the District of Lisbon upheld the existing precautionary measures. This decision was later confirmed by the Lisbon Appeal Court in its ruling of 27 April 2017. With this decision, the court found that the “non-compliance with Laws of public interest, such as the licensing of the road transportation activity (. . .) generates unfair competition, with related financial damages, in a market that the Legislator wanted regulated in a certain way”. The appeal filed by the operator of the electronic platform was dismissed. Nonetheless, in this context, a legislative reform was necessary.
After an initial draft bill, vetoed by the President of the Republic on 29 April
2018, Law No. 45/2018, of 10 August, came into effect. This Law regulates the
individual and remunerated transportation of passengers in uncharacterised vehi-
cles from an electronic platform (transporte individual e remunerado de
passageiros em veículos descaracterizados a partir de plataforma eletrónica;
TVDE) and it came to establish the legal regime of TVDE activity in Portugal, as
well as that of the electronic platforms that organise the TVDE transport.
The discussion on the appropriate framework for the matter remains open. The legislative delay in regulating the matter continues to be offset by strong intervention of the courts in the various EU Member States. This current dynamic is illustrated by the recent ban on Uber activity in Brussels with effect from 26 November 2021, as well as by the proposal presented by the European Commission on 9 December 2021, in response to appeals by the ILO, among other documents, for a Directive aimed at improving working conditions in work through electronic platforms (see procedure 2021/0414/COD).
This proposal aims to facilitate the identification of the employment nature of the link between the platform operator and the drivers and, consequently, to guarantee workers the right to a minimum wage, to collective bargaining, to the protection of working time, as well as the right to holidays, leave and deductions.
12 The three studies of Morais Carvalho (2015b), pp. 63–65, (2015a), pp. 157–158, and (2018), pp. 1191–1224, are important in this regard. See also Carvalho (2016), pp. 221–238.
Forming the contract underlying the TVDE service is particularly complex. Three
decisive and successive moments may be considered, leading to the conclusion of
the contract: first, access to the electronic platform by the user; second, the negoti-
ating proposals made by the electronic operator; and finally, the user’s agreement.
The contract in question depends on the initial existence of a negotiating proposal
(commonly known as an offer). Electronic means make it possible to customise the
offer the customer is looking for (e.g., determining the destination, the departure
time, the place of pick-up, and the specific vehicle required). In short, it provides the
tool enabling the user to determine which existing offer interests him. The electronic
platform also facilitates the convenient, quick, and transparent indication of the key
elements of the available offers, namely price, estimated duration, suggested itinerary, and the identity of the driver and the vehicle, as well as other possible elements
(e.g., the application of discounts, promotions, gratuities). All these elements typify
the circumstantial nature of the offer of a transportation service. This is not
contradicted by the fact that it occurs based on fully computerized processes using
a mobile application.
When an offer exists (and this may not be the case in certain peak periods, certain
areas, or certain intervals of the day), the user’s acceptance is sufficient for the
contract to be concluded. Moreover, this acceptance is generally carried out through
a final click by the user on the application, whereby the user declares that he accepts
the conditions inherent to the proposal.
The transportation request submitted directly by the user does not constitute a true
business proposal. Rather, it is merely a filtering mechanism for those negotiating
offers which pre-exist on the platform.
This moment is not to be confused with the initial moment the negotiating
proposal is issued, nor, moreover, with the moment at which the transportation
service starts to be provided.13 Regarding this last aspect, it is necessary to define
when the transportation itself commences. As a rule, the carriage is in progress from
the moment the driver accepts a request for transportation, settling the perennial
question of who is to bear the cost associated with the interim period between
acceptance of the request for transportation and user entry into the vehicle, as well
13 With an outstanding explanation of all these steps, see Lacerda Barata (2015), pp. 619–668. In Germany, see Koller (2020), margin numbers 38–41a. It should be noted that the German transportation contract (Beförderungsvertrag) is regulated in several places: the transportation of goods (Güter) is located in the German Commercial Code (§§ 407 et seq. HGB), but the Aviation Act (LuftVG) and the law on the private-law aspects of inland navigation (BinSchG) must also be acknowledged. The transportation of people by land, on the other hand, is governed by the Law on the Carriage of Persons (PBefG); see Heinze/Fehling/Fiedler, in PBefG-Kommentar, § 1 margin numbers 1–4, Canaris (2006), pp. 484 et seq., and Thume, in MüKoHGB, § 407 margin numbers 122–124.
as the admissibility of unilateral cancellation of the service by the client until that
moment.
Nor does the form of this contract appear to be unconstrained or open. The client may only use the TNCs' services through an electronic platform, and this naturally implies the adoption of the written form. After all, the platform operates in the digital environment, and the inherent electronic contracting corresponds to the written form by the legal equivalence expressed in most European jurisdictions.
Furthermore, it is generally necessary for the user to register on the electronic platform through which the service is subsequently subscribed to. This adhesion to a platform precedes (it should be emphasised) the contracting of the service itself and constitutes a legal transaction in its own right. This contract therefore involves only the electronic platform operator and the user. Its function is essential: it instrumentalises the subsequent and possible service indirectly requested by the user (as this is done through the electronic platform) to a driver.
The legislation does not specifically regulate this transaction, nor the activity of TNCs. The transportation service provided is not, therefore, mere contractual performance of the former but an autonomous type of contract.
As we have seen, it is debatable whether this quality (of mere intermediary)
corresponds to the true role of TNCs, considering the view of the CJEU and the
leading doctrine on this matter.14
As far as we are concerned, the possibility of devising a model that is far removed
from the role of pure intermediary is evident. Such a model could be trilateral, and to
that extent, has the necessary complexity (albeit atypical) to encompass the legal
relationships established between the electronic platform operator and the user, as
well as between the platform operator and the driver. Given the legal framework in
force, we cannot fail to assign/attribute the main role of intermediation to the
platform operator, which is a point that certainly justifies a future revision of the
framework and a harmonization of the understanding held in other fields related to
the Sharing Economy.
For this reason, although the electronic contract underlying TNC services is, as a
rule, between a professional and a consumer (B2C; Business-to-Consumer), there
are other contracts, which ought to be classified as electronic contracts between
professionals (B2B; Business-to-Business) and between consumers (C2C; Con-
sumer-to-Consumer).
14 Colangelo and Maggiolino (2017), pp. 14–15.
The activity of TNCs is still at an early stage, at least in comparison with the centuries-old institutions on which contracts in Romano-Germanic legal systems are based.15 We can highlight three especially contentious fields.
As we can observe, several obligations bind each of the parties throughout the
provision of transportation services by TNCs.
It is vital to appreciate the business complexity parallel to the provision of this
transportation service.16 The contract concluded between the TNC and the customer
is not to be confused with other legal ties established directly between subjects other
than the platform operator and the end-user.
Firstly, the driver may work for an entity providing the transportation service. This new operator, for which the driver works (generally a company), is directly linked to the operator of the electronic platform. If it exists, a set of obligations falls upon this new operator, parallel to the main one of transporting (which is borne by the driver), namely the obligation to provide the service while respecting the grounds for refusal, such as the prohibition of unlawful discrimination against potential customers.
The driver is the person who drives the vehicle and, as such, carries out the
material transportation of the user (or the person designated by the user). To be a
15 Regarding the Romano-Germanic legal tradition, see Moura Vicente (2022), pp. 99–229.
16 Antonini (2015), pp. 255–324.
Furthermore, serious doubts arise concerning the framework of the legal relationship established between the driver and any company for which they work. This lack of clarity is one of the main complaints of the industry. In other countries, the matter is unclear because the difficulty in classifying the negotiating relationship arises from the subjects involved: in most countries, it is in the direct relationship established between the driver and the operator of the electronic platform that the difficulty lies.
This is not the case in Portugal, where the relationship between the driver and their employer adds a further element of complexity to this negotiating relationship. Thus, under Portuguese law, the functional link between drivers and platform operators is reduced to a mere “registration with the electronic platform” (Article 10/1 TVDE Law), and the contract (of employment or provision of services) is instead concluded between the driver and the person providing the service, i.e. the
17 Locks (2015), pp. 329–342. On the determination and scope of the law governing the contract (contract statute), see Reithmann and Martiny (2022), pp. 68–174.
TVDE operator itself. To this extent, the solutions existing in other countries are not
(apparently) decisive, such as the Italian legal provision of the employment nature of
the relationship between drivers and platforms (through Decree-Law No. 101 of
3 September 2019), nor, incidentally, the entire omission of the matter in
Spanish Law.
In any case, it is worth noting the proliferation of decisions of the higher courts in France and the United Kingdom towards recognition of the employment nature of the bond (again, between platforms and drivers). Let us look at these decisions. In France, we highlight the decision of the Cour de Cassation (Labour Chamber) of 4 March 2020 (proc. no. 19-13.316): the Court upheld the understanding reached in 2018 as to the employment nature of the relationship between a digital platform management company and its couriers (Cour de Cassation, Labour Chamber, 28 November 2018, no. 17-20.079). In the United Kingdom, the decision on point is that of the Supreme Court ([2021] UKSC 5).
The same approach has been adopted in certain European cities, such as Geneva and Brussels, where the courts' qualification of the relationship between TVDE drivers and platform operators as one of employment is clear. It is also worth mentioning the application in Spain of the “Rider Law” (Royal Decree-Law 9/2021 of 11 May), which, in force since August 2021, recognised worker status for all those who make deliveries through digital platforms and, as a result, triggered an extraordinarily strong wave of dismissals.
It is unsurprising to learn of the most recent EU initiative in this area: the
European Commission has proposed unifying regulation of the matter through a
new Directive.
7 Final Remarks
The legal regime for TNC services is new. The system is based on the dematerialisation of the activity. Problematic aspects remain, such as the heavy system of liability of the various agents, as well as the initial classification of the bonds established between the parties. The most recent initiatives on this matter, both national and international (with emphasis on the proposal for a new Directive on this matter), seek to address these aspects.
References
Antonini A (2015) Corso di Diritto dei Trasporti, 3rd edn. Giuffrè, Milan
Ayata Z (2021) A conceptual overview of legal challenges posed by Uber. In: Ayata Z, Önay I (eds)
Global perspectives on legal challenges posed by ridesharing companies. A case study of uber.
Springer, Singapore, pp 7–33
Belk R (2014) You are what you can access: sharing and collaborative consumption online. J Bus
Res 67:1595–1600
António B. Rodrigues is a Guest Lecturer at the Faculty of Law of the University of Lisbon and a researcher at the Centre for Research in Private Law (CIDP/FDUL). He is a lawyer in Lisbon.
The Regulation of Content Moderation
Abstract Online platforms have become a key infrastructure for creating and
sharing content, thus representing a paramount context for the individual/collective
exercise of fundamental rights (e.g., freedom of expression, association) and the
realisation of social values (citizens’ information, education, democratic dialogue).
At the same time, platforms offer new opportunities for unfair or harmful behaviours, such as the unauthorised distribution of copyrighted content, privacy violation, unlawful content distribution (e.g., hate speech, child pornography), and fake
news. To prevent or at least mitigate the spread of such content, online platforms
have been encouraged to resort to content moderation. This activity uses automated
systems to govern content flows to ensure lawful and productive user interactions.
These systems deploy state-of-the-art AI technologies (e.g., deep learning, NLP) to
detect prohibited content and restrict its further dissemination. In this Chapter, we
will address the use of automated systems in content moderation and the related
regulatory aspects. Section 2 will provide a general overview of content moderation
on online platforms, focusing mainly on automated filtering. Further, Sect. 3 will
describe existing techniques for automatically filtering content. Section 4 will
discuss some critical challenges in automated content moderation, namely vulnera-
bility, failures in accuracy, subjectivity and discrimination. Furthermore, Sect. 5 will
define some of the steps needed to regulate moderation. Finally, in Sect. 6, we will
review existing legislation that addresses content moderation in online
environments.
F. Galli (✉)
University of Bologna, Bologna, Italy
e-mail: [email protected]
A. Loreggia
University of Brescia, Brescia, Italy
e-mail: [email protected]
G. Sartor
European University Institute and University of Bologna, Bologna, Italy
e-mail: [email protected]
1 Introduction
The volume of content that flows through the Internet every minute is increasing at a
speed that was unimaginable only a few years ago. Every single minute, Facebook
analyses more than one million authentications; Twitter users publish roughly
200 thousand tweets; almost 69 million messages are sent via Messenger and
WhatsApp; and approximately 500 hours of video content are uploaded onto YouTube. And
these are just some numbers related to the year 2021. In a study, Cisco describes how
both the number of users and the content traffic on the Internet are predicted to grow
over the next few years: compared with 2018 figures, the percentage of connected
users is expected to grow by about 15% by the end of 2023 and the number of
devices per person by 50%, while the speed of Internet connections will more than
double.
Content platforms and social media services are largely responsible for this
incredible upsurge in content creation and sharing. These platforms allow users to
share information in real-time, easily, free of charge or at a very low cost, and most
importantly, potentially reach a huge audience.1 By giving users the possibility to
communicate and share information, platforms have become a paramount context
for the individual or collective exercise of fundamental rights, such as freedom of
expression and association, while also contributing to the realisation of social values
(citizens’ information, education, and democratic dialogues).2 At the same time,
however, online platforms offer new opportunities for unfair or harmful behaviours.
On the one hand, problems may emerge in relation to the unauthorised publication
and distribution of copyrighted content and with regard to violations of privacy and
data protection. On the other hand, further problems may relate to the distribution of
unlawful content. This unlawful or antisocial informational behaviour includes
multiple activities such as hate speech, child pornography, incitement to commit
crimes, libel or slander, as well as the distribution of fake news and content
expressing incivility and aggression or cruelty against humans and animals.
To prevent unlawful and harmful online behaviours, or at least to mitigate their
effects, online platforms have been pressured to resort to content moderation. This
activity consists of organising and governing content flows to ensure lawful and
productive interaction between users.3
Content moderation today is primarily, but not exclusively, carried out through
automated means. Indeed, given the vast amount of information, human moderation
is inadequate, and automated filters are needed in order to monitor online content
that is uploaded onto digital platforms and detect (potentially) unlawful and abusive
content. Techniques developed in machine learning and in AI (such as deep learn-
ing) are particularly effective in categorising unwanted content. However, their use
1 Zhang and Ghorbani (2020).
2 Francesco et al. (2022).
3 Grimmelmann (2015).
also presents risks, as it may lead to the exclusion of valuable content and may affect
freedom of expression, access to information and democratic dialogue.
The use of automated content moderation ought therefore to be accompanied by
regulatory measures. In particular, these should aim at achieving two parallel and
often conflicting objectives: (a) preventing, limiting and mitigating as much as
possible the individual and social harm that can be caused by unlawful or inappro-
priate content and activities, while (b) allowing and even facilitating the delivery of
beneficial content as well as accessible and civil interaction. Also considering this
delicate balance, the European Commission has announced a set of initiatives to
strengthen the European market and promote innovation and competition. Among
these initiatives, two proposals—namely, the Digital Services Act (DSA) and the
Artificial Intelligence Act (AIA)—are relevant for the governance of automated
content moderation. In particular, the DSA aims to align the old e-commerce
directive with the new technological, economic and social context in which ISPs
operate today and contains specific rules on content moderation.
In this chapter, we will address the use of automated systems in content moder-
ation and the related regulatory aspects. In Sect. 2, we will provide a general
overview of content moderation on online platforms, focusing mainly on automated
filtering. Further, Section 3 will include a description of existing techniques for
automatically filtering content. Section 4 will discuss some of the critical challenges
in automated content moderation, namely vulnerability, failures in accuracy, and
subjectivity and discrimination. Furthermore, Section 5 will define some of the steps
needed to regulate moderation. Finally, in Sect. 6, we will review existing legislation
that addresses content moderation in online environments.
2 Content Moderation on Online Platforms

The dynamics of online communities have shown that the Internet does not function
as a perfect “marketplace of ideas,” where true claims, valuable ideas, and pro-social
content (“goods”) naturally tend to prevail over the competition of fake news, bad
memes, and socially obnoxious content.4 It is true that in many cases, wrong ideas
and obnoxious behaviour (“bads”) can be effectively met with appropriate criticism,5
but this is not always the case.
4 The notion of a marketplace of ideas, made famous by John Stuart Mill, gained currency in the law owing to US Justice Oliver Wendell Holmes, who in his dissenting opinion in Abrams v. United States (1919) argued that the “ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market.”
5 This idea was affirmed by another famous US Justice, Louis Brandeis, who in Whitney v. California (1927) affirmed that against falsehoods and fallacies, “the remedy to be applied is more speech, not enforced silence.”
In recent years, the circulation of abusive user-generated content has come under
the spotlight. Online content-sharing platforms are increasingly used to deliver
so-called “information bads” rather than “information goods”, i.e. content
that has a negative social value, doing more harm than benefit. Online abusive
behaviour can consist of informational behaviour that is unlawful or anti-social in
various respects: it may involve violating the privacy or reputation of participants or
third parties, promoting violence, racism, or terrorism, sharing violent or child
pornography, etc. Broadly understood, it may also consist of the violation of any
kind of legally protected rights, including those rights whose legitimacy and scope
are controversial, such as intellectual property rights.
Note that abusive content is not limited to what is prohibited by the law: the
concept of “information bads” needs to be contextualised to the participants in an
online community, to their interests and expectations. For instance, the distribution
of specific images of nudity may be permissible by law but still negatively affect
communication within specific online communities. Similarly, the use of uncivil,
rude, or aggressive language (which does not reach the threshold required for
slander, libel, or prohibited hate speech) may be deleterious to the smooth interaction
within many online communities. The fact that an item of speech is legally permit-
ted, and rightly so, as a matter of general laws, does not entail that distributing this
item is valuable in the context of a particular community or that its distribution in
that community is beneficial to society. Often, online platforms specify in general
terms what counts as unwanted speech in their terms of service, i.e. the unilaterally
drafted contracts that users accept in order to use the platforms. According to such
contracts, users commit themselves to comply not only with the law but also with the
requirements established by the platforms.6
To maintain the benefits provided by users’ productive engagement while
addressing the negative impacts, many platforms, whether major or minor, have
resorted to automated content moderation. For example, Facebook declared that
more than 95% of hate speech is taken down by automatic content moderation.7
Facebook has also developed specific algorithms able to detect almost all spam
content and terrorist propaganda.8 Similarly, YouTube declared that in 2019
more than 30 million videos were removed from the platform for violating the
platform's guidelines; 2.7 billion bad ads were removed, and 1.2 million accounts
were removed for violating YouTube’s ads policies.9 Facebook and YouTube are
only examples. The size of the market of content moderation solutions is expected to
reach 12.7 billion dollars in 2025.10 Social media giants like Facebook and Google
have developed their own AI-based content moderation systems. Nevertheless,
6 Leerssen (2015).
7 Business Insider (2021), available at: https://www.businessinsider.com Accessed 15 July 2022.
8 PC Magazine (2020), available at: https://www.pcmag.com Accessed 15 July 2022.
9 YouTube, Information quality & content moderation, available at https://blog.google Accessed 15 July 2022.
10 MarketWatch, available at https://www.marketwatch.com Accessed 15 July 2022.
companies exist which provide customers with third-party digital content moderation
solutions, including Amazon, Microsoft, Accenture, and Appen. Most of
these systems still require human intervention: YouTube, for example, has expanded
its workforce to 10,000 moderators, though their intervention is limited to
managing complaints and addressing the most problematic cases. Facebook
employs over 15,000 moderators, most of whom are contracted through third-party
companies.11
Indeed, the activity of content moderation may be performed only by humans or
only by automated algorithmic tools, but most often by a combination of the two. For
instance, individual bloggers usually engage manually with the contributions posted
on their blogs and decide what to maintain or delete. On the other hand, automated
moderation is used on large platforms, in particular, to prevent the posting of
copyrighted works. In many other cases, moderation is entrusted to a complex
socio-technical system, including a multi-stage combination of humans and
machines that interact in complex ways. For example, an automated system may
classify certain items as definitely harmful, which leads to the automated removal of
such items, or as definitely not harmful, which leads to the items being made
available online. In cases where the classification is uncertain, i.e. where items are
automatically classified as potentially harmful, such items may be subject to subse-
quent human review, according to which they can be filtered in or filtered out.
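The multi-stage pipeline just described can be sketched in a few lines of code. The following Python fragment is a purely illustrative assumption on our part (the thresholds, scores and labels are invented), not the implementation of any actual platform:

```python
# Sketch of a hybrid moderation pipeline: items whose harm score is
# clearly high are removed automatically, clearly low scores are
# published, and the uncertain middle band is routed to human review.
# The thresholds (0.9 / 0.1) and the scores are illustrative only.

def route(harm_score, high=0.9, low=0.1):
    """Return the action for a content item given its harm score."""
    if harm_score >= high:
        return "remove"        # definitely harmful: automated removal
    if harm_score <= low:
        return "publish"       # definitely not harmful: made available
    return "human_review"      # uncertain: queued for a moderator

queue = [("post-1", 0.97), ("post-2", 0.03), ("post-3", 0.55)]
decisions = {item: route(score) for item, score in queue}
print(decisions)
# {'post-1': 'remove', 'post-2': 'publish', 'post-3': 'human_review'}
```

In practice the scoring function would itself be a trained classifier, and the thresholds would be tuned to balance the costs of over- and under-removal.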
Content moderation may also involve taking actions other than filtering an item,
i.e. rejecting or admitting it to the platform. It may also involve editing and altering,
prioritising and hiding, or combining information from individual user contributions.
All these activities may rely on centralised moderation (such as in Facebook), which
is applied top-down by the platform itself generally according to uniform policies, or
bottom-up through decentralised filtering performed by multiple users acting as
moderators (such as in Reddit).12 Platforms may also rely on so-called “trusted
flaggers”, i.e. individuals or entities with particular expertise or interest in
a given field, to whom the platform may attribute the role of signalling
harmful content within their communities.
Content moderation may be reactive, i.e. take place after an issue with a content
item has been signalled by users or trusted flaggers. On the other hand, it may be
proactive, i.e. rely on the active search of potentially abusive content. Proactive
moderation may even take place before the content becomes available to users on the
platforms. This kind of moderation may be found in small, moderated groups (e.g. a
comment section in a blog) where a moderator reads all posts before making them
available to the readers. It may also be adopted in instances of filtering that are
for the most part automated, such as the detection of clear violations of copyright
(the posting of an entire copyrighted file).
11 Leskin (2021).
12 New America, Case Study: Reddit, available at https://www.newamerica.org Accessed 15 July 2022.
Reactive filtering has the advantage of not delaying the publication of materials
and not overburdening moderators. The obvious disadvantage is that harmful mate-
rial remains available until it is removed.
3 Techniques for Automated Filtering

The technology used in content moderation strongly depends on the media in which
the content is expressed (text, audio, images and video, or their combinations) and
varies from simple approaches to very complex AI techniques. The latter generally
rely on machine learning13 techniques able to extract information from a set of
samples, this information then being used for further predictions.
Machine learning techniques are usually classified into three main categories that
differ in the way information is extracted and used: supervised learning,
unsupervised learning, and reinforcement learning. For the purpose of automatic
filtering, techniques from the first two categories are usually employed. Supervised
learning comprises the techniques that train machines to recognise and
select an output class to be assigned to the input data by showing them correctly
labelled examples. Unsupervised learning algorithms are models trained to recognise
recurring patterns in the input data without labelled examples. These patterns are
helpful for classifying and grouping the data into subsets that share common structures.
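A toy illustration of the supervised setting may help. The following Python sketch stores labelled examples and assigns to a new sample the label of its nearest stored neighbour; the features (a capitalisation ratio and an insult count) and the labels are invented for illustration:

```python
# Toy illustration of supervised learning: "training" stores labelled
# examples (feature vectors), and prediction assigns the label of the
# nearest stored example (1-nearest-neighbour). Features and labels
# are invented for illustration.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, sample):
    """Return the label of the training example closest to sample."""
    _, label = min(train, key=lambda ex: distance(ex[0], sample))
    return label

# (caps_ratio, insult_count) -> label
train = [((0.9, 3), "abusive"), ((0.8, 2), "abusive"),
         ((0.1, 0), "ok"), ((0.2, 0), "ok")]

print(predict(train, (0.85, 2)))  # abusive
print(predict(train, (0.05, 0)))  # ok
```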
Alongside these classical machine learning methods, recent years have seen an
increasingly widespread use of deep learning techniques. These are based on neural
networks with a very large number of parameters and can achieve extremely high
performance in many domains, especially where the goal is to identify
repetitive patterns in the data and then use them to drive the prediction. In
moderation, neural networks are usually employed in textual or multimedia pattern
recognition, typically in order to classify new content as appropriate or inappropriate
on the basis of its similarity to content already classified in one way or another.
Based on these available technologies, several techniques may be used to engage
in content moderation and filtering.
Some techniques are based on relatively simple deterministic methods, as in the case
of metadata and hash filtering.
Metadata is information that describes input at a very high level. Examples of
metadata are the title of a book, its author, the length of a piece of music, and so
on. In general, metadata is all the information that accompanies content when it is
distributed.
13 Michalski et al. (2013).
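The hash-filtering approach mentioned above can be illustrated with a minimal sketch. The example below uses an exact cryptographic hash (SHA-256) purely for illustration; production systems typically rely on perceptual hashes such as PhotoDNA, which also survive re-encoding, whereas an exact hash is defeated by changing a single byte:

```python
# Minimal sketch of hash-based filtering: a known-bad file is stored as
# a cryptographic digest, and new uploads are rejected when their digest
# matches. An exact hash, as used here, is defeated by changing a single
# byte; perceptual hashing is more robust.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

blocklist = {digest(b"known illegal file contents")}

def allowed(upload: bytes) -> bool:
    return digest(upload) not in blocklist

print(allowed(b"known illegal file contents"))   # False: exact match
print(allowed(b"known illegal file contents!"))  # True: one byte changed
```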
The textual contents available online include literary productions (e.g. reports,
articles, books) and messages published or exchanged online. At times the illegality
or inappropriateness of content can be determined simply by comparing text
sequences from the content with other sequences accessible to the system. This is
the case for copyright violations of textual documents that usually reproduce
portions of text. In other cases, an analysis of the text and of the language used,
aimed at identifying the meaning conveyed, is necessary, for example to identify
bullying messages, hate content, or racism.
The most straightforward approach is blacklisting, that is, creating and maintaining
datasets of text forms that represent the unwanted content one wishes to
identify in the text. If such content appears in a new message, the offending
message may be rejected, deleted, or reported. This is the first level of control
generally applied in social networks and in some blogs, and it enables the blocking
of certain explicitly offensive content. Facebook, for example, gives page
administrators the capability to apply different filter levels.
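A minimal sketch of such first-level blacklist control, with invented terms and messages, might look as follows:

```python
# Sketch of first-level blacklist moderation: a message is flagged when
# it contains any term from a maintained list. Terms and messages are
# placeholders.
import re

BLACKLIST = {"badword", "slur1"}

def flag(message: str) -> bool:
    """Flag a message if any token matches a blacklisted term exactly."""
    tokens = re.findall(r"[a-z0-9]+", message.lower())
    return any(tok in BLACKLIST for tok in tokens)

print(flag("this contains a BADWORD indeed"))  # True
print(flag("a perfectly civil message"))       # False
print(flag("b@dword evades the exact match"))  # False: the limit noted below
```

The last call already shows the weakness discussed in the text: a trivial character substitution defeats the exact match.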
This very simple form of control, however, is of limited application, since it
requires an exact match of textual sequences. Natural Language Processing (NLP) is a
discipline that studies and develops techniques to enable the computer to cope with
natural languages, such as English or Italian. NLP techniques enable the develop-
ment of tools for the representation and management of syntax, grammar, and
semantics of a language. In the moderation of textual contents, NLP techniques
become indispensable when the simple search for occurrences in the text is no longer
sufficient. Such techniques are, for example, necessary for the automatic identifica-
tion of hate content.
It is highly demanding to provide a formal definition of incitement to hatred as it
is difficult to identify it by comparing individual words or sentences. NLP tech-
niques can help extract the content and opinion conveyed in a text through a more
complex syntactic and semantic analysis. In the context of natural language
processing, the evolution and development of machine learning techniques have
had a widespread and substantial impact. The application of neural networks to the
classification of opinions14 and the detection of incitement to hatred have led in
recent years to remarkable results.15
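As a hedged illustration of the statistical text classification on which such NLP systems build, the following sketch trains a tiny Naive Bayes model with Laplace smoothing on an invented toy corpus; real moderation systems use deep neural networks trained on far larger data:

```python
# Hedged sketch of statistical text classification: a tiny Naive Bayes
# model with Laplace smoothing trained on an invented toy corpus.
import math
from collections import Counter

train = [("i hate you all", "toxic"), ("you are garbage", "toxic"),
         ("have a nice day", "ok"), ("thanks for the help", "ok")]

counts = {"toxic": Counter(), "ok": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = {w for c in counts.values() for w in c}

def score(text, label):
    """Log-probability of the text under the given class."""
    c, total = counts[label], sum(counts[label].values())
    s = math.log(0.5)  # equal class priors in this toy corpus
    for w in text.split():
        s += math.log((c[w] + 1) / (total + len(vocab)))  # Laplace smoothing
    return s

def classify(text):
    return max(("toxic", "ok"), key=lambda lab: score(text, lab))

print(classify("i hate garbage people"))  # toxic
print(classify("have a nice help"))       # ok
```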
NLP employs complex types of deep neural networks: from convolutional neural
networks16 (models capable of considering the proximity of inputs, such as pixels of
an image, and extracting corresponding information) to recurrent neural networks17
(i.e. networks capable of processing input sequences and considering the order in
which information appears during processing).
14 Deriu and Cieliebak (2016).
15 Fortuna and Nunes (2018).
16 LeCun et al. (1998).
17 Schmidhuber (1989).
18 Google (2022), available at: https://support.google.com Accessed 15 July 2022.
race, gender, sexual orientation etc.) or that are intended to combat expressions of
hate. At the same time, the automatic identification of unwanted memes is particu-
larly difficult because the meaning of memes derives from their combination of
different media and requires decoding contextual and socio-cultural references.
Finally, other multimedia content (such as video) can also be subject to automated
moderation. For example, Facebook has developed a machine learning approach,
called Whole Post Integrity Embeddings (WPIE), to address content violating
Facebook guidelines. The system addresses multimedia content by providing a
holistic analysis of a post’s visual and textual content and related comments across
all dimensions of inappropriateness (violence, hate, nudity, drugs, etc.). It is claimed
that automated tools have improved the enforcement of Facebook's content guidelines:
for instance, about 4.4 million items of drug sale content were removed
in the third quarter of 2019 alone, 97.6% of which were detected proactively.
4 Failures in Moderation
Machine learning techniques are usually evaluated using some accuracy metrics that
measure the number of samples that the model correctly classifies. These samples are
termed true positives (e.g. harmful samples that are correctly classified as dangerous)
or true negatives (e.g. non-harmful samples that are correctly classified as safe
content). It is also possible for a machine learning system to misclassify samples
and commit errors. This may occur when the model generates false positives
(e.g. non-harmful content being classified as dangerous) or false negatives
(e.g. harmful content being classified as safe content). The confusion matrix is a
standard way to visualise the performance of a model. Figure 1 reports an example of
a confusion matrix for a binary classification task: each cell reports the number of
samples that are true positive, true negative, false positive, or false negative.
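The counts in a confusion matrix can be computed directly, as in the following illustrative sketch (labels and predictions are invented):

```python
# Worked example of the evaluation metrics described above: counting
# true/false positives and negatives for a binary "harmful vs safe"
# classifier. Labels and predictions are invented.
truth = ["harmful", "safe", "harmful", "safe", "safe", "harmful"]
pred  = ["harmful", "safe", "safe",    "safe", "harmful", "harmful"]

tp = sum(t == p == "harmful" for t, p in zip(truth, pred))  # correct removals
tn = sum(t == p == "safe" for t, p in zip(truth, pred))     # correct publications
fp = sum(t == "safe" and p == "harmful" for t, p in zip(truth, pred))
fn = sum(t == "harmful" and p == "safe" for t, p in zip(truth, pred))

accuracy = (tp + tn) / len(truth)
print(f"TP={tp} TN={tn} FP={fp} FN={fn} accuracy={accuracy:.2f}")
# TP=2 TN=2 FP=1 FN=1 accuracy=0.67
```

Here a false positive corresponds to wrongly removed lawful content, a false negative to harmful content left online.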
Developing a sound and reliable system means minimising the number of errors it
makes. To achieve this goal, machine learning techniques used for filtering and
implementing moderation systems need a significant amount of training data with
specific characteristics, one of the main characteristics being the representativeness
of the domain addressed. If some categories are under-represented or over-
represented in the training set, then the tool may be unable to recognise items of a
specific category. This would affect the accuracy of the tool, leading to the removal
of items that are not harmful or, conversely, to the publication of items that should
instead be banned.
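The following sketch illustrates why representativeness matters: on data where harmful items are rare, a degenerate classifier that never flags anything still attains high accuracy while detecting nothing (the numbers are invented):

```python
# Illustration of misleading accuracy under class imbalance: harmful
# items are under-represented, so a classifier that always answers
# "safe" scores 95% accuracy while catching no harmful item at all.
labels = ["harmful"] * 5 + ["safe"] * 95   # 5% harmful: under-represented

predictions = ["safe"] * 100               # classifier that never flags

accuracy = sum(t == p for t, p in zip(labels, predictions)) / len(labels)
recall_harmful = 0 / 5                     # no harmful item is detected
print(accuracy)        # 0.95: looks good, but the filter is useless
print(recall_harmful)  # 0.0
```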
For some kinds of content, finding datasets with appropriate characteristics is not
a problem. This is the case when the categories are well-known and parameters for
assessing data items are clear. Examples come from the copyright-infringement area
and from child sexual abuse material. In such cases, it is usually clear whether an item
should be classified as infringing or not. For some other kinds of content, such as
violence, this distinction is often not easy. We provide two examples that illustrate
the complexity and the stakes of this task.
On 15 March 2019, two mass shooting attacks took place in Christchurch,
New Zealand, during prayers at mosques. The terrorist who carried out the attack
live-streamed the mass murder of worshippers using a camera fixed to his helmet.
The video was searchable on YouTube, Facebook and other platforms: neither
automatic tools nor human moderators could detect the video and block it before it
was made available online.19 On the other hand, YouTube came into the spotlight
when it removed thousands of videos on atrocities in Syria. Those documents could
be used as evidence of crimes committed against humanity during the war, and their
removal could potentially jeopardize future war crime prosecutions.20 The videos at
issue in these examples had highly similar visual content, but they were intended to
transmit very different messages. To detect such differences in meaning is a complex
task. As curious examples in which automated classification failed to correctly
identify certain items, we can mention the removal of the image of Copenhagen’s
Little Mermaid statue for breaking nudity rules21 as well as the prohibition of a photo
of the Neptune statue in Bologna as explicitly sexual.22
19 Lapowsky (2019).
20 Browne (2017).
21 Bolton (2016).
22 Delgrossi and Said-Moorhouse (2017).
The term “ground truth” refers to the correct result, as opposed to the result proposed
by a machine learning system. In online filtering, the ground truth is not
provided by an objective standard but rather by human assessments of whether a given
item falls into a category of unwanted content according to laws, guidelines, or social
norms. The interpretation is therefore subjective and linked to individual evaluators.
In fact, human evaluations constitute the training set of automatic classifiers and
provide the basis for evaluating and correcting the results of these classifiers. Thus,
the operation of an automatic filtering system will reflect the attitudes, and perhaps
the prejudices, of the human beings whose behaviour the system is intended to
emulate.
The standard for making content accessible to the public varies according to the
social and cultural fabric: what is acceptable in one community (real or virtual) may
not be acceptable in another. This reflects the territoriality of the law: international
platforms must comply with various national laws, under some of which certain
content is legal while under others the same content is illegal. However, moderation
systems work best if they apply the same standards regardless of the origin and
dissemination of the content. This creates a situation where platforms tend to apply
the strictest rule, which leads to excessive removal of content even in cases where
such content is not illegal.
Additionally, virtual communities may have divergent opinions and standards on
what may be considered appropriate or inappropriate. In these cases, filtering can
raise issues of unfair discrimination, to the extent that content provided
by, or about, certain groups can be excluded or deprioritised. Content filtering can
also lead to the identification of individuals who have posted unwanted material,
affecting the privacy and data protection rights of those individuals. Filtering can
therefore be used to identify political opponents, as has happened in China and
Egypt, thereby damaging freedom of expression and association.
4.3 Vulnerability
Filtering algorithms can be the target of specific attacks depending on the technology
used. As noted above, blacklisting technologies can be circumvented by applying
small changes to the content (for example by replacing part of an expression or by
using emojis) to avoid a match to an entry in the blacklisted database. These changes
can be done automatically through specialised tools.
More complex filtering algorithms require correspondingly more complex attack
techniques. Adversarial attacks constitute the latest generation of
attempts to defeat automatic filters. In this approach, two interacting systems
(a discriminator and a generator) compete, each aiming to deceive the other.
The generator tries to produce messages that are incorrectly classified by
the discriminator (for example, fake news or reviews classified as genuine, or spam
messages classified as non-spam), while the discriminator seeks to correctly classify
all the inputs.
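The circumvention of a naive blacklist by small automated substitutions, as described above, can be sketched as follows (the substitution table and the terms are illustrative):

```python
# Sketch of an evasion attack: small automated character substitutions
# preserve a message's meaning for human readers but break the exact
# match a naive blacklist relies on. Table and terms are illustrative.
BLACKLIST = {"spam"}
SUBSTITUTIONS = {"a": "@", "s": "$"}

def blocked(message: str) -> bool:
    return any(term in message.lower() for term in BLACKLIST)

def perturb(message: str) -> str:
    """Automatically rewrite a message to slip past the filter."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in message)

msg = "buy my spam product"
print(blocked(msg))           # True: caught by the blacklist
print(blocked(perturb(msg)))  # False: "$p@m" no longer matches
```

A learning-based generator, as in the adversarial setting described above, automates the search for such perturbations against more complex filters.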
5 Moderating Moderation
As filtering systems are vulnerable, fallible and inherently subjective, their possible
failures should be anticipated, and mistakes should be identified and remedied. In
this respect, secondary regulation of online speech may require not only introducing
laws on the liability of intermediaries in relation to transmitted content but also the
governance of moderation systems which are intended to filter out illegal and
harmful content. A combination of procedural and substantive legal measures
could be adopted to ensure that moderation systems do not unduly restrict lawful content.
23 We can recall the Santa Clara Principles, the Corporate Accountability Index, and the Electronic Frontier Foundation principles on filtering.
period, Twitter suspended a total of 453,754 unique accounts for violations related to
child sexual exploitation. Of those unique suspended accounts, 91% were flagged by
a combination of technology (including PhotoDNA and internal, proprietary
tools).24 Moreover, Twitter actioned 27,935 unique accounts for violations related
to COVID-19 misleading information, in addition to automatically removing 33,761
content items. These numbers suggest that a hybrid approach to content moderation
is emerging as the preferred solution for mitigating some of the aforementioned
issues.
Automatic and hybrid content moderation is also employed by YouTube, where
content moderation is applied centrally, both ex ante and ex post. According to
YouTube's Transparency Report, during the first quarter of 2020 YouTube moderators
manually flagged and removed 399,422 videos from the website, while
automatic procedures flagged and removed 5,711,586 videos.25 This is more than
14 times the number of videos flagged by humans. Apart from the difference in
numbers, it is interesting to note that the automatic procedure makes it possible
to remove 53% of videos before they are viewed even once, and 28% of videos
with at most 10 views. Moreover, YouTube removes potentially harmful
comments: from January to March 2020 the platform flagged 693,579,605 comments
as potential spam.
In its transparency report, Google indicates the number of links (URLs) requested
to be delisted from the results of its search engine. The number is enormous: more
than 4 billion URLs26 have been requested to be delisted upon notification due to
copyright infringements. Unfortunately, the webpage does not report whether the
delisting is performed automatically or manually. It seems plausible that a hybrid
approach is adopted.
24 Twitter (2022), available at https://transparency.twitter.com Accessed 15 July 2022.
25 YouTube (2022), available at: https://transparencyreport.google.com Accessed 15 July 2022.
26 Google (2022), https://transparencyreport.google.com Accessed 15 July 2022.
removal, presenting their reasons against filtering out their posts. In both cases, the
simplicity of the procedure is key, as is a timely response by moderators.
For example, YouTube provides a simple mechanism allowing users to appeal
against the removal of their posts.27 In its transparency report, YouTube gives
evidence of the scope of this procedure: during the last quarter of 2021, a total of
3,754,215 videos were removed for violating the Community Guidelines, of
which 3,451,691 were removed automatically. 213,346 of these removals were appealed.
The appeal requests were reviewed by a human senior moderator, who can uphold or
reverse the automated decision. Of the reported appeals, a total of 43,331 (approximately
20% of the received appeals) were successful, leading to the reinstatement of
the videos.
Similarly, Twitter users can submit a form to contest a suspended or blocked
account.28 The procedure is manually validated and usually takes up to 7 days to
complete. Unfortunately, the process lacks transparency, as the number of filed
appeals is not mentioned in the transparency report, nor is the number of processed
or reinstated accounts or tweets. Instead, the report describes the number of tweets
and accounts that are withheld.29 As reported by the platform “Many countries,
including the United States, have laws that may apply to Tweets and/or Twitter
account content. [...] if we receive a valid and properly scoped request from an
authorised entity, it may be necessary to withhold access to certain content in a
particular country from time to time.” In this regard, in comparison with the previous
reporting period, Twitter received around 67% more requests to remove content,
these requests originating from 49 different countries.30
27 YouTube (2022), available at: https://support.google.com Accessed 15 July 2022.
28 Twitter (2022), available at: https://help.twitter.com Accessed 15 July 2022.
29 Twitter (2022), available at: https://transparency.twitter.com Accessed 15 July 2022.
30 Twitter (2022), available at: https://blog.twitter.com Accessed 15 July 2022.
The Regulation of Content Moderation 77
One might wonder whether allowing the owners of online platforms the power to
discreetly filter or remove any content they object to, for whatever reason, may result
in an excessive restriction on freedom of expression. In fact, users who participate in
online platforms usually sign terms of service that grant platforms broad powers to
block or remove content. Users usually do not actually read such contracts, for a
variety of reasons: reading contracts requires considerable effort, fully understanding
them requires specialist skills, and even when they identify unfair clauses, users
would accept them anyway, given the lack of alternatives to the main online services
and the impossibility of negotiating with the providers of such services.31
We may ask whether clauses which grant suppliers arbitrary power of removal
can be considered unfair under the Unfair Terms of Contract Directive.32 According
to Article 3 of that Directive, a clause is unfair when “contrary to the requirement of
good faith” it causes a significant imbalance in the rights and obligations of the
parties resulting from the contract to the detriment of the consumer. Probably, this
could be the case for clauses that give suppliers the power to filter or remove any
content posted by a user without specifying the conditions for the exercise of this
power.
Unfortunately, neither abstract standards nor precise rules provide a complete
solution to the problem of identifying illegal or inappropriate content. While
standards may fail to provide adequate guidance to decision-makers and lead to
capricious or unfair behaviour, precise rules may be under- or over-inclusive: their
application could leave malicious content online while blocking harmless or
valuable communications.
31. Richards and Hartzog (2018).
32. Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, OJ L 95, 21.4.1993.
In the last 20 years, the basic legal framework for content moderation has been
provided by the 2000 e-Commerce Directive.33 Actually, the Directive does not
directly deal with content moderation but with the extent to which ISPs can be
considered liable for illegal content created by the recipient of the service.
33. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market, OJ L 178, 17.7.2000.
As is well known, Articles 12–14 provide different criteria depending on the nature of the
ISP, i.e., whether it provides mere conduit, caching or hosting. In the case of hosting,
it has often been argued that only “passive hosting” is covered. In accordance with
this idea, intermediaries that store and make accessible user-generated materials, but
who also organise these materials, index them, link them to ads, remove unwanted
items, etc., should not enjoy the protection that is granted to hosting services. This
approach has been followed by the case-law of various European countries, though
not by the EU Court of Justice, which still ties the loss of the liability exemptions
to knowledge of, or control over, specific items of content.34
A key provision of the e-Commerce Directive which touches upon content
moderation obligation, however, is Article 15. Intermediaries may be ordered by
competent authorities to terminate or prevent infringements by their users, but they
may not be put under any “general obligation to monitor the information which they
transmit or store” nor to “actively seek facts or circumstances indicating illegal
activity”. Thus, on a literal reading of the e-Commerce Directive, it seems that
content moderation and automatic filtering to detect unlawful content cannot be
imposed by law. Indeed, a legal norm requiring platforms to filter all incoming
information would establish a general obligation to “monitor the information which
they transmit or store” and would violate the prohibition in Article 15. If such an
obligation cannot be imposed by law, providers cannot be deemed responsible for
failing to comply with such an obligation.
A related issue is whether the e-Commerce Directive discourages ISPs from
engaging in moderation proactively. This issue is linked to a paradox created by
the Directive, according to which a platform will not engage in moderation since the
act of preventively organising and filtering content would imply that the platform is
an active hosting provider and as such would not be eligible to benefit from the
liability exemptions. This consideration inspired the so-called “good Samaritan”
clause, included in Section 230 of the 1996 US Communications Decency Act,
according to which providers should not be considered liable for “any action
voluntarily taken in good faith to restrict access to or availability of material that the
provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent,
harassing, or otherwise objectionable, whether or not such material is constitution-
ally protected.”
Given the above, the e-Commerce directive seems to show a hands-off approach
to content moderation. This can be explained by looking at the policies pursued by
the 2000 Directive: on the one hand, supporting the development of the internet
economy (by shielding operators from the cost of preventive measures and liabili-
ties); on the other hand, protecting user freedoms, as the fear of incurring liabilities
could lead providers to censor user content and block user activities, even when
contents and activities are lawful and socially beneficial. This hands-off approach
was also supported on a pragmatic ground: it was assumed that the size and speed of
34. CJEU, C-324/09, L’Oréal and Others.
online user activity make it impossible to monitor online content and activities
effectively.
35. Directive 2011/93/EU of the European Parliament and of the Council of 13 December 2011 on combating the sexual abuse and sexual exploitation of children and child pornography, and replacing Council Framework Decision 2004/68/JHA.
36. Directive (EU) 2017/541 of the European Parliament and of the Council of 15 March 2017 on combating terrorism and replacing Council Framework Decision 2002/475/JHA and amending Council Decision 2005/671/JHA.
37. Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities.
to actively engage in moderation in order to protect minors from content that may
impair their development and the general public from content that incites violence
or hatred or whose dissemination is a criminal offence in connection with
terrorism, child pornography, racism and xenophobia.
• The 2019 Directive on Copyright in the Digital Single Market.38 The Directive
establishes obligations for providers that share copyright-protected materials,
including a best-effort obligation to implement agreements with rights-holders
and to remove and prevent access to works identified by them.39 It also
establishes that the liability exemption for host providers does not cover the
unauthorised communication or making available to the public of material
uploaded by their users.
The growing introduction of preventive measures, albeit specific to particular cate-
gories of illegal content, has led the EU legislator to rethink the regulation of the
liability of internet service providers.
The initial steps were taken in March 2018 by the European Commission, which
adopted a Recommendation setting principles for the providers of hosting services
and the Member States to take effective, appropriate and proportionate measures to
tackle illegal content online.40 The recommendation sets out general principles for
all types of illegal content online and recommends strict moderation for terrorist
content. In particular, it encourages appropriate, proportionate and specific mea-
sures, which could involve the use of automated means, provided some safeguards
be in place, in particular human oversight and verification.41
This policy initiative culminated in December 2020 in a legislative reform, i.e. the
proposed Digital Services Act.42 This Regulation introduces new rules to regulate
the liability of internet service providers and in particular online platforms. After
2 years of discussion, on 5 July 2022, the European Parliament approved the final
text at the first reading.43
38. Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.
39. Article 17 Directive on Copyright.
40. Recommendation 2018/334 of the European Commission of 1 March 2018 on measures to effectively tackle illegal content online, OJ [2018] L 63/50. This Recommendation follows the Communication of the European Commission of 28 September 2017, Tackling Illegal Content Online. Towards an enhanced responsibility for online platforms, COM (2017) 555.
41. European Commission Recommendation 2018/334, Points 16–21.
42. Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC, COM/2020/825 final.
43. The adopted text is available at https://www.europarl.europa.eu Accessed 15 July 2022.
The Digital Services Act aims to assure fairness, trust, and safety in the digital
environment and to enable Internet users to fully exercise their rights, in particular
the right to freedom of expression and information. The Regulation preserves and
upgrades the liability exemptions for online intermediaries contained in the
e-Commerce Directive. In addition, it includes due diligence obligations concerning
the design and operation of intermediary services to ensure a safe, transparent and
predictable online ecosystem. In defining the legal framework, the DSA adopts a
risk-based approach, parameterised to the type and size of ISPs. There are general rules
that apply to all ISPs (such as rules on liability for illegal content); basic obligations
that apply to hosting services; advanced obligations that apply to a subset of hosting
providers, i.e. large online platforms (to the exclusion of small ones); and special
obligations that apply to very large online platforms (VLOP).44
Regarding liability, the DSA maintained the liability exemptions provided by the
e-Commerce Directive,45 as well as the prohibition of imposing on ISPs a general
obligation to monitor or actively seek illegal content. At the same time, the DSA
introduced the long-awaited “Good Samaritan” clause,46 according to which ISPs
cannot be deemed ineligible for the exemptions from liability solely because they
carry out, in good faith and in a diligent manner, voluntary own-initiative investi-
gations or take other measures aimed at detecting, identifying and removing, or
disabling of access to, illegal content.
Significant changes introduced by the DSA, however, concern the detailed due
diligence obligations concerning content moderation.47
As regards transparency, both individual and societal dimensions are covered and
modulated according to the type of ISP.
First, all ISPs must include in their terms of service information concerning any
restrictions on the use of their services.48 The information must specify the func-
tioning of the moderating system, whether automated, semi-automated, or human-
driven, and must include the rules of procedure for internal complaint handling
systems.
44. For a primer on the DSA, see Husovec and Laguna (2022).
45. Only the liability exemption of “mere conduit” providers was expanded to also include those cases where providers are notified about illegal content (e.g., a website they give access to).
46. Article 6 DSA.
47. This is defined in Article 2, lit. p) as “the activities undertaken by providers of intermediary services aimed at detecting, identifying and addressing illegal content or information incompatible with their terms and conditions, provided by recipients of the service, including measures taken that affect the availability, visibility and accessibility of that illegal content or that information, such as demotion, disabling of access to, or removal thereof, or the recipients’ ability to provide that information, such as the termination or suspension of a recipient’s account”.
48. Article 12 DSA.
Second, ISPs must make available, at least once a year, a clear and comprehen-
sible report on their content moderation activity.49 The report must include several
information items, such as:
• the number of orders issued by national authorities to remove illegal content,
categorised by type of content concerned, the Member State issuing the order, and
the average time needed to give effect to the order;
• the number of notices submitted by individuals and entities that receive the
intermediary service, categorised by type of allegedly illegal content, the number
of notices processed by automated means and the average time to take action;
• information about the content moderation, including the use of automated means
or human experts, the measures taken to provide training to the persons involved
in moderation and the number and type of measures taken that may affect content;
• any use made of automated means and a qualitative description of its functioning,
including a specification of the purposes, the indicators of the accuracy and the
possible rate of error, and any safeguards applied;
• the number of complaints received, the decisions taken in respect of those
complaints, and the average time needed for making those decisions.
Additional rules on transparency apply when intermediaries qualify as hosting
services, including providers of online platforms. First, intermediaries must provide
a clear and specific statement of reasons to the affected user whenever a restriction is
imposed, including removal, disabling, demotion of content and suspension or
termination of the services. The statement must include the facts relied on for taking
the decision, whether the decision was taken following a notice or on the basis of a voluntary
initiative, a reference to the legal ground relied on and explanations as to why the
information is considered to be illegal content on that ground.
Second, hosting providers must inform users about the number of notices sub-
mitted in accordance with the notice and action mechanism. The information must
include:
• the type of allegedly illegal content concerned;
• the number of notices submitted by the trusted flaggers;
• any action taken pursuant to the notices by differentiating whether the action was
taken on the basis of the law or the terms and conditions of the provider;
• the number of notices processed by using automated means;
• the median time needed for taking the action.
Regarding contestability, the DSA provides both ex-ante and ex-post measures,
while however limiting them to hosting providers or online platforms.
On the one hand, it mandates hosting services to adopt a mechanism to allow
individuals and entities to notify the presence of illegal content.50 The systems must
be easy to access and use, and should ensure that the notices are sufficiently precise
and adequately substantiated.
49. Article 13 DSA.
50. Article 14 DSA.
The proposal for an Artificial Intelligence Act,51 adopted by the European Commis-
sion in April 2021, intends to create a uniform legal framework for developing and
deploying AI systems within the European market. The proposal addresses safety
and human rights-critical applications through a risk-based approach. It classifies AI
systems and applications into four separate risk categories with related prohibitions
and obligations: unacceptable risk, high risk, limited risk, and minimal risk.
The first category is covered by outright prohibitions, namely that it is strictly
forbidden to develop and use unacceptable-risk AI systems. This category covers
systems used for manipulating users, determining access to services based on social
scores, and remotely identifying individuals via biometric systems for law
enforcement purposes.52
Below this level are high-risk systems, i.e. systems which represent a safety
component of products regulated by the New Legislative Framework (e.g., robot-
assisted surgery; autonomous vehicles) or which fall into the categories provided in
Annex III.53 The latter includes areas such as management of critical public infra-
structure, educational training, employment (e.g., CV sorting), and law enforcement
interfering with fundamental rights. A series of design and management require-
ments are connected to the category of high-risk systems. These include: quality
measures on data governance (relevance, representativeness, correctness, complete-
ness);54 technical documentation and record-keeping obligations;55 transparency
51. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM/2021/206 final.
52. Article 5 AIA.
53. Article 6 AIA.
54. Article 10 AIA.
55. Articles 11–12 AIA.
56. Article 13 AIA.
57. Article 14 AIA.
58. Article 15 AIA.
59. Article 69 AIA.
60. No substantial changes in this domain have been introduced by the Report containing all the European Parliament’s amendments published on 14 June 2022, available at https://artificialintelligenceact.eu Accessed 15 July 2022.
7 Conclusion
References
Bolton D (2016) Facebook removes image of Copenhagen’s little mermaid statue for breaking nudity rules. Independent. https://www.independent.co.uk Accessed 15 July 2022
Browne M (2017) YouTube removes videos showing atrocities in Syria. New York Times 22. https://www.nytimes.com Accessed 15 July 2022
Delgrossi S, Said-Moorhouse L (2017) Facebook banned Neptune statue photo for being ‘explicitly sexual’. CNN Travel. https://edition.cnn.com Accessed 15 July 2022
Francesco C, Fossa F, Loreggia A et al (2022) A principle-based approach to AI: the case for European Union and Italy. AI & Soc 38:521–535. https://doi.org/10.1007/s00146-022-01453-8
Deriu JM, Cieliebak M (2016) Sentiment analysis using convolutional neural networks with multi-
task training and distant supervision on Italian tweets. In: Fifth evaluation campaign of natural
language processing and speech tools for Italian, Napoli, Italy, 5–7 December 2016. Italian
Journal of Computational Linguistics
Fortuna P, Nunes S (2018) A survey on automatic detection of hate speech in text. ACM Comput
Surv (CSUR) 51(4):1–30
Grimmelmann J (2015) The virtues of moderation. Yale J Law Technol 17:42
Husovec M, Laguna IR (2022) Digital Services Act: A Short Primer. Available at SSRN
Lapowsky I (2019) Why Tech didn’t Stop the New Zealand Attack from Going Viral. Wired. https://www.wired.com Accessed 15 July 2022
LeCun Y, Bottou L, Bengio Y et al (1998) Gradient-based learning applied to document recogni-
tion. Proc IEEE 86(11):2278–2324
Leerssen P (2015) Cut out by the middle man: the free speech implications of social network
blocking and banning in the EU. J Intell Prop Info Technol Elec Commer Law 6:99
Leskin P (2021) Facebook Content Moderator Who Quit Reportedly Wrote a Blistering Letter Citing ‘Stress Induced Insomnia’ Among Other ‘Trauma’. Insider. https://www.businessinsider.com Accessed 15 July 2022
Michalski RS, Carbonell JG, Mitchell TM (2013) Machine learning: an artificial intelligence
approach. Springer Science & Business Media
Richards N, Hartzog W (2018) The pathologies of digital consent. Wash Univ Law Rev 96:1461
Schmidhuber J (1989) A local learning algorithm for dynamic feedforward and recurrent networks.
Connect Sci 1(4):403–412
Zhang X, Ghorbani AA (2020) An overview of online fake news: characterization, detection, and discussion. Inf Process Manag 57(2):102025. https://doi.org/10.1016/j.ipm.2019.03.004
Federico Galli is a Junior Assistant Professor and Research Fellow at CIRSFID-Alma AI,
University of Bologna. In 2021, he obtained a PhD in Law Science and Technology from the
University of Bologna and Computer Science from the University of Luxembourg. He is a Contract
Professor in Privacy Law at the University of Urbino. His research interests cover Computer Law
(in particular, privacy, e-commerce, platforms liability and accountability) and AI&Law and legal
informatics (legal analytics, argument mining, outcome prediction). He has published in Italian and
international journals. He is currently involved in several projects dealing with legal and ethical
issues of computation and with legal informatics applications.
Giovanni Sartor is a Full Professor in Legal informatics and Computer Law at the University of
Bologna and a Part-time Professor in Legal informatics and Legal Theory at the European
University Institute of Florence. He obtained a PhD at the European University Institute (Florence).
He was a researcher at the Italian National Council of Research (ITTIG, Florence), held the Chair in
Jurisprudence at the Queen’s University of Belfast, and was a Marie-Curie Professor at the
European University Institute of Florence. He has been President of the International Association for
Artificial Intelligence and Law and is co-director of the Artificial Intelligence and Law Journal.
He is currently a member of the ERC Scientific Board. He has published widely in legal philosophy,
computational logic, legislation technique, and computer law. His research interests include legal
theory, argumentation theory, modal and deontic logic, logic programming, multi-agent systems,
computer and Internet law, data protection, e-commerce, law and technology, aviation law, and
human rights.
Part II
Economy
The European Way to Regulate Big Tech:
The EU’s Digital Markets Act
Abstract This chapter seeks to provide an overview of the Digital Markets Act
(DMA). It focuses particularly on the objectives and the principles of the DMA, the
platforms subject to regulation, the obligations and prohibitions under the DMA, the
Act’s institutional framework (especially the role of the Commission as the exclusive
European Digital Regulator and of EU Member State authorities and courts) and its
relationship with competition policy. It concludes with a reflection on the future of
Big Tech regulation.
1 Introduction
A. de Streel
University of Namur, Namur, Belgium
e-mail: [email protected]
P. Alexiadis (✉)
King’s College London, London, UK
e-mail: [email protected]
The genesis of such a public policy choice in the European Union (EU) was
brought into sharp focus by the protracted competition law investigation into various
commercial practices of Google in internet search by the European Commission (the
‘Commission’), which had been ongoing since 2010 and which had to wait to be
resolved by the vindication of the Commission’s 2017 Decision before the General
Court as late as November 2021.3 In response to the demands by EU Member States
that appropriate action be taken to compensate for the slow and arguably ineffective
application of EU competition rules, the Commission tabled in December 2020 a
proposal for the Digital Markets Act (DMA) which lays out a unique regime of
economic regulation applicable to large digital platforms to make the digital markets
more contestable and fairer.4
The proposal was agreed by the EU lawmakers in the summer of 2022,5 after a
negotiation phase conducted in record time, which reflected the political pressure to
act quickly and the general consensus on the way forward. The DMA should be
published in the Official Journal in September 2022 and become applicable in three
phases of more or less 6 months each. In a first phase, the European Commission will
adopt a procedural regulation; then in the summer 2023, the Commission will
designate 10 to 15 digital gatekeepers which will be subject to the list of prohibitions
and obligations; finally, at the beginning of 2024, the designated gatekeepers will
have to comply with the obligations and submit compliance reports to the
Commission.
As indicated by Article 1(1), the purpose of the DMA is to: ‘contribute to the proper
functioning of the internal market by laying down harmonised rules ensuring for all
businesses, contestable and fair markets in the digital sector across the Union where
gatekeepers are present, to the benefit of business users and end users.’
3. Commission Decision of 27 June 2017, Case 39 740 Google Search (Shopping), which has been upheld by the General Court in Case T-612/17 Google v. Commission, EU:T:2021:763. An appeal of this Judgement is still pending before the Court of Justice in Case C-48/22P.
4. Proposal of the Commission of 15 December 2020 for a Regulation of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act), COM (2020) 842 and Impact Assessment Report of the Commission Services on the Proposal for a Regulation on contestable and fair markets in the digital sector (Digital Markets Act), SWD (2020) 363.
5. Regulation 2022 of the European Parliament and of the Council on contestable and fair markets in the digital sector and amending Directives 2019/1937 and 2020/1828 (Digital Markets Act). In parallel, EU lawmakers agreed on the Digital Services Act (DSA), which deals with issues of illegal content on social networks and illegal products on marketplaces, recommender systems and online advertising: Regulation 2022 of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31.
The first objective of the DMA is to promote the contestability of digital markets
which is defined as ‘the ability of undertakings to effectively overcome barriers to
entry and expansion and challenge the gatekeeper on the merits of their products
and services’.6 In order to do so, the DMA promotes greater head-to-head compe-
tition between digital platforms (inter-platform competition) and also opens up
existing digital platforms to competition at different layers of the value chain
(intra-platform competition). Concerns about contestability follow from the fact
that the types of digital markets covered by the DMA are already considered by
many to have ‘tipped’ in favour of the largest platform, coupled with the observation
that the key digital players are not in a position to exert sufficient competitive
constraints on one another in their ‘home’ markets.7 Insofar as the DMA changes
the commercial landscape to the degree that it will encourage existing digital players
to enter one another’s competitive spaces, the possibility exists that inter-platform
competition might indeed be possible, despite the proverbial network competition
horse having already bolted from the digital stable through the effects of market
tipping. There are arguably greater possibilities for intra-platform competition taking
off, however, when it comes to new digital services being launched in the future
under the shadow of the behavioural rules established under the DMA.
The second objective of the DMA is to reduce unfairness in commercial dealings,
which reflects ‘an imbalance between the rights and obligations of business users
where the gatekeeper obtains a disproportionate advantage’.8 The DMA seeks to
address fairness issues by constraining large digital platforms in their role as
intermediators between businesses and end users. It is implicit that this intermedi-
ation role, while capable of providing end users with tangible benefits, nevertheless
threatens to place business providers on the other side of the platform in a disad-
vantageous bargaining position. Given the inherent difficulties faced by competition
law in censuring potential anti-competitive conduct driven by network effects and
positive feedback loops, the role of regulatory intervention that is driven by notions
of fairness is to iron out some of the behavioural kinks in the marketplace that
exacerbate the impact of those network effects.
Contestability and fairness are therefore linked in practice,9 ultimately aiming to
promote user choice as well as the degree and the diversity of innovation in the
6. DMA, Recital 32. Also, see DMA, Art.12(5b). As regards an economic definition of contestability and fairness in the DMA context, see Cremer et al. (2021).
7. In other words, despite their vast assets, the other members of the GAFAM have been unable to exert sustainable competitive constraints on Google in the provision of search services, with a similar scenario applying to Facebook (Meta) when it comes to providing social media, and Amazon with respect to marketplace sales, and so forth.
8. DMA, Recital 33. Also, Art.12(5a).
9. DMA, Recital 34 states that “Contestability and fairness are intertwined. The lack of, or weak, contestability for a certain service can enable a gatekeeper to engage in unfair practices. Similarly, unfair practices by a gatekeeper can reduce the possibility of business users or others to contest the gatekeeper’s position. A particular obligation in this Regulation may, therefore, address both elements.”
digital economy.10 Indeed, Larouche and de Streel (2021) illustrate how the DMA’s
obligations promote ‘sustaining innovation’ by business users offering
complementary services on the regulated platforms and, at the same time, ‘disruptive
innovation’ by new entrants offering alternative digital services which could replace
the regulated gatekeepers.
The third objective of the DMA is the strengthening of the internal market.
Outside the regulatory measures found in the DMA, Member States are only
permitted to impose obligations on digital platforms where they are pursuing other
legitimate public policy goals or exercising national competition law powers, but
only insofar as those powers do not undermine the application of EU competition
rules.11 The sine qua non for achieving an internal market—the policy ‘sacred cow’
of European integration policy—must surely be the achievement of a single digital
space for European businesses and consumers. While it is clear that the common
market goal is not alien to competition law enforcement, as illustrated either in the
application of vertical restraints policy or in the promotion of pan-European energy
markets, it is equally clear that it does not fit squarely within the remit of ‘pure’
competition policy.
In pursuing those three objectives, the DMA applies two key general principles of
EU law:12 (i) the principle of effectiveness, which implies that any obligation
imposed should achieve the objectives of the DMA; and (ii) the principle of
proportionality, which implies that the content and form of regulatory obligations
should not exceed what is necessary to achieve the objectives of the DMA.13
The application of the DMA is premised upon the satisfaction of two legal
conditions, namely: (i) the designation of firms with ‘gatekeeper’ status; and
(ii) that such designation is made in relation to a list of core platform services.
10 DMA, Recital 32 notes that: ‘weak contestability reduces the incentives to innovate and improve products and services for the gatekeeper, its business users, its challengers and customers and thus negatively affects the innovation potential of the wider online platform economy.’ See also DMA, Art.12(5b).
11 DMA, Art. 1(5)-(6).
12 See, in particular, DMA, Art.8(2), for obligation specifications, as well as Art.18(2) for additional behavioural and structural remedies in the event of systematic non-compliance.
13 TEU, Art.5(4).
The European Way to Regulate Big Tech: The EU’s Digital Markets Act 95
The DMA covers a closed list of ten digital services—or business models—which
are characterised as “Core Platform Services” (CPS).14 Many of these services have
already been defined in previous EU legislative enactments. Those CPSs are:
(1) Online B2C intermediation services which are classified as “information soci-
ety services”,15 and which allow “(i) business users to offer goods or services to
consumers, with a view to (ii) facilitating the initiating of direct transactions
between business users and consumers regardless of whether the transaction is
finally concluded offline or online and which (iii) provide services to business
users, based on contractual relationships between the platform and the business
user.”16 This range of services includes platforms acting as marketplaces,17 such as Amazon Marketplace, and app stores,18 such as the Apple App Store or the Google Play Store.
(2) Online search engines, which are also information society services “allowing
users to input queries in order to perform searches of, in principle, all websites,
or all websites in a particular language, on the basis of a query on any subject in
the form of a keyword, voice request, phrase or other input, and returns results
in any format in which information related to the requested content can be
found”;19 these services include, for instance, Google’s search engine or Microsoft’s “Bing” service.
(3) Online social networks, which are “platforms that enable end users to connect
and communicate with each other, share content and discover other users and
content across multiple devices and, in particular, via chats, posts, videos and
14 DMA, Art.2(2). Those types of digital services are also referred to by the Commission as ‘business models’: Commission Staff Working Document of 25 May 2016, Online Platforms, SWD (2016) 172.
15 “Information Society Services” are services normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient: Directive 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services, OJ [2015] L 241/1.
16 The DMA relies on the definition set out in Regulation 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services, OJ [2019] L 186/55 (P2B Regulation), Art.2(2).
17 Defined as information society services that allow consumers and/or traders to conclude online sales or service contracts with traders either on the online marketplace’s website or on a trader’s website that uses computing services provided by the online marketplace: Directive 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union, OJ [2016] L 194/1, Art.4(17).
18 Defined as a type of online intermediation service which is focused on software applications as the intermediated product or service: DMA, Art.2(14).
19 Here as well, the DMA relies on definitions already included in previous enactments: P2B Regulation, Art. 5(2) and Network Information Security Directive, Art.4(18).
20 DMA, Art. 2(7). This definition is new to the DMA.
21 Definition derived from the Directive 2010/13 of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive), OJ [2010] L 95/1, as amended by Directive 2018/1808, Art.1(1aa).
22 Definition derived from the Directive 2018/1972 of the European Parliament and of the Council of 11 December 2018 establishing the European Electronic Communications Code (EECC), OJ [2018] L 321/36, Art. 2(5) and (7).
23 DMA, Art. 2(11). This definition is new to the DMA.
24 DMA, Art. 2(12). This definition is new to the DMA.
25 Definition derived from the Network Information Security Directive 2016/1148, Art.4(19).
26 DMA, Art.2(10). This definition is also new to the DMA.
(10) Advertising services offered by a provider of any of the above-listed nine Core Platform Services, including advertising networks, advertising exchanges and any ad intermediation services, such as those provided by Google AdSense.
The ten Core Platform Services have been selected for the purpose of regulation
because of a number of commonly occurring (and often cumulative) characteristics,
namely: extreme economies of scale and scope; important network effects; multi-
sidedness; possible user lock-in and absence of multi-homing; vertical integration;
and data-driven advantages. Those characteristics are not new in and of themselves
but, when they apply cumulatively, they are likely to lead to market concentration as
well as generating concerns about dependency and unfairness which, according to
the Commission, cannot be addressed effectively under existing EU laws.27
The DMA also contains a built-in dynamic mechanism which allows the European Commission, after conducting a so-called market investigation, to propose that EU lawmakers amend the DMA in order to include new digital services in the list of CPSs.28 In practice, this type of market investigation mechanism for extending the CPS list merely reflects an application of the right of legislative initiative already entrusted to the Commission under the European Treaties.29
The DMA establishes a legal regime based on asymmetric regulation: its obligations do not apply to any and all providers of Core Platform Services, but only to the largest operators, which have been designated as “gatekeepers”. Such a designation is effected by the European Commission through the application of a cumulative “three criteria test”, namely whether the firm has:
(i) a significant impact on the EU internal market;
(ii) control of an important gateway for business users to reach end-users; and
(iii) an entrenched and durable position.30
The gatekeeper designation is made in relation to an individual firm with respect to a particular CPS, insofar as it only concerns the CPS(s) for which the firm satisfies the three criteria test, to the exclusion of other CPSs offered by the same firm and, a fortiori, of other digital services outside the CPS list.31 For instance, if Facebook holds a gatekeeper position for its social network service, that does not mean that Facebook will also be designated as a gatekeeper for the provision of a marketplace service.
27 DMA, Recital 2; see also Commission DMA Impact Assessment, SWD (2020) 363, paras 128–130.
28 DMA, Art.19(3a).
29 TEU, Art.17(1).
30 DMA, Art.3(1).
31 DMA, Art.3(9) and Recital 29.
32 DMA, Art.3(2). The Annex to the DMA clarifies the methodology that will be used to measure the quantitative thresholds.
33 The market capitalization threshold, however, need only be fulfilled for the last financial year.
34 DMA Impact Assessment, para.148.
35 DMA, Art.3(3).
36 DMA, Art.3(4) and Art.17(3).
37 DMA, Art.3(8) and Recital 25.
Aside from designating gatekeepers which satisfy the listed criteria, the Commission may also designate an “emerging” gatekeeper when a CPS provider satisfies the first two listed criteria (i.e., significant impact and important gateway) and where the fulfilment of the third criterion is foreseeable at some point in the future.38 In this situation, the emerging gatekeeper is subject to a sub-set of the obligations imposed on existing gatekeepers, with the aim of preventing market tipping from taking place.
With a view to ensuring that the list of designated gatekeepers is kept current in the face of emerging technology and new business models, or even misrepresentations made by gatekeepers at the time of the original analysis, a review mechanism
is available to the Commission which allows it to determine whether market
circumstances justify the expansion (or reduction) of the list of designated gate-
keepers.39 When it comes to determining whether an existing gatekeeper designation
should be relinquished, the formal review period is three years, whereas the Com-
mission reviews on an annual basis whether the list of designated gatekeepers ought
to be expanded. The three-year review period tends to reflect a typical innovation
cycle in a fast-moving digital industry (and the period usually used to assess the
likelihood of potential competitive entry), while the shorter annual review period
38 DMA, Art.17(4) and Recitals 26–27.
39 DMA, Art.4.
seems to reflect the reality that revenues and consumer growth in digital platforms
might change very quickly.
Thus, the DMA aims to identify a specific type of market power which lies in the performance of an intermediation role, without relying on competition law methodologies. Indeed, under the DMA, the definition of the relevant market, which in antitrust terms is based on an economic analysis of demand and supply substitutability, is replaced by the delineation of core platform services, which is based on the interpretation of the legal definitions found in Article 2 of the DMA.40 In turn, the dominance assessment associated with the market power assessment under Article 102 TFEU is replaced under the DMA by an assessment of the three criteria test used to identify power in the hands of a gatekeeper.
Some commentators regret that the gatekeeper designation under the DMA is not based on competition law methodologies, as is the case for the SMP designation made under the regulation of the electronic communications sector.41 Ibáñez Colomo (2021), for example, has expressed concern that the absence of market definition and dominance assessments leaves too much discretion in the hands of the Commission. However, it is not uncommon that a new economic law commences its
life by using less nuanced methodologies than those used under competition law,
with an evolution in approach towards the application of more sophisticated meth-
odologies being a foreseeable consequence as the regulator gains more expertise and
experience in applying a more ‘blunt’ regulatory instrument. This analytical journey
is similar to that pursued under EU electronic communications law, as has been
explained by Hancher and Larouche (2011). The first generation of EU Directives in
that sector followed a formalistic approach, as regulated “markets” were directly
defined in legislation, with the threshold for intervention being set at a (relatively
arbitrary) 25% market share level in relation to those pre-defined markets.42 Over the
course of time, regulation developed a more sophisticated approach which became
closely aligned to contemporary competition law methodologies.43
More fundamentally, the use of competition law methodologies in the DMA may
not be appropriate for at least two reasons. First, the traditional tools used to define
relevant markets require some adaptations to take into account the characteristics of
the digital economy, as explained by the OECD (2022). Second, the reliance on an
40 DMA, Recital 23 notes explicitly that: ‘(. . .) Any justification on economic grounds seeking to enter into market definition (. . .) should be discarded, as it is not relevant to the designation as a gatekeeper (. . .).’
41 EECC, Arts 63–64.
42 Directive 97/33 of the European Parliament and of the Council of 30 June 1997 on interconnection in telecommunications with regard to ensuring universal service and interoperability through application of the principles of Open Network Provision (ONP), OJ [1997] L 199/32, esp. Article 4.
43 This shift was made with the 2002 reform: Directive 2002/21 of the European Parliament and of the Council of 7 March 2002 on a common regulatory framework for electronic communications networks and services (Framework Directive), OJ [2002] L 108/33, Arts 14–16.
analysis of relevant markets, which tends to be narrow and built around an expectation of specific anti-competitive conduct, may not be appropriate to determine the scope of application of ex ante regulation, which tends to adopt a more holistic approach (as explained by Hellwig, 2009). Larouche and de Streel (2022, p.181) underline the point that it is the use of economic methodologies which is important to achieving good economic regulation and to placing useful limits upon the discretion of the regulator; by contrast, economic methodologies are broader in their goals than, and should not be equated with, competition law methodologies. Given that the DMA expressly steps away from purist competition policy theory by emphasising the importance of contestability and fairness in the policy direction of that regulatory instrument, we should not be surprised if the legal threshold standard departs from substantive competition policy principles.
A digital platform which has been designated as a gatekeeper for one or several Core Platform Services is subject to a range of up to 22 prohibitions and obligations, many having been inspired by the investigations conducted by the Commission in a series of competition law cases (both final and pending). They are divided into three lists. First, Article 5 enumerates nine items, mostly prohibitions, which are supposed to be self-explanatory and self-executing. Second, Article 6 lists twelve items, mostly obligations, which may require additional specification by the Commission. Third, Article 7 applies a horizontal interoperability obligation between communications apps which, given its complexity, requires a phased form of implementation. In principle, the first two lists apply automatically to all the CPS providers that have been designated as gatekeepers, independent of the business models they have adopted. In practice, however, several obligations only apply to some CPSs and not all. The application of the obligations is also limited to the CPS in relation to which there has been a gatekeeper designation and does not apply to the other CPSs provided by the online platform.
The ‘black list’ under Article 5 (adopting the jargon of traditional ‘block exemp-
tion’ regulations under EU competition law) comprises the following prohibitions
and obligations on designated gatekeepers:
(1) To refrain from combining personal data sourced from the CPS with personal
data from other services of the gatekeeper or third parties, and from signing on
end-users to other services of the gatekeeper in order to combine personal data,
unless the end-user has been presented with the specific choice and has provided
44 DMA, Art.5(2).
45 https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/07_02_2019_Facebook.html. This Decision is under appeal, while awaiting the answer from the Court of Justice in Case C-252/21 to a preliminary question raised by the German Appeal Court on the GDPR and its relationship with competition law.
46 https://en.agcm.it/en/media/press-releases/2017/5/alias-2380 and https://en.agcm.it/en/media/press-releases/2018/12/Facebook-fined-10-million-Euros-by-the-ICA-for-unfair-commercial-practices-for-using-its-subscribers%E2%80%99-data-for-commercial-purposes.
47 DMA Proposal, Art.5(3). This provision complements the P2B Regulation, Art.10.
48 Decision of the Commission of 4 May 2017, Case 40 153 Amazon ebooks.
49 See https://ec.europa.eu/competition/ecn/hotel_monitoring_report_en.pdf.
50 DMA, Art.5(4).
51 Case 40 437 Apple - App Store Practices (music streaming).
52 DMA, Art.5(5). This provision complements P2B Regulation, Art.10.
53 DMA, Art.5(6).
54 DMA, Art.5(7).
55 DMA, Art.5(8).
56 Commission Decision of 18 July 2018, Case 40 099 Google Android. This case is under appeal before the General Court in Case T-604/18 Google v. Commission.
57 Which is an important competitive force, as is shown by Petit (2020).
58 DMA, Art.5(9).
59 AT. 40 660 and 40 670. In this regard, see also Geradin and Katsifis (2020).
60 DMA, Art.5(10).
61 DMA, Art.6(2). This provision complements P2B Regulation 2019/1150, Art.9.
62 Case 40 462 Amazon Marketplace and Case 40 684 Facebook Marketplace.
63 DMA, Art.6(3).
64 Commission Decision of 16 December 2009, Case 39 530 Microsoft Explorer and Commission Decision of 18 July 2018, Case 40 099 Google Android.
65 DMA, Art.6(4).
66 Case 40 716 Apple - App Store Practices. See also the Decision of the Dutch Competition Authority in the Apple app store case: https://www.acm.nl/en/publications/acm-obliges-apple-adjust-unreasonable-conditions-its-app-store.
(4) The prohibition of a gatekeeper preferencing, in rankings, its own products and services over competing third-party products and services.67 This form of self-preferencing was prohibited in the Google Shopping Case (as recently endorsed by the General Court) and is now under investigation in the Commission’s Amazon Buy Box investigation.68
(5) The prohibition on technical restrictions being imposed on the ability of end-users to switch between different apps and services accessed using the gatekeeper’s OS.69
(6) An obligation to provide business users and providers of ancillary services with access to, and interoperability with, the same features that are used by the CPS gatekeeper in providing ancillary services.70 This type of practice constitutes another form of self-preferencing, and is currently the subject of analysis in the ongoing Commission investigation into Apple Pay.71
(7) The obligation to provide online advertisers and publishers, free of charge, with access to the performance measuring tools of the gatekeeper and the information necessary to carry out their own independent verification of the ad inventory.72 This is another obligation that is inspired by the need for greater transparency.
(8) The obligation to provide the effective, continuous and real-time portability of data generated through the activity of a business user or its end-users, in particular for the benefit of end-users, to facilitate the exercise of data portability.73 This obligation extends the data portability requirement imposed under the GDPR by widening its scope to include non-personal data, as well as data generated by business users.74
(9) The obligation to provide business users, free of charge, with effective, high-
quality, continuous and real-time access to data that is provided for or
67 DMA, Art.6(5). This provision complements P2B Regulation, Art.5 and Commission Guidelines of 7 December 2020 on ranking transparency pursuant to Regulation 2019/1150 of the European Parliament and of the Council, OJ [2020] C 424/1.
68 Commission Decision of 27 June 2017, Case 39 740 Google Search (Shopping), which has been upheld by the General Court in Case T-612/17 Google v. Commission, EU:T:2021:763; Case 40 703 Amazon - Buy Box.
69 DMA, Art.6(6).
70 DMA, Art.6(7).
71 Case 40 452 Apple - Mobile payments. Note also German legislation on the topic.
72 DMA, Art.6(8).
73 DMA, Art.6(9). This practice is currently being investigated by the Italian Competition Authority in the Google Case: https://en.agcm.it/en/media/press-releases/2022/7/A552. For an economic rationale behind such an extensive data portability obligation, see Krämer (2021).
74 GDPR, Art. 20 and Guidelines of 13 April 2017 of Working Party 29 on the right to data portability, WP242 rev.01.
generated in the context of the use of the gatekeeper’s CPS by those business users and their end-users.75
(10) The obligation to provide any third-party providers of online search engines
with access on FRAND terms to ranking, query, click and view data, in relation
to search generated by end-users on online search engines of the gatekeeper.76
(11) The application of FRAND conditions, which can be assessed using different benchmarking methods, in providing access by business users to app stores, search engines and social networks, as well as the establishment of an alternative dispute settlement mechanism.77 The blanket establishment of FRAND terms will remind some of the standard used where an essential facility is demonstrated to exist, even though the DMA does not purport to address such situations.
(12) A prohibition on the imposition of disproportionate conditions or processes for
the termination of the service.78 This is designed to facilitate customer
switching, so that customer lock-in that has been achieved through such
contractual means does not thwart the goal of contestability.
Although those Article 6 obligations apply in principle directly to the designated
gatekeepers, they may be specified with greater precision by the Commission in the
context of a regulatory dialogue engaged in with the gatekeeper. Such a specification
may be effected at the Commission’s initiative when assessing the measures that
need to be taken by the gatekeeper. It may also be effected at the gatekeeper’s
request, with the gatekeeper notifying the Commission of specific measures to
implement the obligations.79 This specification should be consistent with the pursuit
of two principles: (i) the effectiveness of the measures in achieving the objectives of
the obligation; and (ii) the proportionality of the measures, given the specific
circumstances of the CPS and the gatekeeper.80
In order to better understand the underlying logic of the long lists of obligations
imposed on gatekeepers, Table 2 clusters the prohibitions and obligations around
75 DMA, Art.6(10). This provision would complement some new data sharing obligations which could be imposed by the Data Act which has been proposed by the Commission, COM(2022) 68, in particular Art.4.
76 DMA, Arts 6(11) and 8(8). For an economic rationale of such an obligation, see: Krämer and Schnurr (2022); cf. Prüfer and Schottmuller (2021).
77 DMA, Arts 6(12) and 8(8). Recital 62 clarifies that: “The following benchmarks can serve as a yardstick to determine the fairness of general access conditions: prices charged or conditions imposed for the same or similar services by other providers of software application stores; prices charged or conditions imposed by the provider of the software application store for different related or similar services or to different types of end users; prices charged or conditions imposed by the provider of the software application store for the same service in different geographic regions; prices charged or conditions imposed by the provider of the software application store for the same service the gatekeeper offers to itself.”
78 DMA, Art.6(13).
79 DMA, Arts 8(2) and (3).
80 DMA, Art.8(7).
81 On the need to ensure interoperability, see Dinielli et al. (2021).
The full suite of obligations and prohibitions automatically applies after a gatekeeper designation has been made, without the possibility for the Commission to select the most appropriate forms of intervention on the basis of the principle of proportionality and with sensitivity to the particular characteristics of the gatekeeper in question, as occurs in the case of electronic communications regulation.84 Moreover, there is no possibility for the gatekeeper to rely on an efficiency defence to escape the imposition of an obligation or prohibition, as would be contemplated under competition law.85
Despite the relatively doctrinaire approach of the DMA, there are a number of
avenues of flexibility in its application, the full effects of which will no doubt be
better understood over time as lessons are learned as a result of the practical
application of the legislation.
82 EECC, Art.61(2c), which allows the National Regulatory Authorities to impose on the providers of number-independent interpersonal communications services obligations to make their services interoperable, including through a reliance on standards, if: (i) those providers reach a significant level of coverage and user up-take; (ii) the Commission has determined that there is an appreciable threat to end-to-end connectivity between end-users and has adopted implementing measures specifying the nature and scope of any obligations that may be imposed by the National Regulatory Authorities; and (iii) the obligations imposed are necessary and proportionate to ensure the interoperability of interpersonal communications services.
83 DMA, Art.48.
84 EECC, Art.68.
85 Guidance of 3 December 2008 on the Commission’s Enforcement Priorities in Applying Article [102 TFEU] to Abusive Exclusionary Conduct by Dominant Undertakings, OJ [2009] C 45/7, paras 28–31.
86 DMA, Art.9.
service.87 Given the sweeping nature of this exception, the Commission is obliged to
review its necessity on an annual basis. The suspension of measures may even occur
on a temporary (interim) basis in extreme circumstances.
Second, ‘public interest’ exemption decisions can be adopted by the Commission in those situations where the gatekeeper can prove, in a reasoned request, that it should be exempted, in whole or in part, from the obligations in relation to an individual CPS on the grounds that public health or public security would otherwise be threatened.88 The Commission can also exercise this power on an interim basis, where justified.
To alleviate the risk of regulated gatekeepers ‘undermining’ the impact of the DMA
obligations through contractual, commercial, technical or other means, the DMA
contains an important anti-circumvention clause based on two distinct sets of
criteria.89 As regards the satisfaction of the quantitative thresholds that justify the
designation of a ‘gatekeeper’ status, platforms should ensure that their CPS are not
segmented, fragmented, split, divided or sub-divided as a means of circumventing
those thresholds. The Commission can proceed to make a gatekeeper designation if
such steps of circumvention have been taken. As regards the need to ensure full compliance with the measures prescribed under Articles 5–7 of the DMA, gatekeepers should take no actions to subvert end-user or business-user autonomy or to diminish choice in their decision-making, irrespective of whether they deploy strategies regarding product structure, design or manner of service operation to
achieve circumvention. These powers of intervention on the part of the Commission
are designed broadly to ensure that the legislation is not undermined by endless acts
of brinkmanship by gatekeepers keen to avoid their obligations under the DMA.
The Commission services explain that the different prohibitions and obligations
have been selected because they “are considered unfair by taking into account the
features of the digital sector and where experience gained, for example in the
enforcement of the EU competition rules, shows that they have a particularly
negative direct impact on the business users and end-users”.90 The selection is
thus backward-looking. In order to be future-proof, however, the Commission can
add new obligations to Articles 5 and 6 when necessary to keep the list up-to-date in
87 DMA Impact Assessment, para.400.
88 DMA, Art.10.
89 DMA, Art.13.
90 DMA Impact Assessment, para.153. See also DMA, Recital 31.
order to address commercial practices that limit the contestability of CPS or that are
unfair in the same way as the practices addressed by existing obligations.91 After having conducted a market investigation whose outcome demonstrates the need to update the obligations, the Commission may act by way of a delegated act amending the DMA. To ensure that the Commission does not use this power to
artificially extend the scope of the obligations imposed on gatekeepers, the DMA
circumscribes the scope of the Commission’s power in this regard by allowing an
extension of existing obligations (to new CPS, new beneficiaries, etc.) while not
permitting the imposition of new obligations.
The DMA contains two notable and very different types of transparency measures which are designed to provide regulatory bodies with much greater insight into fundamental structural and behavioural practices of gatekeepers. These measures go well
beyond being merely complementary to the effective enforcement of the obligations
under Articles 5 to 7 of the DMA.
91 DMA, Art.12 and Art.19(3b).
92 Council Regulation 139/2004 of 20 January 2004 on the control of concentrations between undertakings, OJ [2004] L 24/1, Art.3.
93 DMA, Art.14.
94 Commission Guidance of 26 March 2021 on the application of the referral mechanism set out in Article 22 of the Merger Regulation to certain categories of cases, OJ [2021] C 113/1. The legality of such a power to refer otherwise non-notifiable mergers has been confirmed by the General Court in Case T-227/21 Illumina v. Commission, ECLI:EU:T:2022:447.
5 Institutional Framework
95 Introducing an alternative filing threshold into the EU Merger Regulation for digital mergers might have been a more coherent approach. Member States such as Germany and Austria have, for example, introduced thresholds based on transaction size rather than on revenue, in order to catch more digital transactions. Moreover, the recently concluded studies on digital platforms are virtually in universal agreement that the key merger enforcement issue in the digital space is the substantive test for review, rather than the threshold issue of jurisdictional competence.
96 See Alexiadis and Bobowic (2020) on the greater likelihood of ‘Type 1’ errors occurring in a merger review context.
97 DMA, Art.15.
98 DSA, Art.24a in particular.
of expertise required to give effect to that regime and the logistical support required
to administer that regime.
The institutional model for the enforcement of the DMA is highly centralised, given the pan-European nature of the subject-matter and the importance of ensuring that regulatory policy does not become fragmented over time. The logic underpinning the DMA is that the Commission alone should be responsible for its enforcement and may take the following actions:
– Adopt individual decisions: (i) designating the gatekeepers and their services
subject to regulation; (ii) specifying, when needed, the Article 6 obligations
imposed on them; (iii) monitoring compliance; and (iv) sanctioning gatekeepers
in case of non-compliance or systematic non-compliance.
– Adopt delegated acts specifying the methodology for determining the quantita-
tive thresholds for gatekeeper designation, as well as any necessary updating of
the existing obligations contained in Articles 5-6 of the DMA.99
– Adopt implementing acts on a number of DMA process and procedural issues
such as the notification and reports to be submitted by the gatekeepers.100
– Adopt guidelines on the application of the DMA101 or mandate the European
standardisation bodies (CEN, CENELEC and ETSI) to develop appropriate
standards to facilitate the implementation of the DMA obligations.102
European lawmakers also consider the Commission to be ultimately accountable to
them. To this end, they require that the Commission reports to them every year on
the operation of the DMA in order to determine whether it is delivering its avowed
goals in terms of achieving contestability, fairness and the development of the
internal market.103
At the time of its proposal, the Commission had committed to deploying a team that
would increase over time from 20 FTEs in 2022 to 80 FTEs by 2025.104 This relatively
low number of personnel dedicated to a pan-European system of regulation in such a
challenging area can partly be explained by the fact that the Commission hoped that most of the DMA
99 DMA, Art.49.
100 DMA, Art.46.
101 DMA, Art.47.
102 DMA, Art.48.
103 DMA, Art.35.
104 Commission Explanatory Memorandum to the DMA Proposal, p. 71.
105 https://ec.europa.eu/info/live-work-travel-eu/consumers/enforcement-consumer-protection/coordinated-actions_en;
https://ec.europa.eu/internal_market/scoreboard/_docs/2019/performance_by_governance_tool/cpc_en.pdf.
106 https://www.linkedin.com/pulse/sneak-peek-how-commission-enforce-dsa-dma-thierry-breton/?published=t.
107 For instance, for the Google Shopping antitrust investigation, the Commission had to analyse
very significant quantities of real-world data, including 5.2 terabytes of actual search results from
Google (around 1.7 billion search queries): Commission Press Release IP/17/1784 of 27 June 2017.
As an interesting benchmark, the UK CMA has established a team of data analysts and AI experts
numbering nearly 50 FTEs.
108 In 2002, the revised regulatory framework for electronic communications, while administered
directly by National Regulatory Authorities, was subject to a Commission veto exercised by a Joint
Task Force constituted by members of DG Competition and DG Information Society (the predecessor
of DG Connect). The supervisory role passed exclusively to DG Connect over time, as DG
Competition no longer felt that its participation in decision-making was necessary.
109 Monti and de Streel (2022) review the new networks of regulatory agencies that bring together
the different skill-sets necessary to regulate digital platforms. One of the most developed examples
is the UK Digital Regulation Cooperation Forum (DRCF), which includes the Competition &
Markets Authority (CMA), the Information Commissioner’s Office (ICO), Ofcom and the Financial
Conduct Authority (FCA).
In administering the DMA, the Commission has been provided with a broad range of
procedural powers inspired by the terms of Regulation 1/2003110 and supporting
guidance which applies to the Commission in the field of competition law enforce-
ment. Accordingly, the Commission is able to request information from, and to carry
out interviews with, gatekeeper personnel.111 In what might otherwise appear to be
procedural ‘overkill’ (in the sense that this power is usually associated with a
competition law investigation where an infringement has been alleged), the Com-
mission can also conduct on-the-spot inspections.112 Those inspections might
involve not only the inspection of a gatekeeper’s books and the ability to make
copies of documents, but can also require that explanations be given regarding
technical operations and the use of IT, algorithms and other technical apparatus
and processes, with the Commission also being able to seal the inspected premises.
The Commission can appoint auditors or other external experts to assist it in
conducting such inspections and obtaining the authorisation and assistance of local
officials wherever required.
In addition, the Commission is empowered to adopt interim measures where
‘serious and irreparable damage’ is threatened.113 The Commission also has broad
powers to monitor compliance and to investigate non-compliance, with the assis-
tance of external auditors or national officials from competent authorities.114 It is
also possible for the Commission to solicit commitments from gatekeepers to ensure
compliance.115 As a quid pro quo for these sweeping powers lying in the hands of
the Commission while acting in its capacity as a regulator under the DMA, gate-
keepers are extended the sorts of legal rights that one usually associates with
competition law proceedings, namely, the right to be heard, the right to access to
the file, the right to professional secrecy and the non-disclosure of confidential
information in Commission Decisions.116
While the range of measures made available to it places the Commission, in its role as a
regulator, on a par with the powers it exercises in a competition law context (and with those
enjoyed by national sector-specific regulators), arguably the most important of these measures
lies in its ability to escalate its enforcement powers on a sliding scale, namely:117
110 Council Regulation 1/2003 of 16 December 2002 on the implementation of the rules on
competition laid down in Articles 81 and 82 of the Treaty, O.J. [2003] L 1/1, as amended.
111 DMA, Arts.21-22.
112 DMA, Art.23.
113 DMA, Art.24.
114 DMA, Art.26.
115 DMA, Art.25.
116 DMA, Arts.34 and 36.
117 DMA, Arts.29-31.
118 The Commission is restricted by limitation periods of five years for the imposition of penalties
and the enforcement of existing provisions.
119 Jacobides et al. (2018). In order to reap such benefits, a gatekeeper straddling such different
businesses is prone to engage in strategies such as self-preferencing activities that favour its own
businesses ahead of others, irrespective of quality or price considerations.
Practical experiences garnered in other regulated sectors suggest that the Commission
must be wary of underestimating the amount of effort required to
monitor the dynamics of such complex industries, run the necessary investigative
and decisional procedures under the DMA, and expand upon obligations that will
require compliance monitoring.120 Therefore, the DMA foresees that National
Competition and Regulatory Authorities most closely connected with the digital
services sector (such as electronic communications sectoral regulators and data
protection authorities) should support the Commission in the formulation, monitor-
ing and ongoing enforcement of regulatory obligations.121 In particular, the DMA
provides that Member State authorities can receive complaints regarding DMA
violations,122 participate in information-gathering123 or monitor DMA compliance.124
The National Authorities may also request the Commission to open market
investigations.125
Moreover, two institutional groups composed of Member State Authorities are
established under the DMA:
– A High-Level Group for the Digital Markets Act which is composed of an equal
number of representatives (no more than 30 personnel in total) from a variety of
EU networks of national regulators: Body of European Regulators for Electronic
Communications (BEREC), European Data Protection Supervisor (EDPS) and
the European Data Protection Board (EDPB), European Competition Network
(ECN), Consumer Protection Cooperation Network and European Regulatory
Group of Audiovisual Media Regulators (ERGA).126 Thus, the High-Level
Group will bring together different elements of expertise that are relevant to the
application of the DMA. This body will be chaired and funded by the Commis-
sion, with its role being to advise and recommend on the types of measures that
should be adopted under the DMA, and to facilitate the coordination and coop-
eration between the Commission and the Member States in their enforcement
decisions.
– A Digital Market Advisory Committee, which is a standard comitology committee
to which the Commission shall turn to request expert opinions before adopting
120 For the telecommunications sector, the Annex to the DMA Impact Assessment Study, p. 68,
estimates that the implementation of the EU electronic communications framework requires
60 FTEs at the Commission, 28 FTEs at BEREC and 41 FTEs in each national regulatory
authority, amounting to a total of 1,195 FTEs.
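The total quoted in footnote 120 can be reconstructed as a simple check, on the assumption (not stated in the source) of one national regulatory authority in each of the 27 Member States:

```latex
% Reconstruction of the footnote's total FTE figure.
% The count of 27 national regulatory authorities is an assumption.
\[
  60 + 28 + (27 \times 41) \;=\; 60 + 28 + 1107 \;=\; 1195 \text{ FTEs}
\]
```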
121 DMA, Art.37.
122 DMA, Art.27.
123 DMA, Arts.21–23.
124 DMA, Art.26.
125 DMA, Art.41.
126 DMA, Art.40.
Aside from the National Authorities and courts, the Commission will also be
supported by all the other stakeholders active in the platform economy. To this
end, in the spirit of “participatory regulation” promoted by the World Economic
Forum (2020), the Commission will orchestrate an institutional ecosystem of com-
pliance and enforcement with regulated gatekeepers, their business users, end users
and civil society more broadly.
As an integral part of that exercise, gatekeepers must establish a number of
internal mechanisms and processes to support DMA compliance.
– They have to establish a ‘compliance function’, performed by officers
of the gatekeeper’s company who are independent of its operational functions,
with the gatekeeper needing to ensure that its compliance team is well resourced
and staffed, and able to report directly to management and liaise with the
127 DMA, Art.50 and Regulation 182/2011 of the European Parliament and of the Council of
16 February 2011 laying down the rules and general principles concerning mechanisms for control
by Member States of the Commission’s exercise of implementing powers, OJ [2011] L 55/13.
128 See Komninos (2021) and Podszun (2021).
129 DMA, Art.42 and Directive 2020/1828 of the European Parliament and of the Council of
25 November 2020 on representative actions for the protection of the collective interests of
consumers and repealing Directive 2009/22, OJ [2020] L 409/1.
130 DMA, Art.39.
131 DMA, Art.28.
132 In the merger context (and, to some extent, in the competition law field), the Commission
outsources these functions to a monitoring trustee, while in a field such as aviation, the
management of airport slots is a regulatory function performed by a party independent of the
airport in question and the airlines using it.
133 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46 (General Data Protection Regulation), OJ
[2016] L 119/1, Arts.37-39.
134 DSA, Art.32.
135 DMA, Art.11.
136 DMA, Art.43 and Directive 2019/1937 of the European Parliament and of the Council of
23 October 2019 on the protection of persons who report breaches of Union law, OJ [2019] L
305/17.
137 DMA, Art.8(6).
138 DMA, Art.27.
139 DMA, Art.18(8) regarding systematic non-compliance Decisions; Art.29(4) regarding
non-compliance Decisions.
140 DMA, Recital 5 in fine clarifies that a dominant position and gatekeeper power are not
identical concepts.
141 Whereas Member States such as Germany, France and the Netherlands strongly supported the
possibility of national enforcement of competition policy, Member States concerned about
preserving the integrity of an EU-level regulatory model included the Scandinavian Member
States. On the question of the complementarity between the DMA and competition law, refer to
Larouche and de Streel (2022).
142 Case C-117/20 Bpost v. Autorité Belge de Concurrence, ECLI:EU:C:2022:202. See
Colangelo (2022).
the Commission acting in its new regulatory capacity, to ensure that this principle is
not flouted. Matters are complicated by the fact that regulatory principles under the
DMA are imbued with competition law theories of harm, while there will at the same
time always be some risk of ‘regulatory creep’ of non-competition-based principles
under the DMA working their way into competition law enforcement at the national
level.143 In such situations, there is every possibility that designated gatekeepers
might indeed seek recourse to the national courts in litigation against the perceived
double enforcement of regulation and competition law in the same action. In this
regard, the actions of German legislators in extending competition policy to address
the practices of ‘ecosystems’ may be problematic,144 with the Bundeskartellamt
declaring that competition action is possible against several members of Big Tech
because they are undertakings that are ‘of paramount significance for competition
across markets’ (i.e., rather than an undertaking that is ‘dominant’ in a ‘relevant
market’ for antitrust purposes).
With a view to minimising such cross-jurisdictional concerns, the DMA includes
a provision that deals with the issue of coordination and cooperation between the
Commission and National Competition Authorities.145 Those authorities are obliged
to keep one another informed of competition enforcement actions under the auspices of the
European Competition Network,146 thereby hopefully providing a structure within
which enforcement overlaps are identified. Moreover, the National Competition
Authorities will have an important role to play in supporting the Commission in
implementing the DMA and where the market investigation mechanism is triggered
under the DMA. On balance, the concerns about competing competence need to be
put in perspective, insofar as there already exists a significant amount of tension in
the parallel application of competition rules by the Commission and Member State
authorities—adding the DMA as a source of tension arguably does little to exacer-
bate a tension that already exists.
As explained by Larouche and de Streel (2022), the DMA is establishing a new field
within EU economic regulation, and the first interpretations and enforcement actions
by the Commission will determine the direction for the future of EU digital economy
regulation. However, those issues will be particularly complex to decide given that
143 A good example of this, inter alia, under Belgian, French and German law is the notion that
dominance may exist where parties are ‘dependent’ on the allegedly infringing undertaking.
144 New Section 19a of the German competition law.
145 DMA, Art.38.
146 The ECN is organised by Directive 2019/1 of the European Parliament and of the Council of
11 December 2018 to empower the competition authorities of the Member States to be more
effective enforcers and to ensure the proper functioning of the internal market, O.J. [2019] L 11/3.
the DMA regulates technologies and business models which are very diverse and fast-evolving,
are subject to different supply and demand balances, require different levels of
innovation and data manipulation, and are not always fully understood.
By including competition policy signals at all stages of the analytical process
(policy objectives, nature of the obligations and the application of a proportionality
test), the DMA should in principle complement competition law in order to make
digital markets work better and to stimulate inter and intra-platform competition. In
doing so, the DMA would come closer to the ‘managed competition’ model that
underpins other EU regimes of economic regulation, such as that which applies to
the electronic communications sector.147 The pursuit of a managed competition
model implies that, while competition policy plays a central role at all times, it is
the regulatory framework that is used to facilitate, channel or structure that compet-
itive dynamic.
While the ‘managed competition’ model seems the best future for the DMA, two
other future scenarios seem possible, though both are significantly less desirable:
(i) ‘fossilization’; or (ii) gatekeeper entrenchment. Under a ‘fossilization’ scenario,
the detailed rules of the DMA will at best become quickly outdated or, at worst, be
readily circumvented. If these alternatives were to materialise, the DMA would
remain a relatively ineffective formal legal document of little practical utility. The
risk of fossilization has been taken seriously by EU lawmakers because, as has been
discussed, the DMA provides for a broad anti-circumvention clause and for the
possibility for the Commission to update the obligations through delegated legal acts.
These powers will need to be used effectively by the Commission to avert the real
risk of fossilization. At some point in the future, when the DMA is revised and
experience has been gained in its practical application, an evolution towards
more flexible and standards-based provisions may be conceivable (and highly
desirable) in order to increase the resilience of the DMA in an environment which
is moving rapidly. As has already been discussed above, a similar evolution has
taken place as from 2002 in relation to the EU electronic communications sector.
Under the gatekeeper entrenchment scenario, the DMA might evolve into a kind
of all-encompassing ‘public utility’ regulatory regime, based on the US model, while
the role of competition policy recedes and fades away.148 While it might be true that,
under this scenario, users of the platforms are likely to be well-protected and
gatekeeper-user relationships will probably be fair, it is also the case that extensive
regulation will probably not support entry that could threaten gatekeeper power.
Rather, the impact of regulation is more likely to entrench a gatekeeper’s position. In
other words, the DMA would then be protecting those parties that are
complementing the services of designated gatekeepers but not stimulating market
forces to encourage the entry of head-to-head competitors and disruptors that target
specialist markets. This is a scenario that we have seen materialise in some utilities
147 As explained in Hancher and Larouche (2011).
148 For requests for a public utilities approach to the regulation of digital platforms see, among
others, Pasquale (2020).
and financial sector regulation, where an increase of regulation has not led to a
proportionate increase in competition. Given the fact that innovation and competition
are strong in digital markets149 and that, when platforms and data are fully open to
competition, the benefit of network and ecosystem effects may be combined with
competition, a natural monopoly/utilities type of regulation would not necessarily be
the optimal future model for EU digital markets regulation.150
Another important question for the future is the extent to which international
cooperation will take place in relation to big tech regulation. While the DMA is a
European Union law, it will apply to global platforms. This raises the question
whether regulated gatekeepers would choose to extend their DMA obligations
beyond the EU because it is cheaper and more efficient to run one version of their
business, rather than tailoring it to each jurisdiction. By taking this route, which
Bradford (2020) describes as a de facto Brussels effect, the Commission would
become the de facto global regulator, including for the US. At the same time, as has
been shown by Fletcher (2022), more jurisdictions across the globe are about to
adopt their own Big Tech regulation. This raises the possibility of substantive
conflict between those regulations, and hence increased difficulties in starting and
running digital businesses. Clearly, the users of platforms would benefit from greater
regulatory coherence and convergence through coordinated legislative or enforce-
ment actions. All this may require the establishment of an international forum that
can address the sharing of information and best practices to achieve better coherence
of rules across the world, on the one hand, and the coordination of enforcement
actions, on the other. Such an international dialogue also holds out the possibility of
the DMA being rendered more effective, as it would enlarge the body of expertise
actively participating in its enforcement.
References
Alexiadis P, Bobowic P (2020) European Merger review of killer acquisitions in digital markets –
threshold issues governing jurisdictional and substantive standards of review. Indian J Law
Technol 16:64
Alexiadis P, de Streel A (2020) Designing an EU Intervention Standard for Digital Platforms. EUI
Working Paper-RSCAS 2020/14. https://cadmus.eui.eu. Accessed 6 Oct 2022
Alexiadis P, de Streel A (2022) The EU’s digital markets act – opportunities and challenges ahead.
Bus Law Int 23(2):163–201
Bradford A (2020) The Brussels effect: how the European Union rules the world. Oxford University
Press
149 Petit (2020).
150 Similarly, Schweitzer (2021), p. 542, recommends that the DMA should not be read as, nor
should it evolve into, a regime of public utility regulation. In the US, Rogerson and Shelanski
(2020) warn against utility-type regulation for digital platforms and recommend a ‘light-handed
pro-competitive regulation’ which is similar to our concept of managed competition.
Colangelo G (2022) The digital markets act and EU antitrust enforcement: double & triple
jeopardy. Eur Law Rev, forthcoming
Crémer J, de Montjoye YA, Schweitzer H (2019) Competition policy for the digital era. Report to
the European Commission. https://ec.europa.eu. Accessed 6 Oct 2022
Crémer J et al (2021) Fairness and contestability in the digital markets act. Yale Tobin Center
for Economic Policy, Policy Discussion Paper 3
Dinielli D et al (2021) Equitable Interoperability: the “Super Tool” of Digital Platform Governance,
Yale Tobin Center of Economic Policy: Digital Regulation Project. Policy Discussion Paper 4
Fletcher A (2022) International pro-competition regulation of digital platforms: healthy experimen-
tation or dangerous fragmentation? Oxford Review of Economic Policy
Furman J, Coyle D, Fletcher A, McAuley D, Marsden P (2019) Unlocking digital competition.
Report of the Digital Competition Expert Panel. https://assets.publishing.service.gov.uk.
Accessed 6 Oct 2022
Geradin D, Katsifis D (2020) Competition in Ad Tech: a response to Google. Available on SSRN.
https://doi.org/10.2139/ssrn.3617839
Hancher L, Larouche P (2011) The coming of age of EU regulation of network industries and
services of general economic interest. In: Craig P, de Búrca G (eds) The evolution of EU law,
2nd edn. Oxford University Press, pp 743–781
Hellwig M (2009) Competition policy and sector-specific regulation in network industries. In:
Vives X (ed) Competition policy: fifty years on from the treaty of Rome. Oxford University
Press, pp 203–235
Ibáñez Colomo P (2021) The draft digital markets act: a legal and institutional analysis. J Eur
Competition Law Pract 12(7):561–575
Jacobides MG, Cennamo C, Gawer A (2018) Towards a theory of ecosystems. Strateg Manag J
39(8):2255–2276
Komninos AP (2021) The digital markets act and private enforcement: proposals for an optimal
system of enforcement. In: Charbit N, Gachot S (eds) Liber Amicorum E Fox, Concurrences, pp
425–444
Krämer J (2021) Personal data portability in the platform economy: economic implications and
policy recommendation. J Competition Law Econ 17(2):263–308
Krämer J, Schnurr D (2022) Big data and digital markets contestability: theory of harm and data
access remedies. J Competition Law Econ 18(2):255–322
Lancieri F, Sakowski P (2021) Competition in digital markets: a review of expert reports. Stanford J
Law Bus Finance 26(1):65–170
Larouche P, de Streel A (2021) The European digital markets act: a revolution grounded on
traditions. J Eur Competition Law Pract 12(7):542–561
Larouche P, de Streel A (2022) The integration of wide and narrow market investigations in EU
economic law. In: Motta M, Peitz M, Schweitzer H (eds) Market investigations: a new
competition tool for Europe. Cambridge University Press, pp 164–215
Monti G, de Streel A (2022) Improving institutional design to better supervise digital platforms,
CERRE Report. https://cerre.eu/publications/improving-eu-institutional-design/. Accessed
6 Oct 2022
OECD (2022) Handbook on Competition Policy in the Digital Age. https://www.oecd.org.
Accessed 6 Oct 2022
Pasquale FA (2020) Internet Non-discrimination Principles Revisited, Brooklyn Law School Legal
Studies Paper 655
Petit N (2020) Big tech and the digital economy: the moligopoly scenario. Oxford University Press
Podszun R (2021) Private enforcement and gatekeeper regulation: strengthening the rights of
private parties in the digital markets act. J Eur Competition Law Pract
Prüfer J, Schottmuller C (2021) Competing with big data. J Ind Econ 69(4):967–1008
Rogerson WP, Shelanski H (2020) Antitrust enforcement, regulation, and digital platforms. Univ
Pennsylvania Law Rev 168:1911–1940
Schweitzer H (2021) The art to make gatekeeper positions contestable and the challenge to know
what is fair: a discussion of the digital market act proposal. ZEuP 3:503–544
Scott Morton F, Bouvier P, Ezrachi A, Jullien B, Katz A, Kimmelman G, Melamed D, Morgenstern
DJ (2019) Committee for the Study of Digital Platforms, Market Structure and Antitrust
Subcommittee. Stigler Center for the Study of the Economy and the State
World Economic Forum (2020) Agile Regulation for the Fourth Industrial Revolution: A Toolkit
for Regulators. https://www.weforum.org/about/agile-regulation-for-the-fourth-industrial-revolution-a-toolkit-for-regulators.
Accessed 6 Oct 2022
Alexandre de Streel is Professor, University of Namur, Visiting Professor College of Europe and
SciencesPo Paris, Academic Director Centre on Regulation in Europe, Chair of the European
Commission Expert Group of the EU Observatory on the Online Platform Economy.
Peter Alexiadis is a Visiting Professor at King’s College, London, where he teaches the respective
LLM courses on Competition Law & Regulated Network Sectors and Digital Regulation. In
parallel, he was a competition law practitioner in Brussels for over 33 years until his retirement
in 2021. He is the co-author of a major new forthcoming textbook on telecommunications law and
digital policy from OUP.
“eCommerce and EU Consumers’ Rights”
Abstract As is well known, the European Union has long concerned itself with
consumer protection. This is one reason why, principally since the nineteen eighties,
the European Union has adopted consumer rights directives that have been trans-
posed into the law of each EU Member State. As early as the closing years of the
twentieth century, the European Union realised the importance of the digital market
and also regulated matters relating to electronic commerce. Today, old challenges
have been joined by new, and the EU is once again called upon to respond. In this
chapter we look at some of the solutions the EU has adopted or is in the process of
adopting.
1 Introduction
1 Ferreira de Almeida (1982), pp. 11 et seq.; Ferreira de Almeida (1985), pp. 19 et seq.; von Hippel
(1986), p. 4; Dias Oliveira (2002), pp. 24 et seq.; Dias Oliveira (2017), p. 130; Engrácia Antunes
(2019), pp. 26 et seq.; Passinhas (2019), pp. 258 et seq.; Passinhas (2021), pp. 873 et seq.; Dias
Oliveira (2021), pp. 210 et seq.
2 Iamiceli (2019), p. 396.
3 Engrácia Antunes (2019), p. 12; Dias Oliveira (2021), pp. 211 et seq.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 125
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_7
126 E. Dias Oliveira
the trader. When the contract is entered into, the trader is not in the physical presence
of the consumer, who may, with good grounds, worry whether the trader really exists, and
about the veracity of the information provided as to the location of physical
establishments, telephone numbers, etc. In addition, the consumer is often asked
to pay for the goods or services acquired by credit card, which may cause appre-
hension as to whether the trader will honour the contract, or as to the risks associated
with using credit cards online.4
When buying physical goods, consumers are also unable to see them in person
before buying, and it is possible that, on receipt, the goods will fail to live up to
the idea the consumer had formed of them.
Contracts entered into online are also governed, as a rule, by lengthy general or
standard contractual clauses, which consumers do not always read or properly
understand. In other cases, even when they understand them, they may not always
agree with them, but their limited or non-existent bargaining power and their need
(more or less real) to contract leads them to accept the terms proposed to them. It is
no less than their free will that may be called into question.
The advertising, not always clearly identified, to which consumers are subject
also contributes to a situation where their intentions are shaped by less well-founded
factors. Furthermore, the use of consumers’ data—which they may have consciously
authorised, but may also result from their online profiles, which are constructed over
time on the basis of their online behaviour—enables traders to suggest purchases of
goods or services that are especially appealing to the consumer in question and, for
this very reason, more difficult to resist.5 It is well known that, today, it is traders
who have to reach out to consumers, guessing at their desires or “needs” (sometimes
constructed by the traders themselves), keeping ahead of their competitors and
thereby securing a better position in the market. Widespread use of social media
and the sending of messages to consumers, based on the workings of an algorithm, is
generally found to be effective. As a result, impulse buying, which in itself tends to
be one of the consequences of distance selling,6 is encouraged even more by the use
of these more recent possibilities of interaction.7
The picture we have briefly sketched would not be complete without a reference
to the sudden surge in the importance of the digital environment. It has become
almost a commonplace, although of no little importance to a faithful description of
our world today, to point out that the use made, during the COVID-19 pandemic, of
online contracts, websites and social media for providing and obtaining informa-
tion,8 has highlighted the importance that the online world has acquired, in a process
that is ongoing.
4 Dias Oliveira (2021), p. 211.
5 Dias Oliveira (2021), pp. 211 et seq.
6 With reference to the context in 2001, see, for example, Dias Oliveira (2001), pp. 94 et seq.
7 Dias Oliveira (2021), p. 211.
8 Passinhas (2021), pp. 872 et seq.
From an early stage, the European Union has regulated matters relating to consumer
protection. Underlying this regulation is a concern to ensure fair treatment of
consumers, but also an environment of healthy competition that allows the internal
market to develop.9 The European Union has also pioneered regulation of
e-commerce, identified from the outset as an important market for developing the
European economy.10
The multiple new ways of concluding contracts that traders offer consumers
require from the European Union a response that is both up to date and able to
address the new challenges. On 11 April 2018, when announcing its New Deal for
Consumers,11 the European Commission set itself the aim, among other things, of
extending protection for consumers when they are online. Its stated objectives were
to strengthen consumer rights online (through greater transparency in online markets
and concerning search results on online platforms), to give consumers the tools
needed to enforce their rights and obtain compensation, to introduce effective
penalties for breaches of consumer protection legislation, to tackle dual quality of
consumer products and to offer improved conditions for businesses.
This resulted in two proposals for directives: the first proposed amending the
Council Directive concerning abusive clauses in consumer contracts, the Directive
on consumer protection in the indication of the prices of products offered to
consumers, the Directive on unfair business-to-consumer commercial practices and
the Directive on consumer rights; the second concerned representative actions for
the protection of the collective interests of consumers, and the repeal of Directive
2009/22/EC on injunctions.
9 Morais de Carvalho (2020), pp. 52 et seq.; Dias Oliveira (2021), p. 213.
10 See, for example, Directive 2000/31/EC of the European Parliament and of the Council of 8 June
2000 on certain legal aspects of information society services, in particular electronic commerce, in
the Internal Market (‘Directive on electronic commerce’), published in the Official Journal L 178, of
17.7.2000, pp. 1–16. At the very start, in recital (1), in fine, the Directive explains that “[t]he
development of information society services within the area without internal frontiers is vital to
eliminating the barriers which divide the European peoples”. Recital (3) goes on to explain that
“[C]ommunity law and the characteristics of the Community legal order are a vital asset to enable
European citizens and operators to take full advantage, without consideration of borders, of the
opportunities afforded by electronic commerce; this Directive therefore has the purpose of ensuring
a high level of Community legal integration in order to establish a real area without internal borders
for information society services”. This directive was transposed into internal Portuguese law by
Decree-Law 7/2004, of 7 January, amended by Decree-Law 62/2009, of 10 March, Law 46/2012, of
29 August, and Law 40/2020, of 18 August. With abundant references to legislation, see Dias
Oliveira (2021), pp. 212 et seq.
11 Available at https://ec.europa.eu/commission/presscorner/detail/pt/IP_18_3041.
128 E. Dias Oliveira
Although the Directive on electronic commerce mentioned above was not aimed
specifically at protecting consumers,14 it contains a number of rules that contribute
to this objective. Attention may be drawn, for example, to Article 6, which lists a
number of requirements that must be met in commercial communications associated
with an information society service;15 to Article 7, concerning the identifiability of
unsolicited advertising communications; to Article 10, concerning information to be
12 Available at https://ec.europa.eu/info/sites/default/files/communication-shaping-europes-digital-future-feb2020_en_4.pdf.
13 See, in the same text, the indication that “[a] frictionless single market, where companies of all
sizes and in any sector can compete on equal terms, and can develop, market and use digital
technologies, products and services at a scale that boosts their productivity and global competitive-
ness, and consumers can be confident that their rights are respected”.
14 It should be noted that Article 1 (1) declares that “[t]his Directive seeks to contribute to the proper
functioning of the internal market by ensuring the free movement of information society services
between the Member States”. In order to achieve this, para. 2 indicates that the directive “(. . .)
approximates, to the extent necessary for the achievement of the objective set out in paragraph
1, certain national provisions on information society services relating to the internal market, the
establishment of service providers, commercial communications, electronic contracts, the liability
of intermediaries, codes of conduct, out-of-court dispute settlements, court actions and cooperation
between Member States”, clarifying in para. 3 that “[t]his Directive complements Community law
applicable to information society services without prejudice to the level of protection for, in
particular, public health and consumer interests, as established by Community acts and national
legislation implementing them in so far as this does not restrict the freedom to provide information
society services”.
15 Among other things, it requires, for example, that the commercial communication be clearly
identifiable as such; that the person on behalf of whom the commercial communication is made
must be clearly identifiable; that promotional offers be clearly identifiable as such and that the
conditions for taking part in them must be easily accessible and presented in clear and unambiguous
terms, etc. These are rules which, although not specifically aimed at consumers, nonetheless offer
them protection, insofar as they ensure greater transparency in commercial dealings.
provided to the recipients of information society services prior to placing the order,16
etc. As indicated above, this Directive dates from 2000. The phenomenon of
electronic commerce has evolved since then, and so it is no surprise that a
European Regulation, still at the Proposal stage, is currently going through the
legislative procedure, designed among other things to amend this Directive. We
refer to the Proposal for a Regulation of the European Parliament and of the Council
on a Single Market for Digital Services (Digital Services Act) and amending
Directive 2000/31/EC,17 although this proposal sets out essentially to establish
harmonised rules on the provision of intermediary services in the internal market.18
With the specific aim of protecting consumers in distance contracts, reference
should be made to Directive 2011/83/EU of the European Parliament and of the
Council of 25 October 2011 on consumer rights.19 As indicated in its title, this
Directive amended Council Directive 93/13/EEC on unfair terms in consumer
contracts, and also Directive 1999/44/EC of the European Parliament and of the
Council of 25 May 1999 on certain aspects of the sale of consumer goods and
associated guarantees, as well as repealing Council Directive 85/577/EEC, to protect
the consumer in respect of contracts negotiated away from business premises, and
Directive 97/7/EC of the European Parliament and of the Council on the protection
of consumers in respect of distance contracts. The list of the directives repealed or
amended by Directive 2011/83/EU reflects, per se, some of the developments in
European Union law. Indeed, Directive 2011/83/EU, which includes a significant
body of rules on the protection of consumers in distance contracts, was neither the
first piece of legislation to address this matter nor the last. The
European Union has subsequently adopted Directive (EU) 2019/2161 of the
European Parliament and of the Council of 27 November 2019, which amended
Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU
of the European Parliament and of the Council in order to ensure better application
and modernisation of the Union’s rules on consumer protection.20 In view of its
breadth, this has also been called the Omnibus Directive,21 resulting from the
initiatives of the New Deal for Consumers referred to above.
16 Article 10 specifies that the information indicated in them is mandatory in cases where one of the
parties to the contract is a consumer.
17 Brussels, 15.12.2020 COM(2020) 825 final 2020/0361 (COD). The explanatory memorandum
starts by clarifying the rationale and aims of the proposal, stating that “[s]ince the adoption of
Directive 2000/31/EC 1 (the “e-Commerce Directive”), new and innovative information society
(digital) services have emerged, changing the daily lives of Union citizens and shaping and
transforming how they communicate, connect, consume and do business. Those services have
contributed deeply to societal and economic transformations in the Union and across the world. At
the same time, the use of those services has also become the source of new risks and challenges,
both for society as a whole and individuals using such services”.
18 See Article 1.
19 OJEU L 304, of 21.11.2011, pp. 64 et seq.
20 Published in OJEU L 328, of 18.12.2019, pp. 7 et seq.
21 Contextualising the adoption of this Directive, see Passinhas (2021), pp. 875 et seq.
22 It should be stressed that Decree-Law 24/2014 had previously been amended by Law 47/2014, of
28 July, by Decree-Law 78/2018, of 15 October and by Decree-Law 9/2021, of 29 January.
23 Of special relevance to an understanding of this concept is the idea of distance communication
technique, defined in Article 3 w) of Decree-Law 24/2014, as “any means which, without the
physical and simultaneous presence of the supplier of goods or service provider and the consumer,
may be used with a view to concluding the contract between the said parties”.
24 Dias Oliveira (2017), p. 133; Dias Oliveira (2021), pp. 225 et seq. Concerning specifically the free
right of contract termination enshrined in these provisions, cfr. Morais de Carvalho (2020),
pp. 192 et seq., 243 et seq.; Passinhas (2021), pp. 890 et seq., and also Passinhas (2015),
pp. 124 et seq.; Teixeira de Sousa (2016), pp. 29 et seq.; Passinhas (2019), pp. 300 et seq.; Engrácia
Antunes (2019), pp. 148 et seq. Concerning the duty to provide information, cfr. Passinhas (2015),
pp. 116 et seq.; Tigelaar (2019), pp. 30 et seq.; Morais de Carvalho (2020), pp. 223 et seq.
Enshrinement of the right of free termination of contract means that in the event
of buyer's remorse, the consumer can terminate the contract. This protects the
consumer against impulse buys, disappointments that may be suffered on receiving
goods purchased sight unseen, etc.25
Given that, in distance contracts, the parties are not in each other’s physical
presence and performance of the trader’s obligation—be it delivery of goods, or
provision of a service—tends to be deferred, taking place at a time subsequent to
conclusion of the contract, it is important for the consumer that the date of perfor-
mance be fixed. Article 19 of Decree-Law 24/2014 lays down that the supplier of
goods or service provider must perform the order within 30 days (unless otherwise
agreed), counted from the day after conclusion of the contract. If it fails to do so, due
to the unavailability of the goods or service, it must inform the consumer of the fact
and reimburse him or her for the amounts paid within 30 days of learning of such
unavailability. If this 30-day period is exceeded without the consumer being
reimbursed, the supplier or provider is obliged to return twice the amounts paid by the
consumer within 15 days (all this without prejudice to the right to any compensation
that may be due).26
This Decree-Law also presents a number of specific rules that reflect with special
clarity the reality of online commerce. For example, Article 4 indicates specific
additional information that must be provided to consumers, in a clear and compre-
hensible way appropriate to the means of distance communication used, when a
contract is entered into in an online market. Article 4-B then seeks to ensure that
reviews made by consumers of providers in online markets comply with standards of
rigour and transparency; given that the selection of a trader with whom the contract is
entered into is today often dependent on prior consultation of the opinions of other
consumers, these reviews have taken on a crucial role.27 Another specific rule on
electronic contracts can be found in Article 7, which determines that, up to the start
of the ordering process, e-commerce websites must indicate clearly and legibly the
existence, if any, of geographical or other restrictions on delivery and the means of
payment accepted.28,29
The rules on consumer protection in distance contracts are therefore a good
example of the legislative policies of the European Union seeking to contribute to
25 Dias Oliveira (2021), pp. 226 et seq.
26 Mota Pinto (2015), pp. 88 et seq.; Dias Oliveira (2017), p. 145; Engrácia Antunes (2019), p. 176;
Morais de Carvalho (2020), p. 206; Dias Oliveira (2021), pp. 228 et seq.
27 Stressing the importance of consumer reviews, cfr. Passinhas (2021), pp. 880 et seq.
28 Mota Pinto (2015), p. 68; Dias Oliveira (2017), p. 135; Engrácia Antunes (2019), p. 186; Dias
Oliveira (2021), p. 225.
29 This provision must at present be interpreted in conjunction with Regulation 2018/302 of the
European Parliament and of the Council of 28 February 2018 on addressing unjustified
geo-blocking and other forms of discrimination based on customers' nationality, place of residence
or place of establishment within the internal market and amending Regulations (EC) No 2006/2004
and (EU) 2017/2394 and Directive 2009/22/EC, published in OJEU L 60 I, of 3.3.2018, pp. 1 et
seq. See, on this matter, Passinhas (2021), pp. 881 et seq.
a safe online environment, updated in line with changes in the market, encouraging
consumers to enter into contracts.
Other legislation of a more or less general character will also be relevant to
regulating these matters.
Given that the overwhelming majority of contracts concluded over the internet
include general contractual clauses, the rules established in the laws of the various
Member States that have transposed Directive 93/13/EEC,30 which in the case of
Portugal was done by Decree-Law 446/85, of 25 October,31 since amended several
times, are especially relevant. As well as indicating the general contractual clauses
which may be prohibited, this Decree-Law also regulates the way in which the
clauses must be communicated (Article 5), how relevant information must be given
(Article 6), the contractual consequences as regards clauses that were not duly
communicated (Article 8), etc.
In view of the topic under discussion, special reference must also be made to the
rules on the formation of contract established in the Directive on electronic com-
merce, most importantly Articles 9 et seq., which are reflected in Articles 24 et seq.
of Decree-Law 7/2004, of 7 January.32
30 As mentioned above, this was also amended by the Omnibus Directive.
31 This Decree-Law was amended by Decree-Law 220/95, of 31 August, by Rectification 114-B/95,
of 31 August, by Decree-Law 249/99, of 7 July, by Decree-Law 323/2001, of 17 December, by Law
32/2021, of 27 May, by Decree-Law 108/2021, of 7 December and, most recently, in the light of
adoption of the Omnibus Directive, by Decree-Law 109-G/2021, of 10 December.
32 In view of their special complexity, attention is drawn to the diverging opinions in legal doctrine
as regards the interpretation of Article 29 of Decree-Law 7/2004. See, on this matter, Lei do
Comércio Electrónico anotada (2005), pp. 116 et seq.; Lopes da Rocha et al. (2009),
pp. 321 et seq.; Oliveira Ascensão (2003), pp. 58 et seq.; Oliveira Ascensão (2004), pp. 112 et
seq.; Oliveira Festas (2006), pp. 425 et seq.; Costa e Silva (2008), pp. 462 et seq.
33 OJEU L 136, of 22.5.2019, pp. 1 et seq.
34 Cfr. Article 55 of Decree-Law 84/2021.
services, under contracts concluded between traders and consumers with a view to
the supply of digital contents and services.35
The rules established in Directive (EU) 2019/770 are complemented by the
provisions of Directive (EU) 2019/771 of the European Parliament and of the
Council of 20 May 2019 on certain aspects concerning contracts for the sale of
goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and
repealing Directive 1999/44/EC.36,37 Indeed, as explicitly stated in recitals (20) et
seq. of Directive (EU) 2019/770 and (13) et seq. of Directive (EU) 2019/771, the
latter Directive applies to contracts concerning the sale of goods, including goods
with digital elements.38 The concept of these goods is defined in Article 2 (3) of
Directive 2019/770, where it is stated to mean “any tangible movable items that
incorporate, or are inter-connected with, digital content or a digital service in such a
way that the absence of that digital content or digital service would prevent the goods
from performing their functions”. This means that digital content or a digital service
that is incorporated or inter-connected in this way will fall within the field of
application of Directive (EU) 2019/771, if it has been supplied with the goods in
accordance with the respective contract of sale.39
35 As defined in Article 2 (1) and (2), of the Directive, “digital content” means “data which are
produced and supplied in digital form” and “digital service” means “a) a service that allows the
consumer to create, process, store or access data in digital form, or b) a service that allows the
sharing of or any other interaction with data in digital form uploaded or created by the consumer or
other users of that service”. In a detailed analysis of the field of application of this directive, see, for
all, Sein and Spindler (2019), pp. 260 et seq.
36 OJEU L 136, of 22.5.2019, pp. 28 et seq.
37 Morais de Carvalho (2019), p. 66, and in general, on the rules established in both directives. On
the rules established in Directive (EU) 2019/770, see Pinto Oliveira (2020), pp. 1223 et seq.
38 Dias Oliveira (2021), pp. 223 et seq. At an earlier stage, concerning the field of application of the
two directives, cfr. Morais de Carvalho (2019), pp. 72 et seq.; Dias Pereira (2019), pp. 18 et seq.;
Mota Pinto (2021), pp. 519 et seq.
39 The second paragraph of recital (21) of Directive (EU) 2019/770 gives an example that affords a
clearer understanding of the relations between the fields of application of the two directives. It
explains that “(. . .) a smart phone could come with a standardised pre-installed application provided
under the sales contract, such as an alarm application or a camera application. Another possible
example is that of a smart watch. In such a case, the watch itself would be considered to be the good
with digital elements, which can perform its functions only with an application that is provided
under the sales contract but has to be downloaded by the consumer onto a smart phone; the
application would then be the inter-connected digital element. This should also apply if the
incorporated or inter-connected digital content or digital service is not supplied by the seller itself
but is supplied, under the sales contract, by a third party. In order to avoid uncertainty for both
traders and consumers, in the event of doubt as to whether the supply of the digital content or the
digital service forms part of the sales contract, Directive (EU) 2019/771 should apply”. Further help
in clearly demarcating the fields of application of the two directives is provided by recital (13) of
Directive (EU) 2019/771, which clarifies that “(. . .) Directive (EU) 2019/770 applies to the supply
of digital content or digital services, including digital content supplied on a tangible medium, such
as DVDs, CDs, USB sticks and memory cards, as well as to the tangible medium itself, provided
that the tangible medium serves exclusively as a carrier of the digital content. In contrast, this
Directive should apply to contracts for the sale of goods, including goods with digital elements
The European Union has adopted other legislation that, in a more or less direct
fashion, also seeks to contribute to legal safeguards for consumers in dealings in the
digital market.
In the first place, in Regulation (EU) 2016/679 of the European Parliament and of
the Council of 27 April 2016 on the protection of natural persons with regard to the
processing of personal data and on the free movement of such data, and repealing
Directive 95/46/EC (General Data Protection Regulation),41 as its title indicates, we
may find rules designed to regulate the collection and processing of data, as well as
the consent requested from the data subject and circulation of the data.
There is also Regulation (EU) 2019/1150 of the European Parliament and of the
Council of 20 June 2019 on promoting fairness and transparency for business users
of online intermediation services,42 which establishes rules “(. . .) to ensure that
business users of online intermediation services and corporate website users in
relation to online search engines are granted appropriate transparency, fairness and
which require digital content or a digital service in order to perform their functions”. Article
3 (3) clarifies that “[t]his Directive shall not apply to contracts for the supply of digital content or
digital services. It shall, however, apply to digital content or digital services which are incorporated
in or inter-connected with goods in the meaning of point (5)(b) of Article 2, and are provided with
the goods under the sales contract, irrespective of whether such digital content or digital service is
supplied by the seller or by a third party. In the event of doubt as to whether the supply of
incorporated or inter-connected digital content or an incorporated or inter-connected digital service
forms part of the sales contract, the digital content or digital service shall be presumed to be covered
by the sales contract”, whilst para. (4) a) also indicates that the directive does not apply to “[a]ny
tangible medium which serves exclusively as a carrier for digital content”.
40 Morais de Carvalho (2019), pp. 66 et seq.; Dias Pereira (2019), p. 15; Passinhas (2021), pp. 888 et
seq.
41 OJEU L 119, of 4.5.2016, pp. 1 et seq.
42 OJEU L 186, of 11.7.2019, pp. 57 et seq.
43 Art. 1 (1).
44 On the relevance of this Regulation, in general see Passinhas (2021), pp. 878 et seq.
45 Defined in Article 2 (1) of Regulation (EU) 2019/1150, as “any private individual acting in a
commercial or professional capacity who, or any legal person which, through online intermediation
services offers goods or services to consumers for purposes relating to its trade, business, craft or
profession”.
46 Defined in Article 2 (3) of Regulation (EU) 2019/1150, as “any natural or legal person which
provides, or which offers to provide, online intermediation services to business users”.
47 See Passinhas (2021), pp. 878 et seq., on the organisation and presentation of the results of
searches by consumers. See also Iamiceli (2019), p. 402.
48 Brussels, 15.12.2020, COM(2020) 842 final 2020/0374 (COD).
49 Cfr. Articles 2 (1) and 3 (1), of the Proposal for a Regulation.
50 Cfr. Article 2 (2), of the Proposal for a Regulation.
51 Explanatory memorandum of the Proposal for a Regulation on Digital Markets, p. 1.
that may be unaffordable for consumers or fail to compensate the damage suffered.52
Online dispute resolution, as well as the use of alternative means of dispute
resolution, can offer a way of facilitating the situation for consumers, who may be
able to have their situation assessed faster and more flexibly, without the need to
travel and with lower costs than in State courts.53
In view of this, the European Union has adopted a number of initiatives to
encourage online alternative dispute resolution. Of particular relevance here is
Regulation (EU) No 524/2013 of the European Parliament and of the Council of
21 May 2013 on online dispute resolution for consumer disputes and amending
Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Regulation on consumer
ODR).54,55 This Regulation was complemented by Directive 2013/11/EU of the
European Parliament and of the Council of 21 May 2013 on alternative dispute
resolution for consumer disputes and amending Regulation (EC) No 2006/2004 and
Directive 2009/22/EC (Directive on consumer ADR).56,57 This Directive was
transposed into Portuguese internal law by Law
144/2015, of 8 September, regulating alternative resolution of consumer disputes
and creating the legal framework for out-of-court settlement of consumer disputes.
These different pieces of legislation complement one another,58 and so all must be
considered in order to arrive at a correct understanding.
Article 1 of the Regulation on consumer ODR states that its purpose is, “through
the achievement of a high level of consumer protection, to contribute to the proper
functioning of the internal market, and in particular of its digital dimension by
providing a European ODR platform (‘ODR platform’) facilitating the independent,
impartial, transparent, effective, fast and fair out-of-court resolution of disputes
between consumers and traders online”.
It is a precondition for the use of this platform that consumers, as well as
suppliers, be resident in the EU, Norway, Iceland or Liechtenstein (Article 2).
When a dispute arises in connection with an online sales or service contract (indeed,
the Regulation limits its application to disputes arising from these contracts: Article
4 (1) e)), the consumer may initiate the complaints process through this platform,
52 See, on this matter, Passinhas (2021), pp. 893 et seq.
53 Scott (2019), p. 1291; Esteban de la Rosa and Marques Cebola (2019), p. 1252; Salgado (2020),
pp. 187 et seq.; Dias Oliveira (2021), p. 213.
54 OJEU L 165, of 18.6.2013, pp. 1 et seq.
55 Relevant to application of EU Regulation 524/2013 is Commission Implementing Regulation
(EU) 2015/1051 of 1 July 2015 on the modalities for the exercise of the functions of the online
dispute resolution platform, on the modalities of the electronic complaint form and on the modalities
of the cooperation between contact points provided for in Regulation (EU) No 524/2013 of the
European Parliament and of the Council on online dispute resolution for consumer disputes,
published in OJEU L 171, of 2.7.2015, pp. 1 et seq.
56 OJEU L 165, of 18.6.2013, pp. 63 et seq.
57 See, concerning these pieces of legislation and the relations between them, Passinhas (2021),
pp. 894 et seq.; Dias Oliveira (2021), pp. 213 et seq. Earlier, Salgado (2020), pp. 186 et seq.
58 Dias Oliveira (2021), pp. 213 et seq.
59 This is a list, which the competent authorities in each EU Member State must maintain, of the
entities engaged in dispute resolution through this procedure; those entities must have been verified
to ensure that they comply with the requirements established in the same directive, and must offer
serious evidence of credibility.
60 On the rules established in the ADR Regulation, cfr. Fonseca (2017), pp. 460 et seq.; Morais de
Carvalho et al. (2017), pp. 211 et seq.; Dias Oliveira (2021), pp. 213 et seq.
Having made this brief survey of the legislative measures adopted by the European
Union with regard to consumer rights in electronic commerce, we should not neglect
to include a note on the concept of consumer.
Indeed, although there is no single concept of consumer, we have identified, in
several legislative texts of the European Union, provisions that define consumer and
tend to coincide in the following: a natural person acting for purposes that do not
include his or her trade, business, craft or profession.61 The main defining element in
this concept is therefore teleological, in other words, it lies in the purpose of the
subject's actions. The different definitions also coincide in indicating that a
consumer is a natural person.62
Although the definition of consumer is relatively clear, in real life doubts may
arise as to whether a given person has the capacity of consumer. In view of the issues
that this raises, we refer here to two judgments of the ECJ concerning the concept of
consumer in connection with the digital market: the Schrems judgment63 and the
Personal Exchange International judgment.64 And although these cases relate to the
interpretation of the concept of consumer in the light of Regulation (EC) 44/2001,65
more specifically, Articles 15 and 16, the guidance given is relevant to European
Union law in general.66
The Schrems case raised the question of the classification as consumer of a citizen
habitually resident in Austria, owner of a Facebook account which he used to raise
funds to bring data protection proceedings against Facebook, and also to disseminate
his talks and other initiatives concerning the same topic, for which he was paid. The
ECJ accepted a restrictive interpretation of the concept of consumer, reinforcing, in
61 See, for example, Article 2 (1), of Directive 2011/83/EU; Article 2 (4), of Regulation (EU) 2019/
1150; Article 4 (1) a), of Directive 2013/11/EU. In a different formulation, but with the same
meaning, in Article 2 b) of Directive 93/13/EEC, a consumer is defined as “any natural person who, in
contracts covered by this Directive, is acting for purposes which are outside his trade, business or
profession”.
62 Dias Oliveira (2021), p. 217.
63 Judgment of 25 January 2018, Case C-498/16, ECLI:EU:C:2018:37.
64 Judgment of 10 December 2020, Case C-774/19, ECLI:EU:C:2020:1015.
65 Council Regulation (EC) 44/2001, of 22 December 2000 on jurisdiction and the recognition and
enforcement of judgments in civil and commercial matters, published in OJ L 12, of 16.1.2001,
pp. 1 et seq.
66 To this effect, see, for example, the Schrems judgment, item 28, which states that “[a]lthough the
concepts used by Regulation No 44/2001, in particular those which appear in Article 15(1) of that
regulation, must be interpreted independently, by reference principally to the general scheme and
objectives of that regulation, in order to ensure that it is applied uniformly in all Member States
(judgment of 28 January 2015, Kolassa, C-375/13, EU:C:2015:37, paragraph 22 and the case-law
cited), account must, in order to ensure compliance with the objectives pursued by the legislature of
the European Union in the sphere of consumer contracts, and the consistency of EU law, also be
taken of the definition of ‘consumer’ in other rules of EU law (judgment of 5 December 2013,
Vapenik, C-508/12, EU:C:2013:790, paragraph 25)”. See, on this matter, Haslach (2019), p. 568.
this judgment, the idea that a consumer will have to act “(. . .) outside and indepen-
dently of any trade or professional activity or purpose, solely for the purpose of
satisfying an individual’s own needs in terms of private consumption (. . .)”.67
However, in this judgment, the ECJ eventually decided that, as concerns the
interpretation of Article 15 of Regulation (EC) 44/2001, “(. . .) the activities of
publishing books, lecturing, operating websites, fundraising and being assigned
the claims of numerous consumers for the purpose of their enforcement do not entail
the loss of a private Facebook account user’s status as a ‘consumer’(. . .)”.68 The ECJ
clarified that the concept of consumer “(. . .) is independent of the knowledge and
information that the person in question actually possesses (. . .)”,69 and is not taken
away from them by the experience they may have in the field of the services where
they seek to enforce their rights, nor by assurances given for the purposes of
representing the rights and interests of other users of the same services.70,71 The
Court concluded that an interpretation of the concept of consumer that excluded
such activities in defence of consumer rights would stand in the way of their
effective protection, which would constitute a breach of Article 169 (1) TFEU, under
which the Union must promote the right of consumers to “(. . .) organise themselves
in order to safeguard their interests”.72
In the Personal Exchange International case—once again concerning the interpretation of the concept of consumer in Article 15 (1) of Regulation (EC) 44/2001—
the ECJ was asked whether “(. . .) Article 15(1) of Regulation No 44/2001 must be
interpreted as meaning that a natural person domiciled in a Member State who, first,
has concluded with a company established in another Member State a contract to
play poker on the Internet, containing general terms and conditions determined by
that company, and, secondly, has neither officially declared such activity nor offered
it to third parties as a paid service loses the status of a ‘consumer’, within the
meaning of that provision, where that person plays the game for a large number of
hours per day and receives substantial winnings from that game”.73 Returning to
67 Item 30 of the Schrems Judgement. See, on this matter, Haslach (2019), pp. 568 et seq., who
stresses that “[i]f the contract was concluded partly for professional reasons, the contract must be
considered in its entirety; the party can therefore still be considered a “consumer” if the link between
the contract and the professional or trade is marginal”. Morais de Carvalho (2019), p. 71, note
14, appearing to side with Geraint Howells, stresses that the restrictive interpretation of the concept
of consumer, including in the Schrems judgment, is circumscribed to judicial competence.
68 It was also ruled, in item 2) of the judgment, that “[A]rticle 16(1) of Regulation No 44/2001 must
be interpreted as meaning that it does not apply to the proceedings brought by a consumer for the
purpose of asserting, in the courts of the place where he is domiciled, not only his own claims, but
also claims assigned by other consumers domiciled in the same Member State, in other Member
States or in non-member countries”.
69 Cfr. Haslach (2019), p. 569.
70 Item 39 of the Schrems Judgement.
71 Haslach (2019), p. 570; on this judgement, Dias Oliveira (2021), pp. 218 et seq.
72 Item 39 of the Schrems Judgement.
73 Item 23 of the Personal Exchange International judgement.
140 E. Dias Oliveira
4 Conclusion
References
Costa e Silva P (2008) A contratação electrónica entre empresas: os B-2-B E-Markets. In APDI
(org.) Direito da Sociedade da Informação, vol. VII. Coimbra Editora, Coimbra, pp 459–471
Dias Oliveira E (2001) A protecção dos consumidores nos contratos celebrados através da Internet.
Almedina, Coimbra
Dias Oliveira E (2002) A protecção dos consumidores nos contratos celebrados através da Internet.
Almedina, Coimbra
Dias Oliveira E (2017) Contratação eletrónica e tutela do consumidor. In: Ataíde RPCM, Barata CL
(coord), Estudos de Direito do Consumo, vol. V. AAFDL, Lisboa, pp 129–148
74 Item 33 of the Personal Exchange International judgement.
75 On this judgement, see Dias Oliveira (2021), pp. 218 et seq.
Dias Oliveira E (2021) Algumas considerações sobre o consumidor no mercado digital no âmbito
do Direito da União Europeia. Revista da Faculdade de Direito da Universidade de Lisboa/
Lisbon Law Review, LXII (1):209–230
Dias Pereira AL (2019) Contratos de fornecimento de conteúdos e serviços digitais. Estudos de
Direito do Consumidor 15:9–36
Engrácia Antunes J (2019) Direito do Consumo. Almedina, Coimbra
Esteban de la Rosa F, Marques Cebola C (2019) The Spanish and Portuguese systems: two
examples calling for a further reform – uncovering the architecture underlying the new con-
sumer ADR/ODR European Framework. Eur Rev Priv Law 6(27):1251–1278
Ferreira de Almeida C (1982) Os direitos dos consumidores. Livraria Almedina, Coimbra
Ferreira de Almeida C (1985) Negócio jurídico de consumo. BMJ 347:11–38
Fonseca P (2017) A Arbitragem e a Mediação. Os desafios do novo regime de resolução alternativa
de litígios. In: Ataíde RPCM, Barata CL (coord) Estudos de Direito do Consumo, vol. V,
AAFDL, Lisboa, pp 447-461
Haslach J (2019) International Jurisdiction in consumer contract cases under the Brussels I
Regulation: Schrems. Common Mark Law Rev 56(2):559–580
Iamiceli P (2019) Online platforms and the digital turn in EU contract law: unfair practices,
transparency and the (pierced) veil of digital immunity. Eur Rev Contract Law 15(4):392–420
Lopes da Rocha M, Correia MP, Rodrigues MF et al (2009) Leis da Sociedade da Informação –
Comércio Electrónico. Coimbra Editora, Coimbra
Ministério da Justiça (2005) Lei do Comércio Electrónico anotada. Coimbra Editora, Coimbra
Morais de Carvalho J (2019) Venda de Bens de Consumo e Fornecimento de Conteúdos e Serviços
Digitais – As Diretivas 2019/771 e 2019/770 e o seu Impacto no Direito Português. Revista
Eletrónica de Direito 3(20):64–87
Morais de Carvalho J (2020) Manual de Direito de Consumo, 7th edn. Almedina, Coimbra
Morais de Carvalho J, Pinto-Ferreira JP, Campos Carvalho J (2017) Manual de Resolução
Alternativa de Litígios de Consumo. Almedina, Coimbra
Mota Pinto P (2015) O Novo Regime Jurídico dos Contratos a Distância e dos Contratos Celebrados
Fora do Estabelecimento Comercial. Estudos de Direito do Consumidor 9:51–91
Mota Pinto P (2021) Venda de bens de consumo apontamento sobre a transposição da Diretiva
(UE) 2019/771 e o Direito português. Estudos de Direito do Consumidor 17:551–561
Oliveira Ascensão J (2003) Contratação electrónica. In: APDI (org.) Direito da Sociedade da
Informação, vol. IV. Coimbra Editora, Coimbra, pp 43–68
Oliveira Ascensão J (2004) Perspectiva jurídica, O Comércio Electrónico em Portugal – O quadro
legal e o negócio, Anacom, pp 104–141. Available at https://www.anacom.pt
Oliveira Festas D (2006) A contratação electrónica automatizada. In: APDI (org.) Direito da
Sociedade da Informação, vol. VI. Coimbra Editora, Coimbra, pp 411–461
Passinhas S (2015) A Directiva 2011/83/UE, do Parlamento Europeu e do Conselho, de 25 de
outubro de 2011, relativa aos direitos dos consumidores: algumas considerações. Estudos de
Direito do Consumidor 9:93–141
Passinhas S (2019) O lugar da vulnerabilidade no direito do consumidor português. Estudos de
Direito do Consumidor 15:255–311
Passinhas S (2021) A proteção do consumidor no mercado em linha. Revista da Faculdade de
Direito da Universidade de Lisboa/Lisbon Law Review LXII 1:871–898
Pinto Oliveira N (2020) O direito europeu da compra e venda 20 anos depois – Comparação entre a
Directiva 1999/44/CE, de 25 de maio de 1999, e a Directiva 2019/771/UE, de 20 de maio de
2019. Revista de Direito Comercial:1217–1276. Available at https://www.revistadedireitocomercial.com/
Salgado C (2020) Breves notas sobre a arbitragem em linha. Revista da Faculdade de Direito da
Universidade de Lisboa/Lisbon Law Review LXI 2:181–203
Scott C (2019) Consumer law, enforcement and the new deal for consumers. Eur Rev Priv Law
6(27):1279–1296
Sein K, Spindler G (2019) The new directive on contracts for the supply of digital content and digital
services – scope of application and trader’s obligation to supply. Eur Rev Contract Law 15(3):257–279
Teixeira de Sousa A (2016) O direito de arrependimento nos contratos celebrados à distância e fora
do estabelecimento: algumas notas. In: Ferreira de Almeida C, Rodrigues LS, Portugal MC et al
(eds) Estudos de Direito do Consumo em Homenagem a Manuel Cabeçadas Ataíde Ferreira.
DECO, Lisboa, pp 18–41
Tigelaar L (2019) How to sanction a breach of information duties of the consumer rights directive?
Eur Rev Priv Law 27(1):27–58
von Hippel E (1986) Verbraucherschutz, 3rd edn. J.C.B. Mohr (Paul Siebeck), Tübingen
Elsa Dias Oliveira is an Associate Professor at the Faculty of Law of the University of Lisbon
where she teaches inter alia Private International Law, Arbitration Law, International Commercial
Law, International Contracts and Civil Law. She is also President of the Arbitration and Dispute
Resolution Center of the Faculty of Law of the University of Lisbon. She has been a member of the
Board of the Portuguese Arbitration Association (APA) since July 2021 and is a member of the
International Academy of Comparative Law (IACL), of the European Association of Private
International Law (EAPIL) and of the Private Law Research Center of the Faculty of Law of the
University of Lisbon.
Online Platforms and Taxes in the EU:
A Compatible Match?
Abstract With the progressive digitization of the economy, the difficulties and
challenges of taxing income generated within the scope of essentially digital
activities became clear. The case of online platforms is one of the
most notorious. Throughout this work, we deal with some of the most relevant
proposals to resolve this tax problem that have been presented, in recent years,
within the scope of initiatives of the G20, the OECD and the EU. We will focus our
attention on the BEPS Project, on the EU Proposals for Directives regarding significant
digital presence and the digital service tax, and on the BEPS 2.0 Project, currently
under way.
Over the final decade of the twentieth century and the early decades of the twenty-
first, the world shifted towards a footing significantly different from that in which
many of what we call the “classical” concepts and rules of International Tax Law
first emerged, in particular the concept of permanent establishment.
Major strides were taken towards economic globalisation and technological
development proceeded at an unprecedented pace (most notably in information
and communication technologies).
Profound changes occurred in the way the main forms of business are conducted.
Physical presence became less crucial to concluding business transactions, and indeed the
face-to-face element has become entirely redundant, thanks to the technological
resources available. Dematerialisation has not been limited to the process of reaching
deals and to matching supply and demand, as e-commerce established itself as an
P. R. Pereira (✉)
CIDEEFF/Faculty of Law of the University of Lisbon, Lisbon, Portugal
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 143
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_8
1 On the taxation of e-commerce, see, inter alia, Calçada Pires (2011).
2 And, moreover, in the internal tax legislation of individual States.
3 On this topic, cfr. Rodríguez Losada (2019), pp. 67–92.
3.1 Introduction
Action 1 in the BEPS Project (Addressing the tax challenges of the digital economy)
identified the main areas where international taxation needs to adapt to the specific
characteristics of the digital economy. Among other things, BEPS action 1 pointed to
the need to rethink the rules on assigning tax-raising powers in international taxation
law, insofar as they continue to be based on a physical presence in the territory of a
given State.
BEPS action 1 went as far as to map out several possible strategies for changing
tax law to respond to the challenges of the digital economy. One of these was to
4 Alongside this, the EU has pursued its own initiatives to combat international tax evasion. Cfr.
Dourado (2018); Almudí Cid et al. (2018); Calderón Carrero and Martín Jiménez (2019).
Proposal for a Council Directive COM (2018) 147 final, of 21 March 2018 (Proposal
for a Directive laying down rules relating to the corporate taxation of a significant
digital presence) set out to create a common system for taxing digital activities in the
EU, taking into account the specific features of the digital economy.
This new system would enable a State to tax profits earned by companies which
engaged in the provision of digital services, even if those companies had no physical
presence that could be characterised as a permanent establishment (in keeping with
the traditional concept).
For the purposes of the rules established in the Proposal for a Directive, digital
services are understood as “services which are delivered over the internet or an
electronic network and the nature of which renders their supply essentially auto-
mated and involving minimal human intervention, and impossible to ensure in the
absence of information technology”.5
The starting point for the Proposal for a Directive COM (2018) 147 final was the
perceived need for alternative indicators of a significant economic and digital
presence, in order to establish tax-raising powers in relation to “new digitalised
business models”.6
Underlying the Proposal for a Directive is the idea that the profits of companies
should be taxed in the place where the value was created, thereby correcting the
mismatch between the place where profits are taxed and the place where value is
5 In Article 3 (5) of the Proposal for a Council Directive COM (2018) 147 final, of 21 March 2018,
laying down rules relating to the corporate taxation of a significant digital presence.
6 Idem.
created, which resulted from application to the digital economy of the rules
governing corporate taxation in general.
The Proposal for a Directive envisaged that the provision of digital services
(provided over the internet or an electronic network, in an essentially automated
manner) could, in itself, create a virtual permanent establishment for non-resident
companies, insofar as the presence of users of those services in a given State
indicated that value was being created in that State.
The rules proposed for establishing a taxable nexus between a digital company
and a Member State were based on the following factors: revenues deriving from
provision of digital services, the number of users of digital services or the number of
contracts for a digital service. These criteria were the indicators of economic activity
selected, the reference points for determining that a company created its value in a
given jurisdiction.
More specifically, the objective indicators whereby the existence of a significant
digital presence in a given State is established were set out in Article 4 (3) of the
Proposal for a Directive. Under this paragraph, a company had a significant digital
presence in a given State if the business carried on through it consisted wholly or
partly of the supply of digital services through a digital interface and one or more of
the following conditions was met with respect to the supply of those services by the
entity carrying on that business:
(a) the total annual revenues obtained from the supply of the digital services falling
under the rules to users located in a given Member State exceeded EUR
7,000,000;
(b) the number of users of one or more of those digital services, in a given Member
State, exceeded 100,000;
(c) the number of business contracts for the supply of any such digital service,
concluded in a year by users located in that Member State, exceeded 3,000.
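The alternative character of these three thresholds can be illustrated with a short sketch. The figures below are hypothetical, and the function name and structure are ours, not part of the Proposal; only the three numerical thresholds are taken from Article 4 (3):

```python
# Illustrative sketch of the Article 4(3) test in Proposal COM (2018) 147 final.
# The three conditions are alternatives: meeting any one of them in a Member
# State would have indicated a significant digital presence there.

def significant_digital_presence(revenues_eur: float,
                                 users: int,
                                 business_contracts: int) -> bool:
    """True if at least one of the three alternative thresholds is exceeded."""
    return (revenues_eur > 7_000_000
            or users > 100_000
            or business_contracts > 3_000)

# Hypothetical provider: modest revenues, but a large user base in one State.
print(significant_digital_presence(2_500_000, 180_000, 900))  # True
```

Because the conditions are disjunctive, a provider could fall within the rules on user numbers alone, even with revenues well below the monetary threshold.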
The Proposal for a Directive sought to offer a long-term response to the challenges of
the digital economy, representing an alteration in the concept of permanent estab-
lishment for companies that fell within its scope of application, so that they could be
taxed even if they had no physical presence in the State in question.7
In short, significant digital presence was based on the idea that, provided certain
conditions were met, the provision of digital services in a given State, through digital
platforms, was equivalent to the existence in that State of a virtual permanent
establishment of the non-resident company providing the services. The users of
the digital services in question were also seen as central to the creation of value.
Despite the expectations generated and the attention attracted in the business and
academic worlds, the Proposal for a Council Directive COM (2018) 147 final, of
21 March 2018, concerning significant digital presence, was never actually adopted
into EU law.
7 On this matter, cfr., inter alia, Romero Flor and Campos Martínez (2019), pp. 39–66.
The European Union published a Proposal for a Council Directive COM (2018)
148 final, of 21 March 2018, on the digital service tax applicable to revenues from
the provision of certain digital services. However, this Proposal for a Directive was
never approved.
In the meantime, many countries, including several Member States of the EU,
went ahead with their own digital service taxes, many of them built around the
taxation of certain types of digital platforms.
The EU countries that have implemented this type of tax include Austria, Spain,
France, Hungary, Italy and Portugal.
With progress on the solutions envisaged by BEPS 2.0, which we shall consider
next, European countries may in principle be expected to look again at the short-term
solution offered by digital service taxes.
6.1 Introduction
The final BEPS Action 1 report (Addressing the Tax Challenges of the Digital
Economy (Action 1) – Final Report), published in 2015, looked at the various
types of measures for responding to the taxation problems raised by the digital
economy. However, it was not proposed that States should adopt any specific
measure, as it was concluded that the groundwork had still not been done. The
OECD considered it was necessary to conduct a more thorough analysis of several
measures, before it could be decided to opt for a given course of action and adopt the
necessary arrangements. There was also no consensus among States as to the best
way of taxing the digital economy.
The OECD accordingly continued to promote study and debate of fiscal issues
relating to the digitalisation of the economy and large multinationals, with significant
results, above all from 2018 onwards. This process led to what is called the BEPS 2.0
Project.
On 1 July 2021, and then again on 8 October 2021, updated versions of the
BEPS 2.0 measures were released. These proposals garnered a remarkable degree
of international consensus. In effect, 136 countries8 (including Portugal) gave their
agreement to the measures for combating tax evasion discussed in the OECD/G20
Inclusive Framework, which made changes to several significant aspects of interna-
tional taxation rules.
8 Initially 137 countries, but one of them later withdrew its support. Since 8 October 2021, other
countries have also joined the agreement.
BEPS 2.0 maps out a strategy, based on two pillars, in order to respond to the
taxation challenges arising from digitalisation of the economy.
Pillar 1 sets out to contribute to a fair division of tax revenues among the
jurisdictions where profits are generated, to this end permitting partial reallocation
of taxing rights between jurisdictions.
Pillar 2 goes on to propose the introduction of a minimum rate of tax applicable to
large multinationals.
Pillar 1 subdivides into what are called Amount A and Amount B.9
The measures relating to Amount A in Pillar 1 are aimed essentially at large
multinationals, and in particular at the digital giants. The scope of application of the
provisions of Pillar 1 (Amount A) is envisaged as consisting of multinationals with
global turnover in excess of 20 billion euros10 and a pre-tax profit margin of more
than 10%.
By means of Amount A, Pillar 1 creates a new connecting factor based on the end
consumer market of the product or service in question. Pillar 1 accordingly seeks to
introduce rules enabling allocation of a residual portion of the company’s profits to
the State where the value is created (the State of the market/of the user), even if there
is no physical business presence there.
For the purposes of the provisions of Pillar 1 (Amount A), a jurisdiction is
deemed to be a market State (place where consumption takes place) only when a
multinational company, falling within the scope of application of the rules, obtains
revenues there of no less than 1 million euros.11
When this new nexus/connecting factor exists, the market States acquire the right
to tax 25% of the residual profits of the multinational company (defined as profits in
excess of 10% of revenues). In other words, a residual portion of the taxable profits
of the multinational companies is reallocated to those States. Reallocation to the
market States is to be processed using an allocation key based on the revenue
assessed.
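The arithmetic just described can be sketched as follows. All figures are hypothetical and the function is ours; only the parameters (the 10% routine-profit threshold, the 25% reallocation share, and the 1 million euro nexus, reduced to 250,000 euros for small-GDP jurisdictions) are taken from the Pillar 1 (Amount A) framework described above:

```python
# Minimal sketch of the Pillar 1 (Amount A) allocation described above.
# Residual profit is the portion of profit above 10% of revenues; 25% of it
# is reallocated to market States meeting the revenue nexus, pro rata to the
# revenues sourced in each of those States (the allocation key).

def amount_a_allocation(total_revenues: float,
                        total_profit: float,
                        market_revenues: dict,
                        nexus_threshold: float = 1_000_000) -> dict:
    """Share of the reallocable residual profit per eligible market State."""
    residual = max(0.0, total_profit - 0.10 * total_revenues)
    reallocable = 0.25 * residual
    eligible = {s: r for s, r in market_revenues.items() if r >= nexus_threshold}
    base = sum(eligible.values())
    if base == 0:
        return {}
    return {s: reallocable * r / base for s, r in eligible.items()}

# Hypothetical multinational: EUR 25bn global revenues, EUR 5bn pre-tax profit.
shares = amount_a_allocation(
    25e9, 5e9, {"State A": 6e9, "State B": 2e9, "State C": 500_000})
# State C fails the nexus; States A and B split 25% of the EUR 2.5bn residual.
```

On these assumed figures, the residual profit is 2.5 billion euros (profit above 10% of revenues), of which 625 million is reallocated between States A and B in proportion to the revenues sourced there.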
By entitling a State to tax part of the business profits, without that right being
conditional on a physical presence (i.e. a permanent establishment) in its territory,
this development represents a departure—albeit still with a relatively limited scope
of application—from the rules currently in force in international tax arrangements, in
particular with regard to the role of physical presence and a permanent establishment
in attributing taxing rights to States.
9 Amount B assumes a fixed return for marketing services operating in the market State, rendering
those services subject to tax. We will not concern ourselves with this aspect, as it lies outside our
field of investigation, which has to do with the concept of permanent establishment.
10 It is anticipated that this might fall to 10 billion euros in future.
11 Or 250,000 euros, in the case of jurisdictions with a GDP of less than 40 billion euros.
References
Almudí Cid JM, Ferreras Gutiérrez JA, Hernández González-Barreda PA (eds) (2018) La
Armonización de las Normas Contra la Elusión Fiscal Relativas a la Fiscalidad Directa en la
Unión Europea. Thomson Reuters - Aranzadi, Pamplona
Calçada Pires R (2011) Tributação Internacional do Rendimento Empresarial Gerado Através do
Comércio Eletrónico – Desvendar Mitos e Construir Realidades. Almedina, Coimbra
Calderón Carrero JM, Martín Jiménez AJ (2019) Derecho Tributario de la Unión Europea. Wolters
Kluwer España, Madrid
Dourado AP (2018) Governação Fiscal Global, 2nd edn. Almedina, Coimbra
Rodríguez Losada S (2019) La calificación de las rentas procedentes de transacciones digitales.
Fiscalidad Internacional y Comunitaria – Monográfico: Nueva Fiscalidad. Editorial Dykinson,
Madrid, pp 67–92
Romero Flor LM, Campos Martínez YA (2019) Evolución del tradicional concepto de
establecimiento permanente hacia una presencia económica y digital significativa. Fiscalidad
Internacional y Comunitaria – Monográfico: Nueva Fiscalidad. Editorial Dykinson, Madrid,
pp 39–66
Paula Rosado Pereira is Professor at FDUL, where she has taught national and international
taxation subjects since 2000. She is an Integrated Researcher at CIDEEFF, FDUL, and a Tax Law lecturer in
several training courses for Judges of the Administrative and Tax Courts, taught at the Center for
Judicial Studies. Member of the 2014 IRS Reform Commission. Lawyer and tax consultant.
12 Cfr. Communication from the Commission to the European Parliament and the Council –
Business Taxation for the 21st Century, Brussels, 18-05-2021, COM (2021) 251 final, p. 8,
available at https://ec.europa.eu (consulted on 16 May 2022).
The European Commission there states that “once agreed and translated into a multilateral
convention, the application of Pillar 1 will be mandatory for participating countries. In order to
ensure its consistent implementation in all EU Member States, including those that are not
Members of the OECD and do not participate in the Inclusive Framework, the Commission will
propose a Directive for the implementation of Pillar 1 in the EU”.
Regulating Digital Advertising from
the Perspective of the 4th Industrial
Revolution
1 Introduction
Digital advertising represents one of the most, if not the most, commercially
attractive segments of the online economy. It is increasingly the field in which the largest digital
platforms monetise their data power. With an unprecedented ability to accumulate,
categorise, analyse and experiment with diverse data of various segments of online
users, publishers and advertisers, the most successful online intermediaries offer a
level of matching incomparably higher than any other traditional advertising service
providers, be they media, data brokers, marketing or advertising agencies.1 Indeed,
the latter, despite their far longer presence in this field, are losing their market share
exponentially to the rapidly growing digital titans.
The indiscriminate penetration of the data economy into all areas of societal life,
the remarkable increase in accuracy and relevance of algorithmic matching and
1 Andriychuk (2022a).
O. Andriychuk (✉)
Newcastle University Law School, Newcastle upon Tyne, UK
e-mail: [email protected]
S. Nagaraja
Newcastle University, Newcastle upon Tyne, UK
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 153
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_9
The discussions on the necessity of adjusting digital market regulation to the context
of the 4th Industrial Revolution are becoming more pervasive and nuanced.2 The
basic intuition behind this concept is fairly straightforward and uncontroversial—
underpinned by a two-fold thesis. The first proposition refers to the condition that in
the course of—and particularly after—the comprehensive and ever-increasing tran-
sition to the online economy, the scale of data generated by the biggest market
players will increase exponentially, enabling these players to leverage the power of
big data in all other sectors of the economic, political, social and cultural life of
individuals and societies in general. Even the most generic data could be interpreted
and refined by ever more sophisticated algorithms to render them meaningful in
terms of learning about various patterns of human life, and transforming this
otherwise generic information into new triggers for shaping our behaviour.
This part of the thesis is well-captured by the ‘data as the new oil’ truism.
The significance of this part of the big data thesis is self-evident and hardly
questioned. The second proposition is inseparable from the first but is less widely
discussed or accepted. It further develops the ‘data as new oil’ metaphor, linking the
2 Schwab (2015).
4th Industrial Revolution with the key factors of the previous phases of technological
history. While the first proposition likens big data to resources, the second focuses
on the operators of these resources. The role of data-generating and data-processing
industries is akin to the role of physical infrastructure: of equal strategic importance,
equally essential/indispensable, and equally decisive in terms of generating and
shaping first industrial development and then its overall progress.
The second proposition, in other words, implies that a small group of big data
controllers—so-called Big Tech—are not only becoming the largest undertakings in
terms of their market capitalisation, but are also becoming the main channels which
define overall economic progress. This degree of control inevitably expands to
broader sectors of public and private life. Sooner or later, whether directly or not,
the unprecedentedly high economic power of Big Tech will be transformed into the
overall power to design, channel and define not only the conduct of individuals, but
also the role of the nations in the global supply chain. As with any other historical
period, fundamental industrial changes lead to challenges for the existing nation-beneficiaries of the status quo as well as opportunities for the nation-confronters.
The industrial revolution implies a moment of paradigmatic reshuffling of global
powers and thus of the global order as a whole.
Evidently, this last feature of industrial revolutions gives rise to the uncomfort-
able hypothesis that the global digital race involves significant elements of a zero-
sum, or winner-takes-all, game. It is more adversarial than mutually beneficial.
These zero-sum elements are not necessarily explicit or well-accentuated, but they
are present and cannot be ignored. The competitive aspects of the global digital race
are inevitable to the same degree that the stakes in the 4th Industrial Revolution are
decisively high.
These aspects would remain important even if the second proposition about Big
Tech, seen through the critical infrastructure metaphor, is only partially correct.
Clearly, the increasing role of Big Tech in the global supply chain cannot be
compared mechanistically to the role of the physical infrastructure: transportation,
communications, roads, ports and all other components of an effective logistical
system—as with any analogy or metaphor, the parallels are more symbolic than
literal. However, they serve to account for the strategic importance of Big Tech in the
digital future as well as the unprecedented enthusiasm with which most State
authorities attempt first to understand, then to influence, control, steer, channel and
ultimately replicate Big Tech domestically—or at least to clearly articulate their
strategic interest in maintaining their digital sovereignty. Even a partial overlap
between the decisive power of the physical infrastructure in the past and the power
of Big Tech as the main infrastructure for big data suffices to explain the appetite of
most nations to be more active in regulating this industry. If big data processing is
indeed the industry which is to shape—or even define—the position of each polity in
the reconfigured constellation of global economic, political, cultural and social
powers, then such an impact can be seen as a matter of strategic national interest.
This conclusion is not a normative proposition, but rather a descriptive observa-
tion. There is no attempt to justify the increased regulatory attention paid in the EU
and UK to the issues of digital markets and digital society—it is merely an attempt to
contribute to our understanding of the origins of this objective regulatory trend. The
period of categorical non-intervention in the evolution of the digital economy is
approaching its end. The laissez-faire regulatory ethos is no longer the imperative for
digital governance. Some see this as a linear development, others as a pendulum
shift. Yet for everybody this shift is a self-evident fact.
The elements of adversarial competition, not only between the key industrial players
but also between the public polities to which they are in some way affiliated, make
the discussion on the 4th Industrial Revolution rather unwelcome in established
competition law, economics and policy circles. The discourse of digital sovereignty
is closely linked to the term industrial policy, which itself until recently has been
considered an absolute taboo in the professional antitrust environment.
The main explanation of such categorical rejection lies in the excessive
technicalisation and axiomatisation of the field.3 The societal phenomenon of eco-
nomic competition has been (wrongly) reinterpreted as a purely scientific endeavour
of equilibrium economics. Under such a view, competition is a certain condition of
the market, characterised by objective, provable and measurable features.
Economic conduct which infringes this condition is seen as
anticompetitive—unless a concurrent remedial effect is demonstrated. Under this
prevalent view there is only one correct condition of competition. Thus this polyse-
mic and utterly complex, unpredictable and spontaneous driver of the entrepreneur-
ial discovery is reduced to the simplistic (by its normative goal) and utterly and
unnecessarily complex (by its methodological apparatus) price-theory model. Not
only would this approach categorically prohibit certain conduct as anticompetitive
(which is in fact understandable as the domain of positive law often has no choice
other than penalising a mere conduct—or even a mere intention—irrespective of its
consequences), but (which is much worse) in an equally mechanistic fashion would
exempt a conduct from antitrust sanctions if alongside the anticompetitive harm it
delivers positive outcomes either to competition itself or to other societal values.4 If
the latter is understandable—inasmuch as the interests of a healthy competitive
process coexist with the variety of other societal interests with no realistic possibility
of reconciling these incommensurable values—then the former has been mutated
into a fairly awkward and semantically absurd—yet commonly acknowledged—
logical syllogism:
3 Andriychuk (2022b).
4 Ezrachi (2016).
adepts. Even the most societally open and polycentric competition policy should
continue to be based on its two professional methodological pillars: the language of
case-law and the language of mathematical economics. These languages however
should no longer be seen as offering an exclusive, monosemic and scientifically true
explanation of the very phenomenon of economic competition—and thus an author-
itative, categorical and axiomatic prescription about how it should be regulated.
Instead, their role would remain central, yet reduced to the level of technical
expertise. In the world of polycentricity,5 where divergent approaches, interpreta-
tions and narratives compete with each other to be selected and prioritised by policy-
makers on a case-by-case basis, the solutions offered by the skilful reading of
various components of economic theory, factual evidence as well as the creative
interpretation of legislation, soft- and case-law, will vie with each other and with
other societal values to be prioritised by decision-makers. Robust economic and
legal underpinning will continue as a precondition for each legitimate decision. It
will however no longer be the sufficient condition for making an axiomatic decision.
Such an epistemic approach implies a gradual abandoning of the misleading belief in
the existence of some superior or optimal, absolutely true and correct equilibrium
condition in the market, which competition policy should adhere to. The pluralistic
reality renders the enforcement of competition law less axiomatic, singular, holistic
or inquisitorial—and more discursive, polycentric, case-by-case and adversarial. By
abandoning the positivistic determination of discovering the scientific truth about the
markets, competition policy is becoming more flexible, interpretive and adaptable to
the broader societal agenda – including the digital agenda.
As a result of such a far-reaching epistemic metamorphosis of competition policy,
the factor of the 4th Industrial Revolution is no longer seen as irrelevant to compe-
tition policy. It could well persist as an external factor, formally going beyond the
established normative goals or methodological apparatus of competition policy.
Being an external factor rather than an internal competition goal renders it no less
relevant for polycentric competition policy considerations. The factors of the 4th
Industrial Revolution and digital sovereignty implicitly shape the context within
which competition policy is pursued by various polities—including both the EU and
UK—and at least in this capacity they cannot be ignored in competition policy
discussions. In this sense the emerging period of competition policy breaks the
illusion of certainty and predictability. Various factors play a role in defining,
shaping and pursuing competition policy, as well as in balancing it with other
legitimate societal values.
5 Lianos (2018).
Against this background the interaction of competition policy in a broader sense with
other regulatory sectors of digital markets becomes inevitable. The conceptual
necessity of a greater communicative openness is shifting from the realm of aca-
demic discussion into the area of real-life policymaking.
The establishment of the UK Digital Regulation Cooperation Forum (DRCF)—
an inter-agency communicative platform between four sectoral digital regulators: the
Competition and Markets Authority, the Office of Communications, the Financial
Conduct Authority and the Information Commissioner’s Office, aiming to facilitate
and coordinate the work and priorities of these agencies—epitomises the trend
towards a more holistic, synthetised approach to regulating digital society. The
discussed possible modalities of enforcing the DMA also envisage establishing a
special enforcement unit, comprised of the representatives of various Commission
Directorates-General—and thereby also reflecting the inevitability and more explicit
nature of the broader societal account, at least as far as the ex-ante digital compe-
tition policy is concerned.
This horizontal, inter-enforcer cooperation is further strengthened by concurrent
governmental initiatives to streamline the multilogue between digital regulators and
the UK Government vertically (or diagonally).6 Clearly, this trend should not be
perceived mechanistically. It is not a syncretic merger of all sectoral digital enforcements into a single meta-narrative. Otherwise, the less widespread and less commonly topical societal interests—such as the protection and promotion of economic competition—would always be undermined when in conflict with more universal societal policies, such as consumer protection or online harms. The communication and cooperation should not, however, mutate into a holistic subordination.
In the more cooperative model, the subject matter differences and specificities are
and remain essential. This in no way negates the decline of the previous regulatory
imperative of a narrow, self-centred and completely independent approach to various
individual areas of digital governance. While the previous model was underpinned
by the monosemic perception of regulation where the correct answer would exist for
each regulatory situation (implying that the narrow insulated expertise of each
enforcer would be more likely to discover this answer), the emerging approach is
much less axiomatic, much less monosemic and much less single-metric.
It becomes clear that what is seen as an opportunity through the lenses of one
digital policy may well be considered a challenge if looked at from the perspective of
another. The regulatory landscape is, and will always remain, fragmented.
Establishing a more advanced level of inter-agency communication horizontally,
as well as the communication vertically between the agencies and the Government,
aims to facilitate a better understanding of this fragmented reality, to cooperate
6 UK Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy, ‘A New Pro-Competition Regime for Digital Markets’, Open consultation, 20 July 2021.
where possible and mitigate where necessary—it does not aim to overcome it via a
syncretic merger of the diverse digital interests into one.
7 UK Competition and Markets Authority, ‘Online platforms and digital advertising market study’, Final report, 1 July 2020.
8 Australian Competition & Consumer Commission, Digital advertising services inquiry, Final report, September 2021.
9 Google Digital Advertising Antitrust Litigation, United States District Court, Southern District of New York, Civil Action No.: 1:21-md-03010-PKC.
At the same time, various comparable studies and investigations have been
completed in a number of jurisdictions, with different regulatory priorities and
interests. All of them echo the same systemic problems faced by the industry,
exhibiting a remarkable conceptual consensus on both diagnosis of the issue and
regarding its overall existential importance for the development of the digital
economy. Clearly, the remedies discussed in different regulatory regimes are more
divergent, reflecting the procedural and contextual specificities of the situation in
each particular geographic location. All these sources though are united in
manifesting the importance of these markets for the overall functioning of digital
society, and in their willingness to challenge the spontaneously emerged status quo.
Among the main distinctive features that characterise digital advertising markets is their systemic market tipping, which enables a very few of the largest and most successful Big Tech companies to generate unprecedented market power. Their unlimited
access to various data of their business- and end users enables these companies to
train their algorithms to the utmost advanced level—which in turn facilitates growth
of their client base. This unique position makes possible a highly precise matching of
advertisers with publishers and end users. The deep insight into all behavioural
patterns and commercial interests of various elements of the supply chain together
with the unlimited ability to test various algorithmic modalities, experiment with big
data and constantly adjust and refine their categorisation, tailoring and recommen-
dation capabilities, allows the dominant few to master ad infinitum the accuracy of
the matching between all participants in the digital supply chain.
This unique status is further reinforced by colossal revenues which enable the
indiscriminate purchase of all actual, potential and hypothetical competitors inno-
vating fresh business models or demonstrating at least a degree of success in existing
ones. Inasmuch as most of these acquisitions concern undertakings with low market
capitalisation, as such incapable of scaling up their otherwise successful models,
these acquisitions would usually pass below the jurisdictional thresholds (and often
even regulatory radars) of merger control rules, designed in the pre-digital age for
pre-digital purposes.
Additionally, the largest online advertising market players enjoy a remarkable
ability to discriminate, disintermediate, self-prioritise and arrange their vertical
rankings, bringing comprehensive control over this rapidly emerging industry,
synthetising their enormous market success with the other services they provide.
Another factor in this development is the almost complete lack of awareness
regarding the technological mechanics of this industry—particularly as far as pro-
grammatic advertising is concerned—as well as the model’s inherent opacity and the
untraceability of its key elements. The technology itself encompasses various stages of the supply chain and a myriad of transactions occurring in real time. The
value of each individual transaction is minuscule, and the marketing business
essentially internalises various ethically dubious or borderline practices of online
advertising as almost the sole effective way, in the digital ocean of information
cacophony, to ensure selection by the end user.
The inherent lack of transparency of this business is further reinforced—and
multiplied—by the fact that often most—or even all—elements of the digital
advertising supply chain are controlled by the same gatekeeper. No less significantly, this cycle has benefited from the dominant discourse of laissez faire or highly light-touch intervention, allowing the largest players to entrench themselves and adapt best to the new
regulatory reality. Essentially, in order to replicate the current gatekeepers’ success,
their challengers would need the same level of regulatory benevolence—which is
clearly not the situation they experience now. With a few astonishing exceptions,
one can safely conclude that the entrenched market position of the current gate-
keepers in the area of online advertising is near impossible to replicate or challenge.
All these systemic features prove the obvious: the traditional ex-post mechanisms of
competition law enforcement alone are inadequate to significantly change this
situation. Antitrust was never designed to deal with systemic challenges. Its
ex-post application indicates a purely responsive modality. It penalises wrong-
doers that violate the established rules, protocols and conventions, whereas the
current situation requires a much more proactive, systemic and coordinated set of
measures. This renders antitrust not unnecessary or obsolete, but merely insufficient.
It is insufficient despite the relative elasticity of the available instruments, which
allow regulators to recalibrate enforcement priorities, employ unused instruments
and reinterpret existing rules. These incremental changes are necessary—particu-
larly in the short-term—before more strategic reforms are properly introduced. The
incremental changes will only be capable of remedying the situation incrementally if
complemented by significant qualitative reforms.
One such reform complementing the ex-post approach of digital competition
rules with the new proactive modality is the EU Digital Markets Act (DMA)—an
innovative legal instrument granting the Commission unprecedented power to
enforce rather interventionist rules vis-à-vis the specially designated group of the
largest digital stakeholders—the gatekeepers. Another paradigmatic legislative ini-
tiative comes from the UK CMA. Channelled originally through the Furman Report,
underpinned further by the CMA response to the UK Government consultation on a
New Pro-Competition Regime for Digital Markets, this initiative reflects the same
normative ideas and goals, though envisages a slightly different enforcement mech-
anism. This mechanism proposes establishing within the CMA a dedicated group
specialising exclusively on digital matters and holding a more proactive mandate for
intervention: the Digital Markets Unit (the DMU).10 As the final version of the DMA
was adopted by the Parliament on 5 July 2022, while the DMU still functions only in
a skeleton form, it will suffice here to summarise the key features of the EU
10 UK Competition and Markets Authority, ‘The Competition and Markets Authority has delivered the advice of its Digital Markets Taskforce to government on the potential design and implementation of pro-competitive measures for unlocking competition in digital markets’, 3 April 2020.
mechanism for regulating competition in digital markets, implying that most of the
new tools will also be reflected correspondingly in the DMU proposal. The most
important DMA innovations, characterising the new pro-competition approach to
digital markets, include the following: asymmetricity; introducing punitive elements
to the new rules; future-proofness; opacity by design; availability of regulatory
dialogue; pro-competition proactive goal; greater interpretive role; ability to over-
come most of the procedural pitfalls constraining the effective enforcement of the
current ex-post rules; going beyond the traditional narrow scope of the prescriptive
rules, which characterise sectoral ex-ante rules; clear-cut proactive modality,
non-axiomatic approach to measuring market performance; mitigating the ‘King
Midas’ pitfall; abandoning efficiency defence and most of the instances of objective
justification; eliminating the reference to consumer welfare and innovation; over-
coming the restorative, rights-based approach to the business- and in particular to
end users; simultaneous presence of several vertical and horizontal objectives;
acknowledging behavioural biases, encapsulated in the ‘privacy paradox’.
The feature of asymmetricity means the ability to target among the addressees of
these stringent rules only the largest undertakings with entrenched systemic gate-
keeping positions. This would afford smaller horizontal competitors a better oppor-
tunity to close the gap by allowing their business- and end-users a more responsible
access to the services on which they depend, and which hitherto have been subject to
frequent entrenched service provider misuse. Such a binary structure replicates the
procedural rationale of the concept of dominance as reflected in Art 102 TFEU. The
substantive aspects of obligations, of course, go far beyond those envisaged in Art
102 TFEU.
These obligations could in some sense be seen as punitive. The gatekeepers are
penalised by the requirement to comply with the extensive list of obligations by
virtue of their gatekeeper status. In addition to being long, the list of obligations is
also intentionally open-ended, which makes it almost impossible to comply with—
or rather enables the enforcer to identify instances of non-compliance more easily
than in the ex-post regime. This opacity is intentional as reflected in the title of Art
6 DMA: ‘Obligations susceptible of being further specified’. This is a systemic
feature of the DMA, encouraging the regulatory dialogue between the gatekeeper
and enforcer, allowing the latter to impose liability on the former regardless of the
outcomes of the dialogue.
Additionally, the broad scope of obligations mitigates the obvious need for quick
updates of the regulation insofar as most digital markets evolve very rapidly, making
the new rules more future-proof, and allowing more scope for interpretation. Addi-
tionally, the punitive nature of the DMA implies that it seeks to overcome otherwise
common procedural pitfalls, whereby cases are lost in the courts on purely proce-
dural juristic grounds to the detriment of hard-earned enforcement agency results.
The next punitive element concerns the presence of high fines and other remedies.
However, seen against the traditional gatekeeper perception of fines as costs, as well
in light of the more proactive (rather than restorative) modality of the new rules, the
foremost role of fines under the DMA appears more pedagogic and preventative. The
sheer competence to impose high fines matters more for the new pro-competition
enforcement mechanism than the imposition itself. The nature of fines is less
compensatory than stimulatory, and their almost unlimited availability does more
to discipline gatekeepers than their actual imposition.
Another reason for the rules' broad conception is an attempt to mitigate gate-
keeper ‘omnipotence’. With their superior expertise in their own business models,
these players appear surprisingly flexible in adapting to various administrative prescriptions, managing to comply with the letter of each narrowly defined individual
rule while continuing to benefit substantively from prohibited conduct. Among the
features of gatekeeper omnipotence, provoking comparison with the King Midas'
touch that turned all to gold, is the ability to deploy the efficiency defence to justify
almost any anticompetitive practice. An illustrative example of the King Midas
scenario arises out of the increased role of privacy in the current digital agenda.
Clearly, most of the gatekeepers have generated their enormous digital power not
least by exploiting the trust of their business- and end users in terms of an indis-
criminate access to their data, exploiting the regulatory lacunae or dominant laissez-
faire approach to digital markets. Only with the landmark public scandals and
revelations did the value and sensitivity of their digital data become clear to a critical
mass of digital citizens, as such requiring proper protection and management. This
rapidly emerging consumer and customer demand however resonates harmoniously
with the new approach to data management adopted by the gatekeepers themselves.
Most of them currently invoke powerful data protection rhetoric and meet the data
protection demand by their users by indeed designing higher standards of data
protection than the majority of their much less technologically advanced competitors. Benefiting from leveraging their entrenched position into various markets, from comprehensive access to the first-party data of their direct customers and users, and from the technological capability to extrapolate and project algorithmically the behaviour patterns of various focus groups onto more cautious users, they no longer need to resort to the range of surveillance techniques that compromise data protection legislation. With all the dubious work consigned to the
previous historical period, most gatekeepers today are only too happy to meet the
increasing demand for greater privacy protection.
Clearly, a highly intuitive solution to this challenge offered by the gatekeepers is
single-homing. For most people, a single reliable digital gateway designated with
de-facto public utility status would be a more satisfactory option than the need to
engage endlessly with the vast array of transactions with myriad digital service
providers, some of which openly cut corners and compromise on data protection
rules—leaving aside blatantly criminal schemes. This raises a fundamental challenge
in terms of strengthening the entrenched status of the platform providing single-
homing services. The tactic by most web browsers of abandoning the third-party
cookies scheme also exemplifies this trend. There is no essential need for a gate-
keeper to use third-party cookies if it has comprehensive access to first-party ones.
This becomes especially obvious if such privacy-enhancing steps push most of their
actual and potential competitors out of the markets, thereby further bolstering
their already entrenched position, and strengthening the loyalty of their business-
and end users, whose overriding interest in secure data storage instead of any
8 Conclusion
The overarching assumption underpinning this chapter is that we have entered a
new evolutionary period of history, a period predetermined by the comprehensive
penetration of digital technologies into our lives. The process is exponential,
unavoidable, universal and objectively predetermined. The globalised digital world
is still embedded in the traditional established institutions of nations and States, but
the role of the digital non-State actors, the technological corporations, is increasing
at a pace. The inherent nature of the data-driven algorithmic digital economy implies
11 Lamadrid de Pablo and Bayón Fernández (2021), p. 576.
the key role of the network effects and market tipping, so that the big become bigger
and the strong stronger. The Big Tech group has reached the status of an unavoidable
intermediary between producers and consumers, buyers and sellers, providers and
receivers, publishers and readers, advertisers and customers, givers and seekers. It
controls and channels all communications and all transactions, collecting digital
footprints and utilising algorithmically even the most trivial and fragmented data.
Such comprehensive generation, processing, interpreting, categorisation, profiling,
targeting, steering, and navigating have become new attributes of the digital super-
power. As the instances of communication and transactions become ubiquitous, the
role of Big Tech is growing exponentially. It is an objective reality, faced even by
autarchic totalitarian dystopias.
As with any other technological transition, the digital revolution creates many
opportunities and challenges. Some nations, relying on knowledge, competence,
skills, intuition and luck, are among the leaders of the global digital race—others fall
behind. Often, a country’s position in the virtual league table reflects neither its place
in the previous (pre-digital) technological race nor its ambitions in the current one. In
such a case, the country either focuses on designing its own domestic rules,
facilitating the emergence of the national champions—local Big Tech companies—
or seeks to compete on its merits, aiming to gain a secure position in the global
digital supply chain. This explains the importance of the question of what regulatory
steps are needed by the EU/UK authorities to make these polities (even) more
competitive in the global digital race. The race, among other things, is conditioned
by the winner-takes-most/zero-sum context, which explains the politicisation and
polarisation of the positions of the main actors. The potential of the EU/UK is
much higher than their current place in the race: not least due to their strong scientific
foundations and developed institutions, but largely due to a significant discrepancy
in terms of the data (qua-oil/currency) being permanently extracted from rich and
law-abiding European/UK consumers in the form of revenues/taxes landing in the
public purses of non-EU/UK jurisdictions. This is an objective trend of the 4th
Industrial Revolution, and the regulatory initiatives aiming to respond to this objec-
tive trend are legitimate and perfectly understandable.
The dynamically changing cyber-reality requires of regulators and policymakers not only deep sector-specific expertise but also a holistic synchronisation and coordination of various regulatory initiatives and enforcement priorities at
the macro-level. Understandably, each area of Internet policy has its specificity. The
more widespread and comprehensive digital relationships become, and the more
conflicting regulatory objectives emerge, the more difficult such a synchronisation is
to achieve. In recent decades, competition law has become inherently interdisciplin-
ary with Law & Economics its lingua franca. This has yielded many advantages but
equally has caused various communicative difficulties and normative misconceptions. Such economisation, together with the specific subject matter of competition law, explains its autonomisation, a certain methodological distancing from other areas
regulating Information Technology. At the same time, the rapid development in
other areas of Law & IT has also contributed to further fragmentation of the
discipline. Often, legal discourses developed in various domains of the regulation
12 See e.g. the recent public hearing at the House of Lords Communications and Digital Select Committee, ‘Digital Regulation Cooperation Forum gives evidence to Lords Committee’, 2 November 2021, available at https://committees.parliament.uk.
Thematically, this chapter has addressed one of the main areas of the online
economy—digital advertising. The importance of this sector is paramount. Its
technical opacity and immense commercial attractiveness allow the current indus-
trial champions to generate super-profits, strengthening further their market power
and leveraging it across other sectors of the digital economy. The key question
requiring a proper conceptualisation is whether it is even possible at this stage to
recalibrate the systemic features of this industry by triggering competition via the
combination of efforts by regulators, competitors and business users of Big Tech.
The conclusion in this regard is somewhat sceptical. The understanding of the
complex phenomenon of digital advertising—both in terms of its societal impor-
tance as well as its purely technical mechanism—is at an embryonic level. The
regulatory, enforcement and judicial process has started, and it is characterised by
remarkable dynamism. It is yet unclear how—apart from the draconian measures of
structural/functional separation—the situation could be changed paradigmatically.
The totality of control of the digital markets as well as the greater role of privacy
concerns and the systemic ability of Big Tech not only to misuse their market power by exploiting the privacy vulnerabilities of their business- and end users, but also to become a reliable single-home quasi-public utility for these users, make any meaningful new
entry rather implausible and fairly hypothetical. Even if the regulatory attempts in
this regard appear Sisyphean, they should continue to be seen as the only remaining
possibility. Reliance on market self-correction would be even more naïve and
counterproductive than expecting an efficient recalibration of the structure of the
market of digital advertising through regulatory intervention.
Clearly, the race between public authorities and Big Tech in governing the
business of online advertising is only at its initial stage. Yet it is already possible
to project a few common trends around which the discussions will be centred.
Among the most vibrant is the relationship between competition policy and public
security. Advertising platforms have collected large-scale personal data and broken
the State monopoly on intense personal surveillance. They have so far weathered
State hostility by opting to offer this information to the State as a commercial service.
However, they have encroached into the territory controlled exclusively in the past
by public authorities. We anticipate—and plan to explore—the emerging discus-
sions on the philosophical aspects of the clash.
References
Andriychuk O (2022a) How Big Media Handed Digital Advertising to Big Tech. Available at
https://www.promarket.org Accessed 4 Nov 2022
Andriychuk O (2022b) Between microeconomics and geopolitics: on the reasonable application of
competition law. Mod Law Rev 85:598–634
Australian Competition & Consumer Commission (2021) Digital advertising services inquiry, Final
report. Available at https://www.accc.gov.au Accessed 4 Nov 2022
Ezrachi A (2016) ‘Sponge’. J Antitrust Enforcement 5:49–75. https://doi.org/10.1093/jaenfo/jnw011
House of Lords Communications and Digital Select Committee (2021) Digital Regulation Coop-
eration Forum gives evidence to Lords Committee. Available at https://committees.
parliament.uk Accessed 4 Nov 2022
Lamadrid de Pablo A, Bayón Fernández N (2021) Why the proposed DMA might be illegal under
Article 114 TFEU, and how to fix it. J Eur Competition Law Pract 12:576–589. https://doi.org/
10.1093/jeclap/lpab059
Lianos I (2018) Polycentric competition law. Curr Legal Probl 71:161–213
Schwab K (2015) The fourth industrial revolution: what it means and how to respond. Foreign Aff
12. Available at https://www.foreignaffairs.com/world/fourth-industrial-revolution Accessed
4 Nov 2022
UK Competition and Markets Authority (2020) Online platforms and digital advertising market study. Available at https://www.gov.uk Accessed 4 Nov 2022
UK Competition and Markets Authority (2020) Digital Markets Taskforce. https://www.gov.uk
Accessed 4 Nov 2022
UK Department for Digital, Culture, Media & Sport and Department for Business, Energy &
Industrial Strategy (2021) A New Pro-Competition Regime for Digital Markets. https://www.
gov.uk Accessed 4 Nov 2022
Oles Andriychuk Newcastle University Law School | Director, Digital Markets Research Hub |
Visiting Senior Researcher, Vytautas Magnus University, Kaunas, Lithuania |The usual disclaimer
applies.
Shishir Nagaraja Secure and Resilient Systems Group, Newcastle University. Our research has
been done within the UK Engineering and Physical Sciences Research Council (EPSRC) funded
PETRAS Centre for IoT Cybersecurity under the Robustness-as-Traceability (RoasT) project.
Part III
Security
The European Union Strategy
for Cybersecurity
Margarita Robles-Carrillo
Abstract The chapter analyses the 2020 European Union Strategy for Cybersecu-
rity starting from the main differences with the 2013 Strategy and examining then its
major contributions. The first one lies in the general framework which moves from
the narrow field of security to the broader context of digitalisation. Secondly, from
an axiological point of view, the 2020 Strategy introduces the concept of European
technological sovereignty. Thirdly, from a functional perspective, in the new Strat-
egy, cybersecurity is conceived as a horizontal or cross-cutting policy. Fourthly, as
regards its content, the 2020 Strategy provides an improved balance between
technical and non-technical issues as well as greater attention to certain basic
technological questions. Finally, by its nature, this Strategy evolves from being an
essentially declarative or descriptive policy to become a more operational or exec-
utive policy. There is a qualitative improvement compared to its predecessor that
could even be considered a paradigm change.
M. Robles-Carrillo (✉)
University of Granada, Granada, Spain
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 173
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_10
1 Introduction
Cybersecurity was the subject of two main strategies in the European Union in 20131
and 2020.2 The first established basic principles,3 strategic priorities and actions,4
and roles and responsibilities at the national, international and European levels
through an institutional and organic structure that remains unchanged in its essential
aspects, although expanded and improved.5 The most important contribution was
and still is its worldview of an open, safe and secure cyberspace, in accordance with
common European values and principles.6
The 2020 Cybersecurity Strategy is based on this common foundation,7 being a
continuation of many of the measures envisaged in 2013.8 However, it represents a
qualitative improvement, even a paradigm change, with respect to its predecessor
from various perspectives.9 Firstly, structurally, the 2020 Strategy is anchored in the
broader and stronger context of the European Digital Strategy, the Commission’s
Recovery Plan for Europe and the Security Union Strategy 2020–2025.10 Secondly,
axiologically, the 2020 Strategy expressly introduces the concept of European digital
sovereignty, by including technological sovereignty in its first line of action and by
1
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2013).
2
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020).
3
Those principles are: 1. Applicability of the common values of the physical world in cyberspace;
2. Protection of fundamental rights and freedoms; 3. Access for all; 4. Democratic and efficient
multi-stakeholder governance; and 5. Shared responsibility to ensure security (European Commis-
sion and High Representative of the Union and Foreign Affairs and Security Union (2013),
pp. 3–4).
4
The five strategic priorities are: Achieving cyber resilience; Reducing cybercrime; Developing
cyber defence policy and capabilities; Increasing industrial and technical resources for cybersecu-
rity; and Establishing a coherent cyberspace policy and promoting core EU values (European
Commission and High Representative of the Union and Foreign Affairs and Security Union
(2013), pp. 4–5).
5
Bederna and Rajnai (2022).
6
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2013), p. 1.
7
According to Dutton et al., the 2020 EU Strategy “follows on and supports the nine principles
outlined in the Paris Call for Trust and Security in Cyberspace of 2018” (Dutton et al. (2022), p. 3).
However, the only clearly shared principle is the promotion of international norms of responsible
behavior in cyberspace (https://pariscall.international/en/principles).
8
Bendiek and Kettemann (2021).
9
According to Szpor, cybersecurity regulation is expanding in terms of its objective and subjective
scope (Szpor (2021), p. 229).
10
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 4.
2 General Framework
The general framework surrounding the 2020 Cybersecurity Strategy differs sub-
stantially from that of its predecessor. In 2013, the objective of an open, safe and
secure space was translated into a set of principles, a list of priorities and a
succession of measures in the area of security. In 2020, although continuity across
principles and in many of the priorities and measures is evident, there is a structural
paradigm change for three main reasons. Firstly, the Cybersecurity Strategy is an
integral part of a broader, more comprehensive, more elaborate, more robust and
11
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), pp. 4–5.
12
Cybersecurity is recognised as “an integral part of Europeans’ security (...). The EU’s economy,
democracy and society depend more than ever on secure and reliable digital tools and connectivity.
Cybersecurity is therefore essential for building a resilient, green and digital Europe” (European
Commission and High Representative of the Union and Foreign Affairs and Security Union (2020),
p. 1).
13
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), pp. 10–11.
14
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2021a).
15
Bendiek and Kettemann (2021), pp. 2–3.
more ambitious overall project: the European Digital Strategy. Moreover, by its
subject matter, it is also naturally connected to the EU Security Union Strategy and
to the Strategic Compass for Security and Defence (A). Secondly, cybersecurity is
included in the general implementation programmes of the Digital Strategy: the
Digital Compass and the Digital Path (B). Thirdly, the Cybersecurity Strategy
coexists with more than ten digital strategies or normative packages on various
topics with a common purpose: the digital transition (C).
(A) In 2013, cybersecurity was conceived as essentially a security issue.16 In
2020, by contrast, cybersecurity was part of the general process and the global model
of European digitalisation following a holistic approach,17 according to which, “[t]
he EU’s new Cybersecurity Strategy for the Digital Decade forms a key component
of Shaping Europe’s Digital Future, the Commission’s Recovery Plan for Europe,
the Security Union Strategy 2020-2025, the Global Strategy for the EU’s Foreign
and Security Policy, and the European Council Strategic Agenda 2019-2024”.18
Taking into account its cross-cutting nature,19 cybersecurity must be integrated into
all the other digital policies and investments.20
Cybersecurity was also included among the strategic priorities of the EU Security
Union Strategy for 2020–2025. Firstly, this Strategy recognises the need to guarantee
security in both the physical and digital environments and to overcome the false
distinction between them. Secondly, the Security Strategy proposes a
multidisciplinary and integrated response, grounded in common European
values, as well as a genuine whole-of-society, multi-stakeholder and horizontal approach.
Thirdly, it states that robust international partnerships are fundamental to prevent,
deter and respond to cyber-attacks.21 Logically, in the light of evolving threats and
risks, many of its measures concern cybersecurity, including the provision for a new
European Cybersecurity Strategy.22
Moreover, the Strategic Compass for Security and Defence, adopted by the
Council on 21 March 2022, also establishes some objectives and measures regarding
cybersecurity issues.23 In line with the Cybersecurity Strategy, the most relevant are
16
In the 2013 Strategy, “the term ‘cybersecurity’, from an EU perspective, entails a combination of
cyber resilience, cybercrime, cyberdefence, (strictly) cybersecurity and global cyberspace issues”
(González Fuster and Jasmontaite (2020), p. 99).
17
Following Irion et al., digitalisation “has set into motion a deep transformation of our societies,
cultures and economies while challenging territoriality-based sovereignty and eroding traditional
regulatory configurations” (Irion et al. (2021), p. 3).
18
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 4.
19
Wessel (2019), p. 2.
20
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 5.
21
European Commission (2020b), pp. 1–2.
22
European Commission (2020b), p. 8.
23
The general purposes of the Strategic Compass are: (a) Providing a shared assessment of the
strategic environment; (b) Bringing greater coherence and a common sense of purpose to actions in the area of
the following: (a) develop the Union’s cyber posture by enhancing the ability to
prevent cyberattacks; (b) provide immediate and long-term responses to threat
actors; (c) support partners in enhancing their cyber resilience and in cases of
cyber crises, and further increase solidarity and mutual assistance; (d) strengthen cyber
intelligence capacities; and (e) increase interoperability and information sharing
through cooperation.24
cybersecurity is a way to increase the effectiveness and security of our efforts on
land, in the air, at sea and in outer space”.25
(B) In the framework and for the implementation of the European Digital
Strategy,26 the Digital Compass was adopted in March 202127 and the Path to the
Digital Decade was presented in September 2021.28 The Digital Compass is a policy
agenda establishing a governance framework for the European digital model and two
new mechanisms for action: multi-country projects and international digital partnerships.
The former provide a European approach to digital skills development by
organising Member States to carry out the digital transition in key areas. The
international digital partnerships are based on the conviction that the degree of
digitalisation of a society is a factor of global influence. The guiding principles of the
international digital partnerships include a secure cyberspace. Meanwhile, the Digital Path aims
to ensure that the EU achieves its digital transformation goals in line with its values,
reinforcing its digital leadership and promoting people-centred, inclusive and sustainable
policies. The priority areas of activity of the Digital Path include
developing and deploying ultra-secure quantum and space-based communication
infrastructures and deploying a network of security operations centres, among
others.
(C) The European Strategy for Cybersecurity was adopted on 16 December 2020.
Before that, alongside the European Digital Strategy, the European Data Strategy
and the White Paper on Artificial Intelligence were presented in February 2020. The
EU Security Union Strategy was published in July 2020. Also in 2020,
several new digital strategies or regulatory packages were introduced, such as Digital
Industry (10.3.2020 and 5.5.2021), Digital Finance (24.9.2020), the Digital Services
Act (15.12.2020) and the Digital Markets Act (15.12.2020). After the Cybersecurity
Strategy, in 2021 and 2022, strategies and measures were launched on
European Digital Identity (28.5.2021), Standardisation (2.2.2022), the Chips Act
(8.2.2022), Space (15.2.2022) and European Defence (15.2.2022).
security and defence; (c) Setting out new ways and means to improve collective ability to defend the
security of our citizens and our Union; and (d) Specifying clear targets and milestones (Council of
the European Union (2022), p. 2).
24
Council of the European Union (2022), p. 23.
25
Council of the European Union (2022), p. 23.
26
European Commission (2020a).
27
European Commission (2021a).
28
European Commission (2021b).
3 Axiological Background
A principal distinction between the 2013 and 2020 Cybersecurity Strategies lies in
their axiological approach. While in the former the objective was an open, safe and
secure cyberspace, the latter additionally introduces the idea of European technological
sovereignty as a key component of the strategy. The EU’s Cybersecurity
Strategy establishes three areas of action: (a) resilience, technological sovereignty
and leadership; (b) operational capacity to prevent, deter and respond; and
(c) promotion of a global and open cyberspace. The axiological background of the
European Strategy is evident in two of these areas.
The promotion of an open, safe and secure cyberspace is both an area of action and
an objective of the Strategy. The aim of the EU is to promote its own political model
and vision of cyberspace that is “grounded in the rule of law, human rights,
fundamental freedoms and democratic values that bring social, economic and polit-
ical development globally, and contribute to a Security Union”.31 The cyberspace
worldview of the EU differs from that embraced by Russia or China or even the
USA.32 In this case, it is a more ideologically liberal but also more protectionist
29
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 19.
30
For a different opinion, considering that cybersecurity is only security in cyberspace, see Dutton
et al. (2022), p. 3.
31
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 19.
32
Following Calzada, “three global paradigms are distinctively competing on data governance
between one another while pervasively producing entirely different algorithmic and AI disruptive
scenarios. First, in China, the state is super-rich in data and determined to put these data to use
through what is known as ‘technological nationalism’, whereby large technology companies and the
state embrace a mutually beneficial symbiotic relationship. Second, in the U.S., the so-called GAFA
is driven by large technological private multinationals are collecting massive amounts of data from
global citizens without any informed consent. Third, in Europe, the post-GDPR context is
attempting to address the debate on the digital rights of citizens” (Calzada (2019), p. 3).
33
Bjola (2021); Zeng et al. (2017); Mueller (2020); Flonk et al. (2020); Chander and Sun (2021);
Mainwaring (2020); Irion et al. (2021).
34
Bauer and Erixon (2020); Bjola (2021); Zeng et al. (2017); Mueller (2020); Chander and Sun
(2021); Mainwaring (2020).
35
Bederna and Rajnai (2022).
36
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 22.
37
Following the Report on implementation of the EU’s Cybersecurity Strategy for the Digital
Decade, the High Representative, together with Member States, is working on the Programme of
Action (PoA) to Advance Responsible State Behavior at United Nations level, together with the
53 other co-sponsors, on the basis of the recommendation of the 12 March 2021 consensus report
from the United Nations Open Ended Working Group on Developments in the Field of Information
repression. Alongside the legal measures, the Strategy proposes action plans, guide-
lines and improvements in terms of practical implementation.38
(B) Engagement and leadership in international and European standardisation
processes are envisaged as a way to promote and defend the European vision of
cyberspace. This European action is justified by three reasons: the
increasing importance of international standards; the growing risk of divergent and
competing frameworks; and the need to ensure the inclusion of European values.39
(C) The EU promotes third-country accession to the Budapest Convention as well
as to its Second Additional Protocol concerning digital evidence.40 The EU defends the
usefulness of this treaty over any new legal instrument on cybercrime at UN
level. According to the Strategy, such a new instrument might amplify divisions and slow
down reforms and capacity-building efforts, potentially hindering international
cooperation against cybercrime. However, in March 2022, the Commission
presented a Recommendation for a Council Decision authorising the negotiations
for a comprehensive international convention on countering the use of information
and communications technologies for criminal purposes. According to this proposal,
a new international convention “may affect common Union rules or alter their
scope”. The proposal has been justified “in order to protect the integrity of Union
law and to ensure that the rules of international Law and Union Law remain
consistent”.41 There is a degree of contradiction on this point that could have been
avoided by a more flexible position on a possible UN treaty and a less rigid
defence of the Budapest Convention.
Digital or technological sovereignty is an integral part of the first policy area, along
with resilience and leadership. This concept is not further defined or explained.
Apart from that, it is only expressly mentioned when the Strategy provides that the
Cybersecurity Industrial, Technology and Research Competence Centre and Net-
work of Coordination Centres (CCCN) should play a key role “in developing the
and Telecommunications in the Context of International Security (European Commission and High
Representative of the Union and Foreign Affairs and Security Union (2021a), p. 4).
38
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), pp. 20–21.
39
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 20.
40
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2021a), p. 5.
41
European Commission (2022), 8; European Commission and High Representative of the Union
and Foreign Affairs and Security Union (2021a), p. 4.
42
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 11.
43
Chander and Sun (2021), p. 8.
44
Floridi (2020).
45
Ilves and Osula (2020).
46
Christakis (2020); Verellen (2020).
47
Prokscha (2021).
48
German EU Presidency (2020).
49
Bjola (2021).
50
Allen (2021), p. 45.
51
Burwell and Propp (2020), p. 6.
52
Crespi et al. (2021), p. 349.
world”.53 Innerarity states that “the term digital sovereignty is used to refer to an
ordered, value-driven, regulated and secure digital sphere that meets the demands of
individual rights and freedoms, equality and fair economic competition”.54 Fiott
argues that this concept expresses the idea that “the EU needs to secure its values and interests
in new and more determined ways”.55 The German EU Presidency considers digital
sovereignty as “the ability to shape the digital transformation in a self-determined
manner with regard to hardware, software, services, and skills. Being digitally
sovereign does not mean resorting to protectionist measures or doing everything
yourself. Being digitally sovereign means, within the framework of applicable law,
making sovereign decisions about the areas in which independence is desired or
necessary”. This concept “essentially addresses the reduction of existing and emerging
dependencies in the digitalizing world”.56
The arrival of this concept on the European scene has attracted both positive
assessments and criticisms. The debate is complicated because, among other things,
the issue is not about applying the idea of State sovereignty to the EU. Analysing the
use of this expression at European level, three conclusions can be reached. Firstly,
European digital sovereignty should not be understood as the online version of
traditional sovereignty, defined as the absolute, exclusive and excluding
power of States. Secondly, digital sovereignty is generally not considered a reality,
but rather an ambition or objective of the European Union. For the time being, no
European State is going to abdicate its sovereignty, and European digital sovereignty
can only be defined as a complement to national sovereignty, not a substitute for
it. Thirdly, rather than an abstract or superior power, European digital sovereignty
is identified with the need not to depend on third countries for
critical resources or technologies and with the ability to develop European digital
capabilities.
Sovereignty is neither a static concept nor a binary one. Historically, there have
been different notions and meanings, from absolute to popular sovereignty or,
more recently, the idea of functional sovereignty, which has been used to explain the
governance model of the EU. The idea of a European digital sovereignty can be
seen as evidence of the evolutive potential and flexibility of this concept.
In fact, digital sovereignty is a kind of portmanteau expression, with a high
conceptual weight but varied meanings, though always linked to the aim of strategic
autonomy and non-dependence on third parties.
After the European Council Conclusions of 1–2 October 2020, the Statement of
the Members of the European Council on 25 March 2021 underlines “the need to
enhance Europe’s digital sovereignty in a self-determined and open manner by
building on its strengths and reducing its weaknesses and through smart and
53
Madiega (2020).
54
Innerarity (2021), p. 7.
55
Fiott (2021).
56
German EU Presidency (2020), p. 2.
selective action, preserving open markets and global cooperation”.57 The Versailles
Declaration, adopted on 10 and 11 March 2022, includes the commitment “to take
further decisive steps towards building our European sovereignty, reducing our
dependencies and designing a new growth and investment model for 2030”.58 In
the Cybersecurity Strategy, technological sovereignty is expressly linked to resil-
ience and leadership as an area of action.
The 2020 Cybersecurity Strategy introduces two important substantive and functional
innovations: on the one hand, the merger of resilience, technological sovereignty
and leadership within one of the areas of EU action; and, on the other, an
increased focus on technological issues that are often neglected in acts of this kind.
57
Members of the European Council (2021).
58
Informal meeting of the Heads of State or Government (2022).
59
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2017).
60
According to the Report on implementation of the EU's Cybersecurity Strategy for the Digital
Decade, most Member States already have in place frameworks for imposing appropriate restrictions
on 5G suppliers, requirements on mobile network operators are being reinforced, and Member
State authorities have decided to launch an in-depth analysis of the security implications of open,
disaggregated and interoperable network technology solutions (‘Open RAN’) under the EU toolbox
(European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2021a), p. 3).
61
Szpor (2021), pp. 229–231.
62
On 16 December 2020, the Commission presented a Proposal for a Directive of the European
Parliament and of the Council on the resilience of critical entities, COM (2020) 829 final, Brussels,
16.12.2020. Available at: https://eur-lex.europa.eu.
63
Bederna and Rajnai (2022).
64
European Commission (2014), pp. 3 and 5–6.
65
Council of the EU (2014).
66
https://eur-lex.europa.eu.
starting, at the top of the hierarchy, with the root zone and the thirteen DNS root
servers on which the World Wide Web depends”.67 Indeed, the current DNS
system poses a major dependency problem. There are only two EU-based DNS root server
operators: Netnod in Sweden, which operates i.root-servers.net, and RIPE NCC in the Netherlands,
which operates k.root-servers.net. In addition, the EU relies on a small number of
public DNS resolvers operated by non-EU entities. Market concentration increases
security problems.
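The dependency described above can be illustrated by listing the operators of the thirteen root server identities. The following Python sketch hard-codes the operator assignments as publicly documented by IANA and the root server operators; the two-letter country tags are annotations added here for filtering, not part of any official naming:

```python
# Operators of the thirteen DNS root server identities (a-m), as publicly
# documented by IANA. Only two are established in the EU, which is the
# dependency the Strategy highlights.
ROOT_SERVER_OPERATORS = {
    "a": "Verisign (US)",
    "b": "USC Information Sciences Institute (US)",
    "c": "Cogent Communications (US)",
    "d": "University of Maryland (US)",
    "e": "NASA Ames Research Center (US)",
    "f": "Internet Systems Consortium (US)",
    "g": "US DoD Network Information Center (US)",
    "h": "US Army Research Lab (US)",
    "i": "Netnod (SE)",
    "j": "Verisign (US)",
    "k": "RIPE NCC (NL)",
    "l": "ICANN (US)",
    "m": "WIDE Project (JP)",
}

# Filter the identities whose operator is established in an EU Member State.
eu_letters = sorted(letter for letter, operator in ROOT_SERVER_OPERATORS.items()
                    if operator.endswith(("(SE)", "(NL)")))
print(eu_letters)  # ['i', 'k'] -> i.root-servers.net and k.root-servers.net
```

Note that each identity is today served by many anycast instances worldwide, so the table captures operator control rather than physical location.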
To tackle these issues, the EU Cybersecurity Strategy introduces some quite
significant proposals: (a) The development of a contingency plan supported by EU
funding in order to deal with scenarios affecting the integrity and the availability of
the DNS root system;68 (b) The adoption of key internet standards including IPv6
and Internet security rules and good practices for DNS, routing, and email security,69
as well as the promotion of the implementation of these standards in other countries
with the aim of supporting the development of the global and open Internet and
counteracting closed and control-based models of the Internet; (c) The adoption of
a DNS resolution diversification strategy; and (d) The promotion and support of a
public European DNS resolver service. The so-called DNS4EU is conceived as a
European alternative service for accessing the global Internet, one that must be transparent
and secure and conform to European standards and rules. DNS4EU
is to be part of the European Industrial Alliance for Data and Cloud.70
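Two of the email-security standards the Strategy endorses, SPF and DMARC, are published as DNS TXT records whose syntax mail servers must parse when deciding whether to accept a message. The sketch below shows what that parsing involves; the domain and record values for example.eu are purely illustrative, not taken from any real deployment:

```python
def parse_spf(txt_record: str) -> dict:
    """Parse a published SPF TXT record into its mechanisms and default policy."""
    if not txt_record.startswith("v=spf1"):
        raise ValueError("not an SPF record")
    terms = txt_record.split()[1:]
    # The final 'all' mechanism states the policy for senders matching nothing else.
    policy = {"-all": "fail (reject)", "~all": "softfail",
              "?all": "neutral", "+all": "pass"}
    return {
        "mechanisms": [t for t in terms if t not in policy],
        "default": next((policy[t] for t in terms if t in policy), "neutral"),
    }

def parse_dmarc(txt_record: str) -> dict:
    """Parse a DMARC TXT record (published at _dmarc.<domain>) into its tags."""
    tags = dict(part.strip().split("=", 1)
                for part in txt_record.split(";") if "=" in part)
    if tags.get("v") != "DMARC1":
        raise ValueError("not a DMARC record")
    return tags

# Illustrative records for a hypothetical example.eu domain.
spf = parse_spf("v=spf1 include:_spf.example.eu -all")
dmarc = parse_dmarc("v=DMARC1; p=reject; rua=mailto:reports@example.eu")
```

Here `-all` instructs receivers to reject mail from unlisted senders, and the DMARC policy `p=reject` tells them to reject messages that fail SPF/DKIM alignment: the "good practices" to which the Strategy refers.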
The technological architecture and infrastructure of the Internet, as well as the
implementation of technical protocols and standards, are not purely technical matters.
In addition to ensuring the functionality, integrity and sustainability of the digital
structure, the principles and values of the European digital model must also be
upheld in this context. Evidence of this can be found, for instance, in a technical
regulation such as that governing the .eu domain.71 The Declaration for the Future of the Internet,
adopted in April 2022 by 32 States, in addition to the EU Member States and the
67
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), p. 10.
68
To this end, the Commission will work with ENISA, the Member States, the two EU DNS root
server operators and the multi-stakeholder community, “to assess the role of these operators in
guaranteeing that the Internet remains globally accessible in all circumstances” (European Com-
mission and High Representative of the Union and Foreign Affairs and Security Union (2020),
p. 10).
69
According to the 2020 Strategy, those standards include DNSSEC, HTTPS, DNS over HTTPS
(DoH), DNS over TLS (DoT), SPF, DKIM, DMARC, STARTTLS, DANE and routing norms and
good practices e.g. Mutually Agreed Norms for Routing Security (MANRS) (European Commis-
sion and High Representative of the Union and Foreign Affairs and Security Union (2020), p. 11).
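DNS over HTTPS (DoH, RFC 8484) and DNS over TLS (DoT, RFC 7858), both named in this list, do not change the DNS message format: they carry ordinary RFC 1035 wire-format queries over an encrypted transport. A minimal sketch of such a query payload, using an illustrative domain name:

```python
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Encode a minimal RFC 1035 wire-format DNS query (QTYPE 1 = A record)."""
    # Header: id=0 (as RFC 8484 recommends for cacheability), flags=0x0100 (RD set),
    # one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates the name.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN

query = build_dns_query("example.eu")
# A DoH client would POST these bytes to a resolver URL with the media type
# application/dns-message; a DoT client would send the same bytes,
# length-prefixed, over a TLS connection to port 853.
```

The example stops short of any network I/O; it is meant only to show that the encrypted transports standardised in DoH and DoT wrap the same message format used by classic DNS.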
70
Funding for the development of DNS4EU has been provided under the Connecting Europe
Facility (CEF2) Digital work programme 2021–2023 (European Commission and High Represen-
tative of the Union and Foreign Affairs and Security Union (2021b), p. 2).
71
On 19 March 2019, the European Parliament and the Council adopted the Regulation (EU) 2019/
517 on the implementation and functioning of the .eu top-level domain name and amending and
repealing Regulation (EC) No 733/2002 and repealing Commission Regulation (EC) No 874/2004.
The purpose of the .eu TLD is, through good management, to help enhance the Union identity and
US, pays attention to technological issues as well as to legal, political and
ethical ones. According to this Declaration, “[t]he immense promise that accompanied the
development of the Internet stemmed from its design: it is an open “network of
networks”, a single interconnected communications system for all of humanity. The
stable and secure operation of the Internet’s unique identifier systems have, from the
beginning, been governed by a multistakeholder approach to avoid Internet frag-
mentation, which continues to be an essential part of our vision”.72
In fact, the European model of values and principles, or any other, cannot be applied in
the digital society by marginalising or bypassing the technical
architecture of cyberspace and the Internet. Treating technical and
non-technical aspects as separate is a frequent and serious mistake. Technological neutrality is
absent from the architecture of the Internet and cyberspace. Recognising the technical
scope and constraints of this technology is a prerequisite for the implementation of
any particular worldview of the digital world.
The 2020 Strategy may be considered a less declarative and more operational action
framework than the one envisaged in 2013.73 Although the second line of action
is specifically dedicated to operational capacity, in fact all three areas of the
Strategy include organisational and operational measures.74
In the first one, resilience, technological sovereignty and leadership, four different
kinds of action are planned: (a) The building of a European Cyber Shield through
the establishment of Computer Security Incident Response Teams (CSIRTs) and Security
Operations Centres (SOCs), aiming at the development of a network of Security
Operations Centres across the EU; (b) The definition of an ultra-secure
promote Union values online, such as multilingualism, respect for users’ privacy and security and
respect for human rights, as well as specific online priorities (https://eur-lex.europa.eu).
According to the Commission Implementing Regulation (EU) 2020/857 of 17 June 2020 laying
down the principles to be included in the contract between the European Commission and the .eu
top-level domain Registry in accordance with Regulation (EU) 2019/517 of the European Parliament
and of the Council, “The Registry should manage the .eu TLD in a manner that enhances the
Union identity, promotes Union values online and promotes the use of the .eu domain name”. Its
Article 2, entitled Promotion of the Union values online, states that “1. The Registry shall contribute
to enhancing the Union identity and promoting the Union values online. In particular, the Registry,
through its policies and its interactions with registrars, registrants and other stakeholders, shall
promote openness, innovation, multilingualism and accessibility, freedom of expression and information,
respect for human rights and the rule of law and shall take measures to promote users’
security online and to respect users’ privacy” (https://eur-lex.europa.eu).
72
https://digital-strategy.ec.europa.eu.
73
Dutton et al. consider that “it omits some elements of capacity building that could be critical to
success” (Dutton et al. (2022), p. 9).
74
Bederna and Rajnai (2022).
75
This Regulation was adopted on 20 May 2021: Regulation (EU) 2021/887 of the
European Parliament and of the Council of 20 May 2021 establishing the European Cybersecurity
Industrial, Technology and Research Competence Centre and the Network of National Coordination
Centres (https://eur-lex.europa.eu).
76
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), pp. 6–12.
77
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020), pp. 22–23.
78
The EU Cyber Capacity Building Board has been established, allowing EU institutions, bodies
and agencies to better coordinate and cooperate among themselves on the EU’s external cyber capacity-
building efforts (https://www.eucybernet.eu/).
79. European Commission and High Representative of the Union and Foreign Affairs and Security Union (2020), pp. 13–19.
80. European Commission and High Representative of the Union and Foreign Affairs and Security Union (2020), pp. 13–15.
its nature, it is not a new organ, nor does it affect any competence of the States or the European institutions, agencies and bodies. It is, rather, a platform created to complete the European cybersecurity crisis management framework and maximise the efficiency of "existing structures, resources and capabilities and promote a 'need-to-share' mind-set". The platform is intended to solve two problems: the need for a common space for operational and technical cooperation; and the need to fully engage the multi-stakeholder community in this issue. The three main objectives of this platform are: (a) Ensure preparedness across cybersecurity communities; (b) Provide continuous shared situational awareness through information sharing; and (c) Reinforce coordinated response and recovery. On 23 June 2021, the Commission adopted Recommendation (EU) 2021/1086 on the establishment of a Joint Cyber Unit, which is defined as an important component of the EU Security Union Strategy, the EU Digital Strategy and the EU Cybersecurity Strategy.81
(B) The EU Cyber Diplomacy Toolbox was established by the Council in 2017 with the aim of preventing, discouraging, deterring and responding to malicious cyber activities.82 Since such situations require a comprehensive response, the EU Strategy includes two separate measures: the establishment of a Member State EU cyber intelligence working group within the EU Intelligence and Situation Centre (INTCEN); and the presentation of a proposal to define the EU's cyber deterrence posture. In addition, the EU Strategy proposes three substantive actions: (a) Improving the system of restrictive measures; (b) Updating the implementation guidelines of the Toolbox; and (c) Strengthening international cooperation.83
(C) Tackling cybercrime is considered a key factor in ensuring cybersecurity.84 There are two main proposals on this point: (a) to foster cooperation and exchange between cybersecurity actors and law enforcement; and (b) to expand and improve the capacity of law enforcement to investigate cybercrime. Both ENISA and Europol are involved in this policy area. Concerning normative measures, the EU Strategy refers to the proposals on cross-border access to electronic evidence and to the need to ensure full implementation of the 2013 Directive on attacks against information systems.85
(D) Boosting cyber defence capabilities. There are four main proposals
concerning cyber defence: (a) A review of the Cyber Defence Policy Framework
in order to enhance coordination and cooperation; (b) The development of state-of-
the-art cyber defence capabilities through different EU policies and instruments;86
(c) Increased cooperation among Member States on cyber defence research,
81. https://eur-lex.europa.eu.
82. Council of the EU (2017).
83. European Commission and High Representative of the Union and Foreign Affairs and Security Union (2020), pp. 16–17.
84. Düll et al. (2018), pp. 315–317.
85. European Commission and High Representative of the Union and Foreign Affairs and Security Union (2020), pp. 15–16.
86. Düll et al. (2018), pp. 317–319.
innovation and capability development; and (d) Boosted synergies through the
Commission Action Plan on synergies between the civil, defence and space industries.87 The Strategic Compass for Security and Defence, adopted on 21 March 2022, reinforces the organic and operative measures in several ways, particularly concerning cyber intelligence capacities, the EU Hybrid Toolbox and the European Cybersecurity Competence Centre.88
6 Conclusions
Over and above the points discussed in each section, the analysis of the 2020
Cybersecurity Strategy highlights three major developments at the general level.
First, although obvious and unavoidable in practical terms, the full integration of cybersecurity into the overall context of the digitalisation process is an extremely
important step. As Csernatoni explains, “digitalization has become the prime driver
of globalization and international competition. States around the world are making
digital autonomy, technological supremacy, and innovation the cornerstones of their
diplomatic, security, and economic efforts. The European Union (EU) is no
exception”.89
Secondly, the increased attention paid in the 2020 EU Strategy to the technological architecture and infrastructure of the Internet and cyberspace suggests the progressive end of so-called technological neutrality, which has in effect never existed. These technological components of cyberspace have not only technical but also geopolitical and legal value. The fact that they are indispensable for the operation and stability of the digital ecosystem would require their qualification as a global public good and their legal protection from that perspective.
Finally, the introduction of the concept of technological or digital sovereignty into the field of cybersecurity may respond to this paradigm change, which makes it essential to ensure the EU's technological and strategic autonomy. Cybersecurity can no longer be conceived merely as the security of cyberspace, but rather as the security of the entire digital ecosystem, and thus of the world as a whole.
References
Allen S (2021) European Sovereignty in the Digital Age. IIEA, 45. Available at: https://www.iiea.com Accessed 20 Oct 2022
87. European Commission and High Representative of the Union and Foreign Affairs and Security Union (2020), pp. 18–19.
88. Council of the European Union (2022), pp. 23, 27 and 35.
89. Csernatoni (2021), p. 1.
Bauer M, Erixon F (2020) Europe's Quest for Technology Sovereignty: Opportunities and Pitfalls. ECIPE Occasional Paper 2/2020, pp 8–25. Available at: https://ecipe.org. Accessed 20 Oct 2022
Bederna Z, Rajnai Z (2022) Analysis of the cybersecurity ecosystem in the European Union. Int
Cybersecur Law Rev 1–15. https://doi.org/10.1365/s43439-022-00048-9
Bendiek A, Kettemann MC (2021) Revisiting the EU cybersecurity strategy: a call for EU cyber
diplomacy. (SWP Comment, 16/2021). In Berlin: Stiftung Wissenschaft und Politik -SWP-
Deutsches Institut für Internationale Politik und Sicherheit. https://doi.org/10.18449/2021C16
Bjola C (2021) The European Union’s Quest for Digital Sovereignty and its Implications for the
Transatlantic Relationship. In DigDiploROx Working Paper No 5. Working Paper Series.
Oxford Digital Diplomacy Research Group. Oxford Department of International Development.
University of Oxford. Available at: https://www.qeh.ox.ac.uk Accessed 20 Oct 2022
Burwell FG, Propp K (2020) The European Union and the Search for Digital Sovereignty: Building "Fortress Europe" or Preparing for a New World? Available at: https://www.researchgate.net Accessed 20 Oct 2022
Calzada I (2019) Technological Sovereignty: Protecting Citizens’ Digital Rights in the AI-driven
and post-GDPR Algorithmic and City-Regional European Realm. Regions eZine, 4. https://doi.
org/10.1080/13673882.2018.00001038
Chander A, Sun H (2021) Sovereignty 2.0. Georgetown University Law Center. Available at:
https://scholarship.law.georgetown.edu Accessed 20 Oct 2022
Christakis T (2020) European Digital Sovereignty. Successfully Navigating Between the Brussels
Effect and Europe’s Quest for Strategic Autonomy. Studies of Digital Governance. Data
Institute. Université Grenoble Alpes. https://doi.org/10.2139/ssrn.3748098.
Council of the EU (2014) Conclusions on Internet Governance. Available at: http://sif.splet.arnes.si
Accessed 20 Oct 2022
Council of the EU (2017) Conclusions on a Framework for a Joint EU Diplomatic Response to
Malicious Cyber Activities (“Cyber Diplomacy Toolbox”). Available at: https://data.consilium.
europa.eu Accessed 20 Oct 2022
Council of the European Union (2022) Strategic Compass for Security and Defence. Available at:
https://www.consilium.europa.eu Accessed 20 Oct 2022
Crespi F, Caravella S, Menghini M et al (2021) European technological sovereignty: an emerging
framework for policy strategy. Intereconomics 56(6):348–354. https://doi.org/10.1007/s10272-
021-1013-6
Csernatoni R (2021) The EU’s rise as a defense technological power: from strategic autonomy to
technological sovereignty. Carnegie Europe Program. Available at: https://carnegieeurope.eu
Accessed 20 Oct 2022
Düll A, Schoch A, Straub M (2018) Cybersecurity in the European Union. Central and Eastern
European eDem and eGov Days 331:313–323
Dutton WH, Creese S, Esteve-Gonzalez P et al. (2022) Next steps for the EU: building on the paris
call and EU cybersecurity strategy. Available at SSRN 4052728, 2022. https://doi.org/10.2139/
ssrn.4052728
European Commission (2014) Communication to the European Parliament, the European Council,
the Council, the European Economic and Social Committee and the Committee of the Regions.
Internet Policy and Governance. Europe’s role in shaping the future of Internet Governance.
COM (2014) 72 final, Brussels, 12.2.2014. Available at: https://eur-lex.europa.eu Accessed
20 Oct 2022
European Commission (2020a) Communication to the European Parliament, the European Council,
the Council, the European Economic and Social Committee and the Committee of the Regions.
Shaping Europe’s digital future. COM (2020) 67 final, Brussels, 19.2.2020. Available at: https://
eur-lex.europa.eu/legal-content Accessed 20 Oct 2022
European Commission (2020b) Communication to the European Parliament, the European Council,
the Council, the European Economic and Social Committee and the Committee of the Regions
on the EU Security Union Strategy. COM (2020) 605 final, Brussels, 24.7.2020. Available at:
https://eur-lex.europa.eu/legal-content Accessed 20 Oct 2022
European Commission (2021a) Communication to the European Parliament, the European Council,
the Council, the European Economic and Social Committee and the Committee of the Regions.
2030 Digital Compass: the European way for the Digital Decade. COM (2021) 118 final,
Brussels 9.3.2021. Available at: https://eur-lex.europa.eu Accessed 20 Oct 2022
European Commission (2021b) Proposal for a Decision of the European Parliament and of the Council establishing the 2030 Policy Programme "Path to the Digital Decade". COM (2021) 574 final, Brussels 15.9.2021. Available at: https://eur-lex.europa.eu Accessed 20 Oct 2022
European Commission (2022) Recommendation for a Council Decision authorising the negotia-
tions for a comprehensive international convention on countering the use of information and
communications technologies for criminal purposes, COM (2022) 132 final. Available at:
https://eur-lex.europa.eu Accessed 20 Oct 2022
European Commission and High Representative of the Union and Foreign Affairs and Security Union (2013) Joint Communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace. JOIN (2013) 1 final, Brussels, 7.2.2013. Available at: https://eur-lex.europa.eu Accessed 20 Oct 2022
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2017) Joint Communication to the European Parliament and the Council. Resilience,
Deterrence and Defence: Building strong cybersecurity for the EU. JOIN (2017) 450 final,
Brussels, 13.9.2017. Available at: https://eur-lex.europa.eu Accessed 20 Oct 2022
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2020) Joint Communication to the European Parliament and the Council. The EU’s
Cybersecurity Strategy for the Digital Decade. JOIN (2020) 18 final, Brussels, 16.12.2020.
Available at: https://eur-lex.europa.eu/legal-content Accessed 20 Oct 2022
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2021a) Joint Communication to the European Parliament and the Council. Report on
implementation of the EU’s Cybersecurity Strategy for the Digital Decade. JOIN (2021)
14 final, Brussels, 23.6.2021. Available at: https://eur-lex.europa.eu Accessed 20 Oct 2022
European Commission and High Representative of the Union and Foreign Affairs and Security
Union (2021b) Annex to the Joint Communication to the European Parliament and the Council.
Report on implementation of the EU’s Cybersecurity Strategy for the Digital Decade. JOIN
(2021) 14 final, Brussels, 23.6.2021. Available at: https://eur-lex.europa.eu Accessed
23 Oct 2022
Fiott D (2021) Achieving Strategic Sovereignty for the EU. European Parliament, Brussels.
Available at: https://www.europarl.europa.eu. Accessed 20 Oct 2022
Flonk D, Jachtenfuchs M, Obendiek A (2020) Authority conflicts in internet governance:
Liberals vs. sovereigntists? Glob Const 9(2):364–386. https://doi.org/10.1017/
S2045381720000167
Floridi L (2020) The fight for digital sovereignty: what it is, and why it matters, especially for the
EU. Philos Technol 33:369–378. https://doi.org/10.1007/s13347-020-00423-6
German EU Council Presidency (2020) Digital Sovereignty. Available at: https://erstelesung.de
Accessed 20 Oct 2022
González Fuster G, Jasmontaite L (2020) Cybersecurity regulation in the European Union: the
digital, the critical and fundamental rights. In: Christen M, Gordijn B, Loi M (eds) The ethics of
cybersecurity. Springer, Cham, pp 97–115
Ilves L, Osula AM (2020) The technological sovereignty dilemma – and how new technology can
offer a way out. Eur Cybersecur J 6(1):24–35
Informal meeting of the Heads of State or Government (2022) Versailles Declaration. Available at:
https://www.consilium.europa.eu Accessed 20 Oct 2022
Innerarity D (2021) European digital sovereignty. Institute of European Democrats, Brussels.
https://www.iedonline.eu. Accessed 20 Oct 2022
Irion K, Burri M, Kolk A et al (2021) Governing "European values" inside data flows: interdisciplinary perspectives. Internet Policy Review. J Inter Regul 10(3):1–14. https://doi.org/10.14763/2021.3.1582
Madiega T (2020) Towards a more resilient EU. Digital sovereignty for Europe. EPRS Ideas Paper,
1. Available at: https://www.europarl.europa.eu Accessed 20 Oct 2022
Mainwaring S (2020) Always in control? Sovereign states in cyberspace. Eur J Int Secur 5(2):
215–232. Available at: http://wrap.warwick.ac.uk Accessed 20 Oct 2022
Members of the European Council (2021) Statement of the Members of the European Council.
Available at: https://www.consilium.europa.eu Accessed 20 Oct 2022
Mueller ML (2020) Against sovereignty in cyberspace. Int Stud Rev 22:781–801. https://doi.org/
10.1093/isr/viz044
Prokscha A (2021) Digital sovereignty for the European Union - analysing frames and claims for digital sovereignty in the European Union's digital strategy. University of Maastricht, Maastricht
Szpor G (2021) The evolution of cybersecurity regulation in the European Union law and its
implementation in Poland. Rev Eur Comp Law 46(3):219–235
Verellen T (2020) European sovereignty now? A reflection on what it means to speak of European
sovereignty. Eur Pap 5(1):307–318. https://doi.org/10.15166/2499-8249/383
Wessel RA (2019) Cybersecurity in the European Union: resilience through regulation. In: Conde E, Yaneva Z, Scopelliti M (eds) Routledge handbook of EU security law and policy. Routledge, pp 283–300
Zeng J, Stevens T, Chen Y (2017) China's solution to global cyber governance: unpacking the domestic discourse of 'Internet sovereignty'. Polit Policy 45(3):432–464
Margarita Robles-Carrillo is Full Professor of Public International Law and European Law and a member of the Network Engineering & Security Group (NESG), University of Granada, Spain.
Remarks on the Use of Biometric Data
Systems (and Facial Recognition
Technologies) for Law Enforcement
Purposes: Security Implications
of the Proposal for an EU Regulation
on Artificial Intelligence
R. S. Pereira (✉)
University of Lisbon, Law Faculty, Lisbon, Portugal
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_11
1 Introduction
1. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final, available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.
2. Madiega and Mildebrath (2021), I.
law enforcement purposes (Sect. 3). Thirdly, we will refer to the experience of other
jurisdictions regarding the use of biometric data systems and facial recognition
technologies for law enforcement purposes (Sect. 4). Fourthly, we will present the
core aspects of the EU strategy for Artificial Intelligence and biometric data for law
enforcement purposes (Sect. 5). Finally, we will make a short assessment of the
Commission’s Proposal and conclude that it can be seen as a source of inspiration for
other countries to enact (albeit in a restrictive way) comprehensive legislation on the
matter (Sect. 6).
The use of biometric data systems (and facial recognition technologies) for law
enforcement purposes is usually discussed in relation to the activity that the State
must perform to address the commission of criminal offenses.
However, a distinction should be drawn between State activity regarding what
may be considered public security (to prevent the occurrence of risks or the com-
mission of crimes which have not yet taken place) and State activity related to
criminal prosecution (after the crimes have been committed).3
Biometric data systems (and facial recognition technologies) are usually seen as an important tool for crime prevention. Their use for criminal law enforcement purposes is thus related to State public security activity (including predictive policing)4 and not to the activity involving the prosecution of crimes.5
Of course, the use of biometric data systems and facial recognition technologies
for State activity related to criminal prosecution is also possible. Some may accept
that, for example, facial recognition technologies could be seen as a useful investi-
gative tool to identify suspects by comparing their images or photos with the ones
included in databases (especially public ones). Nonetheless, the fact that such potential uses must be set against the fundamental legal principles and guarantees of criminal procedure law suggests that the topic of biometric data systems and facial recognition technologies has to be discussed first in relation to public security.
Therefore, the questions and problems raised by biometric data systems and facial
recognition technologies will be characterized as public security issues and will be
addressed considering legitimate security concerns (e.g. national security).
3. For such a distinction and its importance regarding data protection law, see Gleizer et al. (2021), pp. 22–27.
4. Concerning the topic of predictive policing, see, inter alia: Egbert and Leese (2021); Hofmann (2020); Sommerer (2020); Thüne (2020); Gless (2018).
5. Distinguishing between "policing" and "prosecuting", although recognizing that police powers have been greatly expanded and the line between security and law enforcement has been blurred, see Momsen and Rennert (2020).
Concerns raised regarding the use of biometric data systems and facial recognition technologies are usually the result of a combination of two separate (albeit interrelated) ideas: (i) the specific characteristics of these types of systems and technologies; and (ii) the potential impacts of such systems and technologies on fundamental rights.
A primary source of concern relates to the technical features and accuracy of these systems and technologies.6 First, one may refer to the pervasiveness of facial recognition technology and the difficulty of implementing human control. Secondly, one may mention the security risks regarding the collection and retention of face recognition data, combined with the risk of breach and misuse of such data. Finally, one may stress the risk of errors in facial recognition technologies: either failing to find a face that is present in a picture or identifying a non-face structure as a real face. This risk of error has led some important companies to withdraw from the facial recognition technologies market.7
Other sources of concern relate to fundamental rights.8 On the one hand, more data, including personal data, are constantly being collected and analyzed through devices (e.g. surveillance cameras or autonomous vehicles) with the use of improved AI technology (e.g. facial recognition), which may lead to more invasive outcomes for individual privacy and data protection. On the other hand, experts have expressed a strong conviction that facial recognition technology may have very high rates of false positives and false negatives, and that the bias of such technology may lead to various types of discrimination against certain demographic categories (e.g. lower accuracy for women and people of colour than for white men). The risk of discriminatory treatment is particularly significant in the law enforcement context. Finally, several risks related to a possible generalization of the use of facial recognition technologies have been raised: there is a major possibility that facial recognition systems will be used beyond their initially authorized and controlled purpose, which may: (i) jeopardize the possibility of moving anonymously in public spaces; (ii) determine a conformism detrimental to free will; (iii) affect religious freedoms and the rights of the child; (iv) interfere with a person's freedom of opinion and expression and have a negative effect on their freedom of assembly and of association; (v) have a strong impact on the social and psychological behaviour of citizens; and (vi) raise important ethical questions.9
6. Haddad (2021). However, mentioning possible mechanisms to guarantee facial recognition technologies' accuracy, see Raposo (2022).
7. Madiega and Mildebrath (2021), pp. 5–6.
8. For further discussion, see, colorandi causa: Dushi (2020); O'Flaherty (2020).
9. Madiega and Mildebrath (2021), pp. 6–9.
The use of biometric data systems (and facial recognition technologies) enables law enforcement authorities to compare anyone's biometric data with the face of a wanted person, and such comparisons have been considered useful on a number of occasions. Several countries are now using biometric data systems, such as facial recognition technologies, for law enforcement purposes, and various approaches to the admissibility of using such systems and technologies may be found. Globally, there is no standardized framework or set of regulatory requirements that could be applied.10
The US is one of the main jurisdictions in which facial recognition technology has rapidly evolved.11 However, the regulation of its use has not kept pace.12 Local and State governments have led the way,13 but some have adopted a restrictive approach. Moreover, no federal legislation regulating the use of such technologies by private companies or in the context of law enforcement has been enacted.
At both State and local level there have been discussions about the use of facial
recognition. Some American cities have banned facial recognition technology in
public spaces.14 However, the State of California has adopted a different approach:
in the early 2020s legislation was passed that places a three-year moratorium on any
facial recognition technology used in police body cameras. Also, the Facial Recog-
nition and Biometric Technology Moratorium Act of 2020 was introduced in the
Senate to prohibit biometric surveillance without statutory authorization.
The lack of federal regulation in the US creates a legal void regarding the increasing use of facial recognition technologies by national security agencies. The FBI uses facial recognition technology to generate potential investigative leads, and the Department of Homeland Security and the CIA also use such technology.
10. Almeida et al. (2021).
11. Almeida et al. (2021).
12. Lin (2023).
13. Vidyarthi (2022), p. 679.
14. For example, Berkeley, Boston, Cambridge, Minneapolis, New Orleans, Oakland, Pittsburgh, Portland, and San Francisco – Vidyarthi (2022), p. 680.
Moreover, some literature has pointed out that the existing patchwork of legisla-
tion is not enough, as it has insufficient emphasis on data protection and privacy.15
In Brazil, authorities have made increasing use of facial recognition technologies for law enforcement purposes, specifically with the objective of locating wanted persons or criminal fugitives.
For example, during the 2019 carnival in Rio de Janeiro, four people against whom arrest warrants were pending16 were arrested with the help of a facial recognition system. In the same year, a facial recognition system installed at one of the entrances to the Carnival of Salvador helped the Brazilian police to identify a criminal fugitive.17 More recently, in 2020, 42 fugitives from justice were captured during Salvador's Carnival with the help of a facial recognition system that could indicate a similarity above 90%.
The use of facial recognition technology for public security was encouraged by an ordinance issued by the former Minister of Justice and Public Security, Sérgio Moro. A pilot project was initiated to create a national programme for the investigation of more serious cases, such as homicides and violent crimes.18
Although facial recognition falls within the scope of the provision of the Brazilian General Data Protection Law that deals with sensitive personal data, and despite several problems related to the technology's efficiency, transparency and bias (especially towards black people), facial recognition technology has been implemented in various parts of Brazil. According to one report, at least 22 cases of facial recognition use by the Brazilian government have occurred, and the systems used had their origins in China and Israel.19 In all cases there was no specific regulation setting criteria for employing facial recognition, and in several cases the systems were not considered transparent and reliable.20
15. Almeida et al. (2021).
16. However, mentioning the case of a woman wrongfully detained by Rio de Janeiro's Military Police, see: Greco and Vaz (2021); Wernerck (2019).
17. Gleizer et al. (2021), p. 94, note 187.
18. Lemos (2021).
19. Lemos (2021).
20. Lemos (2021).
The possibility of using facial recognition technologies for law enforcement purposes is less clear in Germany.
Several newspapers have propagated the idea that ever more companies and authorities around the world are using AI for facial recognition (in German, künstliche Intelligenz zur Gesichtserkennung) and have accordingly created the impression among the public that the use of facial recognition technology for law enforcement purposes would soon become a reality in Germany as well.
Although several German experts have pointed out that such systems are often unreliable21 and prejudiced, a proposal for the use of facial recognition technology in 135 train stations and airports was announced early in 2020 by the Federal Ministry of the Interior (under Horst Seehofer of the CSU). The Minister sought to introduce biometric video surveillance via the new Federal Police Act, which would have allowed the German federal police to automatically compare scanner images with biometric data.22 However, the proposal was severely criticized by leading members of German political parties and was subsequently abandoned by the Ministry, probably as a result of political pressure.23
The new German federal government immediately expressed its wish to ban facial recognition and mass surveillance. The coalition agreement (in German, Koalitionsvertrag)24 of the new government, presented on 24 November 2021, makes this wish very clear: "We reject comprehensive video surveillance and the use of biometric recording for surveillance purposes. The right to anonymity both in public spaces and on the Internet must be guaranteed".25 As far as we know,
21. Some say that it is possible to bypass such systems – see https://www.deutschlandfunk.de.
22. See https://www.zeit.de.
23. SPD Co-Chair Saskia Esken described the move on a social media network as an "excessive interference with civil liberties": "Video surveillance with facial recognition is too great an interference with civil liberties. The false positive alarms harm security more than the surveillance benefits it. Innocent people come under suspicion." Earlier, on 21 November 2018, she had expressed a similar view: "Video surveillance and data matching for everyone, in order to prosecute a misdemeanour that can only be committed by those who have been cheated and abandoned by the automotive industry? I consider that disproportionate."
24. Available at https://www.bundeskanzler.de.
25. See https://www.euractiv.de. On page 109 of the coalition agreement it is written (in translation): "Video surveillance cannot replace the presence of a community-oriented police force, but it can complement it at crime hotspots. We reject comprehensive video surveillance and the use of biometric recording for surveillance purposes. The right to anonymity both in public spaces and on the Internet must be guaranteed."
The situation in the UK seems more conducive to the use of biometric data systems for law enforcement purposes than under the envisaged EU Regulation. Although the UK's ongoing adequacy in terms of alignment with the EU GDPR will continue to be assessed by the European Union even after the UK's departure,29 it is usually said that police forces in England use facial recognition to tackle serious violence.
However, an interesting case regarding the use of facial recognition by the police
arose in the UK, more precisely in Wales, in 2019. The case is known as Edward
Bridges v. The Chief Constable of South Wales Police (2020). It is sometimes said
that it was the first case in the world dealing with automated facial recognition.30 The
court ruled that the use of facial recognition in the absence of clear guidelines was
unlawful. The case was brought under both the GDPR and the Human Rights Act.
Edward Bridges, a civil rights campaigner, argued that the automated facial recognition
technology deployed by the South Wales Police at public gatherings infringed the
right to respect for private life under the Human Rights Act 1998 and his privacy
rights under the Data Protection Act 2018 (DPA 2018), the UK implementation of
the GDPR. He also claimed that, since the police had failed to account for this
infringement, its DPIA was not performed correctly.
The English Court of Appeal ruled in favour of Bridges, upholding his position,
and additionally finding that the police had too broad a discretion regarding the use
of facial recognition technologies.31 The Court ruled that law enforcement is not
26
Gleizer et al. (2021), p. 94, note 188. For a wider discussion of the topic in the German literature,
see, inter alia: Schindler (2021); Kowalik (2021); Pfeffer (2022).
27
See https://www.faz.net.
28
See https://ddrm.de.
29
Almeida et al. (2021).
30
Gordon (2021), p. 2.
31
Almeida et al. (2021).
202 R. S. Pereira
precluded from using new technologies such as automated facial recognition. On the
contrary: “It is of paramount importance that the law enforcement agencies should
take full advantage of the available techniques of modern technology and forensic
science”. However, the law enforcement official must stay within the parameters of
the law when deploying any new technology. The Court recognized that the South
Wales Police's facial recognition algorithms were built around data protection.
However, the Court did not consider that enough. The facial recognition technologies
were deployed indiscriminately, which violated the principle of privacy by default. In
fact, the amount of personal data collected was seen as disproportionate to the
intended goal of identifying individuals on watchlists.32
Therefore, the Court of Appeal concluded that: (i) the use of automated facial
recognition by the South Wales Police was not in accordance with the law for the
purposes of Article 8(2) of the European Convention on Human Rights; (ii) the
respondent's Data Protection Impact Assessment did not comply with section 64(3)
(b) of the Data Protection Act 2018; and (iii) the respondent had not discharged
its Public Sector Equality Duty.
An increasing use of facial recognition cameras in public spaces for law enforcement
purposes has also been documented in India, where the Indian Government has put
into operation the National Automated Facial Recognition System (NAFRS), which
was developed by the National Crime Records Bureau (NCRB) under the Ministry
of Home Affairs33 and approved its implementation at the federal level in 2020.34
The National Crime Records Bureau of India aimed to develop and use a national
database of photographs which was to be used in conjunction with a facial recogni-
tion technology system by Central and State security agencies. For that reason, it
released in 2019 a Request for Proposals to create a national database of photo-
graphs, which was envisaged to be used to swiftly identify criminals by gathering
existing data from various other databases including: the passport database under the
Ministry of External Affairs; Crime and Criminal Tracking Network and Systems
(CCTNS) by the National Crime Records Bureau (NCRB) under the Ministry of
Home Affairs (MHA); Interoperable Criminal Justice System (ICJS) by the NCRB
under the MHA; Women and Child Development Ministry’s KhoyaPaya Portal;
32
Almeida et al. (2021).
33
See https://internetfreedom.in.
34
See Government of India's Press Information of 4th March 2020, available at https://www.mha.gov.in.
Automated Fingerprint Identification System (AFIS) by the NCRB under the MHA;
and any other image database available with the police or other entities.35
The intention to establish a country-wide facial recognition system stemmed from
a trial run of facial recognition software used by the Delhi police in April 2018 to
identify and rescue 3,000 missing children in four days.36
The National Automated Facial Recognition System established a vast facial
recognition technology network known as the automated facial recognition system
(AFRS), which makes CCTV monitoring much easier by extracting facial biometrics
from video footage and matching them with photographs stored in a database. The
AFRS uses police records and is accessible only to law enforcement agencies.
The justification for the implementation of such a system in India lies in national
security reasons.37 The beneficiaries of such a system would be the National Crime
Records Bureau, State police forces, and the Ministry of Home Affairs.
The literature has presented several arguments against this system. Some have
pointed out that such a system poses a threat to privacy38 (protected by Article 21 of
the Indian Constitution) and to basic human rights (such as the rights to privacy, data
protection, freedom of expression, and free assembly and association). Several
concerns have also been raised regarding the fact that the use of facial recognition
will enable the Indian Government to identify the details of protestors, harming
the individual's freedom of speech and expression, right to protest, and right
to movement. This is so because, although biometric data are regarded as sensitive
personal data and even though there are procedures for their collection, disclosure,
and exchange, such restrictions only apply to "corporations" and not to the Indian
government.39
Others also conclude that “the implementation of NAFRS is illegitimate and is
not proportionate to its need because it lacks statutory authorisation and guidelines
for limiting its usage to exhaustively listed and narrowly defined cases” and that, “in
the absence of safeguards, like a data protection law, it has the potential for mass
surveillance that will significantly impact fundamental rights and civil liberties”.40
Others have even suggested that a moratorium on the use of facial recognition
technology should be imposed until a strong and meaningful data protection law
has been enacted.41 Meanwhile, a more serious critique has been levelled against
the use of such a system for law enforcement purposes: "Indian law is devoid of any
35
See https://internetfreedom.in.
36
See https://en.wikipedia.org.
37
According to the Government of India's Press Information of 4th March 2020 (available at
https://www.mha.gov.in), "This will facilitate better identification of criminals, unidentified dead
bodies and missing/found children and persons. It will not violate privacy".
38
Some have mentioned that the NAFRS fails the three tests laid down by the Supreme Court in
the case Justice K.S. Puttaswamy vs Union of India (2017) - https://blog.forumias.com. On the
importance of the Puttaswamy case law, see Mangaldas (2019).
39
Mehta and Jain (2021).
40
Mustafa and Leo (2021).
41
See https://www.civilsdaily.com.
In the context of the European Union, the critical parameters for the development
and use of biometric data systems and facial recognition technologies are provided
by data protection, privacy and non-discrimination rules, as well as by the proposed
AI Regulation of 21 April 2021. However, the European Union's approach is not
restricted to such legal frameworks. Several other EU requirements must also be
taken into consideration, such as the rights of the child and of elderly people,
freedom of expression and freedom of assembly and of association, the right to good
administration, and the right to an effective remedy. Moreover, biometric data
systems raise questions relevant to product safety, product liability and consumer
protection. Furthermore, border control laws must be considered in the law
enforcement context.
42
See Jauhar (2021).
43
Madiega and Mildebrath (2021), pp. 9 and seq.
The background to the Proposal is the European Commission's White Paper on
Artificial Intelligence of 2020, as well as several other initiatives of the European
Parliament regarding the definition of limits to the use of facial recognition in the
EU. However, the Commission's Proposal develops the topic and presents new ideas
for regulating the use of biometric data systems in the European context. The
Proposal of 21 April 2021 takes a step further, as it is intended to apply to all
remote biometric identification systems, including facial recognition
technologies.44
Since a risk-based approach has been adopted by the Commission,45 the key
element in the Proposal is the distinction between high-risk and low-risk systems.
This distinction is essential for the use of facial recognition technologies by
law enforcement authorities. In short, the use of high-risk systems is either
prohibited by the Commission's Proposal or must comply with strict requirements
enshrined in it. Conversely, low-risk systems are largely admitted by the
Proposal and subject to only a few requirements.
For example, the use of real-time facial recognition systems in publicly accessible
spaces for the purpose of law enforcement is prohibited. An exception is made
where Member States choose to authorize the use of such systems for important
public security reasons and where the appropriate judicial or administrative
authorizations are granted.46 The use of facial recognition technologies for purposes
other than law enforcement (e.g., border control, marketplaces, public transport and
even schools) is permitted, subject to a conformity assessment and compliance with
certain safety requirements before entering the EU market.47
Low-risk systems are only subject to limited transparency and information
requirements. This may be the case of facial recognition systems used for
categorization purposes.48
44
Madiega and Mildebrath (2021), p. 24.
45
See Proposal at 3.
46
Madiega and Mildebrath (2021), p. 25.
47
Madiega and Mildebrath (2021), I and pp. 24–25.
48
Madiega and Mildebrath (2021), p. 27.
identification systems.49 Moreover, the Proposal provides different sets of rules
depending on whether the systems are used for 'real-time' remote or 'ex-post'
remote identification.
'Real-time' remote biometric identification systems make it possible to capture
biometric data and run the comparison and identification processes instantaneously
(or without a significant delay), based on 'live' or 'near-live' material, such as video
footage, generated by a camera or other device. In contrast, 'ex-post' remote biometric
identification systems capture biometric data and run the comparison and identification
processes only after a significant delay, based on pictures or video footage generated
by closed-circuit television (CCTV) cameras or private devices.50
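The distinction drawn above turns not on the matching algorithm, which is the same in both modes, but on the delay between capture and comparison. The following is a deliberately minimal sketch of that distinction; every name, threshold and value is hypothetical and illustrative only, not a description of any deployed system:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    face_embedding: tuple  # simplified stand-in for a biometric template
    captured_at: float     # seconds since some capture epoch

def match_against_watchlist(frame, watchlist, threshold=0.9):
    """Compare one biometric template with a watchlist (identical in both modes)."""
    def similarity(a, b):
        # toy similarity: fraction of equal components
        return sum(x == y for x, y in zip(a, b)) / len(a)
    return [pid for pid, emb in watchlist.items()
            if similarity(frame.face_embedding, emb) >= threshold]

def identify(frames, watchlist, now, max_delay=1.0):
    """Label each match 'real-time' (no significant delay) or 'ex-post'."""
    results = []
    for f in frames:
        hits = match_against_watchlist(f, watchlist)
        mode = "real-time" if now - f.captured_at <= max_delay else "ex-post"
        for pid in hits:
            results.append((pid, mode))
    return results

watchlist = {"suspect-1": (1, 2, 3, 4)}
frames = [Frame((1, 2, 3, 4), captured_at=99.5),   # live camera feed
          Frame((1, 2, 3, 4), captured_at=10.0)]   # stored CCTV footage
print(identify(frames, watchlist, now=100.0))
# -> [('suspect-1', 'real-time'), ('suspect-1', 'ex-post')]
```

The point of the sketch is that the identical `match_against_watchlist` step is run in both cases; only the elapsed time between capture and comparison changes the legal classification.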
For law enforcement purposes, the Proposal prohibits the use of AI systems for
'real-time' (or live) remote biometric identification of natural persons in publicly
accessible spaces.51 Thus, the use by the police of facial recognition systems to
identify persons participating in a public protest, or even to locate persons who have
committed only minor offences, is not allowed. These would be considered high-risk
systems according to the Commission’s Proposal and, consequently, would be
subject to a general prohibition.52
However, the Commission’s Proposal includes three situations in which high-risk
real-time remote biometric identification systems for law enforcement purposes may
be allowed.53 First, the targeted search for potential victims of crime, including
missing children. Second, the prevention of a specific, substantial and imminent
threat to the life or physical safety of persons or of a terrorist attack. Third, the
detection, localization, identification or prosecution of a perpetrator or individual
suspected of a criminal offence referred to in the European Arrest Warrant Frame-
work Decision (that is, a serious crime).54
Underlying these three exceptions are important public security reasons which justify
the use of real-time remote biometric identification systems for law enforcement
purposes. Nevertheless, the Commission's Proposal leaves it to the Member
States to decide whether they want to implement the exceptions. This is because
national security matters largely remain an exclusive competence of the Member
States. In any case, the use of biometric data systems must comply with the
principles enshrined in the GDPR and be accompanied by adequate procedural
safeguards (e.g., as a rule, an express and specific authorization should be granted by
a judicial authority or by an independent administrative authority).55
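The decision logic just described — a general prohibition, three narrow grounds, an optional Member State opt-in, and a prior authorization requirement — can be paraphrased in a short sketch. This is an illustrative restatement of the rule as summarised in the text, not the wording of Article 5(1)(d) of the Proposal; all identifiers are invented:

```python
# Hypothetical grounds mirroring the three exceptions described above.
PERMITTED_GROUNDS = {
    "targeted_search_for_crime_victims",      # incl. missing children
    "imminent_threat_to_life_or_terrorism",
    "eaw_serious_crime_suspect",              # European Arrest Warrant offences
}

def realtime_rbi_allowed(ground, member_state_opted_in, prior_authorization):
    """'Real-time' remote biometric identification in publicly accessible
    spaces for law enforcement: prohibited by default, with three narrow
    exceptions that Member States may choose to implement, subject to
    judicial or independent administrative authorization."""
    if ground not in PERMITTED_GROUNDS:
        return False               # general prohibition
    if not member_state_opted_in:
        return False               # exceptions are optional for Member States
    return prior_authorization     # express, specific authorization required

print(realtime_rbi_allowed("identify_protest_participants", True, True))      # False
print(realtime_rbi_allowed("targeted_search_for_crime_victims", True, True))  # True
```

Encoding the rule this way makes the cumulative character of the conditions visible: a permitted ground alone is never sufficient without both the national opt-in and the prior authorization.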
Other possible uses of biometric data systems by law enforcement authorities
may fall under the limitations of the Proposal, whenever the systems are
49
See Recital 8 of the Proposal at 19.
50
Madiega and Mildebrath (2021), p. 25.
51
See Article 5(1)(d) of the Proposal at 43.
52
Madiega and Mildebrath (2021), p. 25.
53
See Article 5(1)(d) of the Proposal at 43–44.
54
Madiega and Mildebrath (2021), pp. 25–26.
55
Madiega and Mildebrath (2021), p. 26.
6 Conclusion
56
Madiega and Mildebrath (2021), pp. 26–27.
References
Almeida D, Shmarko K, Lomas E (2021) The ethics of facial recognition technologies, surveillance,
and accountability in an age of artificial intelligence: a comparative analysis of US, EU, and UK
regulatory frameworks. AI Ethics 2:377–387
Dushi D (2020) The use of facial recognition technology in EU law enforcement: fundamental
rights implications. Global Campus of Human Rights. Available at
https://repository.gchumanrights.org. Accessed 22 Dec 2022
Egbert S, Leese M (2021) Criminal futures: predictive policing and everyday police work.
Routledge, London
Gleizer O, Montenegro L, Viana E (2021) O direito de proteção de dados no processo penal e na
segurança pública. Marcial Pons, São Paulo
Gless S (2018) Predictive policing – in defense of ‘True Positives’. In: Bayamlıoğlu E, Baraliuc I,
Janssens L et al (eds) Being Profiled: Cogitas Ergo Sum. 10 years of profiling the European
Citizen. Amsterdam University Press, pp 76–83
Gordon B (2021) Automated facial recognition in law enforcement: The Queen (on application of
Edward Bridges) v The Chief Constable of South Wales Police. PER/PELJ 2021(24).
https://doi.org/10.17159/1727-3781/2021/v24i0a8923
Greco L, Vaz M (2021) Vivendo 1984 no ano de 2020 - o reconhecimento facial e a democracia.
Boletim IBCCrim 340. Available at https://www.ibccrim.org.br
Haddad GM (2021) Confronting the biased algorithm: the danger of admitting facial recognition
technology results in the courtroom. Vanderbilt J Entertain Technol Law 23:891–918
Hofmann H (2020) Predictive Policing - Methodologie, Systematisierung und rechtliche
Würdigung der algorithmusbasierten Kriminalitätsprognose durch die Polizeibehörden.
Duncker & Humblot, Berlin
Jauhar A (2021) Facial recognition in law enforcement is the litmus test for India's commitment to
"Responsible AI for All". Available at https://www.orfonline.org. Accessed 22 Dec 2022
Rui Soares Pereira is Assistant Professor at the Faculty of Law of the University of Lisbon,
Portugal, where he has taught Criminal Proceedings, Criminal Law, Civil Proceedings and Civil
Law. He is also Professor at the Portuguese Military Academy and a member of the Centre for
Research in Criminal Law and Criminal Sciences (Centro de Investigação em Direito Penal e
Ciências Criminais) and of the Research Centre for Private Law of the University of Lisbon (Centro
de Investigação de Direito Privado), in whose research line on Private Law in the Digital Era he
takes part.
Cyber Operations Threatening the
European Union and its Member States:
The Rise of the European Union as a Cyber
Defence Actor
Abstract According to recent official reports, cyber threats rank among the top
risks facing the world. Cyber operations against the European Union are increasing
in number, sophistication and severity, while the legal framework to combat these
operations is not up to the challenge. The European Union is left with a very lengthy
legislative process and a huge and complex legislative patchwork, difficult to
understand and to apply, even for the most experienced expert. This fragmented
and intricate approach causes entropy and inefficiency in an environment where
agility and efficiency are decisive. Moreover, most of the legislative responses are
still to be approved and implemented. The lengthy and incomplete legislative
responses contrast with the dizzying speed with which ICT developments succeed
one another and improve their functionalities. This mismatch can have devastating
results. This chapter analyses some of the main strategies adopted in malicious cyber
operations and confronts these strategies with the current European Union legal
framework and legislative initiatives, pointing out possible paths to ensure that the
European Union will rise as a cyber defence actor, preparing the ground for its
leadership in the fourth industrial revolution.
1 Introduction
1
ICT are also called, although inaccurately, digital technologies. For ease of reference, the
expression digital technologies shall also be used in this chapter to refer, indistinctly, to ICT and
to information and communication services.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_12
212 S. de Vasconcelos Casimiro
2
The rigour and accuracy required in the analysis of these issues also faces substantial terminolog-
ical difficulties. Legislation on digital technologies is particularly eloquent in expressing these
difficulties, from the very first lines written by the legislator in these topics. Frequently, the same
reality is referred to with different terms and the same term have different definitions, with different
scopes, depending on the legal text.
3
European Parliament (2022) plenary session, November I, 2022, available at https://epthinktank.eu.
Cyber Operations Threatening the European Union and its Member States:. . . 213
2 The Facts
We shall start with the facts. It is a fact that the critical sectors have become
increasingly dependent on digital technologies.
Critical sectors include transport, energy, health, banking and finance, but also
water supply and communication, for example. Many sectors that underpin our
society are supported by digital technologies. Gradually, the traditional manual
and analogue control of the relevant equipment in these sectors has been replaced
in response to the growing need to find an efficient and fully automated way of
controlling that equipment. Digital technologies and information systems have
silently taken control of water levels in dams, cooling systems in nuclear power
plants, traffic signals for rail systems and power grids. This automated control is of
particular importance over long distances.
Among these systems, Operational Technology (OT) such as Supervisory Control
and Data Acquisition (SCADA)—a standardized framework for monitoring and
controlling industrial, infrastructure and facility-based processes—has gained
prominence, becoming widely disseminated during the 1970s.4 These systems are
responsible for providing millions of people with important commodities and
services. Nevertheless, SCADA systems have been subject to a growing number of
attacks in recent years, particularly since the early 2000s, when many of these
systems were opened to public networks.5
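To illustrate what "supervisory control" means in this context, the following is a deliberately simplified sketch of a SCADA-style monitoring loop. Real deployments use industrial protocols (such as Modbus or DNP3) and dedicated field hardware; every name and value below is invented for illustration:

```python
import random

SETPOINT = 50.0   # desired water level (metres), hypothetical
TOLERANCE = 2.0

def read_sensor():
    """Stand-in for a field-device reading (e.g. a dam level sensor)."""
    return 50.0 + random.uniform(-5, 5)

def supervisory_step(level):
    """Decide a control action from the measured value."""
    if level > SETPOINT + TOLERANCE:
        return "OPEN_SPILLWAY"
    if level < SETPOINT - TOLERANCE:
        return "CLOSE_SPILLWAY"
    return "HOLD"

# The supervisory loop: poll remote sensors, decide, and log the action
# that would be sent back to the field equipment.
log = [(round(lvl, 1), supervisory_step(lvl))
       for lvl in (read_sensor() for _ in range(5))]
for level, action in log:
    print(level, action)
```

The sketch also makes the attack surface discussed in this section concrete: once such a loop is reachable over a public network, tampering with either the sensor readings or the issued commands can directly affect physical equipment.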
The most well-known SCADA attack took place against Iran’s nuclear enrich-
ment complex in Natanz. It was carried out by the malicious computer worm
Stuxnet, specifically designed to attack that particular SCADA system. Stuxnet,
which was first uncovered in 2010, is believed to be responsible for causing
substantial damage to the nuclear programme of Iran.6
As new technologies are introduced, cyber risks will inevitably grow. As new
technologies are increasingly used to support critical sectors, cyber risks associated
with critical sectors will also inevitably grow. It is unsurprising, thus, that official
reports, such as the World Economic Forum’s Global Cybersecurity Outlook 2022,
rank cyber threats among the top risks facing the world.7
4
See Chapin and Lehr (2010), Lamba et al. (2017) and Mittal (2015).
5
Lamba et al. (2017), p. 31, and Microsoft (2022), pp. 1 ff.
6
See Brown (2011), Richardson (2011), Trautman and Ormerod (2018).
7
World Economic Forum (2022) Global Cybersecurity Outlook 2022 – Insight Report, available
at https://www3.weforum.org.
Cyber operations against the European Union are taking place on a daily basis.8 The
expression cyber operation is used here with the meaning given in the Tallinn Manual 2.0
on the International Law Applicable to Cyber Operations, which defines it as “the
employment of cyber capabilities to achieve objectives in or through cyberspace”.9
It is a fact that there are relevant developments at this level, with the detection of
more than 200,000 new malware infections every day.10 As access to the Internet
reached more than 90 percent of households in the European Union in 2022,11 and
devices connected to computer networks—leading to the so-called Internet of
Things (IoT)—continue to multiply, the potential for exploitation of the vulnerabil-
ities in this interconnected society has also increased exponentially. And these
vulnerabilities are actually being exploited. Concepts such as cybersecurity by
design or cyber resilience have become part of our everyday lexicon.
There have been cyber operations against a range of European Union institutions,
such as the European Commission, the European Medicines Agency and the
European Banking Authority.12 At Member States level, there are also many inci-
dents to report in critical sectors, including healthcare, government administration
and the technology industry.13
In this context, questions on how to keep computer networks freely available,
open and neutral in the European Union, while strengthening its resilience, have
become one of the challenges to be addressed at the political and legislative levels.14
Moreover, cyber operations against the European Union are increasing in number,
sophistication and severity.
8
European Parliament (2021).
9
Schmitt (2017), p. 564.
10
ENISA, the European Union's cybersecurity agency, reported the detection of 230,000 new
malware infections every day during the period from January 2019 to April 2020 – ENISA (2021),
p. 9—and has reported that during 2022 these attacks continued to increase – ENISA (2022), p. 7.
According to Microsoft Digital Defence Report 2022, the volume of password attacks has risen to
an estimated 921 attacks every second, representing a 74% increase in just one year – Microsoft
(2022), p. 2.
11
OECD Data, available at https://data.oecd.org/ict/internet-access.htm.
12
European Parliament (2021).
13
ENISA (2021), p. 6.
14
European Parliament (2021).
The official records of cyber incidents in recent years reveal that, in addition to
becoming a daily reality, cyber operations against the European Union and its
Member States are rising exponentially.15 Analysis of recent trends shows that
geopolitics continues to have a strong influence on cyber operations, with State
actors opting for destructive attacks in concert with kinetic military action.16 The
Russia-Ukraine conflict has had an impact in this regard, contributing to an increase
in State-sponsored cyber operations in Europe.17 But cyberspace is also used for strategic
purposes by non-State actors.
One area of increasing concern is the weaponisation of cyberspace by State and
non-State actors. Besides being a technical domain, cyberspace is also a political
domain where cyber weapons can be used for political ends.
The widespread adoption of software solutions across multiple critical infrastruc-
tures has also created more opportunities to exploit vulnerabilities for strategic
purposes in these infrastructures, usually by compromising the supply chain.18
These enhanced opportunities, in conjunction with the increased professionalisation
of cyber operators, including hackers, and the rise of the hacker-as-a-service
business model, growing since 2021,19—making available the manpower necessary to
carry out operations—have created a fertile ground for an increase in malicious cyber
operations.
This same easier access to skilled manpower by resourceful actors, such as States,
can also explain a reported rise in the sophistication of cyber operations and their
impact.20 Similarly, the emerging cybersecurity awareness, among States, organisa-
tions and citizens, and the consequent heavy investment in network and information
security has led to greater sophistication of cyber operations in order to achieve
success.21
This increase in sophistication is also noted in cyber operations resulting in
disinformation campaigns, which have adopted more complex and powerful tools,
15
ENISA (2022). According to this report, from 2021 to 2022 malware targeting IoT almost
doubled – ENISA (2022), p. 50. See also EUROPOL (2021), and European Parliament (2021).
16
ENISA (2022), p. 10.
17
ENISA (2022), pp. 7 ff., and Microsoft (2022), p. 2 (“On February 23, 2022, the cybersecurity
world entered a new age, the age of the hybrid war"). This is true also in relation to other political
tensions around the globe—e.g., ENISA (2022), p. 27.
18
ENISA (2022), p. 23.
19
ENISA (2022), p. 10.
20
Microsoft (2022), p. 1. According to the report of the European Union’s cybersecurity agency
ENISA “Cybersecurity attacks continued to increase during the second half of 2021 and 2022, not
only in terms of vectors and numbers but also in terms of their impact”—ENISA (2022), p. 7.
21
ENISA (2022), p. 4: “The more organisations increase the maturity of their defences and
cybersecurity programmes, the more they increase the cost for adversaries”.
exploiting to the full new developments in technologies such as deepfakes,22 voice
biometrics and artificial intelligence23 in order to optimise their impact.
So, not only is there an increase in the number of malicious cyber operations, but
these operations are increasingly sophisticated and their impact is worsening.
These are the relevant facts to deal with.
Analysing the legal framework applicable to malicious cyber operations, two main
sets of rules can be identified. The first set of rules results from a straightforward
transposition of the existing rules, created for the analogue world, to the digital
world. This set covers most of the rules applicable to online activities, since the
majority of the rules applicable to analogue activities also make sense in online
activities. So, for example, unlawful activities in the analogue world—such as
fraud—continue to be unlawful when performed online. In this case, there is no
need to create specific rules for regulating these online activities. The general
principle is that the existing national and regional legal frameworks apply to
cyberspace, and the rules of this first set are consistent with that principle. This set of rules
can be found where the realities to which they apply have no particular specificities
22
There has been a 900% year-over-year increase in the proliferation of deepfakes since 2019—
Microsoft (2022), p. 82.
23
ENISA (2022), p. 86. On the latest trends in cyber disinformation campaigns, see ENISA (2022),
p. 29 and pp. 82–86. According to this report, "Several state-backed actors have built the capability
to use social media platforms, search engines and messaging services to disseminate disinformation.
Their approach differs from the traditional disinformation campaigns since these services provide
out-of-the-box tools to test and optimise their content and monitor the outreach and impact of
disinformation campaigns. Moreover, developments in Machine Learning (ML), Artificial Intelli-
gence (AI), deep fakes, and voice biometrics have provided threat actors with powerful tools to
create misleading content for their campaigns” (ENISA 2022, p. 29).
in their corresponding online activities and, thus, can be applied, without adapta-
tions, to these activities.
The second group of rules refers to realities that offer certain specificities in the
digital world, requiring particular rules to address them. The existing national legal
framework that addresses those specificities is mostly fuelled by the European
Union, which has been taking the lead on several topics related to online
activities. Recognising the opportunity to regulate a new field from scratch, with
an uncommon potential for fast and successful harmonisation, the European
Union has taken this initiative since the massive use of computer networks began,
in the nineties.
This set of rules is, thus, mainly enacted by the European Union, usually through
Directives that are later transposed by Member States, or through Regulations that
are directly applicable in those States. The protection of personal data, which requires
a specific legal framework adapted to the challenges posed by the processing of data
through ICT, is a good example of a reality requiring legislation of this second kind,
materialised in the General Data Protection Regulation (GDPR).24 This second set
of rules reinforces the existing legal framework with a new layer of rules specifically
designed to address particular vulnerabilities of the digital environment.
There are many examples of this second set of rules in the combat against
malicious cyber operations. The combat against any new form of illicit behaviour
proceeds through at least three distinct, but complementary, approaches:
(i) prevention, which the legal framework usually addresses by imposing duties of
care, such as obligations to protect people, goods or services; (ii) repression, by
extending the boundaries of illicit activity and creating new limitations or
prohibitions, such as new types of crimes; and (iii) improved judicial and law
enforcement rules. In combating malicious cyber operations, all of these approaches
have been gradually adopted and implemented in the legal framework of each
Member State, with the impulse of the European Union.
Within the prevention approach, there are many relevant initiatives in the areas of
cybersecurity and cyber defence. A good example is found in the original cyberse-
curity legal framework, based on the NIS Directive.25 The NIS Directive, and the
NIS2 Directive,26 which replaces and repeals it, are particularly relevant
in this regard, since they impose on certain operators, on the one hand,
24
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation),
OJEU L 119/1.
25
Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016
concerning measures for a high common level of security of network and information systems
across the Union, OJEU L 194/1.
26
On 28 November 2022, the Council adopted a new Network and Information Security Directive
(NIS2), a Directive of the European Parliament and of the Council on measures for a high common
level of cybersecurity across the Union, repealing Directive (EU) 2016/1148, pending publication.
218 S. de Vasconcelos Casimiro
27
Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on
ENISA (the European Union Agency for Cybersecurity) and on information and communications
technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity
Act), OJEU L 151/15.
28
Directive 2013/40/EU of the European Parliament and of the Council of 12 August 2013 on
attacks against information systems and replacing Council Framework Decision 2005/222/JHA,
OJEU L 218/8.
29
Treaty opened for signature in Budapest in November 2001.
Cyber Operations Threatening the European Union and its Member States:. . . 219
e-evidence has reached a milestone with the provisional political agreement between
the European Parliament and the Council on the new rules for sharing electronic
evidence across the European Union.30 On 17 April 2018, the European Commis-
sion presented a proposal for the adoption of a Directive and a Regulation
creating a European production order and a European preservation order, provid-
ing national authorities with a reliable channel to obtain e-evidence. This initiative
was delayed for many years given the difficulty of reaching an agreement on
the matter. Finally, new developments seem to be under way, which should soon put
an end to the lengthy judicial cooperation procedures for obtaining electronic
evidence, with the related risk of the data no longer being available.31
Lastly, as regards repression through new limitations or prohibitions, cybercrime
deserves particular mention. The fight against illicit online activities has led to
new crimes, most notably under the above-mentioned Directive on attacks
against information systems and, more importantly, the Convention on Cybercrime.
This landscape is constantly under evaluation, and new crimes may be created or
existing ones may have their scope refined.
Given the inherently transnational nature of cyber operations, although the focus so
far has been on national and regional legal frameworks, references to international law
were inevitable, as illustrated by the Convention on Cybercrime. In many cases, such
as that of the Convention on Cybercrime, the commitments entered into by the
participating States consist precisely in the adoption of certain national legislation. In
these circumstances, international law is a very useful tool for harmonising national
law. Its other important role, as a promoter of the world order and a regulator of the
behaviour of States among themselves, is briefly addressed in the following
section.
International law also applies to cyberspace. This statement may seem very basic and
evident, but the applicability of international law to cyberspace was intensely
discussed at the international level, in the European Union, in the United Nations
and in NATO. At present, however, each of these organisations has declared that
international law does indeed apply to cyberspace.32
30
European Commission, e-Evidence (2022b).
31
In addition to this outcome, the Committee of Ministers of the Council of Europe has adopted a
Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and the
disclosure of electronic evidence.
32
At the Wales Summit, in 2014, NATO recognised that international law applies to cyberspace and
that cyber defence is part of NATO’s core task of collective defence. In the context of the United
Nations, see the 2021 Report of the Group of Governmental Experts on Advancing responsible
State behaviour in cyberspace in the context of international security, available at https://dig.watch/
wp-content/uploads/2022/08/UN-GGE-Report-2021.pdf.
A clear body of international law applicable to cyberspace, together with national
and regional law that, besides applying to cyberspace, is adapting to some of the
specificities brought about by the digital environment, seems a good plan for tackling
the challenges ahead in this environment.
However, when taking a step back to look at the bigger picture of, on the one
hand, the evolution of cyber operations in recent years, their capabilities and the risks
they pose to our society, and, on the other hand, the legal framework available to
meet them, it becomes increasingly evident that following that plan will not be
enough. The European Union is left with a huge and complex legislative patchwork,
difficult to understand and to apply, even for the most experienced expert. This
fragmented and intricate approach creates friction and inefficiencies in an environ-
ment where agility and efficiency are decisive. Moreover, most of the legislative
responses are still to be approved and implemented. These lengthy and incomplete
legislative responses contrast with the dizzying speed at which ICT developments
keep succeeding one another and improving their functionalities. This mismatch can
have devastating results.
In a kinetic scenario, it is usually possible to observe or assess the extent of the
adversary's forces and potential actions. States display their kinetic military power
and make a point of exhibiting it, notably at military parades. It is possible to trace
kinetic materiel fairly accurately to its source. It is possible to monitor
capabilities, whether the number of troops, weapons, tanks or aircraft. And it
is possible to monitor activities, observing the deployment of resources. Defence
is, therefore, also feasible, as a response may be diligently prepared.
When the confrontation takes place in cyberspace, a veil of opacity makes it
impossible to know capabilities and to predict activities. States make an
effort to conceal their activities and even their capabilities. There is no need for a
factory, a military base or impressive military materiel. All that is needed is a
computer, an Internet connection and a computer expert. The development of
offensive cyber weapons and offensive cyber operations is very difficult to monitor.
That is why cyber operations are asymmetric: their preparation may be
undetectable, they occur at the speed of light, and there is little or no warning time
in which to react. Additionally, the initial strike is likely to block any effective defence.
These are the new battlefields made possible by the digital era. These battlefields
are characterised by interdependencies: between sectors, such as the communications
sector and the energy sector; between fields of knowledge, such as computer
engineering and military tactics and strategy; between multiple subjects, such as
States, companies and everyday Internet users; between multiple layers of
society, namely government, academia and private industry; and many others. In a
battlefield characterised by interdependencies, the weakest link becomes a target. The
glue that may organise and give consistency to this complex puzzle is the
legal framework, which cannot be a puzzle itself.
In order to understand how the European Union may build a legal framework
better suited to cope with this new reality, it is important to know some of the main
strategies used in malicious cyber operations. Among these, malware dissemination
and the manipulation of information stand out.
For many years, unauthorised access to computer systems by States had the primary
goal of exfiltrating sensitive information, falling under the category of espionage.
Recently, such unauthorised access has been increasingly used for destructive activ-
ities, such as sabotage.33 In both cases, malware dissemination plays a central role.
Malware consists of software intentionally designed for malicious purposes.
There are many types of malware, for all kinds of uses: viruses, spyware,
worms, trojans, ransomware, and many others. Some well-known specimens illus-
trate the diversity of their potential, such as WannaCry, BlackEnergy, Petya,
NotPetya or Flame, to mention just a few. Malware is commonly spread through
phishing techniques, which continue to be the usual way of obtaining access to a
device,34 although an infection may have several possible origins.
Although the purposes of the disseminator can vary, they can all be reduced
to one: causing harm, whether for financial gain, in defence of ideological causes or for
the benefit of a State or group of States. This harm can have varying degrees of
33
CNN (2022).
34
ENISA (2022), p. 5.
severity. Certain techniques applied to certain computer systems may even kill. This
means the spreading of malware can pose serious risks to the security of a State, in its
three dimensions.
35
See information manipulation by classical Greek oligarchs in the context of the polis in Simonton
(2017), pp. 186–223.
36
ENISA (2022), p. 86.
37
See Wu (2019).
38
IAB (2022).
39
See Cavaliere (2022), Keppo (2021), Khrustalev and Masolletti (2021) and Ng and
Taeihagh (2021).
understanding that peace, democracy and respect for human rights cannot be taken
for granted.
The realisation of the fragility of democracy and of the other fundamental values
at the core of modern Western society should be incentive enough to
mobilise a robust legal response against cyber operations designed to manipulate
information. However, freedom of speech has been advocated as a value that would
be irretrievably compromised by any move in that direction and has, for that reason,
been an obstacle to such a vigorous response.40
Among the best-known cases in which manipulation of information has been used
are the 2016 United Kingdom Brexit referendum41 and several European elections,
such as the 2017 French presidential election, the 2017 German federal elections and
the 2018 Swedish general elections.42 The techniques range from pure disinformation,
with the spreading of fake news, to the spreading of truthful information illegally
obtained, in such a way as to mislead the public. The intended purposes are to
manipulate public opinion, divide Europeans, promote populism and political
extremism, and interfere with internal affairs and democratic institutions.43
40
This has been considered the free speech paradox, where “legal decisions intended to protect
citizens and free idea markets can achieve the opposite”—Alstyne and Marshall (2022).
41
Jon Danzig, investigative journalist and founder of the Reasons2Remain platform, is one of the
journalists who exposed the disinformation orchestrated in the United Kingdom's mainstream media
(see https://eu-rope.ideasoneurope.eu/tag/jon-danzig). The House of Commons Digital, Culture,
Media and Sport Committee on 'Fake News' (2018) concluded that Russia had engaged in unconven-
tional warfare against United Kingdom voters through social media, to induce them to vote for
Brexit. See McGaughey (2018) and Bastos and Mercea (2018).
42
Microsoft (2022) and ENISA (2022). See Cavaliere (2022), Khrustalev and Masolletti (2021), Ng
and Taeihagh (2021), Alstyne and Marshall (2022). On 7 November 2022, Russian businessman
Yevgeny Prigozhin, the founder of the Russian Wagner Group, admitted he had interfered in United
States elections and would continue doing so in the future—Reuters (2022).
43
This is also a common practice in other parts of the world. In the United States, during the 2016
presidential campaign, the Democratic National Committee (DNC) servers were hacked and many
e-mails were stolen and made available on the WikiLeaks platform. The leak included thousands of
e-mails involving Hillary Clinton, casting suspicions of misconduct on the Democratic candidate
and affecting her image and reputation. It is believed that the incident was a decisive factor in the
result of the presidential election, giving victory to Donald Trump. The attacker or attackers,
operating under the pseudonym “Guccifer 2.0”, are alleged to be Russian intelligence agency
hackers, according to indictments brought as a result of the Mueller investigation—U.S. Department of
Justice (2018, 2019).
After examining two of the main strategies used in cyber operations against the
European Union, it can be concluded that many questions remain unanswered
when trying to fit the facts described above into the current legal framework.
These unanswered questions start with the difficulty of attribution. Attribution is
the first issue that must be dealt with in cyber operations. Is it possible to attribute,
beyond any doubt, a specific cyber operation to a certain State? Do the current
legal framework and the available technological tools allow such attribution? For that
purpose, can the criteria governing the responsibility of States for internationally
wrongful acts be applied, as in other contexts? And, in some circumstances, can the
harm be considered self-inflicted, and thus not attributable to a third party (for
example, when victims themselves unknowingly install malware on their own devices)?
Even if the attribution question is overcome, many other questions remain. If it
is known for a fact that a certain State performed the cyber
operation, does it represent—in the case of malware dissemination or of manip-
ulation of information—a non-authorised intervention in another State, capable of
being qualified as a violation of the principle of sovereignty? Or, in other cases,
where the malware causes casualties or severe material damage, may it qualify as a use
of force or an armed attack? These are all concepts with no clear
boundaries in international law.44
And, from another perspective, what are the legitimate responses in the event of
malware causing severe damage or casualties? Can a State strike back? And if it can
strike back, how should this be done? Can retaliation be qualified as self-defence?
The questions continue to mount up.
All these questions can be summarised into two main ones: how will
the European Union react in the event of a major cyber operation that causes
significant destruction in one or more Member States (for example, if the cyber
operation destroys a European Union critical infrastructure)? And does the European
Union have the legal framework it needs to respond in those circumstances?
In contrast to all the others, these two questions can be answered
fairly easily: the European Union will react in a disorganised and
unpredictable way; and the European Union does not have the legal framework it
needs to successfully respond to the challenges posed by malicious cyber operations.
44
This is the reason why it is correctly affirmed that the question has shifted from asking whether
international law is applicable to cyberspace to how it applies—Osula et al. (2022).
Not all is bad news, though, as there have been some positive developments in recent
years. The European Union is raising awareness of the new
threats;45 it is adopting common strategies in several areas, in order to achieve a
certain degree of harmonisation;46 and it is making serious efforts, at the international
level, within the United Nations and NATO, to harmonise points of view and
standards regarding the application of international law to cyberspace.47
The European Union has also approved a legal framework for applying restrictive
measures against cyber-attacks threatening the Union or its Member States, known as
the Cyber Diplomacy Toolbox. A joint diplomatic response to malicious cyber
activities resulted in the negotiation and adoption, on 17 May 2019, of this
Cyber Diplomacy Toolbox, comprising the following legal instruments: Council
Regulation (EU) 2019/79648 and Council Decision (CFSP) 2019/79749 concerning
restrictive measures against cyber-attacks threatening the Union or its Member
States.50
Both legal instruments provide a framework for imposing sanctions on any natural or
legal person, entity or body involved in cyber-attacks with a significant
effect—including attempted cyber-attacks with a potentially significant effect—
which constitute an external threat to the Union or its Member States. Where deemed
necessary, restrictive measures may also be applied in response to cyber-attacks with
a significant effect against third States or international organisations.
The restrictive measures set forth in those legal instruments consist of freezing
funds and economic resources (provided for in both the Decision and the Regulation)
and of preventing entry into, or transit through, the territories of Member States
(provided for in the Decision).
45
For example, every year the European Commission promotes the European Cybersecurity Month,
partnering with the European Union Agency for Cybersecurity (ENISA) and with Member States,
dedicated to promoting cybersecurity among European Union’s citizens and organisations through
awareness raising activities and sharing of good practices (cf. https://cybersecuritymonth.eu/about-
ecsm).
46
In her 2021 State of the Union address, European Commission President Ursula von der Leyen
stated: “If everything is connected, everything can be hacked. Given that resources are scarce, we
have to bundle our forces. (. . .) This is why we need a European Cyber Defence Policy, including
legislation on common standards under a new European Cyber Resilience Act”—von der Leyen
(2021). In March 2022, the European Union formally approved the Strategic Compass, a plan of
action for strengthening the European Union's security and defence policy.
47
See, for example, the European Union’s 2020 Cybersecurity Strategy, available at https://eur-lex.
europa.eu/legal-content/EN/ALL/?uri=JOIN:2020:18:FIN.
48
Council Regulation (EU) 2019/796 of 17 May 2019 concerning restrictive measures against
cyber-attacks threatening the Union or its Member States, OJEU L 129 I/1.
49
Council Decision (CFSP) 2019/797 of 17 May 2019 concerning restrictive measures against
cyber-attacks threatening the Union or its Member States, OJEU L 129 I/13.
50
See Miadzvetskaya and Wessel (2022).
51
Recital 9 of the Council Decision 2019/797 of 17 May 2019: “Targeted restrictive measures
should be differentiated from the attribution of responsibility for cyber-attacks to a third State. The
application of targeted restrictive measures does not amount to such attribution, which is a
sovereign political decision taken on a case-by-case basis. Every Member State is free to make its
own determination with respect to the attribution of cyber-attacks to a third State.”
52
Council of the European Union, EU resilience (2022a).
53
Council Directive 2008/114/EC of 8 December 2008 on the identification and designation of
European critical infrastructures and the assessment of the need to improve their protection, OJEU L
345/75. This Directive establishes a procedure for identifying and designating European Critical
Infrastructures (ECIs) in the transport and energy sectors that, were they to be disrupted or
destroyed, would have significant cross-border impacts.
54
Proposal for a Regulation of European Parliament and of the Council on horizontal cybersecurity
requirements for products with digital elements and amending Regulation (EU) 2019/1020.
software products. Among many important aspects, this proposal embraces the
concept of security by design, laying down essential requirements for the design,
development and production of products with digital elements. Additionally, it sets
out rules for the placing on the market of products with digital elements, to ensure
the cybersecurity of such products.
In November 2022, the European Union adopted the Regulation on the digital
operational resilience of the financial sector (DORA), to strengthen the ICT security
of financial entities such as banks, insurance companies and investment firms,
harmonising key digital operational requirements across the Union.55 Under this
Regulation, European Supervisory Authorities (ESAs) are to develop technical
standards for financial services institutions to abide by, so as to make their ICT
operations resilient in the face of severe operational disruption and cyber-attacks.
The European Union is also working on two legislative proposals, presented in
March 2022, for a Cybersecurity Regulation and for an Information Security Reg-
ulation.56 These proposals lay down common measures to ensure cybersecurity and
information security across the institutions, bodies, offices and agencies of the
Union. Once approved, the proposals will require those entities to have frameworks in
place for governance, risk management and control in the context of cyber
incidents.
Understanding the importance of establishing a cooperation platform to build a
coordinated response to large-scale cyber incidents and crises, the 2020 European
Union Cybersecurity Strategy outlined the need for a Joint Cyber Unit (JCU), which
would strengthen cooperation among institutions, agencies, bodies and the author-
ities of the Member States. In June 2021, through the adoption of a Recommenda-
tion,57 the Commission proposed the creation of that Unit, as cyberattacks had grown
in number, scale and consequences, impacting public services, businesses and citi-
zens.58 The Joint Cyber Unit will work at a technical and operational level,
enabling mutual assistance between those entities as well as with the private sector.
The plan is to build the Unit gradually, with its operationalisation completed by
31 December 2022.59
55
Council of the European Union, Digital finance (2022b).
56
European Commission, New rules to boost cybersecurity and information security in EU insti-
tutions, bodies, offices and agencies (2022a).
57
Recommendation (EU) 2021/1086 on building a Joint Cyber Unit, OJEU, L 237/1.
58
European Commission, Joint Cyber Unit Factsheet (2021b).
59
European Commission (2021a).
60
Helena Carrapiço and André Barrinha have addressed this issue in 2017 as a lack of coherence—
Carrapiço and Barrinha (2017).
6 Conclusion
After this brief tour through the major legal challenges that the digital
environment poses to cybersecurity and cyber defence, it is possible to conclude
that appearances can be highly deceptive.
At first glance, there is a clear international law applicable to cyberspace, and a
national and regional law that, besides applying to cyberspace, is adapting to
some of the specificities of the digital environment and, in particular, has
increasingly adopted initiatives to tackle the challenges of malicious cyber
operations against the European Union, in their multiple layers, given that
these operations have been increasing in number, sophistication and severity. These
legislative initiatives cover prevention, repression, and judicial and law enforcement,
and they also comprehensively cover various sectors and subjects.
However, when taking a step back to look at the bigger picture, it becomes
evident that, despite all efforts, the European Union currently lacks the legal
framework it needs to successfully respond to the challenges posed by malicious
cyber operations. There is a mismatch between the slow pace of the legislative
process and the speed at which cyber operations evolve. Malware dissemination,
the manipulation of online information, or any new strategy or technique now
under development will have evolved long before the legislative measure prepared
for its earlier version enters into force. The legal measure is thus likely to be
outdated by the time it eventually takes effect.
Moreover, the current legal framework is fragmented, intricate and complex. The
European Union is left with a huge legislative patchwork which, with
time, is becoming larger and harder to deal with. What was already difficult to
understand and to apply, even for the most experienced expert, is becoming a foreign
and esoteric language, accessible only to the initiated.
A lengthy and fragmented legal framework is not compatible with the agility and
effectiveness required of a response to malicious cyber operations. If it is true that
security and defence have evolved, and a simple moat around the castle is no
longer enough, then legal processes and strategies must also rise to the new
challenges ahead. In order to build a European Union with an active role as a
cyber defence actor, the European legislator has to step up its legislative process.
Besides creating a faster and more agile legislative process (one that in no way com-
promises representative democracy and the free participation of each Member
State), the European Union must make an effort to codify its laws regarding online
activities, thus organising them and giving them internal systematisation and coher-
ence, according to a scientific criterion.
If this path is taken, decisions will have to be made and some matters will
have to be set apart in separate, albeit related, codes. Codification is never
perfect. Nevertheless, instead of extremely fragmented rules scattered
among different lengthy legal texts comprising endless recitals, codification
will make it possible to facilitate access to the applicable laws, making available a
logical, comprehensive and organised body of law.
From a legal perspective, and taking current legal challenges into account, taking
this path will secure for the European Union a role as a cyber defence actor, preparing
the ground for its leadership in the fourth industrial revolution.
References
Alstyne V, Marshall W (2022) Free Speech, Platforms & The Fake News Problem. Available at
https://ssrn.com/abstract=3997980
Bastos MT, Mercea D (2018) The public accountability of social platforms: lessons from a study on
bots and trolls in the Brexit campaign. Philos Transact Royal Soc A 376:1–12
Brown G (2011) Why Iran didn’t admit Stuxnet was an attack. Joint Force Q 63:70–73
Carrapiço H, Barrinha A (2017) The EU as a Coherent (Cyber)security actor? J Common Mark Stud
55:1249–1272
Cavaliere P (2022) The truth in fake news: how disinformation laws are reframing the concepts of
truth and accuracy on digital platforms. Eur Conv Human Rights Law Rev 3:481–523
Chapin J, Lehr W (2010) SCADA for the rest of us: unlicensed bands supporting long-range
communications. Available at https://ssrn.com/abstract=1988184
CNN (2022) US confirms military hackers have conducted cyber operations in support of Ukraine.
Available at https://edition.cnn.com
Council of the European Union (2022a) Press Release, EU resilience: Council adopts a directive to
strengthen the resilience of critical entities. Available at https://www.consilium.europa.eu.
Accessed 22 Dec 2022
Council of the European Union (2022b) Press Release, Digital finance: Council adopts Digital
Operational Resilience Act. Available at https://www.consilium.europa.eu. Accessed
22 Dec 2022
European Commission (2021a) Press Release, EU Cybersecurity: Commission proposes a Joint
Cyber Unit to step up response to large-scale security incidents. Available at https://ec.
europa.eu. Accessed 22 Dec 2022
European Commission (2021b) Joint Cyber Unit Factsheet. Available at https://cyberwatching.eu.
Accessed 22 Dec 2022
European Commission (2022a) Press Release, New rules to boost cybersecurity and information
security in EU institutions, bodies, offices and agencies. Available at https://ec.europa.eu.
Accessed 22 Dec 2022
European Commission (2022b) Press Release, e-Evidence: Commission welcomes political agree-
ment to strengthen cross-border access for criminal investigations. Available at https://ec.
europa.eu. Accessed 22 Dec 2022
European Parliament (2021) Plenary I, June, 2021, Recent cyber-attacks and the EU’s cybersecurity
strategy for the digital decade. Available at https://www.europarl.europa.eu Accessed
22 Dec 2022
European Parliament (2022) Plenary Session, November I, 2022. Available at https://
epthinktank.eu. Accessed 22 Dec 2022
Keppo J, Kim MJ, Zhang X (2021) Learning Manipulation Through Information Dissemination.
https://doi.org/10.2139/ssrn.3465030
Khrustalev V, Masolletti M (2021) The problem of fake news and disinformation in media. https://
doi.org/10.2139/ssrn.3980706
Lamba A, Singh S, Balvinder S et al (2017) Mitigating cyber security threats of industrial control
systems. In: International Journal for Technological Research in Engineering, 3rd International
Conference on Emerging Technologies in Engineering, Biomedical, Medical and Science, pp
31–34. https://doi.org/10.2139/ssrn.3492685
McGaughey E (2018) Could Brexit be void? King’s Law J 29:331–343
Miadzvetskaya Y, Wessel RA (2022) The externalisation of the EU’s cybersecurity regime: the
cyber diplomacy toolbox. Eur Pap 7:413–438
Mittal S (2015) The issues in cyber-defence and cyber-forensics of the SCADA systems. Indian
Police J 62:29–41
Ng LHX, Taeihagh A (2021) How does fake news spread? Understanding pathways of disinfor-
mation spread through APIs. Policy Inter 13:560–585
Osula AM, Kasper A, Kajand A (2022) EU common position on international law and cyberspace.
Masaryk Univ J Law Technol 16:89–123
Reuters (2022) Russia’s Prigozhin admits interfering in U.S. elections. Available at https://www.
reuters.com. Accessed 22 Dec 2022
Richardson JC (2011) Stuxnet as cyberwarfare: applying the law of war to the virtual battlefield.
John Marshall J Infor Technol Priv Law 29:1–27
Schmitt MN (ed) (2017) Tallinn manual 2.0 on the international law applicable to cyber operations.
Cambridge University Press, Cambridge
Simonton M (2017) Classical Greek oligarchy: a political history. Princeton University Press,
Princeton
Trautman LJ, Ormerod P (2018) Industrial cyber vulnerabilities: lessons from Stuxnet and the
internet of things. Univ Miami Law Rev 72:761–826
United States Department of Justice (2018) Justice News, Grand Jury Indicts 12 Russian Intelli-
gence Officers for Hacking Offenses Related to the 2016 Election. Available at https://www.
justice.gov. Accessed 22 Dec 2022
von der Leyen U (2021) 2021 State of the Union address, as European Commission President.
Available at https://ec.europa.eu. Accessed 22 Dec 2022
Wu T (2019) Blind spot: the attention economy and the law. Antitrust Law J 82:771–806
ENISA (2021) Main incidents in the EU and worldwide – ENISA Threat Landscape. Available at
https://www.enisa.europa.eu. Accessed 22 Dec 2022
ENISA (2022) ENISA Threat Landscape. Available at https://www.enisa.europa.eu. Accessed
22 Dec 2022
EUROPOL (2021) Internet Organised Crime Threat Assessment 2021 (IOCTA 2021). Available at
https://www.europol.europa.eu. Accessed 22 Dec 2022
Group of Governmental Experts on Advancing Responsible State Behaviour in Cyberspace in the
Context of International Security (2021) Report of the Group of Governmental Experts on
Advancing responsible State behaviour in cyberspace in the context of international security.
Available at https://dig.watch. Accessed 22 Dec 2022
House of Commons Digital, Culture, Media and Sport Committee on ‘Fake News’ (2018) Disin-
formation and ‘fake news’: Interim Report, DCMS Committee, Fifth Report of Session
2017–19, HC 363, 29 July 2018
IAB (2022) Internet Advertising Revenue Report. Available at https://www.iab.com. Accessed
22 Dec 2022
Microsoft (2022) Microsoft Digital Defense Report 2022. Available at https://www.microsoft.com.
Accessed 22 Dec 2022
OECD (2022) OCDE Internet Data. Available at https://data.oecd.org. Accessed 22 Dec 2022
Unites States Department of Justice (2019) Report on the investigation into Russian interference in
the 2016 presidential election. Available at https://www.justice.gov. Accessed 22 Dec 2022
World Economic Forum (2022) Global Cybersecurity Outlook 2022 – Insight Report. Available at
https://www3.weforum.org. Accessed 22 Dec 2022
Sofia de Vasconcelos Casimiro is a Professor at the Faculty of Law of the University of Lisbon,
Portugal and at the Portuguese Military Academy. She works with the Portuguese Army and the
Ministry of National Defence in cybersecurity and cyberdefence matters and she is a member of the
Implementation Working Group of the Cyber Academy and Innovation Hub, as well as a member of
the Board of Directors of the Portuguese Society for Intellectual Property Law.
Part IV
People
Data Protection Litigation System Under
the GDPR
Abstract The data protection litigation system is based on three main defence
mechanisms: (i) lodging a complaint with a supervisory authority; (ii) bringing an
administrative action, with petitioning or annulment purposes; or (iii) bringing a civil
action, with express compensatory purposes. Some domestic laws complement this
triad with the possibility of litigating before criminal courts. It is an eclectic solution that offers potentially injured parties a multifaceted set of remedies. In the present
chapter we will analyse the general features of the GDPR litigation system, in
particular Articles 77, 78, 79 and 82 of the GDPR. We will also dedicate a section
to class actions, as provided for in Article 80 of the GDPR. We will take a practical
approach, based on the following questions: (i) who can bring actions under each
provision; (ii) against whom; (iii) to whom; and (iv) in what situations?
1 Introduction
Data protection litigation includes all the judicial and extrajudicial mechanisms that
the legal system makes available to natural and legal persons within the framework
of the General Data Protection Regulation (GDPR),1 national implementing laws2
and other related European and national legislation.
Data protection Law has historically been particularly concerned with its practical
application and the need for judicial protection of the various interests involved,
particularly the interests of data subjects. The original OECD Guidelines on the
Protection of Privacy and Transborder Flows of Personal Data recommended that
1 Regulation (EU) 2016/679 of the European Parliament and of the Council, of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
2 Wagner and Benecke (2016), p. 353; Král (2008), p. 243.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 235
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_13
236 A. B. Menezes Cordeiro
Under Article 77(1) of the GDPR, every data subject has the right to lodge a
complaint with a supervisory authority if they consider that the processing of
personal data relating to them infringes the GDPR, the applicable national
implementing laws and other related legislation.9 This right is reinforced by Article
3 Guideline 19. https://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm. This concern has been maintained in the current version from 2013: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0188.
4 Article 10. https://rm.coe.int/1680078b37.
5 This cooperation is recognised by the entities themselves: Council of Europe (1981), p. 9 ff. and OECD (2001), p. 26.
6 Case C-319/20 Meta Platforms Ireland (ECLI:EU:C:2022:322), para 54 ff.
7 Recital 149 of the GDPR: “Member States should be able to lay down the rules on criminal penalties for infringements of this Regulation, including for infringements of national rules adopted pursuant to and within the limits of this Regulation”.
8 Kuner (2010), p. 181.
9 Recital 141(1) of the GDPR.
10 Pötters and Werkmeister (2018), para 10.
11 Boehm (2019a), para 8.
12 Nemitz (2018a), para 5.
13 Recital 141(1) of the GDPR.
14 Bergt (2020a), para 5.
15 Körffer (2018), para 2.
16 Case C-362/14 Schrems I (ECLI:EU:C:2015:650), para 63.
Article 78(1) of the GDPR allows any person, whatever their nature—natural or
legal, private law or public law person—to appeal against a decision concerning
them by a supervisory authority. The following may therefore appeal: data
17 Körffer (2018), para 2.
18 Article 74(4) of the GDPR.
19 Recital 141(4) of the GDPR.
20 Case C-311/18 Schrems II (ECLI:EU:C:2020:559), para 157.
21 Article 57(1)(f) of the GDPR.
22 Case C-311/18 Schrems II (ECLI:EU:C:2020:559), para 158.
23 Article 78(2) of the GDPR.
24 Article 4(1) of the GDPR.
25 Article 4(7) of the GDPR.
26 Article 4(8) of the GDPR.
27 Article 4(10) of the GDPR.
28 Recital 129(8) of the GDPR.
29 Article 58 of the GDPR.
30 Recital 143(9) of the GDPR. Case C-311/18 Schrems II (ECLI:EU:C:2020:559), para 110.
31 Article 77 of the GDPR.
32 Recital 143(6) of the GDPR.
33 Case C-645/19 Facebook v Gegevensbeschermingsautoriteit (ECLI:EU:C:2021:5), para 104.
34 Recital 143(7) of the GDPR.
35 Recital 143(8) of the GDPR.
36 Recital 143(11) of the GDPR.
37 Article 70 ff. of the GDPR.
38 Recital 143(1)-(3) of the GDPR.
or decisions of the Board on which the decisions of the national authorities appealed
against are based.39 From a procedural point of view, the GDPR only clarifies that
the addressee of decisions of the Board has two months from their publication on the
Board's website to lodge an appeal.40 Everything else is governed by the general
regime of the CJEU.
Article 79(1) of the GDPR states that any data subject “shall have the right to an
effective judicial remedy where he or she considers that his or her rights under this
Regulation have been infringed as a result of the processing of his or her personal
data in non-compliance with this Regulation”. Article 79(1) is an expression of
the general principle set out in Article 47(1) of the Charter of Fundamental Rights of
the European Union: “Everyone whose rights and freedoms guaranteed by the law of
the Union are violated has the right to an effective remedy before a tribunal in
compliance with the conditions laid down in this Article”.41 Article 79(2) has a different nature and purpose, in that it establishes certain general procedural rules regarding the bringing of actions against controllers and processors.
Article 79 of the GDPR allows data subjects to appeal directly to courts, regard-
less of whether they have lodged a complaint with the supervisory authority. The
multiple possibilities provided for by the litigation system implemented may lead to
the production of contradictory decisions.42 This is not, however, an intrinsic
weakness of this system, but rather a natural consequence of the independence of
the supervisory bodies and the courts. The direct appeal to the courts will only be
justified, from the point of view of the data subjects, in very straight forward
situations. In all others, it would be preferable to complain first to the national
supervisory authorities and await the results of the investigations carried out in
that context, before taking any legal action.43 This view is reinforced by the fact
that, in principle, complaints incur no costs for the complainant.44
Comparing the content of Articles 79(1) and 82(1) of the GDPR it seems that the
scope of application of the former is narrower—limited to infringement of rights in
the processing of personal data—than that of the latter—extending to all
39 Recital 143(12) of the GDPR.
40 Recital 143(3) of the GDPR.
41 Joined Cases C-245/19 and C-246/19 État luxembourgeois v B and others (ECLI:EU:C:2020:795), para 64; Joined Cases C-511/18, C-512/18 and C-520/18 La Quadrature du Net (ECLI:EU:C:2020:791), para 190.
42 Nemitz (2018b), para 8.
43 Nemitz (2018b), para 2.
44 Article 57(4) of the GDPR.
Under Article 82(1) of the GDPR, “any person who has suffered material or
non-material damage as a result of an infringement of this Regulation—national
implementing laws and other related European and national legislation52—shall
have the right to receive compensation from the controller or processor for the
damage suffered”.
The use of the expression “any person who has suffered damage” has given rise to
heated doctrinal debate.53 In abstract terms, three hypotheses are conceivable: Article 82 may be invoked (i) by all legal and natural persons; (ii) only by natural persons; or (iii) only by the data subjects concerned. We find no decisive argument
45 Boehm (2019b), para 10.
46 Bergt (2020b), para 6.
47 Martini (2018), para 1.
48 Bergt (2020b), para 10.
49 On the concept of establishment: Recital 22 of the GDPR. Case C-230/14 Weltimmo (ECLI:EU:C:2015:639), para 26; Case C-191/15 Verein für Konsumenteninformation (ECLI:EU:C:2016:612), para 77.
50 Nemitz (2018b), para 1.
51 Nemitz (2018b), para 4.
52 Recital 146(5) of the GDPR.
53 Menezes Cordeiro (2019), p. 495.
54 Recital 146(5).
55 Joined Cases C-46/93 and C-48/93 Brasserie du pêcheur (ECLI:EU:C:1996:79), para 90.
56 Case C-271/91 Marshall (ECLI:EU:C:1993:335), para 26; Case C-407/14 Arjona Camacho (ECLI:EU:C:2015:831), para 31.
57 Recital 146(3).
58 Recital 146(3).
59 Case C-228/96 Aprile (ECLI:EU:C:1998:544), para 18; Case C-69/14 Târșia (ECLI:EU:C:2015:662), para 27; Case C-494/16 Santoro (ECLI:EU:C:2018:166), para 30.
60 Nemitz (2018c), para 12.
61 Kreße (2018b), para 5.
62 Bergt (2020c), para 19.
63 Nemitz (2018c), para 13.
64 Wybitul et al. (2018), p. 114.
65 Case C-557/12 Kone (ECLI:EU:C:2014:1317), para 24 ff.
66 Frenzel (2018), para 13.
67 Menezes Cordeiro (2019), pp. 497–498.
68 Van Alsenoy (2016), p. 288; Menezes Cordeiro (2019), p. 498.
69 Recital 146(7) of the GDPR.
can be invoked by national courts. Article 82(5) of the GDPR gives both controllers
and processors a right of recourse against other actors.
Under Article 82(6) of the GDPR—referring to Article 79(2)—these actions can
be brought (i) before the courts of the Member State where the controller or
processor is established; or (ii) before the courts of the Member State where the
data subject has their habitual residence.
Article 80 of the GDPR allows claims and legal actions arising from infringements
of the GDPR to be brought by non-profit bodies, organisations or associations. The
model implemented makes it possible, on the one hand, to address the potential inertia of supervisory authorities70 and, on the other hand, to counterbalance the existing inequality, at all levels, between data subjects and controllers and processors: the knowledge, influence, information-gathering ability and monetary capacity held by these specialised entities allow them, in principle, to litigate with less risk and more efficiently.71 Article 80 of the GDPR is a useful mechanism for confronting large
technology conglomerates.72 The submission of actions, claims and appeals referred
to in Article 80 of the GDPR is limited to (i) non-profit; (ii) bodies, organisations or
associations; (iii) properly constituted under national laws; (iv) which have statutory
objectives which are in the public interest; and (v) whose activity covers the
protection of the rights and freedoms of data subjects. It is for the supervisory
authorities and national courts to verify, on a case-by-case basis, the fulfilment of
these legal requirements.73
The nature, form or internal structure assumed by these organisations is not relevant.74 However, the legal text allows the exclusion of natural persons from
the scope of application of the provision.75 The exclusion of profit-making entities
was only introduced during tripartite negotiations. This was intended to avoid the
development of a “commercial claims culture in the field of data protection”.76 This
has no effect on the ability of such institutions to raise funds, including on a
professional basis.77 Otherwise, it would not even be possible for them to litigate
before national supervisory authorities and courts. The organisations mentioned can
only complain and litigate to the extent that they are constituted under the terms of
70 Gierschmann (2016), p. 53.
71 Nemitz (2018d), para 1.
72 Dieterich (2016), p. 265.
73 Nemitz (2018d), para 6.
74 Kreße (2018a), para 4.
75 Boehm (2019c), para 7.
76 Council, 5419/1/16 REV 1 ADD 1, 8 April 2016, 31.
77 Kreße (2018a), para 6.
the domestic law of Member States.78 The expression “statutory” is used in a broad
sense: Article 80 of the GDPR does not require that the pursuit of public interests79
be stated in the statutes of the institutions—there need not even be statutes—but the
existence of evidence to prove it, in particular binding documents, is sufficient.80
The pursuit of the protection of the rights and freedoms of data subjects need not be
exclusive or even predominant. Paradigmatic examples of institutions that fulfil these requirements are consumer protection associations and trade unions.81
The bringing of actions or the lodging of claims under Article 80(1) of the GDPR
always presupposes the agreement of the data subject allegedly concerned. It does
not, therefore, constitute an effective class action. These associations may complain
to supervisory authorities (Article 77 of the GDPR), bring actions against supervi-
sory authorities (Article 78 of the GDPR) and bring actions against controllers and
processors (Article 79 of the GDPR). The right to receive compensation pursuant to
Article 82 of the GDPR may be exercised only to the extent that Member State
national law provides for this possibility.
The bringing of actions or the lodging of claims under Article 80(2) of the GDPR
does not presuppose the consent of the data subjects allegedly concerned. It is, in this
sense, an effective popular action. Associations can in this sense complain to
supervisory authorities (Article 77 of the GDPR), bring actions against supervisory
authorities (Article 78 of the GDPR) and bring actions against controllers and
processors (Article 79 of the GDPR). This provision, unlike paragraph 1, contains no opening clause allowing Member States to legislate on the exercise of the right to compensation.82
7 Conclusion
The data protection litigation system of the GDPR provides potentially injured
parties and, in particular, data subjects, with a varied and multifaceted set of defence
mechanisms. This system must, naturally, be combined with the other mechanisms
enshrined in the domestic laws of the Member States, regardless of their public or
private nature.
78 Recital 142(1).
79 Kreße (2018a), para 7.
80 Boehm (2019c), para 7.
81 Case C-319/20 Meta Platforms (ECLI:EU:C:2022:322), para 65.
82 Recital 143(3).
The analysis of Articles 77, 78, 79 and 82 of the GDPR, following the four
questions initially identified—(i) who can bring actions under each provision;
(ii) against whom; (iii) to whom; and (iv) in what situations?—allows us to identify
some internal inconsistencies and raises important interpretative doubts, especially
when comparing the legislative solutions found for each of these defence
mechanisms:
References
Bergt M (2020a) Article 77 of the GDPR. In: Kühling J, Buchner B (eds) Datenschutz-
Grundverordnung, Bundesdatenschutzgesetz Kommentar, 3rd edn. Beck, Munich
Bergt M (2020b) Article 79 of the GDPR. In: Kühling J, Buchner B (eds) Datenschutz-
Grundverordnung, Bundesdatenschutzgesetz Kommentar, 3rd edn. Beck, Munich
Bergt M (2020c) Article 82 of the GDPR. In: Kühling J, Buchner B (eds) Datenschutz-
Grundverordnung, Bundesdatenschutzgesetz Kommentar, 3rd edn. Beck, Munich
Boehm F (2019a) Article 77 of the GDPR. In: Simitis S, Hornung G, Spiecker I (eds)
Datenschutzrecht DSGVO mit BDSG Großkommentar. Nomos, Baden-Baden
Boehm F (2019b) Article 79 of the GDPR. In: Simitis S, Hornung G, Spiecker I (eds)
Datenschutzrecht DSGVO mit BDSG Großkommentar. Nomos, Baden-Baden
Boehm F (2019c) Article 80 of the GDPR. In: Simitis S, Hornung G, Spiecker I (eds)
Datenschutzrecht DSGVO mit BDSG Großkommentar. Nomos, Baden-Baden
Council of Europe (1981) Explanatory Report on the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. Strasbourg
Dieterich T (2016) Rechtsdurchsetzungsmöglichkeiten der DS-GVO – Einheitlicher Rechtsrahmen
führt nicht zwangsläufig zu einheitlicher Rechtsanwendung. ZD 6:260–266
Frenzel EM (2018) Article 82 of the GDPR. In: Paal BP, Pauly DA (eds) Datenschutz-
Grundverordnung – Bundesdatenschutzgesetz Kompakt-Kommentare, 2nd edn. Beck, Munich
Gierschmann S (2016) Was “bringt” deutschen Unternehmen die DS-GVO? – Mehr Pflichten, aber
die Rechtsunsicherheit bleibt. ZD 6:51–55
Körffer B (2018) Article 77 of the GDPR. In: Paal BP, Pauly DA (eds) Datenschutz-Grundverordnung – Bundesdatenschutzgesetz Kompakt-Kommentare, 2nd edn. Beck, Munich
Král R (2008) National normative implementation of EC regulation: an exceptional or rather
common matter. EL Rev 2(33):243–256
Kreße B (2018a) Article 80 of the GDPR. In: Sydow G (ed) Europäische
Datenschutzgrundverordnung Handkommentar, 2nd edn. Nomos, Baden-Baden
Kreße B (2018b) Article 82 of the GDPR. In: Sydow G (ed) Europäische
Datenschutzgrundverordnung Handkommentar, 2nd edn. Nomos, Baden-Baden
Kuner C (2010) Data protection law and international Jurisdiction on the Internet (Part 1). Int J Law
Inf Technol 2(18):176–193
Martini M (2018) Article 79 of the GDPR. In: Paal BP, Pauly DA (eds) Datenschutz-
Grundverordnung – Bundesdatenschutzgesetz Kompakt-Kommentare, 2nd edn. Beck, Munich
Menezes Cordeiro AB (2019) Civil liability for processing of personal data in the GDPR. EDPL 5:
492–499
Nemitz P (2018a) Article 77 of the GDPR. In: Ehmann E, Selmayr M (eds) Datenschutz-
Grundverordnung Kommentar, 2nd edn. Beck, Munich
Nemitz P (2018b) Article 79 of the GDPR. In: Ehmann E, Selmayr M (eds) Datenschutz-
Grundverordnung Kommentar, 2nd edn. Beck, Munich
Nemitz P (2018c) Article 82 of the GDPR. In: Ehmann E, Selmayr M (eds) Datenschutz-
Grundverordnung Kommentar, 2nd edn. Beck, Munich
Nemitz P (2018d) Article 80 of the GDPR. In: Ehmann E, Selmayr M (eds) Datenschutz-
Grundverordnung Kommentar, 2nd edn. Beck, Munich
OECD (2001) Guidelines governing the protection of privacy and transborder flows of personal
data. OECD, Paris
Pötters S, Werkmeister C (2018) Article 77 of the GDPR. In: Gola P (ed) Datenschutz-
Grundverordnung Kommentar, 2nd edn. Beck, Munich
Van Alsenoy B (2016) Liability under EU Data Protection Law: From Directive 95/46 to the General Data Protection Regulation. JIPITEC 7:271–288
Wagner J, Benecke A (2016) National legislation within the framework of the GDPR. EDPL 3(2):
353–361
Wybitul T, Haß D, Albrecht JP (2018) Abwehr von Schadensersatzansprüchen nach der
Datenschutz-Grundverordnung. NJW 71:113–118
António Barreto Menezes Cordeiro graduated from the University of Lisbon Law School
(2008), LLM from King’s College London (2009) and PhD from the University of Lisbon Law
School (2014). He is a Professor at the University of Lisbon Law School, since 2013, where he has
been teaching in the areas of Civil Law, Banking Law, Securities Law, Comparative Law and
Corporate Law, and at the European University (Laureate Group), since 2015, where he has been
teaching in the areas of Civil Law. He has over 40 publications, including Da simulação no Direito
civil (2014), Do trust no Direito civil (2014), Securities Law, Vol. I (2015), Handbook of Securities
Law (2016) and English Contract Law, Vol. I (2017). Founder and member of the editorial board of
the Journal of Civil Law (2014) and member of the editorial board of the Journal of Company Law
(2014). Vice-President of the Consumer Law Institute, since 2016, and Vice-President of the
Brazilian Law Institute in 2016 and 2017. Arbitrator and Legal Consultant.
R2D: The Right to Disconnect from Work
Abstract This chapter intends to describe the state of the art regarding the right to
disconnect from the international and EU labour law perspectives, emphasizing the
2020 Agreement on Digitalisation and the 2021 EU Parliament Directive Proposal,
as well as examining recent examples of domestic legislation and initiatives at the
company level. We underline the several difficulties involved in implementing the right to disconnect and in extending it to all employees, highlighting challenges and drawing attention to certain issues. We discuss the demand for global legal protection ensuring and encompassing not only the right but also the duty to disconnect, and
we propose solutions, aiming to provide a set of ideas for further and more detailed
legal analysis and development.
1 Introduction1
The employee’s R2D from work has not yet been formally assumed or conceptual-
ized, while disconnection raises a myriad of sometimes conflicting issues.
Amidst the unremitting progress of new information and communication technologies (NICT) in the world of work and the continuous status of hyperconnection brought about by the plethora of professional digital devices, four main ideas and concepts stand out as particularly relevant.
1 We will use the following abbreviations: AI, artificial intelligence; CDFUE, Charter of Fundamental Rights of the European Union; CJUE, Court of Justice of the European Union; D2D, Duty to Disconnect; DSA, Digital Services Act; DMA, Digital Markets Act; DGAPE, Directorate-General for Administration and Public Employment; EC, European Commission; EP, European Parliament; EPSR, European Pillar of Social Rights; ESP, European Social Partners; EU, European Union; ICT, information and communication technologies; ILO, International Labour Organization; GDPR, Regulation (EU) 2016/679, of 27-04; MEP, Members of the European Parliament; MS, Member States; OSH, Occupational Safety and Health; R2D, right to disconnect; SP, Social Partners; WHO, World Health Organization; TFUE, Treaty on the Functioning of the European Union.
I. V. Borges (✉)
Faculty of Law of the University of Lisbon, Lisbon, Portugal
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 249
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_14
The right to disconnect, enabling the employee to refrain from working outside
working hours. The right to be disconnected, as the right to switch off and to stay
disconnected. The employer’s duty not to contact, to refrain from contacting the
employee, also extended to co-workers. And the employer’s duty to disconnect,
understood as an obligation to ensure that employees are not reachable and do not
work during rest periods.
These approaches can be criticized as being unnecessary, in that they are not new
employees’ rights but merely reinterpret the existing working time provisions and
employee rest protection, while there are already sufficient provisions to entitle
workers to disconnect, so that no further legislation is necessary.2
Further to this criticism, even though legal systems do not expressly provide for a
R2D, the existing rules still enable the same ultimate result of respect for the
worker’s personal time: labour law regulates working time, designates the relevant
situations, establishes maximum limits, provides for and sets rest times and their
minimum duration, regulates overtime work (what it is, when it can be demanded or refused, and how it is remunerated) and imposes registration of the work performed, all guided by general principles of protection of OSH and of a balance between professional life and family and personal life.
However, under the EP Resolution of 21 January 2021, to be dealt with more fully below, the employee’s R2D is understood specifically for all workers using digital
work equipment: 1. The right to routinely not perform work outside the working
period, which includes the rights to be able to: (a) disengage from work during
non-work hours; (b) refrain from engaging in work-related ICT, such as emails,
telephone calls, WhatsApp, text messages or other messages; (c) switch off their
technological devices after work, or if they are turned on, not answer or respond to
them; 2. The right to not be penalised or face consequences for refusing to attend to
work matters outside working hours; 3. The duty to respect another person’s R2D by
not routinely emailing or calling them outside working hours.3
Doubts arise as to whether this Resolution provides for a distinct and truly
autonomous employee’s R2D, or whether it merely elaborates an existing field of
legislation. In view of its more detailed definitions, however, it can no longer be
regarded as superfluous in that it also implies the employee’s right to ‘say no’, to
‘switch off’, to ‘turn off’ or to ‘hang up’, regarded as an original, autonomous and
essential new employee’s right, distinct from the general right to rest. A right
engendered by the new world of work, forged by unremitting electronic and digital
connection, designed for a different need, and an example of the enhanced legal
protection that repose and family time demand in the modern digital era.
2 As pointed out by Ramalho (2021), pp. 424–425, it is not necessary to expressly enshrine the R2D in the law, because this right is already implicit in the limits legally imposed on working time and also stems from two general guiding principles of working time regimes: the principle of compatibility of working time with the right to rest and protection of the worker’s health; and the principle of balance between professional, private and family life.
3 Cfr. recitals 16, 20, 10; Articles 1 and 2; Eurofound (2019, 2021g, 2022f).
This is the essential thesis of the following analysis and the question that
prompted our final remarks.
The concept of the R2D, and the debate around whether it is needed as such, has arisen in the context of increasing digitalisation of working life and the inherent status of constant connection. Over the last two decades, ICTs have become key tools and
have significantly changed people’s lives and the world of work, making it possible
to work anytime, anywhere and in multiple sectors. Working life is undergoing a
reconfiguration of its traditional foundations. Each day and to an ever-increasing
extent, we find flexible working time patterns and new flexible models of work,
which enable incessant connectivity facilitated by ICTs, blur the boundaries between
work and private life and break through the traditional regular office-based refer-
ences of time and place and of when and where work is conducted.
This was particularly significant during the pandemic, effectively a catalyst of social change, when the expansion of telework encouraged governments to modify regulations or undertake new legislative initiatives, including some related to the R2D,4 furthering the interrelatedness of the debates on the R2D and on telework.5 Nonetheless, the issues raised by the R2D extend to all types of work
developed, totally or partially, through or based on ICT, arising whenever the work
connection can be activated by the employer during non-work and rest time, and
assuming greater practical importance in cases where work is performed beyond the
face-to-face context, that is, in situations of working at a distance, where the use of
digital tools is common.6
Eurofound research has shown that ICT-based flexible work has both advantages and disadvantages from a holistic perspective of work and working conditions.7
The positive effects include shortening of commuting time, greater working time
autonomy, ensuring business continuity, opportunities for improving work-life
balance, and higher productivity.
The negative consequences of over-connection give rise to a number of significant ethical, legal and employment-related challenges, comprising: threats to the
right to privacy posed by technology-enabled control and surveillance through
software and AI tools, remote real-time monitoring and time-tracking of work;
new interferences linking work and personal life; the progressive loss of clear-cut
distinctions between working and non-working time, blending private and
4 Cfr. Eurofound (2021b, 2022c).
5 Cfr. Eurofound (2022a); European Parliament Research Service (2020).
6 Cfr. Commissioner Schmit’s (2022).
7 Cfr. Eurofound (2020a, 2021a, 2022b); Ramalho (2022); Moreira (2019), pp. 129–152.
8 Cfr. Eurofound (2020b, e, g, 2021d); Ramalho (2014, 2018); Martins (2021), pp. 52–54.
9 Nowadays, as Ramalho (2021), p. 423, remarks, because workers have a smartphone, a laptop or an iPad, they can be contacted at all times by their employer; and, as this equipment is often provided by the employer, the latter considers such contact to be legitimate and, for the same reason, the worker has difficulty in not answering.
10 Cfr. Eurofound (2020d, f, 2021f).
11 However, sometimes it is the workers themselves who, obsessed with performance, are to blame for technological “infobesity”: they send emails copied to everyone, often classifying them as urgent (to seem important), forwarding them (to pass on the information) and systematically using “reply all” to show activism and participation. Cfr. Moreira (2019), pp. 129–152.
and to respond to professional communications, long after the working day has
ended, and even when not entirely necessary.12
These findings reinforce the growing perception, among governments and SP, that specific risks must be addressed as digital technology transforms the fundamental configuration of the temporal, spatial and functional limits of work.
Thus, the presumption that workers will be constantly available through online or mobile interaction is now also assessed in terms of the scale of psychological risks to workers’ health and wellbeing (physical and mental) and to work-life balance.13
The 2021 EP Resolution clearly illustrates the vital importance of the R2D to employees’ legal protection, appealing to the fundamental need for disconnection to be addressed in a structured and systematized way across national borders. It offered a rapid response, recognizing that the advantages of connectivity must be weighed against the associated ethical, legal and labour risks.
There is no specific legal provision of International Law addressing the R2D, but
there are fundamental principles and regulations on working time organization and
on rest and privacy, fair working conditions, OSH, and balance between work and
private life, which build essential foundations so that R2D can be legitimately
claimed on a legal basis.
Regarding the ILO, we highlight the 1919 Hours of Work (Industry) Convention
(No. 1); the 1930 Hours of Work (Commerce and Offices) Convention (No. 30); the
1981 Collective Bargaining Recommendation (No. 163); the 1981 Convention on
Workers with Family Responsibilities (No. 156); and the 2019 Centenary Declara-
tion on the Future of Work.
Concerning the Council of Europe’s Revised European Social Charter of 3 May 1996, we emphasize Article 2, on the right to just working conditions,
including reasonable working hours and rest periods; Article 3, on the right to safe
and healthy working conditions; Article 6, on the right to collective bargaining; and
Article 27, on the protection of workers with family responsibilities.
As for the Universal Declaration of Human Rights, we draw attention to Article
24, which states that everyone has the right to rest and leisure, including the
reasonable limitation of working time and periodic holidays with pay.
12 In Austria, in 2020, 50% of teleworkers reported being available to respond to their employer during resting time. In Italy and Slovenia, in 2021, 25% stated having been contacted daily. In Greece, in 2021, 33% described being compelled to be contactable outside working hours every day. Cfr. Eurofound (2022d, e).
13 Cfr. EU strategic framework on OSH 2021–2027. OSH in a changing world of work, Communication from the EC, COM (2021) 323 final, Brussels, 28-06-2021.
14 Cfr. WHO and the ILO (2021), pp. 7, 13.
rest breaks, and annual leave, and regulation of night work, shift work and work
patterns.
Additionally significant are the EU initiatives on digital rights: the EP 2019/2111 (INI) Resolution, on social rights for digital platform workers; the EU DSA and DMA, in force since 16 November 2022, reforming the digital space; the 2021 proposal for a Directive on improving working conditions in platform work; and the 14 November 2022 Political Agreement on the signature of the EU Declaration on Digital Rights and Principles for the Digital Decade, Chapter II: “Everyone
has the right to fair, just, healthy and safe working conditions and appropriate
protection in the digital environment as in the physical workplace, regardless of
their employment status, modality or duration. We commit to ensuring that everyone
shall be able to disconnect and benefit from safeguards for work-life balance in a
digital environment”.
Concerning the EU social dialogue, two adopted instruments are the most
important: the 2002 ESP Framework Agreement on Telework, and the 2020 ESP
Framework Agreement on Digitalisation, as we describe below.
On 5 July 2022, following the 2021 Resolution on the R2D, the EU Parliament adopted another Resolution, on protecting mental health in the digital workspace and the health and privacy risks of teleworking, warning about techno-stress and digital exclusion, calling for preventive measures and for a Directive on minimum standards and conditions to ensure all workers an effective R2D and to regulate the use of digital work tools, and appealing for an EU Mental Health Strategy, a European Care Strategy and national action plans.
Fundamental to the interpretation and application of EU law, the CJEU has
clarified certain major aspects of the Working Time Directive, while not referring
to R2D.
Directive 2003/88/EC provides for the right to uninterrupted daily, weekly, and annual rest periods, during which the worker should not be contacted, but there is no explicit EU provision enforcing a right to remain unavailable outside working hours.
In particular, we underline the settled CJEU case-law that 'on-call time', during which a worker is required to be physically present at a place specified by the employer,
is to be regarded as “wholly working time [. . .], irrespective of the fact that, during
periods of on-call time, the person concerned is not continuously carrying on any
professional activity”,15 that ‘standby time’, which a worker is obliged to spend at
home, while being available to the employer, is to be considered working time;16 that
'on-call time' must be fully counted as working time;17 that Directives 89/391/EEC
and 2003/88/EC require employers to set up a system enabling the duration of time
worked each day by each worker to be measured and that such a system be
15 Cfr. Pfeiffer and others, 05-10-2004, C-397/01 to C-403/01, §93.
16 Cfr. Matzak, 21-02-2018, C-518/15, §66.
17 Cfr. SIMAP, Sindicato de médicos de asistencia pública, 2000, C-303/98; Kiel v Jaeger, 2003, C-151/02.
“objective, reliable and accessible”;18 that minimum rest periods are “rules of
Community social law of particular importance from which every worker must
benefit as a minimum requirement necessary to ensure protection of his safety and
health”;19 and that the criteria established in its settled case-law are to be applied to assess whether someone has the legal status of a worker: whether “a person performs services for and under the direction of another person, in return for which he receives remuneration” is determinative, and “the legal characterisation under national law and the form of that relationship, as well as the nature of the legal relationship between those two persons [are not] decisive in that regard”.20
18 Cfr. Federación de Servicios de Comisiones Obreras, 14-05-2019, C-55/18, §60.
19 Cfr. Commission v United Kingdom, 07-09-2006, C-484/04, §36.
20 Cfr. Betriebsrat der Ruhrlandklinik, 17-11-2016, C-216/15, §27.
the work, focusing on prevention as the highest priority, which demands a system of
defined rights, responsibilities and duties, informed by the principle of prevention.
As clear measures, procedures and actions to be included, the 2020 Agreement
lists the following: (a) training and awareness raising; (b) respect of working time
and teleworking rules; (c) ensuring compliance, providing policies, guidance and information, including on the use of digital tools for private purposes during working
time; (e) commitment from management to create a culture that avoids out of hours
contact; (f) work organisation, workload, work processes and number of staff,
identified and evaluated jointly; (g) achievement of organisational objectives not
requiring out of hours connection; (h) the worker is not obliged to be contactable;
(i) appropriate compensation for any extra time worked; (j) alert and support pro-
cedures in a no-blame culture to find solutions and to guard against detriment for
workers for not being contactable; (l) regular exchanges between managers and
workers and/or their representatives; (m) alert and support; (n) prevention of isola-
tion at work.
The 2020 Agreement commits the signatory parties and their members to implement the Agreement within three years. After this period, which ends in 2023, we may have more data yielding better insights into the current state of the R2D in the EU.
21 Cfr. recitals H, 9.
22 Cfr. recitals (11), (20), 13, 15, 18.
23 Cfr. recitals 16, (10), (11).
24 Cfr. recitals 3, (22), 10, 20.
Combining these provisions, we can conclude that the proposal imposes obligations on two subjects: the employers—obliged to ensure the workers’ R2D, not to require workers to be directly or indirectly available or reachable outside their working time, and to implement all the conditions above; but also the co-workers—obliged to refrain from contacting their colleagues outside working hours for work purposes.25
Article 4(1), on “measures implementing the R2D”, calls on the MS to ensure the
establishment of arrangements, after consulting the ESP, to enable workers to
exercise their R2D and that employers implement it in a fair and transparent manner,
providing, at least: practical mechanisms for deactivating digital tools for work
purposes, including any work-related monitoring tools; the system for measuring
working time; and the awareness-raising measures, including in-work training, to be
taken by employers.
Regarding the criteria for any derogation by employers from their requirement to
implement a worker’s R2D, and for determining how compensation for work
performed outside working time is to be calculated in accordance with Directives,
national law and practices, Article 4 states that these must be “provided for only in
exceptional circumstances, such as force majeure or other emergencies, and subject
to the employer providing each worker concerned with reasons in writing, substan-
tiating the need for the derogation on every occasion on which the derogation is
invoked”, stressing the overall goal of protecting workers’ health and safety.26
With reference to collective agreements, Article 4(2) enables MS to entrust the ESP with concluding them, in accordance with national law and practice, providing for or complementing the working conditions referred to above, while ensuring that workers not covered are also protected. The importance of the ESP is underlined for the effective implementation and enforcement of the R2D. It is also emphasized that the work already carried out in this respect should therefore be taken into account; that MS should lay down the arrangements for the implementation and proper enforcement of the R2D with the effective involvement of the ESP, respecting their autonomy and supporting them in establishing those collective agreements; and also through the national labour inspection authorities.27
Article 5, on “Protection against adverse treatment”, calls on the MS to ensure
that prohibitions are in place of any discrimination, less favourable treatment,
dismissal, and other adverse measures by employers on the ground that workers
have exercised or sought to exercise their R2D; that employers protect workers and
workers’ representatives, from any retaliatory measure or consequences resulting
from a complaint or from any proceedings initiated with the aim of enforcing
compliance with the rights provided for in this proposal; that when workers consider
they have been dismissed or subject to other adverse treatment on the grounds that
they exercised or sought to exercise their R2D, before a court or other competent
25 Referring to the horizontal strand of R2D as applied to colleagues, cfr. Moreira (2019), p. 173.
26 Cfr. recital (25).
27 Cfr. recitals 21, (23), (24), (28), 5, (21).
28 Cfr. recitals 22, (26), (27), (29).
29 Cfr. recitals 12, 23, 24.
30 Cfr. recital (28).
31 Cfr. recitals I, 17.
32 Cfr. recitals 25, 9, 18, 19.
commuting times, and facilitate the management of personal and family obligations,
thus creating a better work-life balance.33
Nevertheless, the EP stressed that this asset must be weighed against the associated ethical and employment-related challenges, legal and labour risks and several disadvantages, such as: (a) the intensification of work and extension of working hours resulting from inducing workers to work outside their working time and from constant connectivity and excessive use of technological devices, which can negatively affect workers’ fundamental rights, their fair working conditions, their work-life balance, and their physical and mental health and well-being, with great impact on OSH and psychosocial risks; (b) the blurring between private life and professional life, in particular for those who care for their household, women and more vulnerable groups; (c) the challenges and complexities of remote work control and monitoring, using increasingly sophisticated and potentially intrusive forms of AI, given the respect owed to the worker’s privacy.34
Unfortunately, we will probably have a long wait for further EU legislative initiatives in this field. The EC committed to follow up with a legislative act after launching a large-scale study to support evidence-based legislative and policy measures, but it is still conducting broad research on the trends, evolution and implications of telework and the R2D, gathering evidence and contributions, and exchanging good practices at EU level.35
MS are reacting differently to the challenges posed by ICT-based flexible work, and even the first European countries to incorporate the R2D in their legislation (France, Italy, Belgium, Spain and Portugal) differ in the way they do so, depending namely on: where the provisions are based, whether in labour law or other legislation; to whom the provisions apply, whether certain workers or companies or the general workforce; the nature of the legal figure, whether a right, a duty, a guarantee, or mandatory negotiations; implementation, whether with or without SP or workers’ representatives; and whether or not penalties are provided for.
Among these countries, trade unions tend to favour legislation as a means to more effective enforcement, while a great number of court cases have indicated the need for legal clarification, for raising awareness of the need to change working time patterns, and for fostering a cultural change towards a healthier organisation of work. A lack of debate on R2D legislation is associated with a low prevalence of
33 Cfr. recitals B, 1, (7), 8, 14, F, G.
34 Cfr. recitals C, D, E, G, 2, 4, 5, 6, (8), (9).
35 Cfr. Commissioner Schmit’s (2022), 15-03.
ICT-based work, the idea that existing legislation is sufficient to allow workers to disconnect, and a preference for an active collective bargaining approach.
7.1 France
36 Cfr. Eurofound (2014).
of digital tools, then it is the employer who will establish the rules, without any direct
sanctions for non-compliance.
7.2 Italy
In Italy, the R2D was initially provided for under Law 81/2017, of 22 May, which approved a set of measures for the protection of non-entrepreneurial self-employment and aimed at encouraging flexible arrangement of the times and places of subordinate work.37
Indeed, its Article 19(1), although without a R2D definition, establishes that “The
agreement on the ‘agile’ working mode is signed in writing for the purposes of
administrative regularity and proof and regulates the execution of the work
performed outside the company premises, also with regard to the forms of exercise
of the managerial power of the employer and the tools used by the worker. The
agreement also identifies worker times of rest as well as technical and organizational
measures necessary to ensure worker disconnection from technological work tools”.
The law, also known as the Smart Working law, aimed to promote and provide
a framework for new forms of remote work and to improve the work-life balance.
However, a sectoral scope of application was stipulated, limiting it to agile labor,
defined as “the way of organizing work by phases, cycles and objectives, without
setting a time and place of work, and based on the possibility of using technological
instruments in the provision of work activity”; a kind of hybrid remote work, similar
to a tertium genus between face-to-face work and remote work or work carried out
at home.
Therefore, Law 81/2017 establishes the R2D, although not as a fully-fledged right, and only for remote or smart workers; it does not apply to the workforce generally.
The most recent legislation on the R2D is Law 61/2021, which recognised that, “without prejudice, for the public employment, the discipline of the institutes of agile work established by national collective agreements, the worker who carries out the activity in agile mode is recognized the [R2D] from the technological instruments and IT platforms, in compliance with any agreements signed by the parties and without prejudice to any agreed availability periods. The exercise of the [R2D], necessary to protect the rest time and the health of the worker, cannot have repercussions on the employment relationship or on remuneration”. While, in the past, the source of the R2D was the employment contract, the new rule has substantial significance as it openly attributes the R2D to the employee.
The National Protocol on Work in Agile Mode, signed on 7 December 2021, revising the pandemic standards, determines that adherence takes place on a voluntary basis and requires the signing of an individual agreement providing for the duration of the smart working period (fixed or indefinite), the alternation between the
37 Cfr. Eurofound (2017b).
face-to-face and smart work, the possible exclusion of some places from work
remotely, and the tools to use (which, unless otherwise agreed, must be provided
by the employer). The agreement must also guarantee the R2D, or a time slot in
which employees are not required to be available for work reasons.
Distinctively, Article 3 provides that “the working day carried out in an agile
mode is characterized by the absence of a precise working time and by the autonomy
in carrying out the service within the scope of the set objectives, as well as in
compliance with the organization of the activities assigned by the manager to
guarantee the operation of the company and the interconnection between the various
company functions” and how the smart working day can “be divided into time
bands” with identification, in any case, of the disconnection range in which the worker will not work. We underline that specific technical and organizational measures must be adopted to ensure the disconnection range.
The smart working R2D does not prevent the employer from sending the worker an e-mail outside of working or availability hours, but it legitimizes the worker in not responding. Nevertheless, these rules apply only to agile working arrangements and do not provide for sanctions.
7.3 Spain
In Spain, the Organic Law 3/2018, of 5 December, on personal data and digital
rights, enshrined the R2D under Article 88: “1. Public workers and employees shall
have the right to digital disconnection in order to guarantee, outside of the legally or
conventionally established working time, respect for their rest time, permits and
vacations, as well as their personal and family privacy. 2. The modalities of
exercising this right will take into account the nature and object of the employment
relationship, will enhance the right to reconcile work activity and personal and
family life and will be subject to the provisions of collective bargaining or, in its
default, as agreed between the company and the workers’ representatives. 3. The
employer, after hearing the representatives of the workers, will prepare an internal
policy aimed at workers, including those who occupy managerial positions, in which
they will define the modalities of exercising the [R2D] and the training and awareness actions for staff on a reasonable use of technological tools that avoids the risk of
computer fatigue. In particular, the right to digital disconnection will be preserved in
the cases of total or partial performance of remote work as well as at the employee’s
home linked to the use of technological tools for work purposes”.38
According to this legislation, both public employees and workers, including
managerial positions, have the R2D outside working hours, in order to respect
their right to rest, family and personal privacy, and the balance between family life
and work life. The R2D must be preserved when the work is carried out remotely,
38 Cfr. Eurofound (2017a); Martín (2021); Montesdeoca Suarez (2022); Trujillo Pons (2021, 2022).
totally and partially, and when the employee uses digital tools. However, for this
right to be exercised, the modalities must be established in internal policies and
within what is agreed between the representatives of the workers and the company.
Law 3/2018 amended Article 20bis of the Spanish Workers' Statute Law, under the heading “Workers’ rights to privacy in relation to the digital environment and to disconnection”, in force from 7 December 2018, which now states that
“Workers have the right to privacy in the use of digital devices made available to
them by the employer, to digital disconnection and privacy against the use of video
surveillance and geolocation devices in the terms established in current legislation
on protection of personal data and guarantee of digital rights.”
Regulation along these lines is also promoted through Article 18 of Royal Decree-Ley 28/2020, which regulates remote work, with special mention of preserving the R2D of workers covered by telework protocols who work from home.
All in all, despite the notable advantages inherent in the express provision of the R2D and the reinforcement of its protection, namely by integrating data protection and teleworking, Law 3/2018 still leaves its regulation to the employer as a purely internal matter, does not provide any sanction in case of non-compliance or any complaint channels, and does not give the R2D any substantive content or formulate it as an obligation for the employer to guarantee.
7.4 Belgium
In Belgium, digital disconnection from work was addressed for the first time by Articles 15 to 17 of the Act of 26 March 2018, in force as of 9 April 2018, regarding the strengthening of economic growth and social cohesion. These provisions did not enshrine a real R2D but only an employer’s duty to consult about it. Belgian companies had the freedom to adopt measures accordingly but were encouraged to hold a consultation with employees on disconnection and to draft bespoke agreements on the use of digital work tools.
The law provided that private sector employers had the obligation to organize regular consultation within the Committee for prevention and protection at work, about disconnection and the use of digital communication, at regular intervals and whenever the employee representatives so requested. In the absence of that Committee, the consultation would take place with the union delegation or, failing that, directly with the workers. No sanctions were provided and no fixed frequency was imposed, and the agreements arising from the consultation could be incorporated into the work regulations or into a collective labour agreement.
By Royal Decree of 2 December 2021, in force as of 1 February 2022, a real R2D has been anchored in the Belgian Federal Staff Regulations for all civil servants, whether statutory, trainee, agent or contractual, granting them the right to ignore calls from their employer outside their working hours and to switch off from work emails, texts and phone calls received out of hours.
This new legislation provides for the obligation for organizations to carry out an
annual consultation on the subject, following a roadmap explaining the topics that
can be addressed. Workers are protected against reprisals, with two exceptions: staff members may be contacted outside their working hours in the event of exceptional and unforeseen circumstances which cannot wait until the next work period, or if the staff member is assigned to an on-call service.
The R2D was subsequently enshrined in Belgian legislation through the “Deal for employment”, a package of measures to modernize the labour market, implemented through the Law of 3 October 2022, published on 10 November 2022. Among other measures, Chapter 8, on the R2D, comprises Articles 29 to 33 and amends Articles 16 and 17 of the Law of 26 March 2018.
The new Article 16 states that: “For employers who employ at least 20 workers,
the terms and conditions of the worker’s [R2D] and the implementation by the
company of mechanisms to regulate the use of digital tools, with a view to ensuring compliance with rest times as well as the balance between private life and professional
life, must be the subject of a collective labor agreement concluded at company level,
in accordance with the law of 5 December 1968 on collective agreements and the
joint committees and, in the absence of such collective labor agreement, these must
be included in the work regulations [. . .].”
The new Article 17 states that: “The terms and arrangements referred to in Article
16 must, at a minimum, provide for: - the practical arrangements for the application
of the worker’s right not to be contactable outside his working hours; - the instruc-
tions relating to the use of digital tools which ensure that the worker’s rest periods,
holidays, private and family life are guaranteed; - training and awareness-raising
actions for workers as well as management staff on the reasoned use of digital tools
and the risks associated with excessive connection.”
Articles 31 and 32 inserted Article 17(1)(2), worded as follows: "(1). The
collective labor agreement concerning the terms and conditions referred to in
Articles 16 and 17 must be filed with the registry of the General Directorate of
Collective Labor Relations [. . .]. In the event that the provisions concerning the
procedures and arrangements referred to in Articles 16 and 17 have been included in
the work regulations, the employer sends a copy of them to the official designated by
the King [. . .].” “(2). When a collective labor agreement [. . .] is concluded within the
competent joint committee or within the National Labor Council, and is made
compulsory by the King, the obligation to conclude a collective labor agreement
on this subject at company level or to include the provisions decided on this subject
in the work regulations ceases [. . .].”
The employer must provide awareness-raising measures on the reasonable use of digital tools, intended for workers but also for management staff, and is obliged to adopt this company policy by 1 January 2023 at the latest.
7.5 Ireland
In April 2021, the Irish Workplace Relations Commission adopted the Code of
Practice for Employers and Employees on the R2D, which has been promulgated by
statutory instrument.39
The Code requires employers to create an internal policy in consultation with employees or their representatives, and establishes that all workers have the R2D from work, encompassing three main elements: (a) the right not to routinely perform work outside normal working hours; (b) the right not to be penalised for refusing to attend to work matters outside of normal working hours; and (c) the duty to respect another person’s right to disconnect, e.g. by not routinely emailing or calling outside normal working hours.
The Code also ensures that employees have the right to switch off from work outside of normal working hours, including the right not to respond immediately to emails, telephone calls or other messages, even where there is an expectation of a response.
It also calls attention to the need to address the issue of working across global time zones, and to occasional legitimate situations where unforeseeable circumstances may arise, such as emergencies or business and operational reasons, depending on the service, the employee’s role, the interests of clients and critical services. Furthermore, it underlines the consequences of the tone used in work communications, alerts workers to avoid giving a sense of urgency to any interaction that might burden the rest time of colleagues, and encourages managers to create a culture conducive to disconnection from work.
Nevertheless, the Code offers no solutions for questions arising from flexible patterns or remote working, in particular regarding the monitoring and recording of working time; it is silent regarding sanctions and leaves regulation of the R2D to employers and companies.40
7.6 Portugal
In Portugal, many legislative proposals were presented regarding the R2D, and all of them were rejected on the ground that Portuguese labour law already had sufficient mandatory instruments to guarantee minimum rest or non-working periods.41
39 Cfr. S.I. 159/2021—Workplace Relations Act 2015 (Workplace Relations Commission Code of Practice on the Right to Disconnect), available at https://www.irishstatutebook.ie/eli/2021/si/159/made/en/print.
40 Cfr. DETE (2021).
41 On the R2D in the Portuguese system, cfr. Amado (2017, 2021, 2022); Carvalho (2017); Cochofel (2021); Fernandes (2021a, b, 2022); Fernandes (2017); Machado and Oliveira (2021); Martins (2021); Moreira (2017, 2019, 2020a, b, 2021a, b); Ramalho (2022), Ramalho (2021), pp. 403–404, 423–424, Ramalho (2019); Ramos (2022).
Following the various labour problems reinforced by the pandemic and within the
scope of a teleworking regime, Law 83/2021, of 6 December, in force as of 1 January
2022, introduced, among other changes to the Portuguese Labor Code, disconnec-
tion from work, adding Article 199-A.42
The institution of the R2D, however, was implemented, not via the provision of
an employee’s right, but through the imposition on the employer of a duty to refrain,
or abstain, from contacting the worker outside working hours. In this sense, Article
199-A provides that “1 - The employer has the duty to refrain from contacting the
worker during the rest period, except in situations of force majeure. 2 – For the
purposes of Article 25, any less favourable treatment given to a worker, namely in
terms of working conditions and career progression, is a discriminatory action, for
exercising the right to a rest period, under the terms of the previous number. 3 - It is a
serious offense to violate the provisions of number 1”.
The choice between a worker’s right and an employer’s duty, although with similar practical consequences, matters effectively because it also determines the reversal of the burden of proof. Thus, under an employer’s duty, the onus is on the employer to prove that the duty to ensure the employee’s rest entitlement has been fulfilled, and that it refrained from contacting the employee and from disturbing or encumbering that rest.
The emphasis of the Portuguese legislator was, therefore, to reinforce the guar-
antee of the worker’s right to rest, to accentuate the employer’s obligations,
strengthening the inherent duty of respect for the employee’s rest and enshrining
the employer’s legal position as guarantor of compliance with the working condi-
tions of its workers, including the rules on time and OSH.
This legal structure grants additional worker protection: first, it does not make the exercise of the R2D dependent on the worker’s decision, shielding the worker from being placed in a vulnerable position subject to various forms of retaliatory discrimination; second, it refers expressly to the guarantee provided by the reversal of the burden of proof, applicable within the scope of the principle of equality and non-discrimination, which protects the worker against any form of less favourable treatment, in particular in terms of career progression and working conditions, resulting from the exercise of the right to rest (Articles 199-A(2) and 25). This means that the employee’s refusal to accept any contact established by the employer during their rest time is legitimate and cannot give rise to a disciplinary sanction or any other type of punishment.
Article 199-A(3) also provides for sanctions, as it qualifies the respective
non-compliance as a serious administrative infraction with high fines, calling for
the mandatory intervention of the Portuguese Working Conditions Authority.
42 The Portuguese system also includes, as a generic diploma, the Portuguese Charter of Human Rights in the Digital Age, approved by Law 27/2021, of 17 May, which does not refer to digital disconnection.
R2D: The Right to Disconnect from Work 269
Although the Law does not qualify a violation of the duty to abstain from contact as harassment, harassment may nevertheless occur in certain situations, if the other requirements provided for in Article 29 of the Labor Code are met.43
The duty to refrain is generic in scope: it applies to all subordinated workers with an employment contract, whether they work remotely or in person, as well as to workers without legal subordination but in economic dependence.
However, it is an obligation that binds anyone who, under Portuguese legislation, is an employer by law, that is, an entity that exercises management and disciplinary powers over the worker, typical of the legal subordination that characterizes the employment contract. Thus, the new Portuguese rules do not impose a general duty to refrain from contacting co-workers.
Article 199-A also raises several practical interpretation issues, which require assessment on a case-by-case basis, a situation that has hampered its implementation by companies. These questions can be grouped into four essential problems.
First, regarding the scope of the notion of contact: it must cover all forms of communication that may disturb or encumber the worker’s rest, including emails, telephone calls, voice and text messages, Teams meetings, Zoom, WhatsApp, Messenger, LinkedIn and similar tools.
Second, the problems are accentuated where the worker performs overtime, works under flexibility or adaptability regimes, is in an on-call situation, or undertakes to be available to be contacted or to work outside the agreed hours and place of work.
Assuming that there can be compatibility between the various themes and that it is
feasible to ensure a duty of abstention from contact during the rest period of workers
in special regimes (such as, for example, exemption from working hours), it is
important to determine criteria for the articulation and combination of applicable
rules. In fact, both the R2D and the duty of abstention were conceived with reference to the traditional model of work conducted in well-defined periods of time, with demarcated work and rest times. In atypical forms of working time, especially those in which a clear margin of rest time is not identified, contacting the worker, precisely because of this ease, may even appear admissible and fall outside the scope of legal protection.
A third set of questions concerns the exceptions to this prohibition of contact, which the law calls “situations of force majeure” without, however, providing any definition. It will always be possible to accommodate what is understood as force majeure by reference to parallel provisions within the working time regime of the Labor Code, as in the case of overtime work. These provisions, however, appeal to highly objective situations that cannot be manipulated by the parties, usually associated with unpredictable and unavoidable natural phenomena, such as fires, floods or earthquakes.
43 About harassment, cfr. Borges (2017a).
270 I. V. Borges
Additionally, this interpretation may seem too limiting in the face of the real demands imposed on employers, especially considering the Directive proposal, which expressly refers to other emergency situations in which there is a legitimate need for the employer to contact the worker. “It is an indeterminate concept, which must be assessed on a case-by-case basis. The case of force majeure has the underlying idea of inevitability: it will be any natural event or human action that, although predictable or even prevented, could not be avoided, either in itself or in its consequences. Force majeure situations will be, namely, those indispensable to prevent or repair serious damage to the body or service”.44
Finally, certain difficulties are identified in the legal framework of cases in which the employee, during their rest period, refuses contact established by the employer, even when the employer can demonstrate a situation of force majeure.45 Many problems also arise concerning the legitimate reaction of the employee to the different types of contact that the employer may establish. On this point, we refer our analysis to the conclusions below.
Since the amendment to the Labour Code (Act 311/2001 Coll.), adopted on 19 February 2021 and in force as of 1 March 2022, Slovak law explicitly provides, among other new employment-related rules, for a specific R2D, applicable only to employees working outside the employer’s workplace, who are classified into three categories: homeworking, remote working, and home working occasionally or in exceptional circumstances. The new section 52(10) states that those employees have the R2D, understood as the right not to use work equipment, such as computers or telephones, and to refuse work during their defined rest periods, except in cases of overtime or emergency work; the employer cannot treat the exercise of this right as a breach of duty or of work discipline.
In Ontario, Canada, the government proposed a new employment law to amend the Employment Standards Act 2000 and passed Bill 27, the Working for Workers Act 2021, on 2 December 2021, requiring employers with 25 or more employees to introduce, by 2 June 2022, a workplace policy on disconnecting from work, defined as “not engaging in work-related communications, including emails, telephone calls, video calls or the sending or reviewing of other messages, so as to be free from the performance of work.” Although it is applicable to all employees, the law gives no requirements about the content of the internal policy. The employers are responsible for communicating their expectations about the R2D, according to
the individual company needs, so some sectors may regard disconnection periods as irrelevant.
44 Cfr. the Portuguese DGAEP, FAQ – Telework, 26, updated on 10-11-2022, available at www.dgaep.gov.pt.
45 Referring to disconnection during annual paid leave, cfr. Borges (2015, 2017b).
In Quebec, Canada, Bill 1097, the R2D Act, was presented in March 2018 but was abandoned after three months. Under this draft, employers had the obligation to
write a policy that indicated the weekly periods when employees could disconnect
from all work-related communication, and a protocol for the use of communication
tools after hours, with fines for employers who failed to produce a disconnection
policy or an annual status report.
Outside Europe, Chile was probably the first country to introduce R2D legislation: on 26 March 2020, it passed Law 21.220, adding to its labour laws a new chapter on remote work and teleworking,46 in the wake of a proposal dating from April 2018. However, contrary to that proposal, Law 21.220 provides for the R2D only
regarding remote work. It states that employer and employee can reach an agreement on flexible working hours, under which the employee can work from a location outside the company’s facilities, taking into account the general provisions on working hours, together with an agreement regarding disconnection time.
Following several legislative initiatives to regulate remote work, the Argentine
Senate passed Law 27.555/2020, regarding teleworking, on 30 July 2020, in force as
of 1 April 2021. Pursuant to Article 5, “the person who works under the telecommuting modality will have the right not to be contacted and to disconnect from digital devices and/or information and communication technologies, outside of their working day and during leave periods”. The worker cannot be penalized for making use of this right, and the employer “may not require [the worker] to perform tasks”, “nor send communications, by any means, outside the working day”.
On 20 January 2021, Decree 27/2021 was published, regulating Law 27.555 and establishing that the employer may send work-related off-hours communications when the activity is carried out in different time zones or when it is essential for objective reasons, although the employee is not required to answer before the beginning of the working day, except in the exceptional or urgent cases provided for in Section 203 of the Employment Contract Law 20.744.
In Colombia, the Presidency of the Republic sanctioned Law 2191, of 6 January
2022, as the new Labor Disconnection Law, to improve OSH and guarantee family
time. Under Article 1, “The purpose of this law is to create, regulate and promote the
labor disconnection of workers in labor relations within the different contracting
modalities in force in the Colombian legal system and its forms of execution, as well
as in legal and/or regulatory relations, in order to guarantee the effective enjoyment
of free time and rest times, licenses, permits and/or vacations to reconcile personal, family and work life.”
46 Cfr. UNI Global Union Professionals and Managers (2019a).
The new regulation is mandatory, covers workers from both the private and public sectors, and defines labour disconnection, in Article 3, as “the right to have no contact, by any means or tool, whether technological or not, for issues related to their field or work activity, at times outside the ordinary or maximum legal working day, or agreed, or in their vacations or breaks.” The Law also states that “the
employer shall refrain from issuing orders or other requirements to the worker
outside of the working day”, and, in Article 4, that the R2D begins after the working
day, and that the non-observance of the R2D “may constitute workplace harassment,
under the terms and in accordance with the provisions of Law 1010 of 2006.”
Article 5 provides for the obligation to have an internal labour disconnection policy, which will define at least: (a) The way in which such right will be guaranteed and exercised, including guidelines regarding the use of ICT; (b) A
procedure that determines the mechanisms and means for public workers or servants
to file complaints against the violation of the right, in their own name or anony-
mously; (c) An internal procedure for the processing of complaints that guarantees
due process and includes mechanisms for conflict resolution and verification of
compliance with the agreements reached and the cessation of the conduct.
Further, Article 6 establishes three exceptions: (a) workers who hold positions of
direction, trust and management; (b) those who, due to the nature of the activity or
function they perform, must have permanent availability, among them the public
force and relief agencies; and (c) situations of force majeure or fortuitous event, in
which it is required to fulfill extra duties of collaboration with the company or
institution, when they are necessary for the continuity of the service or to solve
difficult or urgent situations in the operation of the company or the institution,
provided that the non-existence of another viable alternative is justified.
Finally, according to Article 7, workers can file complaints with the Labor Inspectorate or the Attorney General’s Office, which must, as a preliminary measure, order the employer to initiate the procedures referred to in Article 5.
7.9 Germany
German labour law does not include a statutory R2D, it being understood that a combined interpretation of the working time rules and the OSH legal framework is sufficient to define the worker’s availability, obviating the need for more specific regulation on R2D.47 Indeed, the Working Time Act (Arbeitszeitgesetz, ArbZG), of
06-06-1994, and the OSH Act (Arbeitsschutzgesetz, ArbSchG), of 07-08-1996,
along with the Directive 2003/88/EC, set legal limits to the constant availability of
employees, protecting them against health risks arising from workload and assuring them sufficient time off, as well as their rights to human dignity and privacy.
47 Cfr. Bundesministerium für Arbeit und Soziales (2017), pp. 119–190.
The German labour law system rests on the premise of the on-call duty, which forms part of a continuous employment relationship in which the employer does not continuously provide work for the employee but has the option of calling the employee in as and when needed.48 As this situation seems similar to the R2D, the same rules are considered reasonably applicable.49
On-call work in Germany arises mainly in three forms.50 One is known as Arbeit auf Abruf, or on-call work, and consists of an employment contract under §12 of the Part-Time and Fixed-Term Employment Act (Teilzeit- und Befristungsgesetz, TzBfG), which was subject to a legal reform in 2019. In this form, parallel to the above definition, employees decide where they are and need only be ready for eventual work, although the employer expects them to be on site within a reasonable time. This on-call duty and its respective stand-by time are not qualified as working time; only the work effectively performed is.
The other two forms are Rufbereitschaft, or on-call work at home, where employees can be reached at home and called in to work at short notice, such as firefighters; and Bereitschaftsdienst, or on-call duty, where employees remain at their place of work and are available as and when required. Both of the latter types occur in addition to regular working time, in planned on-call shifts, count as working time, and are paid as stand-by time, by force of the CJEU case-law described above.
In 2016, the German white paper Work 4.0 (Weissbuch Arbeiten 4.0)51 on the modern work environment set out guidelines for balancing the flexibility needs of companies and workers, the protection of uninterrupted rest periods, and respect for OSH, and emphasized the potential of SP and works councils to find tailor-made solutions. It concluded that there is no need for further legislative action and that the best way to address the R2D issue is to negotiate collective agreements, making flexibility compromises and drafting works agreements, because statutory provisions cannot meet the varied requirements of diverse business structures and activity sectors. The paper added that any reform should focus on the Working Time Act, addressing protection from overwork, the dissolution of work boundaries, and flexibility compromises.
The aforementioned German legislation establishes the mutually exclusive nature of work and rest, in the sense that there are no intermediate classifications or grey areas between the respective provisions, which are mandatory for individual agreements. However,
these working time limits are irrelevant in companies with flexible working time models or trust-based working hours and may result in a legal grey area as to the cases in which employees must be available.
48 Cfr. Eurofound (2015).
49 Cfr. Bundesanstalt für Arbeitsschutz und Arbeitsmedizin (2016), 74, 75, 82. Considering constant availability only that of workers who are also available for work outside normal working hours, this report shows that 22% consider it expected to be frequently contacted in their private life for work matters, by colleagues, superiors or clients, although they are also contacted by family and friends during working hours, which means that there is a greater mix of work and private life.
50 Cfr. Jaehrling and Kalina (2020).
51 Cfr. Bundesministerium für Arbeit und Soziales (2017).
Still, German law has yet to comply with the CJEU case-law concerning the employer’s duty to create a system for measuring the daily working time of each worker, since §16(2) of the Working Time Act establishes this duty only for overtime hours.
Dutch labour law does not include the R2D. However, a legislative proposal on the matter, the Wet op het recht op onbereikbaarheid, was submitted in July 2020, meant to adjust Articles 3 and 5 of the Working Conditions Act (Arbeidsomstandighedenwet), with the main goal of having employers take precautions against the damaging impact of employees being always connected. The proposal provided for a mandatory discussion between the individual employer and its employees, employee representative bodies, works councils or trade unions, on the R2D after working hours, as an obligation on employers resulting from the working conditions policy. The bill also incorporated the employer’s obligation to record working times, so as to ensure resting time and to prove that the discussion was attempted. If this discussion demonstrates that an employee’s availability outside working hours is burdensome and/or adds to the employee’s stress levels, this should be classified as a risk that needs to be addressed.
The bill provided that the Dutch Inspectorate would monitor relevant conversations and check whether the discussion took place; if not, it could issue a warning, impose a demand for compliance and a fine in case of non-compliance. Nevertheless, the Dutch proposal merely empowers employees to initiate a conversation with the employer about the possibility of being disconnected outside working hours, in order to create awareness among employers of the negative impact of always being connected.
In Luxembourg, any R2D indirectly derives from the Labour Code’s provisions
on the duration of work and from the employer’s OSH obligations to its employees.
Still, the Parliament has received draft law no. 7890, of 28-09-2021, aimed at introducing into the Labour Code the obligation for each company to define precisely the rules governing the R2D for all employees using digital tools for
professional purposes. The proposal adds Articles L.312-9 and L.312-10 and a new
section 8 in the chapter devoted to employers’ OSH obligations, reproducing the
opinion of the Economic and Social Council of 30-04-2021. It demands measures
adapted to the specific situation of the organisation, setting out the practical and
technical arrangements for disconnecting from digital devices; awareness-raising
and training measures; and compensation arrangements in the event of one-off
exceptions. It also includes administrative fines, with a delayed entry into force. However, the proposal only states that employers will be responsible for introducing a scheme ensuring that the R2D is respected, at the sector or company level and with the workers’ representatives, so it may fail to produce effective rules.
In the United Kingdom there is no specific legislation addressing the R2D, only the rules derived from the duties and hours of work set out in the contract of employment and in the Working Time Regulations 1998. In April 2021, the House
of Lords Select Committee on Covid-19 appealed to the Government to consider a
R2D, under the report called Beyond Digital: Planning for a Hybrid World, but there
is still no indicated intention to take it forward.
In the United States there have been no federal-level initiatives regarding the R2D, but some bills have been submitted in some States, such as California and New York, with Bill 0726/2018, without leading to regulations. Companies have
preferred, instead, some internal measures regarding “work life balance” or
“employee health”, as, in the light of a workaholic culture, the R2D may be
considered unsuited for flexible working conditions, productivity maximization
and working across different time zones.
There is no specific law in the Brazilian legal system dealing with the R2D, but it can be supported by the provisions of Article 6 of the Consolidation of Labor Laws (CLL), which covers work through telematic means, including e-mails, WhatsApp and other remote communications. The matter has also already been addressed by the Superior Labor Court, whose 7th Panel unanimously admitted, in case AIRR-2058-43.2012.5.02.0464, compensation for a worker whose R2D had been violated. Furthermore, Law 14442, of 02-09-2022, amended the CLL regarding the telework regime, and Article 75-B(5) now states that time spent using technological equipment, necessary infrastructure, software, digital tools or internet applications for teleworking, outside the employee’s normal working day, is not part of an availability, on-call or on-notice regime, unless provided for in the contract or in a collective agreement.
In the Philippines, Senate Bill 2475, the “Workers’ Rest Law”, was filed on 17-01-2022, and House Bill 10717 on 27-01-2022, both aimed at amending the labor code and regulating a R2D. The proposal states that any employee may decline to perform overtime work, unless permitted by law or by the employee’s written consent, including by not opening or answering communications received during rest hours, without fear of being “reprimanded, punished, or otherwise subjected to disciplinary action”, and provides for monetary sanctions and fines, and even imprisonment in case of criminal behaviour. The definition of working hours also covers the time spent reading and responding to work-related communications after working hours, and the bills add a duty for the employer to establish the time limits during which employees cannot send or answer work-related communications. According to this draft, during resting time, employers are prohibited from requiring the employee to work and from contacting them by phone, e-mail, message or other means of communication, except to notify them of a legal emergency or urgent work in accordance with Articles 89 and 92 of the Labour Code. The bill covers employees, but exempts field personnel, domestic helpers, persons in the personal service of another, and workers paid by results.
In India, Bill 211/2018, regulating the R2D, was presented in December 2018, and encompassed the right not to answer calls after working time, the duty to
8 Company-Level Initiatives
Beyond the formal recognition of a R2D by national legal systems, many spontaneous initiatives have emerged in a sectoral context or within individual companies, materialized through collective agreements or internal policies.52
In France, the telecommunications group Orange signed a company collective
agreement on 27 September 2016, establishing a R2D for employees, although this
right had already been mentioned in the zero-email program, launched in 2011 by the
France-based information technology services firm Atos Origin; in an agreement of
31 May 2012 by the nuclear energy Areva group, as well as in a group agreement
signed by electronics specialists Thales, on 4 February 2014. These texts invite every employee, including management, to disconnect from the company network and not to send emails outside normal working hours. The social protection company Reunica decided to switch off its mail system, except for top managers, between 20.00 and 07.00 on weekdays and from Friday at 20.00 to Monday at 07.00, and its
agreement states that “every employee has the right of respect during rest periods
and no interference with private life, including limited use, through their own
initiative, of communication means”.53
In Italy, on 21 October 2020, a joint Declaration was signed within the UniCredit Group, stating guidelines, principles, and minimum standards applicable to all
employees: “The Group is committed to develop a culture oriented to: [. . .] preserve
the daily, weekly, holiday rest times and sick leave of employees provided by the
country regulations, national contracts and laws, avoiding any inappropriate use and
abuse of digital channels (i.e. SMS, video-calls, Whatsapp, chats, calls); respect
privacy of colleagues, bearing in mind that the use of personal devices for business
needs could be allowed only in case of real urgency; avoid texting, calling and
emailing to personal devices for business reasons”.
In Spain, a collective agreement concluded in July 2017 within the insurance company AXA, in force from 2020, aimed to promote the R2D after the working day, establishing, in its Article 14, that “except for reasons of force majeure or exceptional circumstances, AXA recognizes the right of workers not to respond to emails or professional messages outside their working hours”. In 2019, an agreement was
52 Cfr. Eurofound (2021c, g, e); Eurofound and Cedefop (2021); Dima and Högback (2020); Montesdeoca Suarez (2022), pp. 95–98.
53 Cfr. Eurofound (2014).
54 Cfr. Centro de Relações Laborais (2019); On digital economy and collective bargaining in Portugal, cfr. Ramalho (2019). On levels of collective bargaining in Portugal, cfr. Borges (2017c).
55 Cfr. Centro de Relações Laborais (2021, 2022).
56 Cfr. Eurofound (2020a, c); UNI (2019b).
9 Implementation Mechanisms
The topic of disconnection and constant digital connection involves enormous technical complexity, which facilitates irregular behaviour and makes control and sanctioning difficult.
Considering the multiple practical implications, particularly in combination with the regulation of privacy and data protection, it is important to define the content of many essential operational concepts, such as those of work and rest, and, above all, the way in which the technical tasks of connection and disconnection are constituted, reinforced or avoided.
The idea that regulation should be adapted to the specific needs of sectors and companies meets with general consensus among countries, SP and even legislators, for whom the approaches, modes and ways of connection and disconnection must be settled and approved through social dialogue at the company or sectoral level. However, it is known that these methods cannot function effectively in countries with low unionisation and irregular collective bargaining. At the same time, trade unions frequently have other priority agendas.
For this reason, many laws or practices assign responsibility to employers, with the intervention of workers’ representatives, for drawing up policies or guidelines, pursuing the purposes of information and training on the applicable theoretical rules and adapting them to the concrete reality of the company, the profession and the sector. Yet, this route also has disadvantages, namely those related to the attribution of too much power to employers, who are always naturally concerned with productivity and profitability, and the risk of this corrupting the viability of the fairest and most appropriate solutions.
The difficult choice of the best solution should be based on more detailed general guidelines of legal origin, applying generically to all companies and activities, on a supplementary basis and in the event that the parties cannot reach an agreement.
In any case, even for these options, it is important to bear in mind the several
possibilities in technical terms and to be familiar with practices already tried in some
countries and in certain companies. Although we are of the opinion that this subject deserves further investigation, namely by specialists and technicians in IT, administration and human resources, we think that some ideas are always welcome, at least
to inform future developments.
There are divergent techniques to address the R2D and potentially various answers across sectors, areas, zones, businesses, companies and employers, especially where flexibility is needed to tailor solutions.
In a first approach, the R2D, or the implementation of disconnection, is being operationalised through a variety of hard and soft measures, mostly through the adoption of disconnection software. The hard measures, or hard disconnection, encompass, for example, connectivity shutdowns after a pre-defined hour; the blocking of incoming messages or calls, as well as the destruction of emails or messages, during the rest period; and the forwarding of messages and emails to the next workday. The softer
an easy and clear way, for all workers who use digital work tools, independently of their status and working arrangements, and for all sectors, both public and
private.
This should include:
– measuring and recording working time; criteria for derogations and for determining compensation for overwork; protection against adverse treatment; ways of dealing with complaints; health and safety assessments, including psychosocial risk; the right to redress; guidance, information, awareness, and training; the intervention of SP and workers’ representatives; penalties.
– fair, lawful, and transparent measures, procedures, practical arrangements, systems, and mechanisms, providing for the worker not to engage with, and to deactivate, digital tools for work purposes, as well as any work-related monitoring or surveillance tools, expressly stating that workers are not required to be available during rest time and that they will receive appropriate compensation for any extra time worked.
– obligations on employers to commit management to creating a culture that avoids out-of-hours contact, to ensure they do not require workers to be directly or indirectly available or reachable outside their working time, and to support procedures in a no-blame culture; and on co-workers, to refrain from contacting their colleagues outside the agreed working hours for work purposes.
– specific instructions, in particular not to respond to e-mails or cell phone calls; standby devices for computer servers outside working hours; activation of absence and redirection messaging; use of an automatic signature indicating the non-imperative nature of an immediate response.
– ways to handle issues related to management style, the level of support offered by co-workers, workload and long working hours, all the challenges described below, and the various forms of implementation in legislation, sectoral or company agreements, or policies.
Despite these general orientations, a specific internal approach must be tailored. From the viewpoint of a company or sector, it is obvious that not all types of contact with employees have the same consequences in terms of disrupting their rest time, and that there are ways of reducing these risks, at least in a theoretical approach.
For any employee, matters carry varying degrees of urgency and call for varying forms of reaction. A phone call indicates a more urgent matter than an email and puts pressure towards an effective interruption of the worker’s rest time. An email denotes a less assertive contact. A direct email whose subject is not marked as urgent does not require an immediate response. The lack of insistence after an unanswered call or message indicates that the problem has been dealt with or is no longer important. Emails received during rest may be disregarded in contexts involving many workers in different time zones. A phone call without a prior text message asking the worker to answer might even suggest a pocket call.
10 Challenges
The broad variety of recent laws, and the multiple opinions and serious concerns regarding legislative efforts to regulate the R2D, emphasize the range of emerging complications and the numerous legal challenges still to be resolved.
From the outset, we underline the controversial and extremely practical topic
related to the definition of derogations from the implementation of R2D, which is
highly dependent on the universe of each company and sector, and even on the
functions performed by each worker. Then comes the coordination with overtime
work and the specificities of telework. In addition, there is the multiplication of
complexities brought by the atypical situations of working time, maxime, overtime
work, exemption from working hours, flexible hours, adaptability regimes, avail-
ability to work, work on call, or working in different time zones and legal systems.
Considering the volume of challenges, we group them into three main categories:
difficulties about the R2D application; problems related to the R2D enforceability;
and questions regarding the balance between R2D and the worker’s desire for
flexibility and autonomy. To ensure protection from hyperconnection, all these
complications must be addressed within the relevant legal frameworks and the
options taken through distinct levels of normative production.
Ultimately, the great challenge will lie in changing mentalities, modernizing
tradition, updating the cultures of organizations, and promoting awareness of an
ideology that weighs the scales well and that adequately assesses the demands and
priorities of the work and balances the individual and family, social and cultural
interests of the worker as a person and as a citizen.57 It will not be easy to reverse a
mentality, of both employers and employees, that judges unfavourably a worker who
disconnects, and that considers it normal to contact workers outside working hours
and, in particular, that assumes and expects that they will respond.
10.1 Application
The application of the R2D can easily fail due to the lack of clarity and the absence
of precise general and abstract parameters regarding the numerous levels and phases
of relevance.
In a first phase, many uncertainties are revealed concerning the content and the definitions involved in and associated with the R2D: about what is considered working time, non-working time, rest time, availability time, paid time, or any third kind of time; also about what is technically classified as digital connection or virtual disconnection.
57 “This is not a technological problem: screens can be switched off, computers unplugged, phones
put on flight mode. [. . .] without a right to disconnect culture, we may struggle to find the right
work–life balance.” Cfr. Eurofound (2022).
282 I. V. Borges
These main problems are solved by the EU Directives and the CJEU case-law,
although clearer rules are always desirable.
On a second level, many doubts and reservations are identified concerning the
legal nature of the R2D and the respective purposes, objectives, and foundations.
What in legal terms can be qualified as a power, obligation, responsibility,
guarantee, right or duty. Whether there is the power to contact, the obligation to
disconnect, the guarantee of disconnection, the right to disconnect, the right to
disconnection, the right to be disconnected, or the duty not to contact. Whether the
responsibility for disconnection lies with the employer, or whether the decision rests
with the employee, or whether it is also a duty for all co-workers. Doubts also arise
about the superior scope of the R2D: whether it lies in the objective protection of working time, whether it is modeled on a fundamental right to rest or to health, or whether it is a manifestation of the principle of dignity. In this matter, the examples we referred to above will be very useful, in particular concerning the Portuguese employer’s duty to refrain.
In a third phase, the recognized difficulties of operationalization and delimitation
of the R2D implementation modalities stand out, as well as the choice between the
still complex adequate mechanisms for its practical execution. Problems that we
already addressed above.
On the option between legislating, more or less precisely, or delegating to the SP
and companies. Regarding comprehensive requirements, whether there should be general rules or an assumption of diversity, impossible to determine in general terms and designed to meet the needs of each company or sector. About appropriate technical
measures for disconnection, with weak or strong intensity and with a short or
extended scope of personal application, in time and space. On the form of articula-
tion between interested subjects, with or without a duty to listen to workers and their
representatives. About the basis of establishing regulations, whether based on
collective agreements, resulting from internal policies or codes of conduct. On the
mode of interaction between the interested parties, whether in mandatory negotia-
tions, obligations to implement final documents, or with the objective of incorpo-
rating the right into the company model. About the strength of action required of the
employer, including awareness and information duties, essential training duties so
that workers are aware and able to exercise their R2D, whether changes should be
implemented in the organizational culture or talent management and training of
leaders, or assuming responsibility for the result of the effective disconnection of
workers.
In a fourth phase, the enormous obstacles raised by flexibility and autonomy will become evident, as we will describe in the following.
On a fifth level, resulting from the combination of the previous levels, all the
problems related to the adoption of limitations and the identification of potential
borders and exceptions to the exercise of R2D. About whether the employer can, in
certain situations, contact the worker during their rest period; on how and when
workers can exercise their R2D; about whether they should read messages but not
respond; on whether they should answer phone calls but not comply with orders
immediately, and whether they can deactivate computers, phones and applications,
or not accept any contact.
With regard to derogations from the R2D, the Directive proposal states that they are admissible only if the employer, on every occasion a derogation is invoked, gives each worker prior written notice substantiating the need for the derogation, if there are preceding criteria for calculating the compensation for work performed outside working time, and if the overall goal of OSH is respected. Concerning whether
derogations should exist or not, and which ones should be implemented, the proposal
merely states that they should be provided for only in exceptional circumstances,
covering two types of examples: cases of force majeure and cases of other emer-
gencies, to be adopted in accordance with the understanding of the employer and the
practice of the sector.
These are indeterminate legal concepts and difficult to implement in practice.
Force majeure cases are normally related to situations of natural catastrophe, and
circumstances intrinsically linked to ideas of inevitability and exceptionality, or
resulting from factors that, as they cannot be controlled or foreseen, cannot be imputed to anyone. These reasons are generally required by law for employers to
issue overtime work orders or to provide for adaptations to rest times.
Urgent cases for reasons related to the company and its services operations are
more complicated to determine objectively because they imply attributing a certain
margin of discretion to the employer and, as they can be manipulated by the latter,
they are easily converted into evasions of the protective regime of the R2D. These may, therefore, raise more doubts and prove more significant in terms of limiting the protection of workers’ rights.
As the legitimacy of the exceptional contact is based on emergencies, it is not
important to analyze whether or not the contact contains an order for the worker to
act immediately or to respond only on the next working day. In the 2022 interpretative note of the DGAPE in Portugal, it was held that an email sent to the worker during their rest period is valid if it does not request a response or determine any other immediate action on the part of the worker. This was subsequently the
object of criticism and later amended.58
It seems reasonable to invoke the situations that Directive 2003/88/EC considers to justify derogations from the rules on worker rest, which must occur without prejudice to OSH and to equivalent periods of compensatory rest. As for workers in a managerial or confidential position, they admittedly may not fall within the scope of the rules on working hours, but neither can they be expected to be permanently connected.
Compelling reasons related to the operation of the company are at stake, or
situations of unavoidability and indispensability to prevent or repair serious damage
to the company or to its viability, as well as merely chance events, of various kinds.
58 Cfr. Amado (2022); the Portuguese DGAPE, FAQ, cit. On the need for adaptation by the Spanish
system, underlining the sanctions, cfr. Montesdeoca Suarez (2022), pp. 81–94; Trujillo Pons
(2022), p. 54, (Trujillo Pons 2021), pp. 72–73, 77–78, 82, 87, 93.
Other emergencies can include the cases in which the need to contact the worker
outweighs the worker’s own protected interests, for example, to prevent the worker
from performing a task that is no longer necessary; to change dates scheduled for
work, meetings or long-distance travel (such as a message sent to the telephone
before the worker goes to the airport); to alert them to changes in assumptions
essential for carrying out tasks, not previously known to the employer (such as a
supervening fact that makes it unfeasible or that causes the need to settle a dispute in
court, only known the day before; or the unexpected negative result of a technical
audit before hiring an external service provider); to inform them of any personal
situation if only the employer can contact them.
It will be pertinent to qualify the reason and justification of the order as an
objective emergency situation for the company, for the service and for the provision
of work. An emergency must also be analyzed within a legal framework of conflict between the functional rights of the company and the employer and the personal rights of the employee, informed by proportionality in balancing the interests involved (the urgency for the employer and the damage caused to the worker); the reasonableness of the sacrifice required of the worker in losing or interrupting their rest (whether the contact only causes a suspension of the rest that does not affect its continuation, or whether it implies the consequent loss of all the rest in question); the minimization of all unnecessary risks (avoiding the ways of causing greater harm to the worker); and the circumstances in which the contact is actually made (whether immediately after or before the work period ends or begins, in the middle of the night between work days, or during a day of rest; whether by phone call, text message only, or email). In our opinion, contact is not justified, for example, when an email is sent that
requires the reading of several documents, in the middle of the night before a
meeting scheduled for 08:00 am, simply to notify the employee that the meeting
was postponed by 30 min.
These derogations may raise serious doubts when it comes to the eventual need to
transpose the Directive into national legislation. A need that will be imposed, for
example, even for the Portuguese Labour Code which, despite expressly providing
for R2D through the duty of the employer to refrain from contacting the worker
during their rest period, only considers exceptions in situations of force majeure, and
not also the urgent cases of the company.59
In any case, one cannot forget two essential aspects of these derogations: (a) they
presume that the order contained in the contact made is an order to provide overtime
59 On cases of the Spanish labor inspection, referring to the qualification as harassment and
concluding that companies do not violate the R2D for the simple fact of sending messages during
the rest period, but only if these are abusive and discriminatory and require an immediate response,
cfr. Trujillo Pons (2022), p. 52, (2021), p. 92. On examples of rules provided for in four collective
agreements of 2021, cfr. Martins (2021), referring to cases in which there are extraordinary
circumstances that may pose a serious risk to people or potential corporate harm to the business,
its customers and/or its shareholders; as well as any other legal and/or regulatory nature whose
urgency requires the adoption of special measures or immediate responses; and to the personnel
with strategic responsibility and positions whose performance entails greater availability.
and that, therefore, the work performed during the break is considered and is
remunerated as overtime, as a result of the proposed Directive; (b) they presume
the guarantee of sufficient rest or compensatory rest for the worker, which may also
imply the calculation of a new rest period, in cases where the interrupted rest is
mandatory rest that must be taken continuously.
Therefore, if the contact represents an overtime order, it means that the contact
can only be legitimately promoted if the legal requirements for that overtime are met.
This helps to find the appropriate basis for emergency cases concerning the
employer.
However, we admit to having doubts about the existence of derogations, considering, among several difficulties, a highly practical problem related to a certain contradiction between the rules contained in the proposed Directive. Our essential question lies in the following: if the R2D grants the employee the right to deactivate the digital means of work communication immediately after the end of the daily work period and during all the rest time, we do not understand how the employee can receive any type of contact for urgent purposes.
If the worker exercises their right to deactivate the phone and the computer, it is unclear how they can realize that they must receive the employer’s contact for urgent reasons, because they do not even know when a phone call, a text message or an email has been received.
We underline that we are only considering digital tools made available by the
employer and used by the employee for their work. As we know, the employer
cannot contact the worker through their personal cell phone or personal computer, and the
worker may legitimately refuse to provide these contact details, for the reasons
imposed by the GDPR. With this in mind, many functions and many workers are outside this debate, since they can only be contacted via the computer because they do not have a service cell phone assigned by the employer; and contacts made by computer are, by nature, less invasive because they are not based on continuous portability and do not constantly accompany the worker, contrary to what happens with the cell phone.
The answer to these questions may, in fact, determine that, due to the admission
of derogations and the possible emergency, the employee is obliged to keep the
digital work tools connected so that they can be reached by the employer. This would lead us to the contradictory conclusion that the worker has the R2D but does not have the right to switch off.
On this topic the options are all or nothing: either the worker is protected against illicit contacts, which implies preventing all contacts; or the possibility of contact is allowed, and protection is reduced merely to the illegality of some contacts. We are led
to reconsider the essential rule in this theme: connection or disconnection. Whether,
during a rest period, the worker is obliged to be connected, to be available for
eventual contact, or whether, on the contrary, the worker has the fundamental right to
be disconnected, not being available for any contacts.
And, returning to the origins of working time limits and the protection of workers’ rest, rules that remain imperative and unavoidable, we are compelled to support the first alternative and defend that the worker has the R2D and, in addition, the right to
switch off or to be disconnected. As the worker is not obliged to keep the digital
work tools connected, they cannot receive any contact in the event of an emergency,
and therefore no derogations from the R2D can be envisaged—at the cost of
allowing a clear vitiation of the objectives underlying the R2D.
If the worker keeps the telephone or computer connected, they will always have
to spend their rest time to receive the contact, to read the message and to conclude
that it is urgent or not, or to always respond when the text of the message raises doubts about whether it must be immediately complied with, for fear of reprisals or being
misunderstood by colleagues and managers as a bad worker or a worker who does
not “wear the shirt” or who is not a “company ambassador”.
In fact, we can only envisage possible derogations where licit forms of contact are implemented that are activated only in the event of emergencies, such as, for example, a device that the worker accesses continuously and that only alerts, with a signal, to the need to consult a phone or computer.
Within this understanding, overtime is also limited because its respective orders
can only happen during working time, ruling out urgent situations known only
immediately before the urgency occurs and only covering cases in which overtime
is required in advance and, therefore, not during the rest. The practical implemen-
tation of this rule will require the determination of the most appropriate instrument or
ways for extraordinary contacts.
The provision for derogations assumes the occurrence of overtime, that is, the legitimacy of the worker being contacted for this purpose, which, in essence, means that the worker is always obliged to provide overtime (which, by its nature, will occur during the worker’s rest).
It turns out that the central problem of the R2D, meaning the essential reason for protection against hyperconnection, or the foundation for promoting disconnection from work, is not the contact that includes an overtime order, but rather the contact serving illicit purposes, namely, implying psychological pressure on the worker to work during their rest period in a concealed, hidden manner and, therefore, in breach of the law and without any type of payment.
Combating the culture of hyperconnection implies a direct attack on the roots of
the problem, and this only happens when the possibility of contacting workers
during their rest is eliminated.
10.2 Enforcement
Also posing a strong challenge to the regulation of the R2D are the inherent struggles related to its execution, effectiveness, and successful application, a subject strictly linked to the discussion about providing sanctions in case of non-compliance with the rules of the R2D, as determined by law, collective agreements or internal policies.
The lack of negative consequences for non-compliance generates a negative spirit
of impunity, decreases the respect for the law, raises irregularities, multiplies the
fears of retaliation, and reduces worker protection and guarantees. However, this task is not exhausted when a legal system chooses to establish sanctions. The system will still have to select the most appropriate way to implement them: whether by providing rules and related sanctions, by strengthening compliance control and guaranteeing their application, or also through statistical control and constructive evolutionary monitoring.
Thus, sanctions can assume various forms, on which their effectiveness also depends: administrative offenses or fines, of greater or lesser value, or of significant amounts; disciplinary action against superiors or colleagues; and possible civil or criminal offenses.
In addition, the negative consequences do not necessarily have to be of a punitive
nature, as it may be considered sufficient, or even complementary, to introduce
specific rules on the qualification of the work carried out by the employee, when they
are available for contact, when the contact is made, and when they comply with the
orders contained in the contact. We think, for example, that in the absence of adequate rules agreed by the parties, paid working time can be considered not only the time in which the employee is available to be contacted, but also the time during which they are waiting for a previously announced contact; and paid time of effective work can include not only the time that the employee spends answering calls or reading messages, but also the time that they take to carry out the sequential steps requested therein.
We also think that all the work previously referred to must be qualified as overtime work, and therefore more highly paid, as well as an interruption of continued rest periods, engendering a consequent restart of their count with renewed rest rights.
And we believe, concretely and with a particularly positive practical effect, that it
can constitute moral harassment when the employee is frequently contacted during
their rest period, as already regulated under the terms of the ILO Convention
No. 190, of 2019, about Violence and Harassment at work, in force since 25 June
2021, and its Recommendation No. 206. This implies the application of the entire
associated protection regime, rules regarding the protection of the principle of
equality and non-discrimination, along with the imperative OSH regime, and also
the possible application of the compensation rules for damage resulting from
accidents at work and occupational diseases.
10.3 Flexibility and Autonomy
The last set of challenges for future R2D legislators is related to the necessary and extremely complex coexistence of digital disconnection with flexible and autonomous work.
Disconnection is most easily implemented and effective in a traditional work
period that imposes rigid time schedules. Yet, workers increasingly want to benefit
from enough flexibility to shape their own work agenda outside the standard, aiming
at the balance between personal and professional life, enabling errands to be run or
exceptions would be contrary to all Labour Law, questioning its very legal nature,
and opening a wide door to non-compliance.
Given the lack of international and EU R2D legislation, the 2020 Agreement on Digitalisation and the 2021 EP Resolution, along with the recent experiences of company-level initiatives and national laws described above, allow one to formulate ideas on the controversy and on legislative initiatives, and to construct a real-life approach through examples of R2D implementation.
According to our present knowledge, the European countries that currently have legislation on the R2D are: France, in the French Labour Code, Article L. 2242-17(7), amended in 2018 and 2022; Italy, in Law 81/2017, of 22-05; Spain, in Law 3/2018, Article 88; Belgium, in the Law of 26-03-2018, Articles 16 and 17, amended on 03-10-2022; Portugal, in the Portuguese Labour Code, Article 199-A, amended by Law 83/2021, of 06-12-2021; Ireland, in the Workplace Relations Commission Code of Practice on the R2D, of 2021; and Slovakia, in Act 311/2001, since 01-03-2022. Outside the EU, we find Chile, in Law 21.220, of 26-03-2020; Argentina, in Law 27.555/2020, Article 5, amended by Decree 27/2021, since 01-04-2021; and Colombia, in Law 2191, of 06-01-2022.
In France, Italy, Portugal, Ireland and Slovakia, the R2D appears through labour law, while Belgium and Spain include it in broader economic or data protection legislation. In France, Spain, Portugal and Ireland, the R2D applies to the general workforce, but Belgium applies it only to companies with more than 20 workers, Italy restricts it to smart working, and Slovakia to remote workers. France, Spain, and Italy establish the R2D as a right of the worker or a duty of the employer to regulate and implement it, whereas Belgium relegated it to a programmatic plan to initiate periodic collective negotiations, Ireland only refers to a mandatory policy, and Portugal provides a general duty of the employer to abstain from contacting the employee. In France and Belgium, the law requires the SP to implement the R2D and to set out its modalities, while Portugal appeals to collective agreements and mandatory internal rules, Spain imposes policies, and Ireland provides practical measures.
The essential difference in how to regulate the R2D depends on the political options related to weighing the advantages and disadvantages of legislating the R2D, not legislating at all, or legislating on the R2D only by reference to collective bargaining or to internal policies. Among others, some arguments support the decision not to legislate regarding the R2D.
First, the existing legislation is already sufficient to ensure that the worker, after working hours, may decline to respond to the employer’s requests without negative consequences at work, so that emphasis must be placed on compliance with and inspection of the rules already in existence.
Second, the deterioration of the worker’s health is not only associated with the workplace, but rather stems from a multitude of causes, namely, the corporate culture, the competitiveness among the best performers and the resulting financial and career progression rewards, the type of functions and the degree of responsibility or autonomy associated with them, so that a reaction focused on a single aspect is therefore reductive.
Third, the R2D is inseparable from the characteristics of each organization, making it preferable to adopt a programmatic vision, as guidance rather than legal imposition, or to leave its implementation to employers and unions, since no single recipe is suitable.
Fourth, although the use of ICT in the work context is still widespread, it was artificially driven by the reaction to the pandemic, and it is uncertain whether it will be transmuted into a structural option, rather than a merely conjunctural one, at least in a global perspective.
Lastly, there is the argument involving the broader context of this new social and economic reality, according to which the prevailing labour issues do not reside in the R2D, but in other points of interest and concern, namely, in the control, registration and payment of overtime work.
The legal enshrinement of the R2D constitutes the most effective way of protecting workers’ rights in the face of the new forms of rest born of and imposed by the modern methods of intense and constant communication; by a new digital world of work that encourages the maximum elevation of job objectives and of company productivity; that divides generations and employees according to their computer vocation, available time and specific digital training; that enhances the unbridled self-proactivity of workers in search of better wages and career advancement; that easily risks the violation of privacy, of personal data, and of the worker’s life as a person and a citizen; that fractures continuous rest and blurs the boundaries between work and personal life; that creates all the pernicious objective conditions related to OSH, generating new physical risks and renewed psychosocial risks, multiplying conflictual situations at work based on emotions resulting from the unavoidable professional interrelationship and aggravated by the poor training of managers; and that increases cyber harassment, nervous breakdowns and burnout.
Therefore, it is urgent to legislate on a new type of repose, born of a forward-looking way of working that causes a specific new type of tiredness, at the intersection of inextricably mixed areas: digitization, telework, pandemic, privacy, personal data, working time, rest time, health, dignity.
In fact, any reliance on the responsibility of employers to promote internal regulations, even with the intervention of workers’ representatives, seems to us ineffective and without protective potential. Not just because this always represents a self-commitment on the part of companies which, when in doubt, leads to omission, but also because not all countries and sectors show a sufficiently strong presence of union representation or activity.
On the other hand, only the statutory standardization of the R2D makes it possible to provide for adequate and effective penalties. The past has already demonstrated that merely programmatic norms or rules based on the impunity of offenders
do not serve the purpose of protecting workers’ rights, especially in times of crisis
and when companies reduce costs and dismiss workers. Rules should be provided on the imposition of high fines, on the classification of irregular contacts as paid overtime work, on recounting periods of rest interrupted by those contacts, and on qualifying these behaviours also as harassment at work.
Above all, it seems to us that formal and explicit legislation on R2D will bring
real added value to the existing framework and is the only way for workers to clearly
realize that they freely have the right to say no in face of demands for more work
during their rest.
Only by law will it be possible to change mentalities and start a new organizational culture that does not marginalize employees who have their own lives and who want to live them, despite being effective professionals who desire to maintain their happiness at work. Moreover, the real cutback in the permanent connectivity of employees implies a change in the constant-responsiveness work culture not only of organizations but also of the employees themselves. Although not obliged to do so, workers often keep answering e-mails outside working hours because they themselves experience difficulties disconnecting from digital tools.
Only under the force of a law enshrining the R2D and providing for the forms of its practical implementation will workers be able to reliably know what to do in relation to their digital work tools: how to work and stop working, if and when to turn off, call back and reconnect. In the uncertainty of which action to take, employees are dominated by the fear of reprisals and dismissals, so they refrain from using their R2D. For this reason, it is very important to implement the forms of operationalization of the R2D, as we developed in point 8, to which we refer.
However, even if the final option is to provide for the R2D, we still find many doubts and uncertainties that should be identified in advance and minimized by proposing appropriate solutions, as we highlighted within the several challenges in point 9, above.
As the most difficult to devise and implement, we underline the rules on how to react to and legislate the R2D in atypical working time situations, and on R2D derogations. We believe that the R2D must be: (i) understood as the employee’s right to be disconnected or the employee’s right to switch off; (ii) implemented as a guarantee; (iii) operationalized as a duty to disconnect, or D2D, a co-worker’s and employer’s duty of disconnection.
As a result, the R2D emerges as an active right to avoid contact, to promote the defense of rest time, to remain disconnected, and to turn off the means of disturbing rest time.
The protection of the R2D also as a guarantee, and not just as a right, means a reinforced protection of the worker that does not depend on their initiative, and which imputes to the employer the responsibility, directly and in the first line, for maintaining respect for this worker’s right. It implies that the employer has the obligation to carry out the R2D and to implement all the measures necessary for its exercise by the employees, ensuring that this R2D is also respected by their colleagues, including superiors, and by clients and every person who can contact them for service reasons. And it encompasses responsibility for creating a culture that encourages and promotes respect for this disconnection from work.
The D2D means the employer’s and co-workers’ duty not to connect with the
employee, that is, the duty to refrain from contacting the worker.
292 I. V. Borges
Contact is understood here in the broadest sense, covering forms of contact rather
than their content, meaning that the employer cannot send an email regardless of
what it says. Unlike the simple R2D, the D2D implies the higher protection brought
by the inversion of the rules on the burden of proof.
The D2D corresponds, though not only, to the R2D understood as the right to be
let alone, or the right not to be disturbed, because these duties already result from
the general rules, which ban contacting the worker under the obligation to promote
rest. Incidentally, providing for the R2D in this perspective even implies the
opposite of the general rules, because it suggests that interference by the employer,
through technological means, in the worker’s rest period is legitimate and merely
needs to be regulated and limited. In this context, the D2D only aims to reinforce
compliance with general duties in the context of digital communications.
The D2D is to be understood, above all, as the duty to disconnect, that is, the duty
to guarantee the worker’s disconnection.
Ultimately, we can even conclude that, from the imperative nature of the limits
to working time, the worker not only has the right to rest time, but also
has the duty not to work for the employer during this rest period, that is, the duty
not to perform the contract, which would lead to the defense of an employee’s duty
to disconnect.
If the solution is focused on the worker’s right (the right to disconnect or to be
disconnected), the responsibility is transferred to the worker, who faces the hard
decision to say no whenever the employer violates working time limits. This solution
does not mitigate the possibility of the worker being effectively contacted, because it
acts only at the next moment, clarifying that the employee has the right to say no after
being contacted. If, on the other hand, it is seen as the employer’s duty, the employer’s
obligation to comply with legal working time limits will be reinforced, which will
reduce the chances of the worker being effectively contacted, because the duty acts
at an earlier moment, before the employer contacts the worker, establishing the
illegality of such conduct.
However, if the R2D is implemented as a guarantee, employers are obliged to
ensure the employee’s R2D at any cost.
Responding to our initial doubts, we can now admit that the goal was achieved by
the content of the 2020 Agreement and the Directive proposal, which provide for a
different and truly autonomous employee’s R2D. The legal protection granted to
the R2D must be broad, complete, and multi-level, covering all the perspectives in
which it can be entrusted and obviating any type of violation or non-compliance.
Only in this way will it be possible to realize all its possibilities and contribute to the
urgent change of mentalities.
Also referring to the thesis posed in the introduction, we understand the R2D not
as a superfluous provision, but as an original, autonomous, and essential new
employee’s right, distinct from the general right to rest from work: the right to
rest in the new world of work, or the right to digital rest, a 2.0 protection of the
worker’s right to rest, in a modern version of the traditional ending of the working
day, updated to the world of digital and distance work.
R2D: The Right to Disconnect from Work 293
Isabel Vieira Borges holds a PhD and is an associate professor at the Faculty of Law of the
University of Lisbon (FLUL), where she has taught since 1992. She practiced as a lawyer for
several years, took a certified HR and Leadership Management course at the Catholic University
Business & Economics School, is a certified teacher in legal e-learning by the Lisbon Open
University, and works at the HR Management Department of Group CTT. She is a founding member
of the Portuguese Labour Law Association, the Private Law Research Centre and the Labour Law
Institute of the FLUL, and a member of the Community for Labour and Occupational Research and
Study and of the International Association on Workplace Bullying & Harassment. She is also a
consultant, advising employees, companies, and workers’ and employers’ unions; an executive and
scientific coordinator and speaker at several national and international congresses; a teacher
and lecturer in many regular certified courses; and a reviewer in peer-review processes; and she
has professional experience in management, HR, leadership, labour, employment, social security,
civil service, insurance, and data protection law. She has published on labour law; the future of
work, NTIC and GPS at the workplace; platforms, the GDPR, and the R2D; violence and harassment,
psychosocial risks, the digital era, and the pandemic; sexual and moral harassment; absences and
annual paid leave; collective bargaining and labour agreements; just cause of dismissal and breach
of trust; drug addiction and personality rights; worker’s polyvalence; and the qualification of
the employment contract and the legal presumption.
Is There a Need for an EU Catalogue
of Fundamental Digital Rights?
Abstract The topic of this chapter is whether a European Union catalogue of
fundamental digital rights is needed. Departing from a brief overview of the impact
of artificial intelligence on fundamental/human rights, the chapter continues with a
succinct explanation of how fundamental/human rights have emerged and of the
current state of the art. It then presents some new legal issues derived from the use
of the new technologies and asks which rights could be included in a new catalogue
of fundamental digital rights. Finally, it addresses the advantages and disadvantages
that such a catalogue may bring to the overall system.
1 Introduction
The purpose of this chapter is to discuss whether there is a need for a European
Union (EU) catalogue of fundamental digital rights.1 An affirmative response to this
question presupposes that the existing catalogue is inadequate to protect individuals
and enterprises in the digital era.
As is well-known, the impact of Artificial Intelligence (AI) on human rights is one
of the most crucial factors that define the period in which we live. AI involves a
world of opportunities as well as a world of risks. If, on the one hand, it may increase
1 In this chapter I use the terms fundamental rights when I refer to domestic and European Union law and the terms human rights when I address the Convention or international law.
A. M. G. Martins (✉)
University of Lisbon, Faculty of Law, Lisbon, Portugal
European Court of Human Rights, Strasbourg, France
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 297
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_15
298 A. M. G. Martins
The legal culture of each lawyer will inevitably determine their response to the need
for new legal instruments. Lawyers from legal systems with a common law tradition,
which is largely based on the stare decisis principle, may tend to rely more on court
decisions than on formal statute. By contrast, lawyers like me, who come from legal
systems founded on the civil law tradition, with its formal anchor in statute, are
usually more receptive to new formal legal instruments. Accordingly, it is hardly
surprising that there is a strong current in favour of a new EU fundamental digital
rights catalogue, for instance, in Germany.
In this chapter, I will consciously avoid taking a firm stand either for or against any
trend. It would be inappropriate for a judge of the European Court of Human Rights
(ECtHR) to either support or criticise the approval of a new EU catalogue of
fundamental digital rights, which is primarily a political decision that should be left
to legislative bodies, whether at the domestic or at the EU level.
2 See for further developments Kettemann and Benedek (2019), pp. 58 ff.
3 For further developments see Özgür Heval Çinar (2021), pp. 25, 1, 26 ff.; Jonathon Penney (2019), pp. 44 ff.
4 See, for instance, Human Rights Council and General Assembly Resolutions.
5 For an overview of the doctrine on this matter see Miloon Kothari (2019), pp. 15 ff.
6 Jonathan Andrew (2021), p. 2.
7 For further developments see Hin-Yan Liu (2019), pp. 75 ff.
In my view, judges are supposed to intervene at a later stage, i.e., when they
are called to apply the rules promulgated by parliaments or approved by govern-
ments, as well as instruments approved by European or international institutions.
That said, it is worth noting that judges are particularly well equipped to engage in
this debate, especially those who daily interpret and apply fundamental/human
rights, in that they are frequently confronted with the insufficiencies of the legal
frameworks and their lacunae when applied to concrete, real-life situations.8
From the foregoing it may be inferred that I will purposely express my views in a
rather self-restrained manner and strictly in my academic, as opposed to judicial,
capacity.
1.3 Outline
In the first section, entitled ‘digital rights in Europe’, I begin with a brief overview of
the impact of AI on fundamental/human rights, followed by a brief explanation of
how fundamental/human digital rights have emerged and of the current state of
the art.
In the second section, entitled ‘the need for a specific fundamental/human digital
rights regulation’, departing from the presentation of some new legal issues derived
from the use of the new technologies, I proceed by asking which rights could be
included in a new catalogue of fundamental digital rights. Subsequently, I elaborate
on the advantages and disadvantages that such a catalogue may bring to the overall
system.
The final section presents the conclusions to be derived from this chapter.
8 For an overview of the courts’ role in the digital sphere see Lorna McGregor (2020), pp. 183 ff.
9 https://www.coe.int/en/web/commissioner/thematic-work/artificial-intelligence.
At the outset, the use of AI was almost exclusively studied from the perspective of
the right to privacy; over the years, however, the rapid and far-reaching evolution of
novel technologies has shown that it impacts almost all fundamental/human rights.
Currently, it is rather clear that the new technologies have a huge impact on
several fundamental/human rights.10 To clarify this point, I would just draw the
reader’s attention to certain examples. Technologies such as wiretapping, forensic
DNA research, and camera surveillance may contribute to more security, but they
may also have a profound and negative impact on privacy. Furthermore, new
technologies such as social media and deepfakes may contribute to the freedom of
expression, but they can at the same time facilitate fake news and hate speech.11
The extent to which sophisticated data analysis interferes with an individual’s
privacy, or the extent to which the use of risk profiling is potentially discriminatory
against particular population groups, are also issues that have to be taken into
consideration.
With the purpose of combating specific content online (incitement to violence
or hate speech, incitement to terrorism, drug use, prostitution, etc.), many internet
intermediaries deploy automated moderation tools. Although in some cases this
automated moderation may contribute to an increase in human rights protection,
in other cases, due to the complexity of various discourses, the use of automated
moderation tools may lead to the removal of legitimate content, which itself constitutes
a violation of freedom of expression.12 Apart from the negative impact that the use of
automated moderation may have on the freedom of expression, these mecha-
nisms may simultaneously jeopardise the democratic system and the rule of law.
The novel technologies may also raise issues regarding the right to a fair trial and,
above all, the right of access to a court. For instance, the use of algorithms by
the courts may have a positive impact on the reduction of backlogs and on improving
the efficiency of criminal investigation. However, the use of new technologies by the
courts is not always advantageous; it also implies many dangers. In fact, the use
of algorithms by courts in the legal analysis may lead to outcomes that are unantici-
pated by either the parties or the judges, and that can hardly be justified by a judge.13
The use of digital evidence similarly poses numerous challenges to the courts.14
10 See Fundamental Rights in the Digital Age, Report of the Conseil d’État, France, available on https://www.conseil-etat.fr/Media/actualites/documents/reprise-_contenus/etudes-annuelles/fundamental-rights-in-the-digital-age.pdf.
11 See Bart Custers (2002), p. 4.
12 For further developments see Jukka Viljanen (2019), pp. 225 ff.; Bernard and Pejchal (2021), pp. 167 ff.; Guido Keel, Hate Speech and Journalism (2021), pp. 185 ff.
13 On the use of algorithms by courts see, among others, Algorithms and Human rights—Study on the human rights dimensions of automated data processing techniques and possible regulatory implications, 10 ff., available on https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html.
14 Especially on this issue see Salvatore di Cerbo (2016), pp. 23 ff.
Furthermore, during the pandemic crisis, the courts had to adapt their practices to
the restrictions on freedom of movement imposed by almost all States globally.
Hearings by videoconference, limitations on contact between lawyers and
prisoners or arrested individuals, and restrictions imposed on hearings, which
are even more problematic in criminal procedure, became common in many
countries. During the pandemic crisis many courts had to cope with these issues and,
consequently, they became more aware of the problems that the use of new tech-
nologies by the courts may raise.
Another fundamental right that can be hugely impacted by the new technologies
is the freedom of elections and political participation in liberal democracies, which
has a direct effect on the quality of the democratic process. The increase in the
storage and exploitation of data with the aim of influencing citizens’ choices in
elections may result in opinion manipulation and adversely affect the freedom of
elections.15
I could continue to elaborate on the impact of AI on human rights, but I submit
that the given examples are sufficiently illustrative.
To sum up, the increasing use of digital technologies by governments and
companies raises numerous questions regarding the regulation of these new tech-
nologies, particularly as regards which rights and legal protections citizens have or
are entitled to.
The aforementioned issues are not novel. In fact, the impact of the new technologies
on fundamental rights has been discussed in the USA since the mid-1960s and in
Europe since the late 1960s and early 1970s. However, those debates were partic-
ularly centred on the threats that computers posed to the right to privacy.
While lawmakers in the USA never adopted comprehensive data protection
laws, relying instead on court decisions, in Europe countries, especially those with a
civil law tradition, adopted laws regulating data protection.
At the international level, the Council of Europe (CoE) and the Organisation for
Economic Co-operation and Development (OECD) sought to coordinate and harmonise
these laws, while the European Union, and previously the European Communities,
also developed a legal framework on data protection.16
Just to cite some emblematic examples, in 1968, the Parliamentary Assembly of
the CoE adopted Recommendation 509 (1968) on human rights and modern
15 For further developments see Jérôme Duberry (2021), pp. 135 ff.; Nula Frei (2021), pp. 151 ff.
16 For further developments see Handbook on European Data Protection Law, 2018 edition, 18 ff., available on https://fra.europa.eu/en/publication/2018/handbook-european-data-protection-law-2018-edition.
scientific and technological developments,17 warning about the challenges that the
development of new technologies could pose for the right to privacy. In 1980, the
OECD adopted its Guidelines on the protection of personal data and transborder
flows of personal data.18 In 1981, the CoE adopted Convention 108, which
contained a set of principles, the so-called “Data Protection Principles”.19
The EU adopted its first comprehensive data protection instrument in 1995, namely
Directive 95/46/EC on the protection of individuals with regard to the
processing of personal data and on the free movement of such data,20 recently
replaced by the General Data Protection Regulation.21 The data protection principles
enshrined in Article 5 of this Regulation (lawfulness, fairness, transparency,
purpose limitation, data minimisation, accuracy, storage limitation, integrity,
confidentiality and accountability)22 are broadly accepted even by States that are
not EU members.
The courts—be it at the domestic level or at the European and international
levels—have played a significant role in adapting the traditional rules and principles
to the new digital era.
In the present chapter it is obviously impossible to elaborate on each and every rule
and principle in force in Europe concerning the use of AI by private and public
entities, as it would imply taking three different legal systems—the domestic, the EU
and the CoE—into consideration.23
In short, the original treaties of the European Communities contained no refer-
ence to fundamental digital rights.24 Although digital issues had been discussed
in the European Communities since the late 1960s, subsequent amendments of the
treaties had not reflected these discussions. The first mention, in primary law, of
data protection was in the so-called Treaty of Lisbon.
17
Available on https://pace.coe.int/en/files/14546/html.
18
Available on https://www.oecd.org/sti/ieconomy/
oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm.
19
Available on https://rm.coe.int/1680078b37.
20
Available on https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31995L0046&
from=EN.
21
Available on https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2016:119:
FULL&from=EN.
22
For further developments see Handbook. . . 115 ff.
23
In some of my former studies I have elaborated on the multilevel protection of fundamental rights
in Europe. See, above all, Ana Guerra Martins (2016), pp. 27 ff.
24
More broadly, they did not provide for any catalogue of fundamental rights.
Is There a Need for an EU Catalogue of Fundamental Digital Rights? 303
25
For an overview of the GDPR see Christopher Doncksey (2020), pp. 47 ff.; Tiina Pajuste (2019),
pp. 303 ff.
26
Available on https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L0680&
from=EN.
Moreover, the case-law of the Court of Justice of the European Union (CJEU) has
clarified the scope and the meaning of the data protection principles and the
fundamental right to personal data protection.27
Finally, certain non-binding instruments should be highlighted, such as the
Tallinn Declaration on eGovernment (2017),28 and the Berlin Declaration on Digital
Society and Value-Based Digital Government (2020),29 which culminated in the
joint European Declaration on Digital Rights and Principles for the Digital Decade
of the EP, the Council and the Commission, proclaimed during the Portuguese
Presidency (2022).30
The CoE is also aware of the importance of data protection in Europe. Hence, it
has addressed the issues raised by AI in several documents, such as the European
Ethical Charter on the use of artificial intelligence in judicial systems,31 the
Guidelines on Artificial Intelligence and Data Protection,32 the Declaration by the
Committee of Ministers on the manipulative capabilities of algorithmic processes33
and the Study on the human rights dimensions of automated data processing
techniques and possible regulatory implications.34
However, the most effective approach to human rights within the CoE remains the
European Convention on Human Rights (ECHR), as interpreted by the ECtHR.
Human rights, such as the freedom of expression35 or the right to privacy,36 have been
interpreted in the context of the digital era.
At the domestic level, individual European States, such as Portugal,37 have
approved a Charter of Fundamental Digital Rights,38 whereas in other States, like
France, despite the debates on this issue, no consensus has been achieved.
27
For an overview of the case-law of the CJEU see Carsten M. Wulff (2019), pp. 163 ff.
28
Available on https://digital-strategy.ec.europa.eu/en/news/ministerial-declaration-egovernment-
tallinn-declaration.
29
Available on https://digital-strategy.ec.europa.eu/en/news/berlin-declaration-digital-society-and-
value-based-digital-government.
30
Available on https://futurium.ec.europa.eu/en/digital-compass/digital-principles/library-video/
lisbon-declaration-digital-democracy-purpose.
31
Available on https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c.
32
Available on https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091
f9d8.
33
Available on https://search.coe.int/cm/pages/result_details.aspx?ObjectId=090000168092dd4b.
34
Available on https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-
human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-impli
cations.html.
35
For an overview of the case-law of the ECtHR on digital matters see Siofra O’Leary (2020),
pp. 93 ff.; Dirk Voorhoof (2019), pp. 13 ff.
36
Özgür Heval Çinar (2021), pp. 26 ff.
37
The adoption of the Charter was anything but consensual.
38
Law No. 27/2021 of 17 May 2021 (Carta Portuguesa de Direitos Humanos na Era Digital),
Portuguese Official Journal, Series I, of 17 May 2021.
Most will accept that the use of new technologies constitutes a significant challenge
to the content of pre-existing fundamental and human rights, and that it may imply
new restrictions and limitations. Moreover, in the digital sphere, the balancing of rights
may generate new principles or a different approach. Lastly, taking the specificities
of AI into account, there are perhaps specific issues that call for fresh regulation.
Thus, one can accept that new technologies may call for new rights, as there are
apparently new issues, resulting from those technologies, for which as yet no rights
exist.39
Legal doctrine has highlighted several digital rights as new fundamental rights,
such as the right to be forgotten, the right to be offline, the right to internet access, the
right not to know, the right to change your mind, the right to start over on a clean
digital slate, the right to expiry dates for data, the right to know the value of your
data, the right to a clean digital environment, the right to a safe digital environment
and the right to a digital education. That does not mean that the scholars who have drawn
attention to these new rights necessarily argue that a new catalogue is needed, nor do
they advocate that all those rights should be enshrined in such a catalogue.40
In any case, some digital rights are not actually new rights but extensions of
pre-existing ones, such as the right to digital education, which can be seen as an
extension of the right to education, or the right to a clean digital environment, which
can be regarded as an extension of the right to a clean environment, while the right to
a safe digital environment can be assessed as an extension of the right to security.
These fundamental digital rights need not be included in any formal instrument, as
the courts merely need to interpret the traditional rights in the new digital context.
By contrast, other new digital rights, such as the right to be forgotten, the right to
be offline or the right to internet access, could well merit inclusion in the new
catalogue. However, this is not an urgent matter, as they have already acquired a
formal status in individual States and have been accepted by a number of
jurisdictions.
Briefly, the right to be forgotten41 was first established, in May 2014, in a ruling
by the European Court of Justice (ECJ).42 The Court found that European data
protection law gives individuals the right to request that search engines like Google
delist certain results for queries related to a person’s name. In deciding what to delist,
search engines must consider if the information in question is “inaccurate,
39
See Bart Custers (2022), pp. 5 ff. See also Handbook. . . 203 ff.
40
See Bart Custers (2022), pp. 5 ff.
41
For further developments see Pagallo and Durante (2019), pp. 197 ff.
42
CJEU, C-131/12, Google Spain (GC), 13 May 2014, paras. 81–83.
43
For further developments see among many others, Ebert and Wildhaber (2021), pp. 117 ff.
44
For further developments Carolina Aguerre (2019), pp. 31 ff.
45
Dror-Shpoliansky and Shany (2021), pp. 1261 ff.
46
Available on https://digitalcharta.eu/wp-content/uploads/DigitalCharter-English-2019-Final.pdf.
The initial draft of the Charter was published on 30 November 2016, in German,
English, French and Spanish. After a broad public discussion, the initiators
published a second revised version in April 2018.
In 18 articles, the Charter makes proposals on the autonomy and freedom of the
individual, on the use and development of artificial intelligence, on informational
self-determination and data security, and on other important aspects, such as dealing
with hate and harassment on the Internet. In order to better safeguard fundamental
rights in the digital space, new binding guidelines are also intended to apply across
national borders.
The Charter was much criticised, especially due to its intended binding effect.
Consequently, it has not obtained legislative approval from any appropriate
source, but it nevertheless became a recognised topic on the European agenda and was
seen to pave the way for other initiatives. One of these might well be the European
Declaration on Digital Rights and Principles for the Digital Decade.47
47
Available on https://digital-strategy.ec.europa.eu/en/library/declaration-european-digital-rights-
and-principles#Declaration.
48
On this Declaration see Giovanni de Gregorio (2022).
49
For space reasons this chapter cannot go deeper into this issue.
50
Giovanni de Gregorio (2022).
51
Giovanni de Gregorio (2022).
expect to exercise, and what their content is. Secondly, it would also clarify the scope
and extent of individuals’ and enterprises’ protection concerning digital matters.
Thirdly, a catalogue of fundamental digital rights would be expressly
addressed to the digital world, whereas most declarations of rights were not
conceived for the digital era.
Finally, a catalogue of fundamental digital rights might contribute to reinforcing
the democratic legitimacy of the EU, provided it was approved by the EP and the
Council. As it stands at present, the judiciary enjoys a wide margin of discretion in
interpreting the traditional norms that were not created with the digital context
in mind.
In spite of these potential benefits, a formal catalogue of fundamental digital
rights would also entail several disadvantages. The first that occurs to me is the
increased complexity of EU fundamental rights law, which is already, as is
well known, somewhat complicated. More complexity certainly does not speak in
favour of efficiency, as a more complex system generally tends to be less efficient.
As I have already pointed out in former studies,52 the existence of multiple
declarations of rights does not necessarily mean enhanced protection of individuals
or enterprises. On the contrary, in a multilevel legal environment like the European
system, the slightest divergence in wording can lead the courts to contradictory
interpretations. This will certainly undermine the desirable harmony between the
different levels of protection.
Another disadvantage of a new fundamental rights catalogue might well be the
crystallisation of the law. Since AI and new technologies are matters which evolve
quickly and profoundly, the freezing of the law is both particularly likely and particularly harmful.
The territorial scope of application of the new catalogue will also raise new
problems. While the internet is a universal phenomenon, an EU fundamental digital
rights catalogue is, by its nature, applicable solely within EU borders and will
consequently be of rather limited territorial scope.53 However, as we have already
experienced in other EU matters, EU rules and principles are at times applied outside
the EU. That said, it is possible that complex questions of extra-territorial application
of the new catalogue will arise, causing difficulties to the extent that States
are somewhat reluctant to apply in their territory rules produced elsewhere without their
agreement. Every State wishes to uphold its own legal sovereignty.
The material scope of application of such a catalogue may also prove
problematic. Assuming that the philosophy of Article 51 of the CFREU is transposed
to the new catalogue, it will only be “addressed to the institutions, bodies, offices and
agencies of the Union with due regard for the principle of subsidiarity and to the
Member States only when they are implementing Union law”. That means many
issues will remain outside its substantive scope of application.
52
See, above all, Ana Guerra Martins (2016), pp. 27 ff.
53
On the challenges that the internet poses to the principle of territoriality see Bertrand de la
Chapelle (2020), pp. 123 et seq.
In the event that the rights catalogue is conceived as binding, then
several additional problems may emerge concerning the relationship between the
old and general CFREU and the new and specialised catalogue. The potential
competition between the two catalogues is undesirable, as it risks diminishing the
protection of individuals and enterprises.
I could continue to elaborate on the advantages and disadvantages of a new
catalogue of fundamental digital rights, but in my view the aforementioned are
sufficient to establish that the path to an EU Charter of Fundamental Digital Rights
has to be well balanced, and that it is a long process still at its outset.
4 Conclusion
References
Aguerre C (2019) Right of access to the internet – global approaches. In: Susi M (ed) Human rights,
digital society and the law, a research companion. Routledge, London, pp 31–43
Andrew J (2021) Introduction. In: Andrew J, Bernard F (eds) Human rights responsibilities in the
digital age. Hart, Oxford, pp 1–18
Bernard F, Pejchal V (2021) The European approach to governing harmful speech online. In:
Andrew J, Bernard F (eds) Human rights responsibilities in the digital age. Hart, Oxford, pp
167–185
de la Chapelle B (2020) Territoriality and the cross-border internet: three exemplary challenges. In:
O’Boyle M (ed) Human rights challenges in the digital age: judicial perspectives. Council of
Europe Publishing, Strasbourg, pp 123–138
Çinar ÖH (2021) The current case law of the European Court of Human Rights on privacy:
challenges in the digital age. Int J Human Rights 25:26–51
Committee of Experts on Internet Intermediaries (2018) Algorithms and Human rights - Study on
the human rights dimensions of automated data processing techniques and possible regulatory
implications. Available on https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-
study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possi
ble-regulatory-implications.html. Accessed 13 Oct 2022
Custers B (2022) New digital rights: imagining additional fundamental rights for the digital era.
Comput Law Secur Rev 44:105636
de Gregorio G (2022) The Declaration on European Digital Rights and Principles: A First Analysis
from Digital Constitutionalism. Available on https://digi-con.org/the-declaration-on-european-
digital-rights-and-principles-a-first-analysis-from-digital-constitutionalism/. Accessed
13 Oct 2022
di Cerbo S (2016) Digital evidence changing the paradigm of human rights protection. Wolf Legal
Publishers, Oisterwijk
Doncksey C (2020) The EU approach to the protection of rights in the digital environment: today and
tomorrow – State obligations and responsibilities of private parties – GDPR rules on data
protection, and what to expect from the upcoming ePrivacy Regulation. In: O’Boyle M
(ed) Human rights challenges in the digital age: judicial perspectives. Council of Europe
Publishing, Strasbourg, pp 47–78
Dror-Shpoliansky D, Shany Y (2021) It’s the end of the (offline) world as we know it: from human
rights to digital human rights – a proposed typology. Eur J Int Law 32:1249–1282
Duberry J (2021) Freedom to think and to hold a political opinion: digital threats to political
participation in liberal democracies. In: Andrew J, Bernard F (eds) Human rights responsibilities
in the digital age. Hart, Oxford, pp 135–150
Ebert I, Wildhaber I (2021) Privacy in the workplace: a human rights due diligence approach. In:
Andrew J, Bernard F (eds) Human rights responsibilities in the digital age. Hart, Oxford, pp
117–134
European Union Agency for Fundamental Rights and Council of Europe (2018) Handbook on
European Data Protection Law. Available on https://fra.europa.eu/en/publication/2018/
handbook-european-data-protection-law-2018-edition. Accessed 13 Oct 2022
Frei N (2021) Is there a human rights obligation to protect democratic discourse in cyberspace? In:
Andrew J, Bernard F (eds) Human rights responsibilities in the digital age. Hart, Oxford, pp
151–166
Keel G (2021) Hate speech and journalism, challenges and strategies. In: Andrew J, Bernard F (eds)
Human rights responsibilities in the digital age. Hart, Oxford, pp 185–204
Kettemann MC, Benedek W (2019) Freedom of expression online. In: Susi M (ed) Human rights,
digital society and the law, a research companion. Routledge, London, pp 58–74
Kothari M (2019) The sameness of human rights online and offline. In: Susi M (ed) Human rights,
digital society and the law, a research companion. Routledge, London, pp 15–30
Liu HY (2019) The digital disruption of human rights foundations. In: Susi M (ed) Human rights,
digital society and the law, a research companion. Routledge, London, pp 75–86
Martins AG (2016) Opinion 2/13 of the Court of Justice in the context of multilevel protection of
fundamental rights and multilevel constitutionalism. Zeitschrift für öffentliches Recht/J Public
Law 71:27–57
McGregor L (2020) The role of courts addressing the human rights implications of new and
emerging technologies. In: O’Boyle M (ed) Human rights challenges in the digital age: judicial
perspectives. Council of Europe Publishing, Strasbourg, pp 183–200
O’Leary S (2020) Data protection and privacy questions in the digital age: whither jurisdiction and
the ECHR? In: O’Boyle M (ed) Human rights challenges in the digital age: judicial perspectives.
Council of Europe Publishing, Strasbourg, pp 93–122
Pagallo U, Durante M (2019) Human rights and the right to be forgotten. In: Susi M (ed) Human
rights, digital society and the law, a research companion. Routledge, London, pp 197–208
Pajuste T (2019) The protection of personal data in a digital society – the role of the GDPR. In: Susi
M (ed) Human rights, digital society and the law, a research companion. Routledge, London, pp
303–316
Penney J (2019) The right to privacy – the end of privacy fatalism. In: Susi M (ed) Human rights,
digital society and the law, a research companion. Routledge, London, pp 44–57
Viljanen J (2019) Combating hate speech online. In: Susi M (ed) Human rights, digital society and
the law, a research companion. Routledge, London, pp 225–238
Voorhoof D (2019) Same standards, different tools? The ECtHR and the protection and limitations
of freedom of expression in the digital environment. In: O’Boyle M (ed) Human rights
challenges in the digital age: judicial perspectives. Council of Europe Publishing, Strasbourg,
pp 11–46
Wulff CM (2019) The jurisprudence of the ECJ and the ECtHR regarding data protection in the
internet. In: Susi M (ed) Human rights, digital society and the law, a research companion.
Routledge, London, pp 163–177
Ana Maria Guerra Martins is Full Professor at the Faculty of Law – University of Lisbon and
Judge at the European Court of Human Rights.
Countering Terrorism Propaganda Online
Through TERREG and DSA: A Battlefield
or a Breath of Hope for Our Fundamental
Human Rights?
Eugénie Coche
Abstract This chapter explores the freedom of expression risks entailed in recent
EU regulatory initiatives aimed at countering terrorism propaganda online, through
the Regulation on addressing the dissemination of terrorist content online
(‘TERREG’) and the Digital Services Act (‘DSA’). After analysis of the
complementary nature of these instruments, the rule of law risks under lex specialis,
namely TERREG, are exposed. As argued, these risks result from, inter alia, hosting service
providers’ likely use of automated tools and the lack of judicial oversight, which is
further exacerbated by the cross-border nature of removal orders and the lack of
definitional clarity surrounding what constitutes ‘terrorist content’. By
contextualizing these risks and mapping out the regulatory initiatives that preceded these
Regulations in the field of terrorism propaganda online, important improvements
in relation to privatised enforcement risks are highlighted.
1 Introduction
‘With today’s agreement we ensure that platforms are held accountable for the risks
their services can pose to society and citizens’.1 These were the words of
Executive Vice-President for a Europe Fit for the Digital Age, Margrethe
Vestager, as the political agreement on the Digital Services Act (DSA) was reached
This chapter builds on Coche’s 2018 article, published in Internet Policy Review, entitled
“Privatised enforcement in a world confronted with terrorism propaganda online”.
Coche (2018).
1
European Commission (2022) Digital Services Act: Commission welcomes political agreement on
rules ensuring a safe and accountable online environment. https://ec.europa.eu Accessed
8 August 2022.
E. Coche (✉)
Amsterdam Business School, University of Amsterdam, Amsterdam, The Netherlands
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 313
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_16
in April 2022. With its formal adoption by the European Parliament three months
later, as well as its expected adoption by the Council in September 2022, the Act will
apply from 1 January 2024 onwards or fifteen months after its entry into force,
whichever is the later date. Unlike the Digital Markets Act, which mainly focuses on
reducing economic damage caused by the tech industry through the imposition of
new ‘gatekeeper’ obligations, the DSA is primarily aimed at mitigating societal risks
stemming from illegal internet use, such as the proliferation of hate speech or the
sale of illegal goods online.
Terrorist content is also one of the types of materials covered by the DSA, giving
rise to certain obligations thereunder. Importantly, when terrorist content is at stake,
this Act will only take precedence when lex specialis laws, in this case the Terrorist
Content Online Regulation (TERREG), fall short in offering guidance on the
matter.2 Indeed, as explained in its memorandum, the Act is to fulfil a merely
complementary role with regard to already existing sector-specific instruments and
builds further on previously issued soft law instruments such as the 2018
Recommendation on Tackling Illegal Content online and the EU Internet Forum with regard
to terrorist content.3 Taking into account that TERREG was, during the DSA
negotiations, still a regulatory proposal but has in the meantime entered into force
and triggered a wave of human rights-centred criticism, it appears important to
explore these criticisms and find out in what ways the DSA may offer some relief.
This chapter therefore aims to shed light on the intersection between
illegal content and terrorist content online and what the countering of such types of
speech under TERREG and the DSA means for internet users’ fundamental human
rights, more particularly vis-à-vis their right to freedom of expression as protected
under Article 10 of the European Convention on Human Rights (ECHR) and Article
11 of the Charter of Fundamental Rights of the European Union. In order to do so,
the first section explores existing regulatory gaps—in terms of material scope—
under TERREG so as to find out in what ways the DSA complements such
instrument in terrorism-related situations. This is followed by an analysis of the
human rights risks accompanying the EU’s ambition to ensure that ‘what is illegal
offline is also illegal online’,4 starting with an overview of various criticisms that
were recently expressed regarding the countering of terrorism propaganda online
through TERREG (Sect. 2). By mapping out the EU’s regulatory efforts preceding
that regulatory instrument and the issues it seeks to address associated with
‘privatised enforcement’, a contextualisation of these risks is given (Sect. 3). Such
contextualisation is accompanied by a brief analysis of certain core provisions of the
DSA, in order to conclude whether both instruments, taken together, may afford a
2
European Commission (2020) Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services Act) and Amending Directive
2000/31/EC6. COM (2020), 825 final, Art. 1 (5) sub (d) jo. Rec. 9.
3
Ibid., explanatory memorandum, p. 4.
4
This statement was made by European Commission President Ursula von der Leyen when
announcing the political agreement reached on 23 April 2022. https://ec.europa.eu Accessed
8 August 2022.
breath of hope for our fundamental human rights in the EU’s fight against terrorism
online. This chapter builds further on a 2018 article concerned with privatised
enforcement risks in the fight against terrorism propaganda online.5
5
Coche (2018).
6
EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on
addressing the dissemination of terrorist content online.
7
Ibid., Rec. 4.
8
Ibid., Rec. 5.
9
Ibid., Art. 12 jo. 3 & Art. 5.
10
Ibid., Art 5.
11
EU Directive 2017/541 of the European Parliament and of the Council of 15 March 2017 on
combatting terrorism and replacing Council Framework Decision 2002/475/JHA and amending
Council Decision 2005/671/JHA.
12
EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on
addressing the dissemination of terrorist content online, Art. 3 (1) sub. a-j.
13
Ibid.
14
Ibid., Art. 1(3) jo. Rec. 12.
15
Handyside v UK (App no 5393/72) ECHR, 1976, para. 49.
16
EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on
addressing the dissemination of terrorist content online, Rec. 11.
17
European Commission (2020) Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services Act) and Amending Directive
2000/31/EC6. COM (2020), 825 final, Art. 1(5) sub. (d) jo. Rec. 9.
18
Ibid., explanatory memorandum, p. 4.
19
EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on
addressing the dissemination of terrorist content online, Rec. 12.
20
Ibid., Rec. 14.
21
European Commission (2020) Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services Act) and Amending Directive
2000/31/EC6. COM (2020), 825 final, Art. 2 (f).
22
Ibid., Art. 2 (g) jo. Rec. 12.
23
EU Directive 2018/1808 of the European Parliament and of the Council of 14 November 2018
amending Directive 2010/13/EU on the coordination of certain provisions laid down by law,
regulation or administrative action in Member States concerning the provision of audiovisual
media services (Audiovisual Media Services Directive) in view of changing market
realities, Art. 6.
24
Council Framework Decision 2008/913/JHA of 28 November 2008 on combatting certain forms
and expressions of racism and xenophobia by means of criminal law.
25
Ibid., Art. 1 (a).
discriminatory element and also covers material inciting feelings of ‘hatred’, which
in itself would not amount to ‘terrorist content’ under the Terrorism Directive. In its
2017 communication regarding EU progress in building an effective and genuine
Security Union, the close ties between terrorism propaganda and illegal hate speech
were however highlighted as both types of speech were identified as contributing to
radicalisation leading to violent extremism in the form of terrorist acts.26 When
discussing this topic and calling for a multi-faceted response, the European
Commission stressed the need to tackle both types of speech in order to ‘address the root
causes of extremism’, further specifying that (feelings of) discrimination and
xenophobia are ‘factors that can be exploited by recruiters’.27 In terms of case law, this
link was also illustrated by the European Court of Human Rights in the Surek
v. Turkey case, in which feelings of hatred, based on discrimination regarding
race, were identified as capable of causing violent actions including terrorist acts
(para. 62).28 Beyond illegal hate speech, and given the complex and multi-layered
roots of terrorism, no exhaustive list can be provided of the materials
that could contribute to terrorism propaganda. To illustrate, a recent study explored
how disinformation disseminated through social media may contribute to domestic
terrorism.29
In view of the wide range of content that potentially falls within the ambit of
‘terrorism propaganda’, ranging from illegal hate speech to, possibly,
disinformation, one may thus conclude that TERREG would fall short in countering the issue
on its own. This explains the DSA’s complementary nature in the EU’s fight against
terrorism propaganda online, both with regard to the type of content covered and to
the procedural safeguards and obligations present thereunder. Notwithstanding such
complementarity in terms of aims and safeguards, it remains necessary to explore
how far TERREG, when applying as lex specialis to the DSA, is capable of achieving
the EU’s hybrid goals, namely ensuring the removal of terrorist content online whilst
safeguarding at all times individuals’ human rights in the digital sphere.
26
European Commission (2017) Communication from the Commission to the European Parliament,
the European Council and the Council. Eight progress report towards an effective and genuine
Security Union. COM (2017), 354 final.
27
Ibid., p. 3.
28
Surek v Turkey (No 1) (App no 26682/95) ECHR, 1999.
29
Piazza (2022).
TERREG’s very first Recital illustrates its hybrid goal. On the one hand, it is to
prevent hosting services from being used for terrorist purposes so as to strengthen
EU public security and, on the other hand, it is to allow for better protection of the
individuals’ right to freedom of expression online. As further specified, this right is
broadly defined and includes ‘the freedom to receive and impart information and
ideas in an open and democratic society and the freedom and pluralism of the media’.
30
In Europe, such a right finds protection under Article 10 ECHR and Article 11 of
the Charter of Fundamental Rights of the European Union. Its broad scope flows
from a body of case law, such as Jersild v. Denmark,31 and its essential nature,
without which no democratic society is deemed to exist, has been repeatedly
emphasised by the European Court of Human Rights (ECtHR).32 Importantly, as
would be the case for illegal hate speech and terrorist content, interference with this
right may only be allowed in cases where the three-step test enshrined in Article
10(2) ECHR is met, meaning that the interference (i) is based on a proper legal
basis; (ii) serves one of the legitimate aims listed thereunder; and (iii) is necessary in
a democratic society. Each condition has been further developed and elaborated on
by the ECtHR.33 Despite TERREG’s intention and the introduction of numerous
safeguards to effectively protect such a right and ‘avoid any unintended or erroneous
decision leading to the removal of or disabling of access to content that is not
terrorist content’,34 numerous points of criticism have been raised with regards to
the unintended risks this instrument may entail.
Even before its final adoption in April 2021, warnings were raised about the risks TERREG could entail for internet users’ right to freedom of expression. In a letter addressed to the European Parliament, seventy-eight parties,
30. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Rec. 1.
31. Jersild v Denmark (App no 15890/89) ECHR, 1994; Handyside v UK (App no 5393/72) ECHR, 1976, para 49.
32. Hertel v Switzerland (App no 59/1997/843/1049) ECHR, 1998, para 46; Animal Defenders International v the United Kingdom (App no 48876/08) ECHR, 2013, para. 100.
33. Kruslin v France (App no 11801/85) ECHR, 1990, para. 27-36; Sunday Times v UK (App no 6538/74) ECHR, 1979, para. 45.
34. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Rec. 23.
320 E. Coche
35. Council of Europe (2014), p. 10.
36. Ibid.
37. Access now et al. (2021) Dear Member of the European Parliament. https://dq4n3btxmr8c9.cloudfront.net/files/nS1GUt/MEP_TERREG_Letter_EN.pdf Accessed 8 August 2022.
38. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Art. 3 (7).
39. Ibid., Art. 5 (4).
40. Ibid., Art. 5 (2).
41. Ibid., Art. 5 (3) sub d.
42. EDRI (2021b) European Parliament confirms new online censorship powers. https://edri.org Accessed 8 August 2022.
43. Surek v Turkey (No 1) (App no 26682/95) ECHR, 1999, para. 62; Gokceli v Turkey (App no 27215/95 and 36194/97) ECHR, 2003, para. 38.
44. Allan (2017) Hard questions: Who should decide what is hate speech in an online global community? https://about.fb.com Accessed 8 August 2022.
45. Ibid.
46. Access now et al. (2021) Dear Member of the European Parliament. https://dq4n3btxmr8c9.cloudfront.net Accessed 8 August 2022; Chee (2021) Civil rights groups urge EU lawmakers to rebuff online terrorist content law. https://www.reuters.com Accessed 8 August 2022.
47. Macdonald et al. (2019), pp. 183–197.
48. Ibid., p. 194.
paired with unskilled human reviewers having insufficient dialect expertise, have already led to a significant amount of content being wrongfully removed.49 In light of these risks, it is questionable whether the mere requirement of ‘human oversight and verification’ under TERREG suffices to guarantee respect for the rule of law when automated tools are used.50 Indeed, from a rule of law perspective, human oversight arguably only makes sense if guarantees exist that reviewers are vested with the necessary expertise to assess supposedly illegal content in light of the law. Given that even national courts may fail at this task, it seems risky to place it in the hands of social media companies whose main interest is profit.51 Moreover, the Regulation seems to exacerbate these rule of law risks by suggesting that companies’ ‘terms and conditions’ may take precedence over the law. This is particularly striking in the last paragraph of Article 4, which seems to allow that ‘lawful’ content, which has been wrongfully removed and must therefore be reinstated, may nonetheless still be removed under the companies’ ‘terms and conditions’.52 Legal scholars have frequently warned against terms and conditions taking the lead over the law without any guarantee that such terms are in line with it.53 Arguably, one way to mitigate these risks would be to ensure that the authority involved in issuing removal orders is subject to judicial oversight, which brings us to the second point of criticism.
49. Debre and Akram (2021) Facebook’s language gaps weaken screening of hate, terrorism. https://apnews.com Accessed 8 August 2022.
50. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Art. 5(3) sub d.
51. Solon (2017) Underpaid and overburdened: The life of a Facebook Moderator. https://www.theguardian.com Accessed 8 August 2022.
52. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Art. 4 (7).
53. Kuczerawy (2018).
54. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Art. 13 (1).
55. Access now et al. (2021) Dear Member of the European Parliament. https://dq4n3btxmr8c9.cloudfront.net Accessed 8 August 2022.
Indeed, whenever removal orders are issued, the Regulation specifies that they are to have Union-wide effect, meaning that notified content must be removed (or access to it disabled) throughout the EU. Seen in light of the previous criticism, where (mis)interpretation of ‘terrorist content’ can occur, it has been argued that such cross-border extension may create a race-to-the-bottom risk whereby the most stringent national definitions of what constitutes ‘terrorist content’ are obeyed,
56. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Art. 3(9).
57. Cres-Terminal et al. (2021) Lettre commune appelant à voter contre le règlement de prévention de la diffusion de contenus à caractère terroriste en ligne. https://www.ldh-france.org Accessed 8 August 2022; Conseil Constitutionnel (2020) Décision n° 2020-801 DC du 18 juin 2020. https://www.conseil-constitutionnel.fr Accessed 8 August 2022.
58. Conseil Constitutionnel (2020) Décision n° 2020-801 DC du 18 juin 2020, para. 7.
59. Cres-Terminal et al. (2021) Lettre commune appelant à voter contre le règlement de prévention de la diffusion de contenus à caractère terroriste en ligne. https://www.ldh-france.org Accessed 8 August 2022; EDRI (2021b) European Parliament confirms new online censorship powers. https://edri.org Accessed 8 August 2022.
thereby opening the door for political ‘opportunism’.60 Such opportunism would arise where an authority adopts a definition of ‘terrorist content’ that is not in line with the rule of law, covering, for instance, speech that merely criticises the government. Indeed, what constitutes a ‘threat to the life of a nation’ tends to vary from one country to another and may be applied in an overly broad fashion.61 Although Art 4 TERREG introduces safeguards against removal orders that ‘seriously or manifestly’ infringe the Regulation, these safeguards are mere ‘possibilities’, with no obligation for competent authorities to automatically scrutinise removal orders issued in other Member States.62
Having regard to the aforementioned risks, the next section seeks to contextualise these dangers and to provide insights into why the EU allowed the Regulation to see the light of day in the first place, as well as the reasons for entrusting internet intermediaries to take the lead on such a controversial issue.
The reason for entrusting internet intermediaries with the fight against terrorism propaganda online lies, as highlighted in Recital 5 of the Regulation, in their ‘central role’ and ‘technological means and capabilities’, which give rise to ‘particular societal responsibilities’. As early as 2008, the EU Council highlighted the internet as an important tool for the dissemination of terrorist propaganda.63 In 2014, the Commission acknowledged the limits of traditional law enforcement in countering radicalisation and the need to engage ‘all of society’ in that regard.64 One year later, without yet imposing any obligations on internet intermediaries, it highlighted the urgent need to better align criminal laws with the (mis)use of social media, specifying that the ‘decentralised approach facilitated by a network of accounts on a variety of social media platforms allows for rapid dissemination of terrorist and radical materials through constantly adapting use of
60. Euractiv (2021) EU adopts law giving tech giants one hour to remove terrorist content. https://www.euractiv.com Accessed 8 August 2022.
61. EDRi (2021a) EU terrorist content online regulation could curtail freedom of expression across Europe. https://edri.org Accessed 8 August 2022.
62. Art. 4 (3) uses the word ‘may’ and Art. 4(4) introduces a ‘right’.
63. Council Framework Decision 2008/913/JHA of 28 November 2008 on combatting certain forms and expressions of racism and xenophobia by means of criminal law.
64. European Commission (2014) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Preventing Radicalisation to Terrorism and Violent Extremism: Strengthening the EU’s response. COM (2013), 941 final, para. 8.
65. European Commission (2015b) Proposal for a Directive of the European Parliament and of the Council on combatting Terrorism and replacing Council Framework Decision 2002/475/JHA on combatting terrorism. COM (2015), 625 final, para. 10.
66. EU Directive 2017/541 of the European Parliament and of the Council of 15 March 2017 on combating terrorism and replacing Council Framework Decision 2002/475/JHA and amending Council Decision 2005/671/JHA, Rec. 22.
67. European Commission (2015a) EU Internet forum: bringing together governments, Europol and technology companies to counter terrorist content and hate speech online. https://ec.europa.eu Accessed 8 August 2022.
68. European Commission (2016a) EU internet forum: a major step forward in curbing terrorist content on the internet. https://ec.europa.eu Accessed 8 August 2022.
69. European Commission (2016b) The EU code of conduct on countering illegal hate speech online. https://ec.europa.eu Accessed 8 August 2022.
70. Christchurch call (2019) https://www.christchurchcall.com Accessed 8 August 2022.
71. European Commission (2019) Fighting terrorism online: EU internet forum committed to an EU-wide crisis protocol. https://ec.europa.eu Accessed 8 August 2022.
72. European Commission (2018) Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online. COM (2018), 640 final, explanatory memorandum, para. 3.
73. Ibid., explanatory memorandum, p. 6.
recognizes, it is equally important to counter the human rights risks entailed in the practice of ‘privatised enforcement’.74
As specified in its Recitals, TERREG builds further on voluntary efforts made at the
EU level, more particularly the 2018 Recommendation on tackling illegal content
74. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Rec. 5.
75. Council of Europe (2014), p. 88.
76. Ibid.
77. Angelopoulos et al. (2015), p. 6.
78. Macdonald et al. (2019), p. 185.
79. The EU Code of Conduct on countering illegal hate speech online, the Communication and Recommendation on tackling illegal content online.
80. Coche (2018).
81. See Sect. 2 of this chapter.
82. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Rec. 6.
83. Coche (2018), pp. 10–11.
84. Article 19 (2021) EU Regulation of Notice and Action procedures in the Digital Services Act. https://www.article19.org Accessed 8 August 2022.
85. European Commission (2018) Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online. COM (2018), 640 final, explanatory memorandum, para. 6.
86. Ibid., pt. 3.
87. EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, Art. 21.
important step towards more legal certainty throughout the Union. Its all-encompassing nature in terms of the types of (illegal) content covered means that minimum safeguards must always be in place, functioning as a sort of safety net for the respect of our fundamental human rights online. Without going into the specifics of the safeguards offered by the DSA, as this chapter is primarily concerned with terrorism propaganda meeting the definitional properties of TERREG, one important complementary safeguard must nevertheless be pointed out, namely the updates regarding hosting providers’ liability exemptions.
A core element of the DSA concerns its updated provisions on liability exemptions and notice-and-action procedures, which were previously dealt with under the e-Commerce Directive.88 That Directive established that hosting service providers may be exempted from liability when they ‘expeditiously remove or disable access to illegal content after being notified of such content’s presence’.89 Uncertainties regarding the ‘scope and terms’ of this exemption90 triggered, under previous voluntary frameworks, concerns about private censorship.91 This refers to situations where ‘censorship measures are delegated to private entities’,92 which are likely to arise when doubts exist regarding the legality of notified content. In other words, when unsure about the potential (un)lawfulness of notified content, hosting service providers may be inclined to remove it in order to best secure their exemption from liability, making them a ‘judge in their own cause’.93 Such protective behaviour would add to the previously identified rule of law risks, given the legal uncertainties entailed in such practice. This has led to calls for more legal certainty regarding the scope and terms of the liability exemption, identified by many stakeholders as an
88. EU Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market.
89. Ibid., Art. 14 jo. Rec. 46.
90. European Commission (2012) Commission Staff Working Document, online services, including e-commerce, in the Single Market Accompanying the document: Communication from the Commission to the European Parliament, the Council, The European Economic and Social Committee and the Committee of the Regions, A Coherent framework to boost confidence in the Digital Single Market of e-Commerce and other online services. SEC (2011), 1641 final, pp. 43–46.
91. Coche (2018), p. 7.
92. United Nations General Assembly, Human Rights Council (2011) Report of the Special Rapporteur on the Promotion and Protection of the Right to freedom of Opinion and Expression (A/HRC/17/27), para. 45 jo. 75.
93. Kuczerawy (2015), p. 48.
important factor in the over-removal of content, as well as in the fragmentation of rules within the EU.94
In that regard, TERREG offers limited relief. Whilst Recital 6 stresses that the Regulation is to be interpreted in line with the e-Commerce Directive, Recital 7 specifies that ‘specific measures’ taken by hosting service providers should not preclude them from benefiting from the liability exemption. This specification should be understood in light of EU case law, which requires hosting service providers’ activity to be of a ‘mere technical, automatic and passive nature’ in order for the exemption to apply.95 Having regard to the possibility for hosting providers to make use of automated tools under TERREG, it is unclear how such an interpretation converges with EU jurisprudence on the ‘neutrality’ and passivity requirement. These controversies had already been identified in relation to the ‘proactive measures’ contained in the TERREG proposal (now replaced by ‘specific measures’).96 These criticisms must be understood in light of the ‘Good Samaritan’ paradox, implying that ‘a proactive stance increases the probability that the hosting provider acquires knowledge of the illegal status of the content it hosts and, by extension, its exposure to liability’.97
In that regard, the DSA seems to offer some hope as it harmonizes and maintains
the main features of the e-Commerce Directive, and at the same time introduces
binding notice-and-action mechanisms. Regarding the former, Article 5 governs the liability exemption for hosting service providers and precludes liability where (1) the provider has no knowledge of illegal content being present on its service and (2) it acts expeditiously to remove or disable access to such content. Article 7 maintains the prohibition on ‘general monitoring or active fact-finding obligations’, entailing that service providers may neither be required to monitor all content present on their services, as would be the case with general filtering obligations,98 nor to actively seek facts indicating illegal activity. In addition, Article 14 harmonizes the
notice-and-action mechanisms by requiring hosting service providers to put in place
‘easy to access, user-friendly mechanisms’ to allow any individual or entity to notify
illegal content. Paragraph 3 of that article further specifies when ‘actual knowledge’
will be deemed to exist. In light of these clarifications and although criticisms were
raised regarding the privatised enforcement risks still present under the Act as well as
94. European Commission (2012) Commission Staff Working Document, online services, including e-commerce, in the Single Market Accompanying the document: Communication from the Commission to the European Parliament, the Council, The European Economic and Social Committee and the Committee of the Regions, A Coherent framework to boost confidence in the Digital Single Market of e-Commerce and other online services. SEC (2011), 1641 final, pp. 43–46.
95. Judgment of 12 July 2011, L’Oréal SA and Others v eBay International AG and Others, C-324/09, EU:C:2011:474, para. 111–116.
96. Kuczerawy (2018).
97. Van Hoboken et al. (2018), p. 42.
98. Judgment of 16 February 2012, SABAM v Netlog, C-360/10, EU:C:2012:85.
5 Conclusion
The present chapter has exposed the EU’s current state of affairs regarding the countering of terrorism propaganda online and explored the ways in which the new obligations imposed through TERREG and the DSA challenge our right to freedom of expression.
As put forward, both instruments give rise to significant rule of law risks.
Regarding TERREG, such risks result from, inter alia, hosting service providers’
likely use of automated tools and the lack of judicial oversight, which is further
exacerbated by the cross-border nature of removal orders and the lack of definitional
clarity surrounding what amounts to ‘terrorist content’. The need for more detailed definitions must be seen in light of the DSA’s complementary nature and the fine line that exists between ‘illegal hate speech’ and ‘terrorist content’, whereby under TERREG only the latter type of speech requires removal within one hour.
However, by contextualizing these risks and mapping out the regulatory initiatives that preceded these Regulations, the safeguards offered by the new instruments become more apparent and offer a glimmer of hope for our fundamental human rights. In that regard, significant improvements have been made, notably the instruments’ binding and directly applicable nature and the harmonization of liability exemptions under the DSA.
Despite these steps forward in the EU’s endeavours to mitigate the risks stemming from privatised enforcement, important steps still need to be taken to ensure compliance with our fundamental human rights.
References
Access now et al. (2021) Dear Member of the European Parliament. https://dq4n3btxmr8c9.
cloudfront.net. Accessed 8 Aug 2022
Allan R (2017) Hard questions: who should decide what is hate speech in an online global
community? https://about.fb.com. Accessed 8 Aug 2022
Angelopoulos C, Brody A, Hins W et al (2015) Study of fundamental rights limitations for online
enforcement through self-regulation. Institute for Information Law (IViR), Amsterdam.
https://pure.uva.nl. Accessed 8 Aug 2022
99. Malone (2022) Will the EU’s Digital Services Act reduce online extremism? https://www.justsecurity.org Accessed 8 August 2022.
Animal Defenders International v the United Kingdom (App no 48876/08) ECHR, 2013
Article 19 (2021) EU Regulation of Notice and Action procedures in the Digital Services Act.
https://www.article19.org. Accessed 8 Aug 2022
Chee FY (2021) Civil rights groups urge EU lawmakers to rebuff online terrorist content law.
https://www.reuters.com. Accessed 8 Aug 2022
Christchurch Call (2019). https://www.christchurchcall.com. Accessed 8 Aug 2022
Coche E (2018) Privatised enforcement and the right to freedom of expression in a world confronted
with terrorism propaganda online. Internet Policy Rev 7(4). https://doi.org/10.14763/2018.4.
1382
Conseil Constitutionnel (2020) Décision n° 2020-801 DC du 18 juin 2020. https://www.conseil-
constitutionnel.fr. Accessed 8 Aug 2022
Council Framework Decision 2008/913/JHA of 28 November 2008 on combatting certain forms
and expressions of racism and xenophobia by means of criminal law
Council of Europe (2014) The rule of law on the internet and in the wider digital world (Issue
Paper). Strasbourg, Council of Europe
Cres-Terminal et al (2021) Lettre commune appelant à voter contre le règlement de prévention de la
diffusion de contenus à caractère terroriste en ligne. https://www.ldh-france.org. Accessed
8 Aug 2022
Debre and Akram (2021) Facebook’s language gaps weaken screening of hate, terrorism.
https://apnews.com. Accessed 8 Aug 2022
EDRi (2021a) EU terrorist content online regulation could curtail freedom of expression across
Europe. https://edri.org. Accessed 8 Aug 2022
EDRI (2021b) European Parliament confirms new online censorship powers. https://edri.org.
Accessed 8 Aug 2022
EU Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain
legal aspects of information society services, in particular electronic commerce, in the Internal
Market
EU Directive 2017/541 of the European Parliament and of the Council of 15 March 2017 on
combatting terrorism and replacing Council Framework Decision 2002/475/JHA and amending
Council Decision 2005/671/JHA
EU Directive 2018/1808 of the European Parliament and of the Council of 14 November 2018
amending Directive 2010/13/EU on the coordination of certain provisions laid down by law,
regulation or administrative action in Member States concerning the provision of audiovisual
media services directive (Audiovisual Media Services Directive) in view of changing market
realities
EU Regulation 2021/784 of the European Parliament and of the Council of 29 April 2021 on
addressing the dissemination of terrorist content online
Euractiv (2021) EU adopts law giving tech giants one hour to remove terrorist content. https://www.
euractiv.com. Accessed 8 Aug 2022
European Commission (2012) Commission Staff Working Document, online services, including
e-commerce, in the Single Market Accompanying the document: Communication from the
Commission to the European Parliament, the Council, The European Economic and Social
Committee and the Committee of the Regions, A Coherent framework to boost confidence in the
Digital Single Market of e-Commerce and other online services. SEC (2011), 1641 final
European Commission (2014) Communication from the Commission to the European Parliament,
the Council, the European Economic and Social Committee and the Committee of the Regions.
Preventing Radicalisation to Terrorism and Violent Extremism: Strengthening the EU’s
response. COM (2013), 941 final
European Commission (2015a) EU Internet forum: bringing together governments, Europol and
technology companies to counter terrorist content and hate speech online. https://ec.europa.eu.
Accessed 8 Aug 2022
European Commission (2015b) Proposal for a Directive of the European Parliament and of the
Council on combatting Terrorism and replacing Council Framework Decision 2002/475/JHA
on combatting terrorism. COM (2015), 625 final
European Commission (2016a) EU internet forum: a major step forward in curbing terrorist content
on the internet. https://ec.europa.eu. Accessed 8 Aug 2022
European Commission (2016b) The EU code of conduct on countering illegal hate speech online.
https://ec.europa.eu. Accessed 8 Aug 2022
European Commission (2017) Communication from the Commission to the European Parliament,
the European Council and the Council. Eighth progress report towards an effective and genuine
Security Union. COM (2017), 354 final
European Commission (2018) Proposal for a Regulation of the European Parliament and of the
Council on preventing the dissemination of terrorist content online. COM (2018), 640 final
European Commission (2019) Fighting terrorism online: EU internet forum committed to an
EU-wide crisis protocol https://ec.europa.eu. Accessed 8 Aug 2022
European Commission (2020) Proposal for a Regulation of the European Parliament and of the
Council on a Single Market for Digital Services (Digital Services Act) and Amending Directive
2000/31/EC6. COM (2020), 825 final
European Commission (2022) Digital Services Act: Commission welcomes political agreement on
rules ensuring a safe and accountable online environment. https://ec.europa.eu. Accessed
8 Aug 2022
Gokceli v Turkey (App no 27215/95 and 36194/97) ECHR, 2003
Handyside v UK (App no 5393/72) ECHR, 1976
Hertel v Switzerland (App no 59/1997/843/1049) ECHR, 1998
Jersild v Denmark (App no 15890/89) ECHR 1994
Judgment of 12 July 2011, L’Oréal SA and Others v eBay International AG and Others, C-
324/09, EU:C:2011:474
Judgment of 16 February 2012, SABAM v Netlog, C-360/10, EU:C:2012:85
Kruslin v France (App no 11801/85) ECHR, 1990
Kuczerawy A (2015) Intermediary liability & freedom of expression: recent developments in the
EU notice & action initiative. Comput Law Secur Rev 31(1):46–56. https://doi.org/10.1016/j.
clsr.2014.11.004
Kuczerawy A (2018) The proposed Regulation on preventing the dissemination of terrorist content
online: safeguards and risks for freedom of expression. https://limo.libis.be. Accessed
8 Aug 2022
Macdonald S, Correia SG, Watkin AL (2019) Regulating terrorist content on social media:
automation and the rule of law. Int J Law Context 15:183–197. https://doi.org/10.1017/
S1744552319000119
Malone (2022) Will the EU’s Digital Services Act reduce online extremism? https://www.
justsecurity.org. Accessed 8 Aug 2022
Piazza JA (2022) Fake News: the effects of social media disinformation on domestic terrorism. Dyn
Asymmetric Confl Pathways Toward Terrorism Genocide 15(1):55–77. https://doi.org/10.1080/
17467586.2021.1895263
Solon (2017) Underpaid and overburdened: the life of a Facebook Moderator. https://www.
theguardian.com. Accessed 8 Aug 2022
Sunday Times v UK (App no 6538/74) ECHR, 1979
Surek v Turkey (No 1) (App no 26682/95) ECHR, 1999
United Nations General Assembly, Human Rights Council (2011) Report of the Special Rapporteur
on the Promotion and Protection of the Right to freedom of Opinion and Expression (A/HRC/
17/27)
Van Hoboken J, Quintais PJ, Poort J et al (2018) Hosting intermediary services and illegal content
online: An analysis of the scope of Article 14 ECD in light of developments in the online service
landscape. https://www.ivir.nl. Accessed 8 Aug 2022
Eugénie Coche holds a cum laude research master’s degree from the Institute for Information Law
(IViR) and is currently a PhD student at the Amsterdam Business School, University of Amsterdam,
the Netherlands.
AI and Fundamental Rights: The People,
the Conversations, and the Governance
Challenges
Roger Brownsword
Abstract This chapter, having sketched four conversations in which the people of
Europe engage with new technologies (such as AI and machine learning), then
identifies three key talking points. First, what should we make of the conspicuous
European concern that applications of AI should be ‘human-centric’? The interpretation of this concept is considered, as is the status of a human interest in such applications. Secondly, what do we understand by ‘catastrophic’ applications of AI
and how do we guard against them? Connecting catastrophe to the compromising of
the generic conditions for human existence and agency, the importance of protecting
and maintaining the global commons is emphasized. Thirdly, how does the Rule of
Law fit into the governance of, and by, AI? It is proposed that the scope of the Rule
of Law should be extended in two ways, one by applying it beyond governance by
rules to governance by technologies and technical measures, and the other by
treating it as applicable to both public and private governance.
1 Introduction
If we were to sketch the way that the people of Europe, the politicians and their publics, engage with new technologies (such as robotics, AI, machine learning and so on), we would see that there is much to discuss and decide, that there are many conversations in many places, and that there is a great deal of ‘noise’. For the purposes of this chapter, we will pick out four salient conversations and then ask how three conversational talking points fit into the sketch.
The four salient conversations are: (i) unconstrained prudential and moral con-
versations in the public square; (ii) regulatory conversations (as in legislative
assemblies); (iii) traditional legal conversations in the courts; and (iv) high ground
conversations about fundamental rights.
R. Brownsword (✉)
King’s College London and at Bournemouth University, Poole, UK
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 335
D. Moura Vicente et al. (eds.), The Legal Challenges of the Fourth Industrial
Revolution, Law, Governance and Technology Series 57,
https://doi.org/10.1007/978-3-031-40516-7_17
Our sketch features the following four particular, and distinctive, conversations:
(i) conversations in the public square; (ii) regulatory conversations; (iii) conversa-
tions about the application of traditional legal principles; and (iv) high ground
conversations about fundamental rights.
1 Abbott (2020).
2 Chesterman (2021).
3 See, e.g., Wachter et al. (2017); Kroll et al. (2017); Deakin and Markou (2020).
AI and Fundamental Rights: The People, the Conversations, and. . . 337
conflicting views in the public square. So, typically, those who are speaking in
support of the technologies will emphasise their prospective benefits, particularly
benefits for human health and well-being, or their efficiency and economic value;
those who are against will appeal to a broad range of preferences, risks, and values.
This is all somewhat messy but this is democracy in action. It is inclusive and all
voices are heard. After a while, the principal considerations, for and against, will
have been articulated and it will now be for regulators to find an acceptable legal
position (such as a permission subject to some qualifications).
For regulators, new technologies present many challenges.4 However, the central
regulatory aim is to find a sweet spot that neither over-regulates (and risks stifling
innovation) nor under-regulates (and exposes citizens to unacceptable risks).5 Where
there is a plurality of views, whatever position regulators come up with, it will not
satisfy everyone. For those who are discontented with the outcome, there is only cold comfort: democracy gives no guarantee that our favoured view will prevail; it only gives us the chance to make our case.6
Sometimes, before the regulatory framework has been put in place, there will already
be cases coming to court in which a claimant alleges that some recognized legal
harm has been caused by the latest technology and that there should be a remedy in
accordance with settled legal principles (particularly the principles of tort and
property law).7 Courts and judges do operate with some constraints (for example,
not setting new policies for the community) but we should not underestimate the
flexibility of traditional legal principles.8
Famously, Samuel Warren and Louis Brandeis argued that a right to privacy
should be recognised.9 The question, in their own words, was ‘whether the existing
law affords a principle which can be properly invoked to protect the privacy of the
individual; and, if it does, what the nature and extent of such protection is’; and, their
response was that, thanks to ‘the beautiful capacity for growth which characterizes
4 Brownsword (2008); Brownsword and Goodwin (2012).
5 Brownsword (2019a).
6 Brownsword (2010).
7 See, e.g., Fox (2019).
8 Fairfield (2021).
9 Warren and Brandeis (1890), p. 193.
the common law’, such a principle could be found and, moreover, it could be relied
on ‘to afford the requisite protection, without the interposition of the legislature.’
The reason why the question needed to be asked was (as the authors judged it) the
unacceptable use of ‘mechanical devices’ associated with the camera and photogra-
phy which had led to the ‘unauthorized circulation of portraits of private persons.’ In
some contexts, it was not practical to take photographs without the consent of the
parties who would sit for the photographer; here, the interest of the individual in their
privacy could be protected either by contract law or by the law of confidentiality.
However, because ‘the latest advances in photographic art [had] rendered it possible
to take pictures surreptitiously’, the answer needed to be based in tort law. Even
though the unauthorised taking and circulation of photographs did not cause any
injury or damage to a person or their property, Warren and Brandeis argued that a
right to privacy was already implicit in the general principles of law relating to the
protection of the interest in personality and the right to be let alone. The precise way
in which the jurisprudence of the privacy right was to be developed would be left to
the courts,10 but the enduring message of the article is that the common law is
flexible and has the capacity to be applied in ways that respond appropriately to
technological applications that are unacceptable.
Our fourth conversation could take place anywhere, in the public square, in the
political arena, in the courts, in think-tanks, NGOs, the academies, and so
on. However, wherever it does take place, it takes the high ground against all
other conversations. In Europe, as in any community that ‘takes rights seriously’,
if a recognized fundamental right is engaged, then this means that it now sets a red
line for all other conversations. For example, if a fundamental right to privacy or data
protection is engaged, then the to and fro of public square conversations about the
governance of social media sites is overtaken by the right. Balancing and accom-
modating competing interests is no longer the focal issue. Similarly, if human
dignity is engaged by some responses to AI, then this constrains the positions that
are available to regulators. Fundamental rights, as Ronald Dworkin evocatively put
it, operate as ‘trumps’ in these debates.11
10 Prosser (1960).
11 Dworkin (1978).
3 Human-Centric Applications of AI
Given these conversations, we can turn to our first talking point, namely: What do
we make of the commonly expressed principle that applications of AI should always,
and only, be human-centric? For example, what do we make of the European
Commission’s explanation of the purpose of the proposed Regulation on AI?12
According to the Commission, the Regulation’s purpose is to deliver an ecosystem
of trust by proposing a legal framework for trustworthy AI. The proposal is based on
EU values and fundamental rights and aims to give people and other users the
confidence to embrace AI-based solutions, while encouraging businesses to develop
them. AI should be a tool for people and be a force for good in society with the
ultimate aim of increasing human well-being. Rules for AI available in the Union
market or otherwise affecting people in the Union should therefore be human centric,
so that people can trust that the technology is used in a way that is safe and compliant
with the law, including the respect of fundamental rights.
In this context, two particular questions arise. First, how are we to interpret such
an ecosystem and such an idea of human-centricity? And, secondly, if we recognize
that applications of AI should only be human-centric, do we treat this as a legitimate
interest to be pressed in the public square or as a fundamental right?
3.1 Interpretation
12 European Commission (2021), para 1.1.
13 Brownsword (2014a, 2018).
14 Asimov (1942).
15 Pasquale (2020).
Famously, Isaac Asimov proposed three laws of robotics.16 Importantly, these laws
might be viewed as a precursor to a twenty-first century statement of the basic
requirements of human-centric applications of robotic technologies. The three
laws are:
1. A robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
16 Asimov (1942).
Subsequently, recognising some ambiguity in the three laws, Asimov added a fourth law, known as the ‘Zeroth Law’, namely:
0. A robot may not harm humanity or, by inaction, allow humanity to come to harm.
What, then, should we make of these laws? Are they essentially about avoiding harm
to humans, or about keeping humans in control, or are both ideas implicated?
Starting with the First Law, which is explicitly designed to avoid harm (or injury)
to humans, we can agree with Jacob Turner that this suffers from ‘gaps, vagueness
and oversimplification’.17 But, then, any rule that turns on the concept of ‘harm’ will
be open to this kind of criticism. Frank Pasquale points up another problem with the
First Law.18 Let us suppose that human being A is about to injure human being
B. Now, if a robot could intervene to protect B but only by harming A, the First Law
will be broken whatever the robot does: if the robot intervenes and harms A, that
violates the first part of the law; and, if the robot does not intervene, so that B comes
to harm, that violates the second part of the Law.
The problem flagged up by Pasquale also affects the Second Law. Thus, if a third
human, C, orders the robot to intervene so as to prevent injury to B, then prima facie
the robot should follow this order but this is subject to there being no conflict with
the First Law. Given that it is unclear what the First Law requires in the scenario, we
do not know whether inflicting harm on A would break the First Law. Of course, this
all applies, mutatis mutandis, if C’s order to the robot is not to intervene. Moreover,
as Turner remarks, these Laws ‘do not say what a robot should do if it is given
contradictory orders by different humans’—for example, if in our simple scenario, C
orders the robot to intervene but D orders it not to intervene.19 By the time that we
get to the Third Law, we are not entirely clear about the requirements of either the
First or the Second Law. So, the proviso in the Third Law (that the robot’s self-
protection should not conflict with the other Laws) is going to be difficult to apply in
scenarios of the kind we have been considering where the application of the other
Laws is problematic.
For our purposes, however, these puzzles are of marginal interest. The key
question is whether governance associated with Asimov’s Laws is about avoiding
harm to humans or ensuring that governance is by humans. On the face of it, these
are Laws that are being made by humans; humans are in control of their governance.
Faced with robotics, these humans have tried to instate a regime of rules that will
protect humans against being harmed by robots. So, governance here is human-
centric in the sense that humans remain in control (it is humans who make the Laws);
and the particular Laws that have been made are human-centric in the sense that they
seek to minimise the risk of human/robot interactions being harmful relative to what
humans judge to be in their own best interest. In the end, Asimov’s Laws and the
17 Turner (2019), p. 207.
18 Pasquale (2020), p. 3.
19 Turner (2019), p. 2.
20 Brownsword (2022b, 2023).
21 Pasquale (2020), p. 12.
22 Pasquale (2020), p. 3.
In the case of three of the four Laws, the emphasis is on resisting the de-centring
of humans. Thus, the ‘organizing principle’ underlying the First Law ‘is the impor-
tance of meaningful work to the self-worth of people [humans] and the governance
of communities’;23 in the case of the Second Law, there is an injunction against
robots passing themselves off as humans but also against trying to make machines
that mimic humans (thus confusing the ‘abolition of humanity’ (at 8) with its
advance); and, the Fourth Law maintains the centrality of humans to our discourse
and practice of responsibility and accountability. By contrast, the Third Law is
designed to protect humans against applications of AI (such as lethal autonomous
weapons systems and social credit scoring) that would be damaging to the conditions
for human existence and agency and which would be liable to set humans against
one another in unproductive competitive ways. As Pasquale explains in relation to
the regimented world of social credit scoring:24
Conflict and competition are part of life, and we should expect technological
advance to inform them. But AI and robotics threaten to make social control too
perfect, and competition. . .too fierce. Human security and creativity thrive in envi-
ronments that balance predictability and openness, order and flux. If we fail to limit
robotics embedded in systems of social control, that balance will be upended.
Or, to put all this in another way, while Asimov’s Laws are directed at
maintaining order and the safety of humans, Pasquale’s Laws are more focused on
sustaining the centrality of humans in both social and economic settings as well as in
the conversations that we have about the governance of our communities.
We can test out our second question about an interest or right relating to human-
centric applications in a couple of ways.
First, in high-tech hospitals of the future, some patients might prefer the human
touch,25 particularly where, say, a poor prognosis is indicated. This preference is
clearly legitimate but, if it is recognised only as a human interest, it is liable to be
overtaken by other legitimate considerations (favouring efficiency). In the balance of
interests, the interest in the human touch might not prevail. However, if a funda-
mental right to human-centric applications of technologies is recognised, and if it
covers de-centring of humans as suggested above, then it might be arguable that the
patient’s interest actually is covered by this right and there is no longer a question of
balancing. We are now in a high ground conversation where the only resistance to
the patient’s entitlement can come from other fundamental rights or provisos (for
emergencies or the like) that are specified alongside the right itself.
23 Pasquale (2020), p. 4.
24 Pasquale (2020), p. 11.
25 Brownsword (2014b).
Secondly, we can consider the already much discussed Article 22 of the General
Data Protection Regulation (GDPR) (2016/679), which, like its predecessor Directive
95/46/EC, is designed to keep humans in the loop where automated decision-making
threatens significant human interests. As many commentators have already noted,
Article 22 poses more questions than it answers.26 In particular, what constitutes a
decision that is ‘solely automated’? Indeed, what constitutes a ‘decision’? And,
which decisions, if they do not have adverse legal effects, are still sufficiently
‘significant’ to engage the protection of this Article? However, for our purposes,
the live question is whether Article 22 confers a fundamental right on citizens.
Should we understand Article 22 as a legislative compromise that speaks to the
balance of interests advanced in the public square or as a fundamental right? In much
EU jurisprudence, where rights are recognised, it is unclear at which level they stand.
For example, in the well-known Google Spain case,27 the balancing exercise under-
taken by the court seems to mix fundamental rights with non-fundamental rights or
legitimate interests. Similarly, although Article 22 of the GDPR recognises a right, in
certain circumstances, not to be subject to solely automated decisions, the status,
scope and strength of this right remain to be clarified.
If we were clearer about the status of the protected interest in Article 22, this
might clarify the status of the concerns that we have that AI might be applied in ways
that are not human-centric. That said, regardless of what we make of Article 22,
Europeans have good reason to clarify their understanding of human-centric appli-
cations of AI.
26 Brownsword and Harel (2019); Lynskey (2019).
27 Case C-131/12, Google Spain SL, Google Inc v Agencia Española de Protección de Datos (AEPD), Mario Costeja González.
28 Hawking (2018), p. 188.
29 O’Neill (2016).
4.1 Catastrophe
The two focal points for AI having catastrophic consequences are the conditions for
humans to exist and the conditions for human agency.31
4.1.1 Existence
The human species is defined by its biology; and the prospects for human life depend
on whether the conditions are compatible with the biological characteristics and
needs of the human species. Most planets will not support human life. The condi-
tions on planet Earth are special for humans. For humans, the conditions on Earth are
neither too hot nor too cold to permit our survival. However, the conditions are not
specially tailored to the needs of any particular human; these are the generic
conditions for the existence of any member of the human species.
It follows that regulators should take steps to protect, preserve and promote the
natural ecosystem for human life. At minimum, this entails that the physical well-
being of humans must be secured; humans need oxygen, they need food and water,
they need shelter, they need protection against contagious diseases, if they are sick
they need whatever medical treatment is available, and they need to be protected
against assaults by other humans or non-human beings. Failed states are ‘failed’
precisely because their governance has not succeeded in maintaining these basic
conditions.
30 Brownsword (2022a), Ch. 11.
31 Brownsword (2019b, 2020, 2022a).
4.1.2 Agency
It is characteristic of human agents that they have the capacity to choose and to
pursue various projects and plans whether as individuals, in partnerships, in groups,
or in whole communities. Sometimes, the various projects and plans that they pursue
will be harmonious; but, often, human agents will find themselves in conflict or
competition with one another. However, before we get to particular projects or plans,
before we get to conflict or competition, there needs to be a context in which the
exercise of agency is possible. This context is not one that privileges a particular
articulation of agency; it is prior to, and entirely neutral between, the particular plans
and projects that agents individually favour; the conditions that make up this context
are generic to agency itself.
It follows that the conditions for meaningful self-development and agency need to
be constructed: there needs to be a sufficient sense of self and of self-esteem, as well
as sufficient trust and confidence in one’s fellow agents, together with sufficient
predictability to plan, so as to operate in a way that is interactive and purposeful
rather than merely defensive. Let me suggest that the distinctive capacities of
prospective agents include being able: to freely choose one’s own ends, goals,
purposes and so on (‘to do one’s own thing’); to understand instrumental reason;
to prescribe rules (for oneself and for others) and to be guided by rules (set by oneself
or by others); and, to form a sense of one’s own identity (‘to be one’s own person’).
Accordingly, the essential conditions are those that support the exercise of these
capacities.
With existence secured, and under the right conditions, human life becomes an
opportunity for agents to be who they want to be, to have the projects that they want
to have, to form the relationships that they want, to pursue the interests that they
choose to have, to form and populate communities that are in line with their interests,
and so on.
It is also a function of the commons to secure the conditions for an aspirant moral
community, that is, for a community that aspires to just governance not merely to
order. These infrastructural conditions apply whether the particular community is
guided by teleological or deontological standards, by rights or by duties, by com-
munitarian or liberal or libertarian values, by virtue ethics, and so on. The generic
context for moral community is impartial between competing moral visions, values,
and ideals; but it must be conducive to ‘moral’ development and ‘moral’ agency in a
formal sense. Mindful of this precondition, in our high-tech societies, we should be
particularly concerned if technological management leaves agents with no practical
option other than to do what those who manage the technology judge to be the right
thing.32
Reasoning impartially, each human agent will see itself as a stakeholder in the
commons; and, it will be understood that these essential conditions must be
respected. While respect for the commons’ conditions is binding on all human
agents, this does not rule out the possibility of prudential or moral pluralism. Rather,
the commons represents the pre-conditions for both individual self-development and
community debate, giving each agent the opportunity to develop his or her own view
of what is prudent as well as what should be morally prohibited, permitted, or
required. However, the practice of articulating and contesting both individual and
collective perspectives (like all other human social acts, activities, conversations,
and practices) is predicated on the integrity of the global commons.
Thus far, the picture of humans who are debating the governance of AI is of beings
who have the capacity to think and act for themselves but also beyond their own
interest. They have the capacity to think and act in ways that take into account the
interests of other humans, simply as fellow human agents, and to think and act in
ways that are consistent with the collective interest of the group or community of
which they are members.
What is missing from this picture is a further capacity, namely, the capacity to
think and act in a way that goes beyond the interests of one’s own particular
community by taking into account the interests of humanity—a dimension that is
waiting to be drawn out from, say, our recognition of crimes against humanity.33
This dimension of human agency is constituted by an awareness of the preconditions
for humans to exist, to co-exist, and to form communities coupled with the capacity
to think and act in a way that respects the conditions that make it possible for any
scheme of human governance to be viable. For a community to prioritise order, or
just order, or self-governance, there have to be conditions for the formation of a
community, for the setting of priorities, for the operationalisation of governance, and
so on. In short, there also has to be an understanding of, and a sensitivity to, the
conditions that make it possible for humans to be in the picture in the first place and
for those humans to have a sense of their individual and collective interests so that
they can constitute their community and set in train its governance. Unless that
32 Brownsword (2005); Vallor (2016).
33 Brownsword (2014c).
We can join up some of these several dots by introducing the idea of a triple licence
for legitimate governance measures.36
The first and most important element of the triple licence is that the measures in
question must be compatible with respect for the pre-conditions for human social
existence (with the pre-conditions that constitute the global commons). The second
34 Fairfield (2021), p. 143.
35 Yeung (2019), p. 42.
36 Brownsword (2020, 2022a).
strand of the triple licence, the community licence, demands that, within a particular
community, the application of technological measures should be compatible with the
fundamental values of that community—with the values, so to speak, that give the
community its distinctive identity, that make it the particular community that it
is. Finally, the third element of the triple licence is the social licence, a licence that
hinges on regulators reaching a reasonable accommodation of whatever plurality of
views (for example, views about the importance of innovation and the balance of
benefits and risks) there might be in their community (and which they identify in
their consultative and deliberative processes as well as in public square
conversations).
Now, it is sometimes said that we need to have a new social contract or compact
for the application of modern technologies;37 and, the triple licence might be
presented as just such a contract or compact. However, it needs to be understood
that the first element of the licence (concerning the global commons) is fundamental
to each and every community’s contract; this is a non-negotiable standard term.
Once we get to the second and third elements of the licence, each community has
some freedom to take its own distinctive position. It is, of course, essential that the
fundamental values to which a particular community commits itself, or its accom-
modations of plurality, are consistent with (or cohere with) the commons conditions.
Provided that this is the case, then regulatory legitimacy turns on maintaining fidelity
with the community’s constitutive values and taking up positions that are some-
where in the range of reasonableness. No doubt, there will be many interpretive
questions here but the exercise is internal to the commitments of the particular
community and its practice in accommodating competing and conflicting interests.
Accordingly, in principle, the fundamental values of Community A might be quite different from those of Community B and again from those of Community C; and, the practical accommodations in Community A might also be different from those reached in Community B
and again in Community C. At this level, there can be a plurality of communities
each with their own distinctive community and social licences.
It follows that there might be many triple licences, displaying different cultural
preferences and different community-defining aspirations; but, in all places, there
should be no green light for technological applications unless they meet the require-
ments of the non-negotiable commons licence.
Our third conversational talking point concerns the Rule of Law. We have not yet
located it in our sketch but it surely must be one of the key elements along with
fundamental rights.
37 See, e.g., Lucassen et al. (2016).
38 Brownsword (2022a).
39 Craig (1997).
40 Brownsword (2019b, 2020).
41 Hildebrandt (2015), p. 17.
If we do not learn how to uphold and extend the legality that protects individual
persons against arbitrary or unfair state interventions, the law will lose its hold on our
imagination. It may fold back into a tool to train, discipline or influence people
whose behaviours are measured and calculated to be nudged into compliance, or, the
law will be replaced by techno-regulation, whether or not that is labelled as law.42
In other words, it is the ideal of legality together with the Rule of Law that stands
between us and a disempowering techno-managed future.
Hence, the discipline of the Rule of Law must apply to both rules and technical
measures. Regulators, whether in online or offline environments, should not be able
to by-pass the Rule of Law simply by relying on technical measures or new
technologies either in support of rules or in place of governance by rules. For
example, it makes no sense to insist that end-user licences or terms and conditions
imposed by contract must be compatible with the Rule of Law but then exempt the
designed-in protection of the identical interests (as with digital rights management)
from the scrutiny of the Rule of Law.
In a different context, Bernard Harcourt has cautioned that criminal justice
thinking is increasingly driven by our realisation that we can make use of new
technologies to prevent crime but without attention being paid to whether we should
so utilise these tools and, in particular, whether such use is compatible with an
independent theory of just punishment.43 Harcourt expresses his concern as follows:
The perceived success of predictive instruments has made theories of punishment
that function more smoothly with prediction seem more natural. It favors theories of
selective incapacitation and sentencing enhancements for offenders who are more
likely to be dangerous in the future. Yet these actuarial instruments represent nothing
more than fortuitous advances in technical knowledge from disciplines, such as
sociology and psychology, that have no normative stake in the criminal law. These
technological advances are, in effect, exogenous shocks to our legal system, and this
raises very troubling questions about what theory of just punishment we would
independently embrace and how it is, exactly, that we have allowed technical
knowledge, somewhat arbitrarily, to dictate the path of justice.
When Harcourt refers to theories of just punishment that have a normative stake
in the criminal law, we can, as he rightly says, draw on a ‘long history of Anglo-
Saxon jurisprudence—[on] centuries of debate over the penal sanction, utilitarian-
ism, or philosophical theories of retribution’.44 However, the general point is that,
whichever of the candidate theories we ‘independently embrace’, we will have a
view about the justification for an actuarial approach that turns on considerations
other than whether the technical knowledge that we have ‘works’ in preventing and
controlling crime.
Putting all this in other words, although emerging technologies—such as AI for
risk management, for facial recognition, for resource allocation, and so on—might
42 Hildebrandt (2015), xiii.
43 Harcourt (2007), p. 3.
44 Harcourt (2007), p. 188.
Julie Cohen has argued that it is self-evident that ‘institutions for recognising and
enforcing fundamental rights should work to counterbalance private economic
power rather than reinforcing it.46 Obligations to protect fundamental rights must
extend—enforceably—to private, for-profit entities if they are to be effective at all.’
The point is that, even if public regulators respect the conditions set by the commu-
nity, it will not suffice if private regulators are left free to use technologies in ways
that compromise the community’s moral aspirations, or violate its constitutive
principles, or exceed the agreed and authorised limits for its use. Accordingly, it
should be a condition of the Rule of Law (the Rule of Law 2.0, as Cohen terms it47)
that the private use of technical measures, machines, and other technologies should
be compatible with the general principles for their (public) use. This applies whether
the private regulators are operating at a distance from public regulators or, as is
increasingly the case in some contexts (such as the development of ‘smart’ cities) in
partnership with them.
This need for an extended application of the Rule of Law is reinforced by
Shoshana Zuboff’s critique of the surveillance capitalism that thrives in our
connected and networked societies.48 The fact of the matter is that our information
societies are wired for connection and designed to be ‘smart’. Increasingly, human
interactions and transactions—whether in commerce or communication, in transport
or health—are mediated by these technologies; increasingly, the smooth running of our societies depends on the integrity of their technological infrastructures. Without
doubt, these technologies are hugely enabling. Nevertheless, as Zuboff observes,
while we ‘celebrate the networked world for the many ways in which it enriches our
capabilities and prospects’, ‘being connected’ is by no means an unqualified good.49
In particular, Zuboff argues, we should not assume that ‘the networked form has
some kind of indigenous moral content, that being “connected” is somehow
45 Compare et al. (2008).
46 Cohen (2019), p. 267.
47 Cohen (2019), p. 237.
48 Zuboff (2019).
49 Zuboff (2019), p. 4.
6 Conclusion
The people of Europe have their own personal preferences and priorities; but they
also have their own distinctive collective values and identity. In this chapter, I have
suggested that the concept of ‘human-centric’ applications of new technologies, such
as AI, merits further analysis; that its relationship to the ‘de-centring’ of humans
might be critical for Europeans; and that the status of whatever human-centric
interest we have (as a mere preference or as a privileged fundamental right) needs
to be determined.
Beyond this, simply as humans, the people of Europe have rights and responsi-
bilities relating to the protection of the global commons. There is a universal
responsibility for all humans to avoid the catastrophic development, or application,
of AI. The applications of AI might or might not be in line with our personal
preferences: but in Europe, the applications have to be compatible with our funda-
mental rights; and, everywhere, applications of AI must be compatible with respect
for the global commons.
Finally, I have also suggested that we need to be more explicit in explaining how
our commitment to the Rule of Law fits into the picture of how we Europeans engage
with the governance of, and the prospect of governance by, new technologies. My
proposal is that we declare that, for Europeans, the Rule of Law has particular
substantive commitments (as it does for everyone in relation to the stewardship of
the global commons) as well as applying to both public and private governance of
(and by) AI and other emerging technologies of our times.
50 Zuboff (2019), p. 9.
354 R. Brownsword
References
Abbott R (2020) The reasonable robot: artificial intelligence and the law. Cambridge University
Press, Cambridge
Asimov I (1942) Runaround. In: Asimov I, Astounding Science Fiction, Street and Smith,
New York
Brownsword R (2005) Code, control, and choice: why east is east and west is west. Legal Stud 25:
1–21. https://doi.org/10.1111/j.1748-121X.2005.tb00268.x
Brownsword R (2008) Rights, regulation, and the technological revolution. Oxford University
Press, Oxford
Brownsword R (2010) Regulating the life sciences, pluralism, and the limits of deliberative
democracy. Singapore Acad Law J 22:801–832
Brownsword R (2014a) Human dignity from a legal perspective. In: Duwell M, Braarvig J,
Brownsword R, Mieth D (eds) Cambridge handbook of human dignity. Cambridge University
Press, Cambridge, pp 1–22
Brownsword R (2014b) Regulating patient safety: is it time for a technological response? Law
Innov Technol 6:1–29. https://doi.org/10.5235/17579961.6.1.1
Brownsword R (2014c) Crimes against humanity, simple crime, and human dignity. In: van
Beers B, Corrias L, Werner W (eds) Humanity across international law and biolaw. Cambridge
University Press, Cambridge, pp 87–114
Brownsword R (2018) Developing a modern understanding of human dignity. In: Grimm D,
Kemmerer A, Möllers C (eds) Human dignity in context. Nomos/Hart, Baden-Baden/Oxford,
pp 299–323
Brownsword R (2019a) Legal regulation of technology: supporting innovation, managing risk and
respecting values. In: Pittinsky T (ed) Handbook of science, technology and society. Cambridge
University Press, New York, pp 109–137
Brownsword R (2019b) Law, technology and society: reimagining the regulatory environment.
Routledge, Abingdon
Brownsword R (2020) Law 3.0. Routledge, Abingdon
Brownsword R (2022a) Rethinking law, regulation and technology. Edward Elgar, Cheltenham
Brownsword R (2022b) Technology, governance and respect for law: pictures at an exhibition.
Routledge, Abingdon
Brownsword R (2023) Law’s Imperfect Governance (forthcoming, on file with author)
Brownsword R, Goodwin M (2012) Law and the technologies of the twenty-first century.
Cambridge University Press, Cambridge
Brownsword R, Harel A (2019) Law, liberty and technology—criminal justice in the context of
smart machines. Int J Law Context 15:107–125. https://doi.org/10.1017/S1744552319000065
Chesterman S (2021) We, the Robots? Regulating artificial intelligence and the limits of the law.
Cambridge University Press, Cambridge
Cohen JE (2019) Between truth and power. Oxford University Press, Oxford
Craig PP (1997) Formal and substantive conceptions of the rule of law: an analytical framework.
Public Law:467–487
Deakin S, Markou C (eds) (2020) Is law computable? Hart, Oxford
Dworkin R (1978) Taking rights seriously, Rev edn. Duckworth, London
European Commission (2021) Explanatory Memorandum to the proposed Regulation on AI. COM
(2021) 206 final, Brussels
Fairfield JAT (2021) Runaway technology. Cambridge University Press, New York
Fox D (2019) Birth rights and wrongs: how medicine and technology are remaking reproduction
and the law. Oxford University Press, Oxford
Harcourt BE (2007) Against prediction. University of Chicago Press, Chicago
Hawking S (2018) Brief answers to the big questions. John Murray, London
Hildebrandt M (2015) Smart technologies and the end(s) of law. Edward Elgar, Cheltenham
Kroll JA, Huey J, Barocas S, Felten EW, Reidenberg JR, Robinson DG, Yu H (2017) Accountable
algorithms. Univ Pa Law Rev 165:633–705
Lucassen A, Montgomery J, Parker M (2016) Ethics and the social contract for genomics. In:
NHS, Annual Report of the Chief Medical Officer 2016: Generation Genome. Available
at: https://discovery.ucl.ac.uk/id/eprint/1561587/1/Montgomery_Ethics%20and%20the%20
social%20contract%20for%20genomics%20in%20the%20NHS.pdf. Accessed 26 Sept 2022
Lynskey O (2019) Criminal justice profiling and EU data protection law: precarious protection from
predictive policing. Int J Law Context 15:162–176. https://doi.org/10.1017/S1744552319000090
O’Neil C (2016) Weapons of math destruction. Crown Publishing, New York
Pasquale F (2020) New laws of robotics. The Belknap Press, Cambridge
Prosser WL (1960) Privacy. Calif Law Rev 48:383–423
Turner J (2019) Robot rules. Palgrave Macmillan, Cham
Vallor S (2016) Technology and the virtues. Oxford University Press, Oxford
Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-
making does not exist in the general data protection regulation. Int Data Priv Law 7:76–99.
https://doi.org/10.1093/idpl/ipx005
Warren SD, Brandeis LD (1890) The right to privacy. Harv Law Rev 4:193–220
Yeung K (2019) Why worry about decision-making by machine? In: Yeung K, Lodge M (eds)
Algorithmic regulation. Oxford University Press, Oxford, pp 21–48
Zuboff S (2019) The age of surveillance capitalism. Profile Books, London
Roger Brownsword is based at King’s College London where he was the founding Director of the
Centre for Technology, Ethics and Law (TELOS). He is the general editor of the journal Law,
Innovation and Technology; he co-edited the Oxford Handbook on Law, Regulation and
Technology; and his two most recent books are Law 3.0 and Rethinking Law, Regulation and Technology.