The Risk-Based Approach Under The New EU Data Protection Regulation A Critical Perspective

This document summarizes an article from the Journal of Risk Research that critically examines the risk-based approach to data protection adopted in the new EU General Data Protection Regulation (GDPR). The GDPR introduces a risk-based approach that relies predominantly on self-regulation by data controllers through measures like data protection impact assessments. However, the article argues this approach may weaken effective data protection given the challenges of assessing non-physical risks to individual rights. It compares the approach to risk regulation in environmental law to highlight potential shortcomings and opportunities.


Journal of Risk Research

ISSN: 1366-9877 (Print) 1466-4461 (Online) Journal homepage: https://www.tandfonline.com/loi/rjrr20

The risk-based approach under the new EU data protection regulation: a critical perspective

Maria Eduarda Gonçalves

To cite this article: Maria Eduarda Gonçalves (2020) The risk-based approach under the new
EU data protection regulation: a critical perspective, Journal of Risk Research, 23:2, 139-152,
DOI: 10.1080/13669877.2018.1517381

To link to this article: https://doi.org/10.1080/13669877.2018.1517381

Published online: 12 Jan 2019.

JOURNAL OF RISK RESEARCH
2020, VOL. 23, NO. 2, 139–152
https://doi.org/10.1080/13669877.2018.1517381

The risk-based approach under the new EU data protection regulation: a critical perspective

Maria Eduarda Gonçalves
Instituto Universitário de Lisboa (ISCTE-IUL), DINÂMIA'CET – IUL, Lisboa, Portugal

ABSTRACT
The first broad reform of personal data protection legislation in the European Union entered into force in May 2018 (Regulation (EU) 2016/679, the General Data Protection Regulation). Remarkably, with this reform a risk-based approach has been introduced as the core data protection enforcement model, while data protection authorities see their regulatory role significantly weakened. The risk-based approach is to be implemented by the data controllers (i.e. the operators) via data protection impact assessments (evoking the established environmental impact assessment procedure) and notification of breaches, among other procedures. Hence the scope of both the concepts of risk and risk regulation spread beyond conventional domains, namely the environment, public health or safety, i.e. physical risks, to encompass risks to intangible values, i.e. individual rights and freedoms, presumably harder to assess and manage. Strikingly, the reform has been accompanied by a confident discourse by EU institutions, and their avowed belief in the reform’s ability to safeguard the fundamental right to data protection in the face of evolving data processing techniques, specifically, big data, the Internet of Things, and related algorithmic decision-making. However, one may wonder whether there isn’t cause for concern in view of the way the risk-based approach has been designed in the data protection legislation. In this article, the risk-based approach to data protection is analysed in the light of the reform’s underlying rationality. Comparison with the risk regulatory experience in environmental law, particularly the environmental impact assessment procedure, is drawn upon to assist us in pondering the shortcomings, as well as the opportunities of the novel risk-based approach.

ARTICLE HISTORY
Received 5 March 2018
Accepted 15 August 2018

KEYWORDS
Personal data protection; risk-based approach; big data technologies; European Union

1. Introduction
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free
movement of such data (the General Data Protection Regulation - GDPR) entered into force in
May 2018.1 This is the first broad reform of the data protection legislation of the European
Union (EU), initially adopted in 1995.2 At stake essentially are the threats to human rights and
freedoms brought about by expanding uses of Information and Communication Technologies
(ICT). These threats are not new, but they are amplified by the increasing pervasiveness of per-
sonal data entailed by Internet platforms, particularly search engines and social networks, along

with the spread of devices that pervade our daily lives from smartphones and related apps,
which track our contacts and our preferences, to home equipment and medical sensors provid-
ing information about our routines or our health – plus the degree of automation inherent to
data mining and data analytics involved in big data applications.
The GDPR restates the data protection principles and the rights of the data subjects contained
in Directive 95/46/EC. Yet a remarkable novelty of the present reform is the turn to a risk-based
approach as the privileged enforcement method. Relying predominantly on the data controllers'
self-regulation, while undermining the role of data protection authorities, this emerging
enforcement paradigm raises concerns about its ability to ensure effective data protection, and
ultimately the fundamental right to data protection.
This article focuses on the risk-based approach introduced by the GDPR and discusses its
underlying rationality. We argue that centring the regime on a risk-based approach entails a
major change in both the rationale and the operation of personal data protection. A
critical issue ultimately is whether the novel approach as designed is a suitable way to guarantee
effective compliance with data protection principles and rights in the context of current ICT
applications. Comparison with risk regulatory experiences in other technology-related domains,
particularly in the field of environmental law, is drawn upon to assist us in pondering the draw-
backs, as well as the prospects opened up by the risk-based approach to data protection.

2. The risk-based approach to data protection in perspective


The GDPR restates the data protection principles and the rights of the data subjects already con-
tained in Directive 95/46/EC. Key established principles to be observed by the data controllers
and processors3 include: lawfulness, loyalty and transparency (personal data may be processed
only if the data subject has unambiguously given his prior consent or other legitimate grounds
apply); purpose limitation (personal data may only be collected for specified, explicit and legitim-
ate purposes and may not be further processed or re-used in a way incompatible with those pur-
poses); and data minimisation (processing of personal data must be restricted to the minimum
amount of data necessary). Additionally, the data subjects are assigned a set of rights empower-
ing them to have access to information about them registered in databases, to rectify the data,
and to object to data processing in certain situations. The Charter of Fundamental Rights of the
European Union, of 2000, later affirmed the right to data protection as a fundamental right
guaranteed under EU law.
Let’s recall that EU data protection law has from its inception sought to reconcile the protec-
tion of personal data with the ‘free movement of data’, to use both the Directive and the
Regulation’s wording itself. The Directive included a catalogue of exceptions to the data protec-
tion principles largely justified by the intent not to raise unjustified obstacles to the movement
of personal data (Koops and Leenes 2014, 159). This is especially clear with respect to consent.
Though, according to Article 7 (a) of the Directive, personal data might be processed only if the
data subject has unambiguously given his consent, Article 7 (b) to (f) ultimately allowed the
processing of the data on almost any ground, a door opened by exceptions provided to the
‘legitimate interests pursued by the controller’. The GDPR reiterates these exceptions in Article 6.
Actually, under the GDPR, the inherent balancing test is left to a case-by-case determination by
the data controllers with just some non-binding general guidance from the Article 29 Data
Protection Working Party (Art. 29 DPWP) (Art. 29 DPWP 2014b; Zanfir 2014, 237). This requires a
demanding evaluation, which data controllers may lack the competence to undertake, quite
apart from their being in a position of conflict of interest (Ferretti 2012, 2).
Indeed, a critical issue under the new Regulation lies in leaving the main responsibility to
judge how to apply the data protection principles to the data controllers, while the regulatory
role of data protection authorities is converted into a somewhat subsidiary one. The EU
legislator’s option in this regard was justified by its intent ‘to simplify the regulatory environ-
ment’ and ‘substantially reduce the administrative burden’ on data controllers and processors
(Reding 2011). For the European Commission, prior notifications to the data protection author-
ities were ‘a bureaucratic burden which costs business 130 million euro every year’. ‘The reform
will scrap these entirely’, the Commission announced (EC 2015).
One should recall that the data protection regime was initially devised considering the com-
puter systems of large organisations, both public and private, to the extent that they collect,
store and process personal data for the purposes of their own activities. A significant role was
assigned under Directive 95/46/EC to Member States’ data protection authorities endowed with
investigative and intervention powers, including to receive and examine notifications by data
controllers prior to their carrying out any wholly or partly automatic processing operation or set
of such operations.
In its Communication on a comprehensive approach to the protection of personal data in the
EU, which set the stage to the reform, the European Commission recognised that ‘rapid techno-
logical developments and globalisation have profoundly changed the world around us, and
brought new challenges for the protection of personal data’ (EC 2010). As the agreement on the
proposal for a General Data Protection Regulation (GDPR), submitted by the European
Commission in January 2012, was being reached in December 2015, the Commission stressed,
‘Today’s agreement is a major step towards a Digital Single Market. … With solid common stand-
ards for data protection, people can be sure they are in control of their personal information’.
The Commission also proclaimed the GDPR to be ‘an essential step to both strengthening
citizens’ fundamental rights in the digital age’ and ‘facilitate business by simplifying rules for
companies in the Digital Single Market’ (EC 2015).
These stances revealed a somewhat perplexing neglect of the challenges arising for the pro-
tection of personal data from the growing availability of e-platforms and large datasets as well
as sophisticated tools in data mining and data analytics, so-called big data, used in opaque
modes by online operators for the purposes of their own activities and to deliver data services
to third parties.4 It is easy to infer that the automation inherent to data mining and reuse of
large data sets obtained from diverse unrelated sources render consent at the stage of data col-
lection extremely hard to apply (Tene 2010, 15; Colonna 2014, 299; De Hert and
Papakonstantinou 2016, 32). This is all the more so as the precise purposes of any secondary use
of the data may not be known when the data are initially collected. Moreover, most users have
no way of knowing who the data brokers are, let alone finding the tools the companies provide,
so that current rights provide only the illusion of transparency (Brill 2013). The legitimate interest
clause, ‘a sort of legal umbrella’ for companies ‘to shelter under’ (Horten 2016), ends up being
the criterion upon which the majority of personal data processing takes place (Le Metayer and
Monteleone 2009, 136).
The expectation from big data is that it may lead to better and more informed decisions (Art.
29 DPWP 2013). The other side of the coin, however, is the growing use of big data for monitor-
ing human behaviour either for purposes of consumer profiling or for surveillance and control
based on big data’s predictive potential (Boyd and Crawford 2012; Rosen 2012; European
Parliament 2013; Mantelero and Vaciago 2013; Cardon 2015; Morozov 2015). The European Data
Protection Supervisor (EDPS) itself admitted, ‘The internet has evolved in a way that surveillance
- tracking people’s behaviour – is considered as the indispensable revenue model for some of
the most successful companies’ (EDPS 2015). Also, the traditional search for causal relationships
is being replaced by the search for probabilities and correlations (Boehme-Neßler 2016).
Notwithstanding their promising applications, big data thus raise genuine concern that this
new stage of data processing and use may alter the balance of power with respect to the
appropriation and control of personal data in favour of data controllers and to the detriment of
data subjects.

Related concerns have been expressed regarding the impact of so-called governance of algo-
rithms or algorithmic regulation. Algorithms used in data processing underlying decision-making
in ever more domains of activity have been blamed for being selected and applied in opaque
and unaccountable modes, as both the algorithm itself and the input data are typically consid-
ered proprietary. Such challenges are set to grow as algorithms increase in complexity and inter-
act with each other’s outputs to take decisions (Mittelstadt et al. 2016). Algorithms are already
being used by organisations such as insurance companies to check the behaviour of clients,
moulding insurance premiums accordingly.5 Their use in politics, aiming to influence voters, is
starting to be recognised (Zittrain 2014; Schultz 2016; Shaw 2017).6 Thus the notion that the
GDPR empowers the individual by giving him/her rights to control the processing of his/her data
looks illusory.
Against this background, the emerging risk-based approach to data protection may raise
doubts regarding its ability to ensure effective data protection. At the end of the day, too much
will depend on how the data controllers will interpret and fulfil their responsibilities under the
GDPR. In the same vein, one might wonder whether legality will not yield to security as a critical
factor behind data protection.

3. The turn to the risk-based approach to data protection: replacing legality with security?

Under the GDPR, the risk-based approach to data protection entails the obligation of the data
controllers to assess the risks of data processing to the rights and freedoms of natural persons
throughout every stage of the data life cycle (collection, storage, processing, retention, sharing
and disposal). To fulfil its duties the data controller is expected to evaluate the likelihood and
severity of risks for individual rights in the light of the nature, the scope, the context and the
purposes of the processing. The Art. 29 DPWP judged that this evaluation refers primarily to the
right to privacy but may also involve ‘other fundamental rights such as freedom of speech, free-
dom of thought, freedom of movement, prohibition of discrimination, right to liberty, conscience
and religion’, which renders the evaluation even more demanding (Art. 29 DPWP 2014a). It is also
up to the controllers to take the initiative to consult the data protection authority prior to proc-
essing in case a DPIA indicates that the processing would result in a high risk in the absence of
measures taken by the controller to mitigate the risk.
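
Purely by way of illustration, the likelihood/severity evaluation described above can be pictured as a simple scoring matrix. The ordinal scales and the 'high risk' threshold below are assumptions made for the example only; the GDPR prescribes no particular scoring method.

    # Illustrative sketch only: a likelihood x severity matrix of the kind often
    # used in DPIAs. Scales and thresholds are assumptions, not GDPR requirements.
    LIKELIHOOD = {"remote": 1, "possible": 2, "likely": 3, "almost_certain": 4}
    SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

    def risk_level(likelihood: str, severity: str) -> str:
        """Combine likelihood and severity of a risk to rights and freedoms."""
        score = LIKELIHOOD[likelihood] * SEVERITY[severity]
        if score >= 9:
            # High residual risk: a DPIA is required and, if the risk cannot be
            # mitigated, the supervisory authority must be consulted (Article 36).
            return "high"
        return "medium" if score >= 4 else "low"

    # Example: large-scale profiling judged likely to cause significant harm.
    print(risk_level("likely", "significant"))  # -> "high"
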
The risk assessment by the data controllers should lead to ‘appropriate technical and organ-
isational protection measures’ ‘to ensure a level of security appropriate to the risk, including inter
alia as appropriate the pseudonymisation and encryption of personal data’ (Article 32 on
Security of processing).
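
Article 32 names pseudonymisation and encryption only as examples of possible measures. A minimal sketch of keyed pseudonymisation follows; the key handling shown (an environment variable with a demo fallback) is an illustrative assumption rather than a recommendation.

    import hashlib
    import hmac
    import os

    # Minimal sketch of pseudonymisation: replace a direct identifier with a keyed
    # hash so records stay linkable without exposing the identifier. The key must
    # be stored separately from the pseudonymised data for the measure to be useful.
    def pseudonymise(identifier: str, key: bytes) -> str:
        """Return a stable pseudonym for a direct identifier (e.g. an e-mail)."""
        return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    key = os.environ.get("PSEUDONYM_KEY", "demo-key-only").encode("utf-8")
    print(pseudonymise("alice@example.org", key))
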
Where high risks of data processing for the rights and freedoms of natural persons are identi-
fied (such as those involving a systematic and extensive evaluation of personal aspects which is
based on automated processing, processing on a large scale of specially sensitive data or a sys-
tematic monitoring of a publicly accessible area on a large scale), the data controllers should
undertake a data protection impact assessment (Articles 35 and 36). Besides, they must promptly
notify data breaches, with the aim of preventing ex post misuse of the data, ‘unless the personal
data breach is unlikely to result in a risk for the rights and freedoms of individuals’ (Articles 33
and 34). Remarkably, the notion of ‘personal data breach’ is defined as a ‘breach of security
leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or
access to, personal data transmitted, stored or otherwise processed’ (Article 4(12) on Definitions).
Actually, the risk-based approach to data protection is not a new concept, since it was already
foreseen under Directive 95/46/EC with regard to the security of processing (Article 17) and the
prior checking obligations by data protection authorities in case of operations likely to present
specific risks to the rights and freedoms of data subjects (Article 20) (Lynskey 2015, 81 ff). Yet, it
is the GDPR that gives full expression to such an approach by turning it into the key enforce-
ment model, while relying mostly on the data controllers to apply it.7 In addition to obligations
relating to security of processing and the obligation to carry out a data protection impact assess-
ment, as already pointed out, the risk-based approach has been extended and reflected in other
implementation measures such as the data protection by design principle (Article 25), and
records of processing activities (Article 30). Additionally, the Regulation encourages self-regula-
tion by the data controllers through codes of conduct and certification mechanisms (Articles 40
and 42).
Two critical differences stand out between the previous regime and the new one: first, the
accent placed by the GDPR on ‘high risks’ to the rights and freedoms of the data subjects such
as those involving a systematic and extensive evaluation of personal aspects which is based on
automated processing, processing on a large scale of specially sensitive data or a systematic
monitoring of a publicly accessible area on a large scale (Article 35, paragraph 3; Recital 84),
while the Directive did not contain such a restriction; second, whereas the Directive stated the
duty of Member States’ supervisory authority to keep a register of all processing operations noti-
fied, under the GDPR the role of the supervisory authority is limited to establishing and making
public a list of the kind of processing operations which are subject to the requirement for a data
protection impact assessment (Article 35, paragraph 4). Now, the rule is that records of process-
ing activities are a responsibility of enterprises and organisations employing more than 250 per-
sons (Article 30).
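
As a minimal sketch of what such a record of processing activities might capture, the data structure below loosely maps the items listed in Article 30(1); the field names are an illustrative assumption, not an official schema.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative record of one processing activity, loosely following the items
    # of Article 30(1) GDPR. Field names are assumptions; this is not exhaustive.
    @dataclass
    class ProcessingRecord:
        controller: str
        purposes: List[str]
        categories_of_data_subjects: List[str]
        categories_of_personal_data: List[str]
        recipients: List[str] = field(default_factory=list)
        third_country_transfers: List[str] = field(default_factory=list)
        retention_period: str = "unspecified"
        security_measures: List[str] = field(default_factory=list)

    record = ProcessingRecord(
        controller="Example Ltd",
        purposes=["customer relationship management"],
        categories_of_data_subjects=["customers"],
        categories_of_personal_data=["contact details", "purchase history"],
        security_measures=["pseudonymisation", "encryption at rest"],
    )
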
What distinguishes a risk-based approach is a systematic attempt to assign more resources to
those factors likely to have greater effect if the risk event occurs (Duckett et al. 2015, 379).
Regulation based on the assessment of risks is about the optimum allocation of resources
according to the impact and probability of risks and about accounting and legitimising such use
(Black and Baldwin 2010). In the public policy sphere, risk-based frameworks enable regulators to
channel their resources to those issues which pose the greatest risk to the achievement of their
objectives (Black 2010). Gellert (2015) recalls the linkage of risk-based regulation to New Public
Management, with its quest for ‘explicit standards and measures of performance’ and greater
parsimony in resource use, approximating private sector management styles.
As a matter of fact, the diverse levels of accountability foreseen by the GDPR owing to differ-
ent degrees of risk posed by the data processing raised the concern of the Article 29 DPWP,
which held that the risk-based approach should not be regarded as an alternative to well-estab-
lished data protection principles and rights. These must be equally strong when the processing
is relatively ‘low risk’, the Working Party underlined (Art. 29 DPWP 2014b). Accordingly, the risk-
based approach should be seen merely as demanding additional measures when risks are iden-
tified, not as a way to evade strict compliance in some situations. Yet, the Working Party admitted that a
data controller whose processing is relatively low risk may not have to do as much to comply
with its legal obligations as a data controller whose processing is high-risk (Art. 29 DPWP 2014b).
Overall, the problematic issue lies in the shifting of the main regulatory responsibility towards
the data controllers, as already pointed out. All the above-mentioned procedures leave a large
margin of judgment and autonomous decision-making to the operators regarding data protec-
tion and related privacy issues.
It is true that the data controller’s choices when carrying out a DPIA need to be reported to
the supervisory authority. The supervisory authority may then limit or prohibit the controller’s
plans if it finds that ‘the intended processing does not comply with this Regulation, in particular
where risks are insufficiently identified or mitigated’ (Article 36, paragraph 2). However, besides
leaving to the data controller the task of evaluating whether a DPIA is needed in the first place,
most data protection authorities, with the limited funds available to them, may be in a weaker
position vis-à-vis powerful online operators than hitherto (EUFRA 2010; Davies 2016, 290).
Moreover, the very scope of the DPIA is rather uncertain. The DPIA draws on the Privacy
Impact Assessment (PIA) currently practiced mainly in common law countries, even though a
pioneering case for DPIA in the EU already exists, on radio-frequency identification (RFID) appli-
cations (Art. 29 DPWP 2010b, 2011). Though PIA has been regarded as more than a simple com-
pliance check with the law (Wright and De Hert 2012, 3; Wright et al. 2014, 155), existing
guidelines on PIA tend to adopt pragmatic approaches centred on security management aspects
(ICO 2014; Commission Nationale Informatique et Libertés (CNIL) 2015; Mantelero and Vaciago
2015, 104). Privacy risks are mainly perceived in this context as risks to the reputation of the
organisation, which appear to be particularly acute in businesses that deal directly with the gen-
eral public (e.g. banking and retail). For example, for the UK Information Commissioner's Office
(ICO), ‘identifying, assessing, and managing privacy risks should not be different from the process
of identifying, assessing, and managing other risks faced by an organisation, such as environ-
mental, competitive, or financial risks.’ Consequently, the DPIA may turn out to be a mere man-
agerial exercise emptied of true substance (Gellert 2015; Wright et al. 2014, 156).
Unsurprisingly, the risk-based approach to data protection was strongly advocated by the lob-
bying coalitions (Horten 2016). For Digital Europe, for example, companies ‘are in the best pos-
ition to evaluate the risks that their data processing poses to interests of data subjects’ (Digital
Europe 2013). This assertion may cause scepticism; one might, however, admit that in the con-
text of a regime relying mostly on self-regulation, ‘a strong harm-based approach can help to
promote responsible data use based on risk management’ (Black 2010).
All in all, the risk-based approach results in replacing ex ante notification to, and control by,
the data protection authorities with enforcement means approximating the USA’s more liberal
regime (Hasty, Nagel and Subjally 2013).8 This means delegating to private organisations the
task of interpreting the regulatory norm and implementing the response, a role that is normally
assigned to the public authority (Bamberger 2010).
One may thereby conclude that, while reaffirming the data protection principles, the change
in their mode of implementation ends up altering the rationale itself of the data protection
regime by placing considerations of security above those of legality, meaning a ‘triumph of prag-
matism over principle’ (Kuner et al. 2015, 161–162; Davies 2016, 290).
The EDPS itself recognised that leaving the monitoring of compliance predominantly to self-
control does not shield against the peril of core data protection principles being compromised,
since it is often ‘a challenging task to decide what is fair and lawful and what is not when it
comes to big data analytics’ (EDPS 2015). The DPIA involves the evaluation of intangible values,
comprehensive balancing exercises, including of value conflicts, and consideration of many fac-
tors, such as whether the processing may lead to unfair discrimination or any other negative
impact on the individuals concerned or on society, an exercise one might doubt the operators
are in the best position to undertake (Art. 29 DPWP 2014a; EDPS 2015).
All these considerations suggest that the supervisory authority should play a more active role
in the required assessment and management of data protection risks. Matching the EU data pro-
tection risk-based approach with prior EU risk regulatory experiences might, we believe, provide
valuable insights in this regard.

4. Learning from the environmental impact assessment (EIA) procedure


Curiously, as a risk-based approach is being introduced in data protection law, consideration of
the challenges involved in the assessment and management of the risks concerned, in the light
of risk regulatory experiences in other domains, is only beginning to emerge (Costa 2012, 14).
Despite the different nature of the risks at stake, we believe that the well-established environ-
mental impact assessment (EIA) procedure, a key tool of environmental risk regulation, presently
ruled by Directive 2011/92/EU amended by Directive 2014/52/EU, might illuminate how to better
tackle the ambivalent effects of big data applications, and help refine the approach.

Generally speaking, the adoption of the risk-based approach to data protection might be
viewed as part of a societal process whereby the impacts of technological development are
increasingly perceived through the angle of risk, and regulated accordingly (Wilkinson, 2002).
Calling attention to the global risks ‘posed by the violation of freedom rights’ in the digital
era, Beck noted the fragility involved in dealing with these risks compared with environmental
or health-related ones. ‘The violation of our freedom does not hurt’, Beck added (Beck 2009;
Bergkamp 2017).
Hence the concepts of risk and risk regulation, and the scope of the ‘risk society’, extend
beyond conventional domains such as ecological risk to encompass risks to intangible values,
such as those arising from ICT uses (Renn 1992; Beck 1992; Adam et al. 2000; Beck
2009). This is a relatively new development since ICT have been regarded predominantly as tools
to be regulated by the rule of law.
It is worth recalling that environmental risk regulation has been seen from either a predominantly
technocratic or a more democratic perspective, as derived from risk society theory (Beck 2009; Renn
1992). Looking at personal data protection from the angle of risk society theory, as well as of
environmental risk regulation would imply considering what sorts of expertise might be better
suited to assessing risks to human rights and freedoms. Arguably, regulation of risks arising from
data mining and data analytics to human rights and freedoms would deserve being reinforced
not only with regard to transparency and overseeing, but also with regard to information and
participation (Rauhofer 2008, 190).
As said, it is still unclear how the DPIA will work in practice: how to ensure that risks to the
rights of data subjects are duly assessed by the operators, and whether it should go beyond a
narrow ‘harm-based’ approach to cover any ‘potential as well as actual adverse effect, assessed
on a very wide scale ranging from an impact on the person concerned to a general societal
impact (e.g. loss of social trust)’, as recommended by the Article 29 Data Protection Working
Party. Additional doubts include the extent
to which DPIA implies working with the affected people to identify and reduce the risks, and the
technical and institutional mechanisms required to implement such dialogue.
According to the GDPR, the DPIA is to be accomplished by the data controller, and the
European Commission may adopt guiding criteria by delegated acts. In this regard, a significant
difference vis-à-vis the EIA procedure lies in the role of the competent authority when compared
with the limited role of the supervisory authority in the reformed data protection regime.
Under the EIA procedure, the competent authority (typically, an agency of the Environmental
Ministry) plays a decisive part starting with the screening procedure to ensure that an EIA is only
commenced for projects likely to have significant effects on the environment, and the scoping
procedure to define the scope and level of detail of the environmental information to be submit-
ted in the form of an EIA report. Then the developer must provide the environmental authorities
and the public with an extensive set of information listed in Directive 2011/92/EU amended by
Directive 2014/52/EU, and undertake consultations with both.9 Moreover, a public consultation
is carried out under the auspices of the competent authority too. Whether to approve the pro-
ject or not is for the competent authority to decide, taking into consideration the results of the
consultations. The public is informed of the decision afterwards and can challenge the decision
before the courts. Procedural rationality appears especially critical for legitimating such decisions
in view of the inherent uncertainties (Majone 2010, 9).
Indeed, effective compliance with data protection principles is likely to be affected by the
uncertainties involved in assessing data’s potential bearing on individuals, for which there is no
or only limited experience (Van Asselt, Vos and Rooijackers 2009). As a matter of fact, the conven-
tional model presupposes that risks can be calculated, which typically relates to physical or tech-
nical impacts. As risk assessment applies to more complex, multifactorial fields, margins of
uncertainty and indeterminacy or even ignorance tend to grow (Wynne 1992; Duckett
et al. 2015).10

To provide guidance for coping with uncertain risks a precautionary approach has been
advanced (Tosun 2013, 20). The precautionary principle has been made a statutory requirement
under EU environmental law, the underlying idea being that ‘where there are threats of serious
or irreversible damage, lack of full scientific certainty shall not be used as a reason for postpon-
ing cost-effective measures to prevent environmental degradation’ (Principle 15 of Rio
Declaration, 1992). Applying this principle does not exclude a risk/benefit analysis; it rather calls
for a deeper and plural appreciation of the values and interests at stake. The principle also
implies that there is a social responsibility to protect the public from exposure to harm, when
scientific investigation has found a plausible risk. Choices must be rendered explicit and justified
in relation to other possible choices (Von Schomberg 2012). In view of the predictable uncertain-
ties in the assessment of data protection risks of emerging ICT applications, bringing the precau-
tionary approach into discussion could enrich the debate on this topic (Costa, 2012).
Despite the immaterial nature of the risks involved in the processing of personal data, one
might concede that accurate risk calculations are required for well-informed choices in this
domain, weighing privacy-related risks against the benefits of data releases (Narayanan, Huey and Felten 2016).
The need for input by the public has also been acknowledged. Individual users should be able
to voice their opinion as to the standards required, and be empowered to assess for themselves
whether the security measures a processor provides are adequate, helping them choose a
company with better privacy safeguards (Weber 2014, 296).
As a data protection principle, transparency features in Article 5(1)(a) of the GDPR, with
Recitals 39 and 58 elaborating on the notion. While a high degree of transparency is acknowl-
edged as a requirement for big data applications specifically, how to render it effective remains
an open, and surely not easy question (Pasquale 2016). Indeed, with the advent of big data, it is
often not the collection of information in itself that is sensitive, but the inherently obscured
inferences that are drawn from it and the way in which those inferences are drawn. Based on
such considerations, the EDPS recommended that the provisions on transparency be reinforced,
as well as ‘a new generation of user control’ implying ‘powerful rights of access’ and ‘effective
opt-out mechanisms’ (EDPS 2015). Both the EDPS and the Article 29 DPWP demanded that data
subjects be given access to their profiles, as well as to the logic used in algorithms to determine
the profiles. Moreover, for algorithms with foreseeable human impact to be interpretable by the
data subjects, special regulatory or methodological mechanisms should be implemented: algo-
rithmic auditing carried out by technically skilled entities, and the creation of a record of algorithmic
decision-making to unpack problematic or inaccurate decisions or to detect discrimination or
similar harms (Tutt 2016; Pasquale 2016). In other words, organisations should disclose their deci-
sional criteria, which is all the more important in the world of big data.11
Actually, the GDPR specifies responsibilities of data controllers relevant to decision-making
algorithms. Art. 12 (1) mandates that clear and plain language be used to inform about their
consequences. Additionally, ‘the controller shall, at the time when personal data are obtained,
provide the data subject with information on ‘ … the existence of automated decision-making,
including profiling, … meaningful information about the logic involved, as well as the signifi-
cance and the envisaged consequences of such processing for the data subject’ (Article 13, para-
graph 2 (f), Article 14, paragraph 2 (g)).12 Moreover, in order to ensure fair and transparent
processing, controllers should use adequate mathematical and statistical procedures for the
profiling and adopt measures which minimise the potential risks for the interests of data subjects.13
Furthermore, data subjects are granted a right to object to profiling methods (Art. 21 (1)) and a
right not to be subject to a decision based solely on automated processing which produces legal
or similarly significant effects for him or her (Article 22). These safeguards include the right to obtain
human intervention on the part of the controller and the possibility for the data subject to
express his or her point of view and to contest the decision.
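
The Regulation leaves open how controllers are to operationalise these duties in software. The sketch below is a hypothetical illustration only: the feature names, weights and threshold are invented, and it merely shows an automated decision being returned together with a plain-language explanation and a route to human review.

    # Hypothetical illustration: an automated credit decision accompanied by
    # 'meaningful information about the logic involved' (Articles 13-14) and a
    # route to human intervention and contestation (Article 22). All names,
    # weights and thresholds are invented for the example.
    WEIGHTS = {"income": 0.5, "existing_debt": -0.7, "years_at_address": 0.2}
    THRESHOLD = 1.0

    def decide(applicant: dict) -> dict:
        score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
        contributions = sorted(
            ((k, WEIGHTS[k] * applicant[k]) for k in WEIGHTS),
            key=lambda kv: abs(kv[1]),
            reverse=True,
        )
        return {
            "outcome": "approved" if score >= THRESHOLD else "refused",
            # Plain-language account of how each factor weighed on the decision.
            "explanation": [f"{name} contributed {value:+.2f}" for name, value in contributions],
            # Safeguard: the data subject can ask for human review and contest the decision.
            "human_review_contact": "review@example.org",
        }

    print(decide({"income": 1.2, "existing_debt": 0.8, "years_at_address": 2.0}))
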
Yet, here again, assessing the consequences, positive or negative, for the data subject in such
circumstances will certainly be a highly difficult exercise.14 A further issue is that the ‘collect-
everything approach’ inherent to big data means that future reuses are often not anticipated.
Considering the workings of algorithms specifically, various limitations, epistemic and technical,
as well as legal (e.g. trade secrets), hamper access to information and true understanding by users
and data subjects (Pasquale 2016; Mittelstadt et al. 2016).
At first glance, the Regulation furthers transparency and confers control on the data subjects.
But, to the extent that it will apply only to decisions that do not involve human judgment, such
as the automatic refusal of an online credit application or e-recruiting practices, it is likely to
affect a limited class of automated decisions. Besides, significant exemptions constrain the rights
of data subjects, namely where ‘the provision of such information proves impossible or would
involve a disproportionate effort’, something that it is for the data controller to evaluate (Article
14, paragraph 5).
Algorithmic auditing carried out by technically skilled entities has been suggested as a pre-
condition to verify correct functioning (Pasquale 2016). For analytics algorithms with foreseeable
human impact, auditing could have the additional advantage of creating a record of algorithmic
decision-making. This looks especially critical at a time when the data protection authorities
lessen their control over data processing, hence the opportunity to keep a register of data
processing instances. Techniques are currently being developed to allow for increasing transparency
of big data algorithms to external observers, yet further work is required to construct realistic
regulatory mechanisms and proper standards in this direction (ICO 2017).15
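
A minimal sketch of the kind of decision record such auditing presupposes is given below; the fields and the flat-file storage are assumptions for illustration, and a real audit scheme would also need integrity guarantees and access controls.

    import datetime
    import hashlib
    import json

    # Illustrative append-only log of automated decisions, so that an external
    # auditor can later reconstruct what was decided, by which model version and
    # on which inputs. Field names and storage are assumptions for the example.
    def log_decision(model_version: str, inputs: dict, outcome: str,
                     path: str = "decisions.log") -> None:
        entry = {
            "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
            "model_version": model_version,
            "inputs": inputs,
            "outcome": outcome,
        }
        # A digest of the entry helps an auditor detect later tampering.
        entry["digest"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(entry) + "\n")

    log_decision("pricing-model-3.1", {"segment": "new-customer"}, "discount-10-percent")
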
As in known risk regulation experiences, independent expert-based assessment to examine
how companies as well as governments collect and use big data, and to evaluate the inherent
risks, is key to the aforementioned propositions. To render these effective one would
need to reconsider the design of the DPIA procedure to encompass more active involvement of
the data protection authorities and of independent experts under their supervision in the light
of the EIA procedure.
Also, in contrast with this procedure, no true consultative or participative mechanisms are
being discussed in connection with the data protection regime. Indeed, the risks at issue, though
involving technical facets, call for extra public involvement for the risks to be duly perceived and
managed. As pointed out, an important feature of existing environmental risk regulation actually
is risk communication and participation. Stakeholders and the public should be informed, and
participate, as early as possible, in the decision-making process in order to avoid narrow framing
and to build trust (Rowe and Frewer 2000; Webler et al. 2014). Recent approaches to risk theory
in the light of practical experiences in risk management in various fields have underlined the
importance of combining risk analysis with other forms of knowledge about local contexts and
the perceptions and concerns of those who will be affected by decisions: ‘A discourse without a
systematic scientific basis is nothing but an empty vessel while, on the other hand, a discourse
that disregards the moral aspects of available options will aid and abet amoral actions.’ (Rosa,
Renn and McCright 2014, 172).
Under risk theory and practice, various methods of participation have been envisaged and
practiced, which relate to the nature of the risks and of the regulatory cultures. These include cit-
izens’ juries, citizens’ panels, consensus conferences, focus groups, and public hearings. In this
connection, it is perhaps startling that, compared to the turmoil and social mobilisation triggered
by other emerging technologies (e.g. genetically modified organisms), the risky impacts of ICT,
and now of big data technologies specifically, look somewhat underestimated by society. Perhaps
paradoxically, the Internet is far from being explored as a communication tool about the risks
raised by the Internet itself, even less as a participative one: citizens' understanding of online
operators' practices with regard to personal data is hindered by the operators' scarcely transparent
practices, notwithstanding the information made available in their privacy policies.16
Following the individualistic approach, which has moulded the data protection regime since
its inception, the GDPR assumes the mobilisation of the data subjects’ rights of access,
rectification and opposition, and now the right to be forgotten as critical enforcement tools.
However, since data subjects will most often not be in a position to anticipate risks that they
may incur from the processing of their personal data and act accordingly, forms of organised
collective defence, akin to those practiced in the impact assessment of potentially dangerous
products, have been advocated (Mantelero and Vaciago 2015, 140). Similarly, the idea of group
privacy has been put forward so that users find ways to organise themselves and act collectively
to defend their rights. This idea finds its justification in the recognition that group characteristics
are increasingly being used in place of individual characteristics in generating profiles and taking
action towards individuals (Mittelstadt et al. 2016). Evaluating risks of specific instances of data
mining and data analytics to rights and freedoms would surely benefit from institutionalised dia-
logue, and the plural perspectives that Internet users as data subjects and as citizens might con-
vey to identify the risks and minimise their impacts. In this light, it could be expected that the
data protection authorities be assigned appropriate competences and powers to frame the par-
ticipation of organised data subjects in the risk assessment and management procedures, and
devise the proper technical and institutional mechanisms.

5. Conclusion
The opacities, complexities and uncertainties that remain in assessing and managing personal
data-related risks in the present digital era, together with the predictable unwillingness of
powerful data controllers to open up their intents and their practices, substantiate this plea for
reconsidering the risk-based approach by assigning suitable competences and powers to data
protection authorities to ‘watch the watchers’ (Pasquale 2016). This should imply greater trans-
parency, providing appropriate frameworks for the consultation and participation of data sub-
jects, as well as for overseeing closely how the data controllers fulfil their duties and
responsibilities in order to prevent abuses. As it was devised, the current EU data protection
reform fails to provide the necessary caution that could be expected from a law designed to
guarantee the fundamental right to personal data protection in the face of evolving information
and communication technologies. It is not hard to infer that the automation inherent to data
mining and the analysis and reuse of large data sets obtained from unrelated sources render
data protection rules and principles such as consent at the stage of data collection, purpose limi-
tation and data minimisation difficult to apply. Yet, it is doubtful that the best way out of this
problem is to assign the prime responsibility for controlling compliance with data protection
principles and rights to the data controllers and to self-defence by the data subjects themselves.
As underlined, the risk-based approach as the key enforcement method under the GDPR results
in leaving data protection issues mainly to data controllers to decide, while data protection
authorities see their supervisory role significantly weakened. One may concede that the preference
given to the risk-based approach rested on the EU's pragmatic recognition of the limited capacity
of public authorities to handle the consequences of technological progress. Caught between its
twofold objective of strengthening the rights of the data subjects, and facilitating business, the
EU legislator ended up favouring the latter to the detriment of the former. This is all the more
so considering the growing appropriation and control of personal data by the major on-line
operators furthered by big data technologies, granting them a major structural power to shape
today’s critical choices, both individual and social, in access to information and knowledge. The
shortcomings of the risk-based approach are rendered more visible in this circumstance.
We believe that the regulation of risks for human rights and freedoms resulting from emerg-
ing ICT uses, and in particular from concrete projects or activities in data mining and data ana-
lytics, and automatic decision-making could and should be reinforced not only with regard to
transparency and overseeing, but also with regard to participation in consideration of previous
risk regulatory experiences such as the EIA procedure. As noted, various factors render risk
assessment and management of novel data applications, and the related access to information
and understanding by the data subjects, highly problematic. Experience with the EIA and similar
procedures suggests that recourse to independent expert-based algorithmic auditing, and the
institutionalisation of risk communication and participation procedures, might help overcome the
drawbacks of the risk-based approach as designed, with Internet users' input looking especially
important in view of the human and value-related facets involved. Accordingly, the role of the
data protection authorities should be reinforced so as to allow for more objective risk/benefit
evaluation of personal data reuses from both individual and society’s standpoints, whilst greater
transparency about the choices is secured.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes
1. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection
of natural persons with regard to the processing of personal data and on the free movement of such data,
and repealing Directive 95/46/EC (General Data Protection Regulation), Official Journal of the European Union
L 119/1, 4 May 2016.
2. Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of
individuals with regard to the processing of personal data and on the free movement of such data, Official
Journal L 281, 23 November 1995.
3. On the definitions of data controller and data processor see Art. 29 DPWP (2010a).
4. Big data rely not only on the increasing ability of technology to support the collection and storage of large
amounts of data, but also on its power, using algorithms, to assist in analysing, understanding and taking
advantage of the value of the data to inform decisions by enabling identification of patterns among different
sources and datasets. Data analytics has been defined as the practice of using algorithms to make sense of
streams of data. Analytics identifies relationships and patterns across vast and distributed datasets (Mittelstadt
et al. 2016).
5. ‘Les assureurs demandent à leurs clients de se mettre à nu. Generali lance une assurance ‘comportementale’
dans la santé. Une première en France’, and ‘Assurance: votre vie privée vaut bien une ristourne’, Le Monde, 7
September 2016.
6. The recent scandal involving Cambridge Analytica illustrates this point (Schultz 2016; Gibney 2018).
7. Note that the term “risk” appears 76 times in the text of the Regulation, whereas it appeared 8 times in the
text of the Directive.
8. This is ultimately in line with the world trend towards shifting the focus of privacy protection from informed
consent at the point of collecting personal data to accountable and responsible uses of personal data
(ITU 2014).
9. The information to be submitted by the developer to the competent authority in the EIA report comprises a
description of the project, a description of the likely significant effects of the project on the environment, a
description of measures envisaged in order to avoid, prevent or reduce and, if possible, offset likely significant
adverse effects on the environment, a description of the reasonable alternatives studied by the developer,
and an indication of the main reasons for the option chosen (Article 3 (5) of Directive 2011/92/EU amended
by Directive 2014/52/EU).
10. Scientific uncertainties have been categorised as: (1) Large uncertainties in outcomes relative to the expected
values; (2) Poor knowledge basis for the assigned probabilities; (3) Large uncertainties about frequentist
probabilities (chances); (4) It is difficult to specify a set of possible consequences; (5) Lack of understanding of
how the consequences (outcomes) are influenced by underlying factors (Aven 2011, 31).
11. In the USA, an initiative by the Federal Trade Commission, named ‘Reclaim Your Name’, is meant to empower
the consumer to find out how brokers are collecting and using data; give her access to information that data
brokers have amassed about her; allow her to opt-out if she learns a data broker is selling her information for
marketing purposes; and provide her the opportunity to correct errors in information used for substantive
decisions – like credit, insurance, employment, and other benefits. https://www.ftc.gov/sites/default/files/documents/public_statements/reclaim-your-name/130626computersfreedom.pdf.
12. Some everyday examples where the logic of decision-making should be disclosed include a personalised car
insurance scheme (using car sensor data to judge driving habits); credit scoring services; a pricing and
marketing system that determines how much discount an individual will receive, or what media content to
recommend to an individual. Transparency should include, for example, informing people about re-
identification risks stemming from data collected about them (Narayanan, Huey and Felten 2016).
13. Suggestions in the direction of more transparency and user control came from Member States’ supervisory
authorities as well. The ICO of the UK, for example, underlined the need for prior consent before an
organisation starts analysing data collected for one purpose for a different purpose that is not apparent to
the individuals concerned (Information Commissioner’s Office, ‘Big Data and Data Protection’, paragraph 60,
https://ico.org.uk/media/for-organisations/documents/1541/big-data-and-data-protection.pdf). And, ICO adds,
‘the apparent complexity of big data analytics should not become an excuse for failing to seek consent where
it is required. Organisations must find the point at which to explain the benefits of the analytics and present
users with a meaningful choice – and then respect that choice.’ Also, ‘if, for example, information that people
have put on the social media is going to be used to assess their health risks or their credit worthiness, or to
market certain products to them, then unless they are informed of this and asked to give their consent, it is
unlikely to be either fair or compatible’ (Ibid, paragraph 69).
14. In a report published in 2013, the World Economic Forum emphasised the importance of ensuring
understanding beyond transparency, in the following terms: ‘People need to understand how data is being
collected, whether with their consent or without – through observations and tracking mechanisms given the
low cost of gathering and analyzing data’, (World Economic Forum 2013).
15. Examples of real-world systems to which such techniques have been applied include price discrimination in
popular e-commerce retailers and Uber’s surge-pricing algorithm (Le Chen and Mislove 2015).
16. An example being Microsoft’s privacy policies, see https://www.microsoft.com/en-us/TrustCenter/Transparency.
This is however a case where customer data are declared as being used only for purposes compatible with
providing services like troubleshooting or improving features (such as protection from malware).

References
Adam, B., U. Beck, and J. van Loon. (eds.) 2000. The Risk Society and Beyond: Critical Issues for Social Theory. London:
Sage.
Art. 29 DPWP. 2010a. Opinion 01/2010 on the concepts of ‘controller’ and ‘processor’. Article 29 Data Protection
Working Party, 16 February. (WP 169).
Art. 29 DPWP. 2010b. Opinion 5/2010 on the Industry Proposal for a Privacy and Data Protection Impact
Assessment Framework for RFID Applications. Article 29 Data Protection Working Party. 13 July. (WP 175).
Art. 29 DPWP. 2011. Opinion 9/2011 on the revised Industry Proposal for a Privacy and Data Protection Impact
Assessment Framework for RFID Applications’. Article 29 Data Protection Working Party. 11 February (WP 180).
Art. 29 DPWP. 2013. Opinion 03/2013 on Purpose Limitation. Article 29 Data Protection Working Party. 2 April. (WP
203).
Art. 29 DPWP. 2014a. Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7
of Directive 95/46/EC. Article 29 Data Protection Working Party. 9 April. (WP 217).
Art. 29 DPWP. 2014b. Statement on the role of a risk-based approach in data protection legal frameworks’. Data
Protection Working Party. 30 May. (WP 218).
Aven, T. 2011. “On Different Types of Uncertainties in the Context of the Precautionary Principle.” Risk Analysis 31 (10): 1515–52.
Bamberger, K. A. 2010. “Technologies of Compliance: Risk and Regulation in a Digital Age.” Texas Law Review 88, UC Berkeley Public Law Research Paper No. 1463727. https://ssrn.com/abstract=1463727
Beck, U. 1992. The Risk Society: Towards a New Modernity. London: Sage.
Beck, U. 2009. “Critical Theory of World Risk Society: A Cosmopolitan Vision.” Constellations 16 (1):3–22.
Beck, U. 2013. “The Digital Freedom Risk? Too Fragile an Acknowledgment.” Open Democracy. August 30. http://www.opendemocracy.net/can-europemake-it/ulrich-beck/digital-freedom-risk-too-fragile-acknowledgment
Bergkamp, L. 2017. “The Concept of Risk Society as a Model for Risk Regulation – Its Hidden and Not so Hidden
Ambitions, Side Effects, and Risks.” Journal of Risk Research 20 (10):1275–91.
Black, J. 2010. “Risk-based Regulation: Choices, Practices and Lessons Being Learnt” In: OECD, Risk and Regulatory
Policy: Improving the Governance of Risk. OECD Publishing. http://dx.doi.org/10.1787/9789264082939-11-en
Black, J., and R. Baldwin. 2010. “Really Responsive Risk-Based Regulation.” Law & Policy 32 (2):181–213.
Boehme-Neßler, V. 2016. “Privacy: A Matter of Democracy. Why Democracy Needs Privacy and Data Protection.”
International Data Privacy Law 6 (3):222–9.
Boyd, D., and K. Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15 (5):
662–79.
Brill, J. 2013. ‘Reclaim your name’, keynote address to 23rd computers freedom and privacy conference. Federal
Trade Commission, Washington, DC, June 26. https://www.ftc.gov/sites/default/files/documents/public_statements/reclaim-your-name/130626computersfreedom.pdf
Cardon, D. 2015. À quoi rêvent les algorithmes. Nos vies à l’heure des big data. Paris: Seuil.
Colonna, L. 2014. “Data Mining and Its Paradoxical Relationship to the Purpose Limitation Principle.” In Reloading
Data Protection: Multidisciplinary Insights and Contemporary Challenges, edited by S. Gutwirth, R. Leenes, and P.
De Hert, 299–321. Berlin: Springer.
Commission Nationale Informatique et Libertés (CNIL). 2015. Privacy Impact Assessment (PIA): Tools (Templates and Knowledge Bases). Paris: CNIL.
Costa, L. 2012. “Privacy and the Precautionary Principle.” Computer Law & Security Review 28 (1):14–24.
Davies, S. 2016. “The Data Protection Regulation: A Triumph of Pragmatism over Principle?” European Data
Protection Law Review 2 (3):290–6.
De Hert, P., and V. Papakonstantinou. 2016. “The New General Data Protection Regulation: Still a Sound System for
the Protection of Individuals?” Computer Law & Security Review 32 (2):179–94.
Digital Europe. 2013. “Comments on the Risk-Based Approach.” August 28. http://teknologiateollisuus.fi/sites/default/files/file_attachments/elinkeinopolitiikka_digitalisaatio_tietosuoja_digitaleurope_risk_based_approach.pdf
Duckett, D., B. Wynne, R. B. Christley, A. L. Heathwaite, M. Mort, Z. Austin, J. M. Wastling, S. M. Latham, R. Alcock, and P. Haygarth. 2015. “Can Policy Be Risk-Based? The Cultural Theory of Risk and the Case of Livestock Disease Containment.” Sociologia Ruralis 55 (4):379–99.
EC. 2010. Communication from the Commission to the European Parliament, the Council, the Economic and Social
Committee and the Committee of the Regions, ‘A Comprehensive Approach on Personal Data Protection in the
European Union’. European Commission. COM (2010) 609 final. Brussels, November 4.
EC. 2015. Press Release. Agreement on Commission’s EU Data Protection Reform will Boost Digital Single Market.
Brussels: European Commission, 15 December.
EC. 2016. “The EU Data Protection Reform and Big Data.” Factsheet, March.
EDPS. 2015. “Opinion 7/2015, Meeting the Challenges of Big Data. A Call for Transparency, User Control, Data
Protection by Design and Accountability.” European Data Protection Supervisor, November 19.
EUFRA. 2010. “Data Protection in the European Union: The Role of National Data Protection Authorities.” European
Union Agency for Fundamental Rights. Luxembourg: Publications Office of the European Union.
European Parliament. 2013. National Programmes for Mass Surveillance of Personal Data in EU Member States and
Their Compatibility with EU Law. Study PE 493.032. Directorate General for Internal Policies, Policy Department
Citizens’ Rights and Constitutional Affairs.
Ferretti, F. 2012. “A European Perspective on Data Processing Consent through the Re-Conceptualization of
European Data Protection’s Looking Glass after the Lisbon Treaty: Taking Rights Seriously.” European Review of
Private Law 2:473–506.
Gellert, R. 2015. “Data Protection: A Risk Regulation? Between the Risk Management of Everything and the Precautionary Alternative.” International Data Privacy Law 5 (1):3–19.
Gibney, E. 2018. “The Scant Science behind Cambridge Analytica’s Controversial Marketing Techniques.” Nature,
March 29. https://www.nature.com/articles/d41586-018-03880-4.
Hasty, R., T. W. Nagel, and M. Subjally. 2013. “Data Protection Law in the USA.” A4ID Advocates for International Development. https://www.neighborhoodindicators.org/sites/default/files/course-materials/A4ID_DataProtectionLaw%20.pdf
Horten, M. 2016. The Closing of the Net. Cambridge: Polity Press.
ICO. 2014. Conducting Privacy Impact Assessments: Code of Practice. Information Commissioner’s Office. https://ico.org.uk/media/for-organisations/documents/1595/pia-code-of-practice.pdf
ICO. 2017. Big Data, Artificial Intelligence, Machine Learning and Data Protection. Information Commissioner’s Office. https://ico.org.uk/media/for-organisations/documents/1541/big-data-and-data-protection.pdf
ITU. 2014. GSR Discussion Paper: Big Data – Opportunity or Threat? International Telecommunication Union. https://www.itu.int/en/ITU-D/Conferences/GSR/Documents/GSR2014/Discussion%20papers%20and%20presentations%20-%20GSR14/Session3_GSR14-DiscussionPaper-BigData.pdf
Koops, B. J., and R. E. Leenes. 2014. “Privacy Regulation Cannot Be Hardcoded. A Critical Comment on the ‘Privacy
by Design’ Provision in Data-Protection Law.” International Review of Law, Computers & Technology 28 (2):159–71.
Kuner, C., F. H. Cate, C. Millard, D. J. B. Svantesson, and O. Lynskey. 2015. “Editorial: The Data Protection Credibility Crisis.” International Data Privacy Law 5 (3):161–2.
Le Chen, A. Mislove, and C. Wilson. 2015. “Peeking Beneath the Hood of Uber.” https://www.ftc.gov/system/files/documents/public_comments/2015/09/00011-97592.pdf
Le Métayer, D., and S. Monteleone. 2009. “Automated Consent through Privacy Agents: Legal Requirements and
Technical Architecture.” Computer Law & Security Review 25 (2):136–44.
Lynskey, O. 2015. The Foundations of EU Data Protection Law. Oxford: Oxford University Press.
Majone, G. 2010. “Foundations of Risk Regulation: Science, Decision-Making, Policy Learning and Institutional
Reform.” European Journal of Risk Regulation 1 (1):5–19.
Mantelero, A., and G. Vaciago. 2013. “The ‘Dark Side’ of Big Data: Private and Public Interaction in Social
Surveillance. How Data Collections by Private Entities Affect Governmental Social Control and How the EU
Reform on Data Protection Responds.” Computer Law Review International 6:161–70.
Mantelero, A., and G. Vaciago. 2015. “Data Protection in a Big Data Society.” Digital Investigation 15:104–9.
Mittelstadt, B. D., P. Allo, M. Taddeo, S. Wachter, and L. Floridi. 2016. “The Ethics of Algorithms: Mapping the
Debate.” Big Data & Society, July–December, 1–21.
Morozov, E. 2015. Le mirage numérique: Pour une politique du big data. Paris: Les Prairies Ordinaires.
Narayanan, A., J. Huey, and E. Felten. 2016. “A Precautionary Approach to Big Data Privacy.” In Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection, edited by S. Gutwirth, R. Leenes, and P. De Hert, 357–385. Berlin: Springer.
Pasquale, F. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.
Rauhofer, J. 2008. “Privacy Is Dead, Get over It! Information Privacy and the Dream of a Risk-Free Society.”
Information & Communications Technology Law 17 (3):185–97.
Reding, V. 2011. “Assuring Data Protection in the Age of the Internet.” SPEECH/11/452 at the BBA (British Bankers’ Association) Data Protection and Privacy Conference, London, June 20. europa.eu/rapid/press-release_SPEECH-11-452_en.pdf
Renn, O. 1992. “Concepts of Risk: A Classification.” In Social Theories of Risk, edited by S. Krimsky and D. Golding, 53–80. Westport, CT: Praeger.
Rosen, J. 2012. “The Deciders: The Future of Privacy and Free Speech in the Age of Facebook and Google.”
Fordham Law Review 80 (4):1525–38.
Rowe, G., and L. F. Frewer. 2000. “Public Participation Methods: A Framework for Evaluation.” Science, Technology &
Human Values 25 (1):3–29.
Shaw, T. 2017. “Invisible Manipulators of Your Mind.” The New York Review of Books, April 20.
Shultz, D. 2016. “Could Google Influence the Presidential Election?” October 25. http://www.sciencemag.org/news/2016/10/could-google-influence-presidential-election
Tene, O. 2010. “Privacy: The New Generations.” International Data Privacy Law 15:13. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1710688
Tosun, J. 2013. “How the EU Handles Uncertain Risks: Understanding the Role of the Precautionary Principle.”
Journal of European Public Policy 20 (10):1517–28.
Van Asselt, M. B. A., E. Vos, and B. Rooijackers. 2009. “Science, Knowledge and Uncertainty in EU Risk Regulation.”
In Uncertain Risks Regulated, edited by M. Everson and E. Vos, 359–388. Abingdon: Routledge.
Von Schomberg, R. 2012. “The Precautionary Principle: Its Use within Hard and Soft Law.” European Journal of Risk
Regulation 3 (2):147–56.
Weber, R. 2014. “Privacy Management Practices in the Proposed EU Regulation.” International Data Privacy Law 4
(4):290–7.
Webler, T., D. W. North, P. C. Stern, and P. Field. 2014. “Public and Stakeholder Participation for Managing and
Reducing Risks of Shale Gas Development.” Environmental Science & Technology 48 (15):8388–96.
Wilkinson, I. 2002. Anxiety in a Risk Society. London: Routledge.
World Economic Forum. 2013. Unlocking the Value of Personal Data: From Collection to Usage. http://www3.weforum.org/docs/WEF_IT_UnlockingValuePersonalData_CollectionUsage_Report_2013.pdf
Wright, D., and P. De Hert. 2012. “Introduction to Privacy Impact Assessment.” In Privacy Impact Assessment, edited
by D. Wright and P. De Hert, 3–32. Springer.
Wright, D., K. Wadhwa, M. Lagazio, C. Raab, and E. Charikane. 2014. “Integrating Privacy Impact Assessment in Risk
Management.” International Data Privacy Law 4 (2):155–70.
Wynne, B. 1992. “Uncertainty and Environmental Learning. Reconceiving Science and Policy in the Preventive
Paradigm.” Global Environmental Change 2 (2):111–27.
Zanfir, G. 2014. “Forgetting About Consent: Why the Focus Should Be on ‘Suitable Safeguards’.” In Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges, edited by S. Gutwirth, R. Leenes, and P. De Hert, 237–257. Berlin: Springer.
Zittrain, J. 2014. “Facebook Could Decide an Election Without Anyone Ever Finding Out.” New Republic. https://newrepublic.com/article/117878/information-fiduciary-solution-facebook-digital-gerrymandering
