The Public Governance of Explainable AI
Opinion paper presented at exAI2020 / Invited submission for Computer Law and
Security Review
Corresponding Author
Perry Keller
Reader in Media and Information Law
School of Law, King’s College London
[email protected]
Archie Drake
Research Associate
School of Law, King’s College London
September 2020
______________________
In this comment, we address the apparent exclusivity and paternalism of goal and
standard setting for explainable AI and its implications for the public governance of AI.
We argue that the widening use of AI decision-making, including the development of
autonomous systems, not only poses widely discussed risks for human autonomy in
itself, but is also the subject of a standard-setting process that is remarkably closed to
effective public contestation. The implications of this turn in governance for democratic
decision-making in Britain have also yet to be fully appreciated. As the governance of AI
gathers pace, one of the major tasks will be to ensure not only that AI systems are
technically ‘explainable’ but that, in a fuller sense, the relevant standards and rules are
contestable and that governing institutions and processes are open to democratic
contestation.
In other work, we have asserted – and continue to investigate – the idea that UK AI
governance is paternalist in nature.1 This assertion might seem surprising in the face of
the current vigorous debate over the importance of AI explainability in multiple domains.
Explainable AI is undoubtedly a rational solution to the confidentiality, complexity and
opacity problems that render wide public access to and understanding of AI decision-
making impractical, if not impossible. Building reliable explainability into the functioning
of AI systems will certainly improve the possibilities for autonomy in personal decision-
making, especially where AI has a relatively high impact on people’s lives.2 In the best
of outcomes, such ‘human-centred’ explainability will foster trust and genuine
trustworthiness, which will promote the public legitimacy of AI decision-making.3
Achieving that virtuous circle is the challenge of the moment.
On the public governance side, explainability as a solution to the ‘black box’ problem is
being quickly absorbed into legal and ethical thinking.6 In both spheres, the potential
harms of AI applications engage complex questions of fundamental values and rights. In
information law, data protection has provided a key framework for subjecting automated
decision-making to specific rights and duties, which are directly rooted in fundamental
rights.7 Other legal fields, from contract to competition law, are also widening in scope to
address the need to balance AI’s potential benefits against its risks of harm.8 The eruption of
AI as a major public policy issue has also fuelled a global proliferation of AI ethical
guidelines.9 Indeed, Charles Raab asserts that ‘[t]here has been a noticeable ‘turn’ from
reliance on legal regulation to an emphasis on ethics – and accountability and
transparency as well – in this part of the field of information policy’.10 Explainable AI as a
public governance question has consequently become an increasingly congested legal
and ethical challenge.

2 ICO and Alan Turing Institute, ‘Explaining decisions made with AI’, May 2020,
https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-artificial-intelligence/
3 European Commission, High-Level Expert Group on Artificial Intelligence, ‘The Ethics Guidelines
for Trustworthy Artificial Intelligence’, April 2019, https://ec.europa.eu/futurium/en/ai-alliance-consultation;
Upol Ehsan and Mark O. Riedl, ‘Human-centered Explainable AI: Towards a Reflective
Sociotechnical Approach’, arXiv:2002.01092 [cs.HC], February 2020
4 Hamon, R., Junklewitz, H. and Sanchez Martin, J., ‘Robustness and Explainability of Artificial
Intelligence’, JRC Technical Report (2020); Maja Brkan and Grégory Bonnet, ‘Legal and Technical
Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White
Boxes and Fata Morganas’, (2020) 11(1) European Journal of Risk Regulation 18
6 Roger Brownsword, Eloise Scotford and Karen Yeung (eds), The Oxford Handbook of Law,
Regulation and Technology (Oxford University Press, 2017)
7 Sandra Wachter and Brent Mittelstadt, ‘A Right to Reasonable Inferences: Re-Thinking
Data Protection Law in the Age of Big Data and AI’, (2019) Columbia Business Law Review 494
8 European Commission, Expert Group on Liability and New Technologies, ‘Report on Liability for
Artificial Intelligence and Other Emerging Digital Technologies’, 2019
On the face of it, this is unexceptional. In standard-setting for new technologies, the
state is expected to dominate and, moreover, effective governance of AI technologies is
likely to require significant exclusivity and paternalism.15 Given its explicit economic goals,
it is unsurprising that the government may favour collaborations with firms over those
with civil institutions. Key regulators, empowered and limited by legislation, will give
principles and standards meaning in practice.16 There have of course been parliamentary
inquiries and consultation exercises. Drawing on societal values expressed in
fundamental rights, the courts can also be expected to join in shaping the demands of
public governance on AI explainability.
Burton, Simon; Habli, Ibrahim; Lawton, Tom; McDermid, John; Morgan, Phillip James; Porter,
Zoe Larissa Mayne, ‘Mind the Gaps: Assuring the Safety of Autonomous Systems from an
Engineering, Ethical, and Legal Perspective’, (2019) Artificial Intelligence, 279
13 Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The Global Landscape of AI Ethics Guidelines’,
(2019) 1 Nature Machine Intelligence 389
14 Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is
Probably Not the Remedy you are Looking for’, (2017) 16 Duke Law and Technology Review, 17
15 Shirley Pearce, ‘AI in the UK: The Story So Far’, Committee on Standards in Public Life Blog, 19
On the other hand, the extent of this exclusivity and paternalism in the development of
UK AI governance is also genuinely remarkable. It runs counter to trends across liberal
democracies towards widening the avenues for active public participation in
policymaking, not least through freedom of information rights and innovations in judicial
review. The direction of travel seems to undercut the prospects for the virtuous circle of
human-centredness and trust mentioned above; without broader participation, it is
unclear how an ‘authorizing environment’ of legitimacy and support will be achieved.17
The reasons for this shift in governance back towards historic expectations of exclusivity,
paternalism and even deference are twofold. The first is a consequence of the societal
shift towards reliance on complex, interconnected technologies in every aspect of human
life. In these circumstances, direct public participation in standard-setting is impractical
and burdensome. The complexity and opacity of AI systems, which are often daunting even for
AI specialists, are well beyond the comprehension of ordinary members of the public. The
economic and security consequences of disclosing confidential information are,
moreover, seemingly too high to permit anything but controlled public consultation. We
will address these pragmatic objections in our conclusions.
The second reason concerns the impact of AI’s complexity and opacity on the
effectiveness of public information access rights and, in particular, the importance of
rights to explainability. As noted above, a major purpose of AI explainability is to
enhance the trustworthiness and legitimacy of AI systems by rendering at least some AI
decision-making sufficiently understandable to stakeholders.18 One key question is
therefore who should be empowered to require that a particular AI application be
rendered explainable. This is undoubtedly a power necessary for effective regulatory
supervision and control of AI systems, including, for example, the work of the Financial
Conduct Authority and the Information Commissioner’s Office.19 On the other hand, a
regulator’s power to compel explanations typically comes with significant safeguards for
any disclosure of confidential information as well as duties to temper regulatory
oversight to suit levels of risk.20
In terms of public governance, the difference between rights to information and rights to
explanation is of historic importance. Direct public rights to access information and to
compel explanations first emerged in Victorian reforms to the disclosure rules of civil
litigation and, much later, for disclosure in judicial review. While these litigant rights can
potentially be used to force the disclosure of evidence necessary to advance specific
litigation, they are subject to strict confidentiality and collateral use restrictions. Save for
evidence subsequently disclosed to the public through court proceedings, information
disclosed to other parties cannot normally be used to inform the public. The point here is
that, while litigation disclosure rules have the potential to compel considerable AI
explainability in the future, litigation only provides a narrow, albeit powerful, avenue into
matters of public concern.

19 See, for example, the power to compel a person under investigation to attend and answer
questions under the Financial Services and Markets Act 2000, 2000 c. 8, s 171; see also the
expanded powers of the Information Commissioner’s Office created under Part 5 and Schedules
12-15 of the Data Protection Act 2018
20 On the lifetime confidentiality obligations of United Kingdom government employees, see the
Civil Service Code, published as statutory guidance under s 5 of the Constitutional Reform and
Governance Act 2010, 2010 c. 25
It was only through the Freedom of Information Act 2000 (FOIA) that the public gained
an unsupervised right to compel the disclosure of information held by public
authorities.21 The FOI access right considerably enlarged the scope for individuals or
private entities to drive transparency in governmental decision making and also widened
the possibilities for radically shifting the agenda in public affairs.22 The Freedom of
Information Act is, of course, a work of carefully constructed compromises. To minimise
the risks of damaging disclosure of confidential information or overwhelming central and
local government with impractically burdensome requests, the Act not only brims with
overlapping exemptions, but also strictly limits the scope of the FOI access right. It is
simply a right to existing information, entailing no duty to create information and no
duty to explain.
Despite these structural compromises, FOIA changed the character of public governance
in the United Kingdom. While a public authority could not be compelled to explain its
decisions, FOIA could be used to force the disclosure of the information that was used to
make the decision. The rationality of outcomes could at least be assessed by evaluating
the factors taken into account in the decision-making process. In opening this potential
route into the heart of governmental decision-making, the FOIA information access right
has unmistakeable links with ideas of deliberative and participatory democracy.23 As a
right limited to existing information, the FOIA right will often fail to break through the
complexity and opacity of AI decision-making. Nonetheless, it does operationalise and
demonstrate the value of the idea that decision-making of public importance should
be contestable and open to recurring public contestation.24
Conclusion
Information law, which concerns access, control and use of information, is being re-
made through the impact of AI applications.25 AI’s confidentiality, complexity and opacity
characteristics are becoming an accepted barrier to direct public enquiry, defeating the
contestability that democratic government requires. Paternalist concern by legislators
and regulators is, however, not an adequate substitute for engaged citizens who wish to
advance dissenting views and challenge the definitions of AI risk and harm imposed
upon them. More particularly, in striving to achieve the high coherence demands of
explainable AI, legislators and regulators are unlikely to answer fully the questions of
explainable AI for what purposes and explainable AI for whom.

21 Freedom of Information Act 2000 (FOIA), c. 36, s 1 – General right of access to information held
by public authorities
22 B. Worthy and R. Hazell, ‘Disruptive, Dynamic and Democratic? Ten Years of FOI in the UK’,
(2017) 70(1) Parliamentary Affairs 22, 40; B. Worthy, ‘Freedom of Information and the Media’ in
H. Tumber and S. Waisbord (eds), The Routledge Companion to Media and Human Rights
(Routledge, 2017) 60; M. Schudson, The Rise of the Right to Know: Politics and the Culture of
Transparency, 1945–1975 (Belknap Press, 2015)
23 Stephen Elstub, ‘Deliberative and Participatory Democracy’ in Andre Bächtiger, John S. Dryzek,
Jane Mansbridge and Mark Warren (eds), The Oxford Handbook of Deliberative Democracy
(Oxford University Press, 2018)
24 Deirdre K. Mulligan, Daniel Kluttz, and Nitin Kohli, ‘Shaping Our Tools: Contestability as a Means
to Promote Responsible Algorithmic Decision Making in the Professions’ in Kevin Werbach (ed),
After the Digital Tornado: Networks, Algorithms, Humanity (Cambridge University Press, 2020)