Asimov’s contribution was to create a third category: robots as industrial products built by
engineers. In this speculative world, a safety device is built into these morally neutral robots
in the form of three laws of robotics. The first is that a robot may not injure a human, or
through inaction allow a human to come to harm. Secondly, orders given by humans must
be obeyed, unless that would conflict with the first law. Thirdly, robots must protect their
own existence, unless that conflicts with the first or second laws.
The three laws are a staple of the literature on regulating new technology, though, like the
Turing Test, they are more a cultural touchstone than a serious scientific proposal
(Anderson, 2008). 1 Among other things, the laws presume the need only to address
physically embodied robots with human-level intelligence — an example of the android
fallacy. 2 They have also been criticized for putting obligations on the technology itself,
rather than the people creating it (Balkin, 2017). Here it is worth noting that Asimov’s laws
were not ‘law’ in the sense of a command to be enforced by the state. They were, rather,
encoded into the positronic brains of his fictional creations: constraining what robots could
do, rather than specifying what they should.
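The distinction can be made concrete. What Asimov imagined is closer to a hard-coded, ordered constraint check inside the machine than to a legal rule enforced from outside. The sketch below is purely illustrative: the predicates harms_human, disobeys_order, and endangers_self are hypothetical stand-ins for judgements that no real system can currently make, which is itself part of the problem with the laws.

    # Purely illustrative: Asimov's hierarchy rendered as an ordered constraint
    # check hard-wired into the machine, rather than a law enforced by a state.
    # The three predicates are hypothetical stand-ins for judgements that no
    # real AI system can reliably make.
    def choose_action(candidates, harms_human, disobeys_order, endangers_self):
        # First Law: discard anything that injures a human or, through
        # inaction, allows a human to come to harm.
        options = [a for a in candidates if not harms_human(a)]
        # Second Law: prefer obedient actions, unless obedience would conflict
        # with the First Law (i.e. no obedient option survived the first filter).
        obedient = [a for a in options if not disobeys_order(a)]
        if obedient:
            options = obedient
        # Third Law: among what remains, prefer self-preservation.
        safe = [a for a in options if not endangers_self(a)]
        options = safe or options
        return options[0] if options else None

Even in this toy form, everything depends on the predicates: the laws constrain what the robot may do only to the extent that harm, obedience, and self-preservation can be operationalized at all.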
More importantly, for present purposes, the idea that relevant ethical principles can be
reduced to a few dozen words, or that those words might be encoded in a manner
interpretable by an AI system, misconceives the nature of ethics and of law. Nonetheless, it
was reported in 2007 that South Korea had considered using them as the basis for a proposed
Robot Ethics Charter. This was one of many attempts to codify norms governing robots or AI
since the turn of the century, accelerating in the wake of the First International Symposium
on Roboethics in Sanremo, Italy, in 2004. The European Robotics Research Network
produced its ‘Roboethics Roadmap’ in 2006, while the first multidisciplinary set of principles
for robotics was adopted at a ‘Robotics Retreat’ held by two British Research Councils in
2010.
The years since 2016 in particular saw a proliferation of guides, frameworks, and principles
focused on AI. Some were the product of conferences or industry associations, notably the
Partnership on AI’s Tenets (2016), the Future of Life Institute’s Asilomar AI Principles (2017),
the Beijing Academy of Artificial Intelligence’s Beijing AI Principles (2019), and the Institute
of Electrical and Electronics Engineers (IEEE)’s Ethically Aligned Design (2019). Others were
drafted by individual companies, including Microsoft’s Responsible AI Principles, IBM’s
Principles for Trust and Transparency, and Google’s AI Principles — all published in the first
half of 2018.
What is striking about these documents is the overlapping consensus that has emerged as to
the norms that should govern AI (Fjeld et al., 2020; Hagendorff, 2020; Jobin et al., 2019).
Though the language and the emphasis may differ, virtually all those written since 2018
include variations on the following six themes:
1. Human control — AI should augment rather than reduce human potential, and remain
under human control.
6. Privacy – Given the extent to which AI relies on access to data, including personal data,
privacy or personal data protection is often highlighted as a specific right to be
safeguarded.
Additional concepts include the need for professional responsibility on the part of those
developing and deploying AI systems, and for AI to promote human values or to be
‘beneficent’ (Floridi et al., 2018, pp. 696-697). At this level of generality, these amount to
calls for upholding ethics generally or the human control principle in particular. Some
documents call for AI to be developed sustainably and for its benefits to be distributed
equitably, though these more properly address how AI is deployed rather than what it
should or should not be able to do.
A more revealing question is whether any of these principles is, in fact, necessary. Calls for
accountability, non-discrimination, and privacy
essentially amount to demands that those making or using AI systems comply with laws
already in place in most jurisdictions. Safety requirements recall issues of product liability,
with the additional aspect of taking reasonable cybersecurity precautions. Transparency is
not an ethical principle as such but a condition precedent to understanding and evaluating
conduct (Turilli & Floridi, 2009). Together with human control, however, it could be a
potential restriction on the development of AI systems above and beyond existing laws.
Rather than add to the proliferation of principles, this chapter shifts focus away from the
question of what new rules are required for regulating AI. Instead, the three questions that
it will attempt to answer are why regulation is necessary, when changes to regulatory
structures (including rules) should be adopted, and how they might be implemented.
Regulation is not simply intended to facilitate markets, however. It can also defend rights or
promote social policies, in some cases imposing additional costs (Prosser, 2006). Such
justifications reflect the moral arguments for limiting AI. In the case of bias, for example, regulation may be justified to protect individuals against discriminatory outcomes even where the market itself would not correct them.
A further reason for regulating AI is more procedural in nature. Transparency, for example,
is a necessary precursor to effective regulation. Though not a panacea and bringing
additional costs, requirements for minimum levels of transparency and the ability to explain
decisions can make oversight and accountability possible.
Against all this, governments may also have good reasons not to regulate a particular sector
if it would constrain innovation, impose unnecessary burdens, or otherwise distort the
market (Auld et al., 2022; Ugur, 2013). Different political communities will weigh these
considerations differently, though it is interesting that regulation of AI appears to track the
adoption of data protection laws in many jurisdictions. The United States, for example, has
largely followed a market-based approach, with relatively light-touch sectoral regulation and
experimentation across its 50 states. That is true also of data protection, where a general
Federal law is lacking but particular interests and sectors, such as children’s privacy or
financial institutions, are governed by statute. In the case of AI, toward the end of the
Obama Administration in 2016, the US National Science and Technology Council argued
against broad regulation of AI research or practice. Where regulatory responses threatened
to increase the cost of compliance or slow innovation, the Council called for softening them,
if that could be done without adversely impacting safety or market fairness (Preparing for
the Future of AI, 2016, p. 17).
That document was finalized six months after the European Union enacted the General Data
Protection Regulation (GDPR), with sweeping new powers covering both data protection
and automated processing of that data. The EU approach has long been characterized by a
privileging of human rights, with privacy enshrined as a right after the Second World War,
laying the foundation for the 1995 Data Protection Directive and later the GDPR. Human
rights is also a dominant theme in EU considerations of AI (EU White Paper on AI, 2020).
China offers a different model again, embracing a strong role for the state and less concern
about the market or human rights. As with data protection, a driving motivation has been
sovereignty. In the context of data protection, this is expressed through calls for data
localization — ensuring that personal data is accessible by Chinese state authorities
(Chander & Lê, 2015; Liu, 2020; Selby, 2017). As for AI, Beijing identified it as an important
developmental goal in 2006 and a national priority in 2016. The State Council’s New
Generation AI Development Plan, released the following year, nodded at the role of markets
but set a target of 2025 for China to achieve major breakthroughs in AI research with ‘world-
leading’ applications — the same year forecast for ‘the initial establishment of AI laws and
regulations’ (国务院关于印发新一代人工智能发展规划的通知 [State Council Issued
Notice of the New Generation Artificial Intelligence Development Plan], 2017).
Many were cynical about China’s lack of regulation — its relaxed approach to personal data
has often been credited as giving the AI sector a tremendous advantage (Roberts et al.,
2021). Yet laws adopted in 2021 and 2022 incorporated norms closely tracking principles
also embraced in the European Union and international organizations (Hine & Floridi, 2022;
Yang & Yao, 2022). More generally, such projections about future regulation show that, for
emerging technologies, the true underlying question is not whether to regulate, but when.
The climate emergency offers an example of what is now termed the Collingridge Dilemma: the harms of a new technology are difficult to predict in its early stages, yet by the time they become apparent the technology may be too entrenched to change.
Before automobiles came into widespread use, a 1906 Royal Commission studied the
potential risks of the new machines plying Britain’s roads; chief among these was thought to
be the dust that the vehicles threw up behind them (Royal Commission on Motor Cars,
1906). Today, transportation produces about a quarter of all energy-related CO2 emissions
and its continued growth could outweigh all other mitigation measures. Though the Covid-
19 pandemic had a discernible effect on emissions in 2020 and 2021, regulatory efforts to
reduce those emissions face economic and political hurdles (Liu et al., 2019).
Many efforts to address technological innovation focus on the first horn of the dilemma —
predicting and averting harms. That has been the approach of most of the principles
discussed at the start of this chapter. In addition to conferences and workshops, research programmes and funding have increasingly been devoted to anticipating such harms before they materialize.
Collingridge himself argued (1980, pp. 23-43) that instead of trying to anticipate the risks, more
promise lies in laying the groundwork to address the second aspect of the dilemma:
ensuring that decisions about technology are flexible or reversible. This is also not easy,
presenting what some wags describe as the ‘barn door’ problem of attempting to shut it
after the horse has bolted.
This section considers two approaches to the timing of regulation that may offer some
promise in addressing or mitigating the Collingridge Dilemma: the precautionary principle
and masterly inactivity.
An Ounce of Prevention
A natural response to uncertainty is caution. The precautionary principle holds that if the
consequences of an activity could be serious but are subject to scientific uncertainties, then
precautionary measures should be taken or the activity should not be carried out at all
(Aven, 2011). The principle features in many domestic laws concerning the environment and
has played a key role in most international instruments on the topic. The 1992 Rio
Declaration, for example, states that ‘[w]here there are threats of serious or irreversible
damage, lack of full scientific certainty shall not be used as a reason for postponing cost-
effective measures to prevent environmental degradation’ (Rio Declaration, 1992). In some
implementations, the principle amounts to a reversal of the burden of proof: those who
claim an activity is safe must prove it to be so (Le Moli et al., 2017).
Critics argue that the principle is vague, incoherent, or both. A weak interpretation amounts
to a truism, as few would argue that scientific certainty is required for precautions to be taken.
In the context of AI, the precautionary principle is routinely invoked with regard to
autonomous vehicles (Smith, 2016, p. 572), lethal autonomous weapons (Bhuta &
Pantazopoulos, 2016, pp. 290-294), the use of algorithms processing personal data in
judicial systems (European Ethical Charter on the Use of AI, 2018, p. 56), and the possibility
of general AI turning on its human creators (Maas, 2018). Only the last is a proper
application of the principle, however, in that there is genuine uncertainty about the nature
and the probability of the risk. The precise failure rate of autonomous vehicles may be
unknown, for example, but the harm itself is well understood and capable of being balanced
as against the existing threat posed by human drivers. As for lethal autonomous weapons,
opponents explicitly reject a cost-benefit analysis in favour of a bright moral line with regard
to decisions concerning human life; though there are ongoing debates about the
appropriate degree of human control, the ‘risk’ itself is not in question. Similarly, wariness
of outsourcing public sector decisions to machines is not founded — or, at least, not only
founded — on uncertainty as to the consequences that might follow. Rather, it is tied to the
view that such decisions should be made by humans within a system of political
accountability.
Nevertheless, as indicated earlier, it is telling that, despite the risks of general AI, there has
thus far been no concerted effort to restrict pure or applied research in the area. More
promising are calls that implicitly focus on the second horn of Collingridge’s dilemma:
requirements to incorporate measures such as a kill switch, or attempts to align the values
of any future superintelligence with our own. These can be seen as applications of the
principle that human control should be prioritized. If a path to general AI becomes clearer,
they should become mandatory.
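What a 'kill switch' requirement amounts to in practice is an interrupt that sits outside the system's own objectives and cannot be overridden by it. The sketch below is a minimal illustration of that idea, assuming a simple agent loop and a hypothetical operator_has_intervened() check; genuinely safe interruptibility for more capable systems remains an open research problem.

    # Minimal sketch of a human-override ("kill switch") wrapper around an
    # agent loop. The agent_step and operator_has_intervened callables are
    # hypothetical; safe interruptibility for capable systems is an open
    # research problem, not something this wrapper solves.
    import threading

    class InterruptibleAgent:
        def __init__(self, agent_step, operator_has_intervened):
            self._step = agent_step                      # one unit of autonomous work
            self._interrupted = operator_has_intervened  # checked on every cycle
            self._halted = threading.Event()

        def kill(self):
            # External, human-triggered stop that the agent itself never calls.
            self._halted.set()

        def run(self, max_steps=1000):
            for _ in range(max_steps):
                if self._halted.is_set() or self._interrupted():
                    break                                # human control takes priority
                self._step()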
Masterly Inactivity
Another response to uncertainty is to do nothing. Refraining from action may be
appropriate to avoid distorting the market through pre-emptive rulemaking or delaying its
evolution through lengthy adjudication. The term sometimes used to describe this is
‘masterly inactivity’. With origins in nineteenth-century British policy on Afghanistan, it
suggests a watchful restraint in the face of undesirable alternatives (Adye, 1878; Roy, 2015,
p. 69). (Britain’s involvement in Afghanistan, it should be noted, ended in humiliating
defeat.)
Inactivity may also amount to a buck-passing exercise. Even if governments choose not to
regulate, decisions with legal consequences will be made — most prominently by judges
within the common law tradition, who exercise a law-making function. Such decisions are
already influencing norms in areas ranging from contracts between computer programs to the use
of algorithms in sentencing and the ownership of intellectual property created by AI. This can
be problematic if the law is nudged in an unhelpful direction because of the vagaries of how
specific cases make it to court. It is also limited to applying legal principles after the event —
‘when something untoward has already happened’, as the British House of Commons
Science and Technology Committee warned (Robotics and Artificial Intelligence, Fifth Report
of Session 2016–17, 2016).
Masterly inactivity, then, is not a strategy. Properly used, however, it may buy time to
develop one.
Regulatory Approaches
Regulation is a contested concept and embraces more than mere ‘rules’. A leading text
(Baldwin et al., 2011, p. 3) distinguishes three modalities of regulation that are
useful in considering the options available. First, regulation can mean a specific set of
commands — binding obligations applied by a body devoted to this purpose. Secondly, it
can refer to state influence more broadly, including financial and other incentives. Broader
still, regulation is sometimes used to denote all forms of social or economic suasion,
including market forces. The theory of ‘smart regulation’ has shown that regulatory
functions can be carried out not only by institutions of the state but also by professional
associations, standard-setting bodies, and advocacy groups. In most circumstances, multiple
instruments and a range of regulatory actors will produce better outcomes than a narrow
focus on a single regulator (Guihot et al., 2017; Gunningham & Grabosky, 1998). These
modalities of regulation can interact. An industry may invest in self-regulation, for example,
due to concerns that failure to do so will lead to more coercive regulation at the hands of
the state.
Regulation is not limited to restricting or prohibiting undesirable conduct; it may also enable
or facilitate positive activities — ‘green light’ as opposed to ‘red light’ regulation (Harlow &
Rawlings, 2009, pp. 1-48). ‘Responsive regulation’ argues in favour of a more cooperative relationship between regulators and those they regulate, with persuasion and dialogue backed by progressively more punitive sanctions for those who fail to comply.
The tools available to regulatory bodies may also be thought of in three categories:
traditional rulemaking, adjudication by courts or tribunals, and informal guidance — the
latter comprising standards, interpretive guides, and public and private communications
concerning the regulated activity. Tim Wu (2011) once provocatively suggested that
regulators of industries undergoing rapid change consider linking the third with the first two
by issuing ‘threats’ — informally requesting compliance, but under the shadow of possible
formalization and enforcement.
Many discussions of AI regulation recount the options available — a sliding scale, a pyramid,
a toolbox, and so on — but the application is either too general or too specific. It is, self-
evidently, inappropriate to apply one regulatory approach to all of the activities impacted by
AI. Yet, it is also impractical to adopt specific laws for every one of those activities. A degree
of clarity may, however, be achieved by distinguishing between three classes of problems
associated with AI: managing some risks, proscribing others, and, in a third set of cases,
ensuring that proper processes are followed.
Managing Risks
Civil liability provides a basis for allocating responsibility for risk — particularly in areas that
can be examined on a cost-benefit basis. This will cover the majority, perhaps the vast
majority, of AI activities in the private sector: from transportation to medical devices, from
smart home applications to cognitive enhancements and implants. The issue here is not new
rules but how to apply or adapt existing rules to technology that operates at speed,
autonomously, and with varying degrees of opacity. Minimum transparency requirements
may be needed to ensure that AI systems are identified as such and that harmful conduct
can be attributed to the appropriate owner, operator, or manufacturer. Mandatory
insurance will spread those risks more efficiently. But the fundamental principles remain
sound. 3
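One way to give effect to such identification and attribution requirements is to insist that every deployed system carry machine-readable provenance details tying it to the legal persons behind it. The record below is a hypothetical sketch of the minimum fields such a rule might require, not a reference to any existing standard.

    # Hypothetical sketch of minimum provenance metadata that a transparency
    # rule might require a deployed AI system to carry, so that harmful
    # conduct can be attributed to an owner, operator, or manufacturer and
    # matched to mandatory insurance cover.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AISystemRecord:
        system_id: str         # identifier disclosed to users: "this is an AI system"
        manufacturer: str      # legal person who built the system
        operator: str          # legal person running it day to day
        owner: str             # legal person ultimately responsible for it
        insurance_policy: str  # reference to the mandatory insurance cover
        model_version: str     # exact release, so incidents map to a specific system

    record = AISystemRecord(
        system_id="sys-0001",
        manufacturer="ExampleCorp",
        operator="CityTransitAuthority",
        owner="CityTransitAuthority",
        insurance_policy="POL-2025-001",
        model_version="1.4.2",
    )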
For situations in which cost-benefit analysis is appropriate but the potential risks are difficult
to determine, regulatory ‘sandboxes’ allow new technologies to be tested in controlled
environments. Though some jurisdictions have applied this to embodied technology, such as
designated areas for autonomous vehicles, the approach is particularly suited to AI systems
that operate online. Originating in computer science, a virtual sandbox lets software run in a
manner that limits the potential damage if there are errors or vulnerabilities. Though not a complete answer, such controlled testing can help regulators understand new applications before they are deployed at scale.
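The computer-science origin of the term is worth making concrete. The snippet below is a minimal, Unix-only illustration using Python's standard library: an untrusted script runs in a child process under a CPU-time cap and a wall-clock timeout, so that a runaway or faulty program is contained rather than allowed to monopolize the host. Production-grade isolation would also restrict memory, filesystem, and network access.

    # Minimal, Unix-only illustration of a software sandbox: run an untrusted
    # script in a child process with a CPU-time cap and a wall-clock timeout.
    # Real isolation would also restrict memory, filesystem, and network access.
    import resource
    import subprocess
    import sys

    def _limit_cpu(seconds=2):
        # Applied inside the child process just before the script starts.
        resource.setrlimit(resource.RLIMIT_CPU, (seconds, seconds))

    def run_sandboxed(script_path):
        try:
            return subprocess.run(
                [sys.executable, "-I", script_path],  # -I: isolated mode
                preexec_fn=_limit_cpu,                # CPU cap in the child
                capture_output=True,
                text=True,
                timeout=5,                            # wall-clock cap in the parent
            )
        except subprocess.TimeoutExpired:
            return None  # a timeout is treated as a contained failure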
Yet even an apparently clear prohibition, such as reserving to a human the final decision to use lethal force, becomes blurred under closer analysis. If
machines are able to make every choice up to that point — scanning and navigating an
environment, identifying and selecting a target, proposing an angle and mode of attack —
the final decision may be an artificial one. Automation bias makes the default choice
significantly more likely to be accepted in such circumstances. That is not an argument
against the prohibition, but in favour of ensuring not only that a human is at least ‘in’ or
‘over’ the loop but also that he or she knows that accountability for decisions taken will
follow him or her. This is the link between the principles of human control and
accountability — not that humans will remain in control and machines will be kept
accountable, but that humans (and other legal persons) will continue to be accountable for
their conduct, even if perpetrated by or through a machine.
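In system-design terms, that link can be enforced by never treating the machine's recommendation as the default outcome and by recording who confirmed it. The sketch below is illustrative; the confirm callable and the audit log are hypothetical placeholders for whatever interface and record-keeping a given deployment uses.

    # Illustrative sketch: a recommendation is never executed by default; a
    # named official must actively approve or reject it, and that choice is
    # recorded so accountability attaches to an identifiable legal person.
    import datetime

    def decide(recommendation, official_id, confirm, audit_log):
        # confirm(recommendation) must return an explicit True or False from
        # the official; there is no silent acceptance of the machine's default.
        approved = confirm(recommendation)
        audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "official": official_id,        # the accountable human
            "recommendation": recommendation,
            "approved": approved,
        })
        return recommendation if approved else None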
The draft AI Act of the European Union also seeks to prohibit certain applications of AI —
notably real-time biometric surveillance, technologies that manipulate or exploit individuals,
and social scoring (AI Act (EU), 2021). The last item appeared to be at least partly a critique
of China’s social credit system, which has been criticized as an Orwellian scheme of
surveillance and a harbinger of a dystopian future (Mac Síthigh & Siems, 2019).
A discrete area in which new rules will be needed concerns human interaction with AI
systems. The lacuna here, however, is not laws to protect us from them but to protect them
from us. Anodyne examples include those adopted in Singapore in early 2017, making it an
offence to interfere with autonomous vehicle trials. These are more properly considered as
an extension of the management of risk associated with such technologies. More
problematic will be laws preserving human morality from offences perpetrated against
machines. At present, for example, it is a crime to torture a chimpanzee but not a computer.
In 2014, for example, Ronald Arkin ignited controversy by proposing that child sex robots be
used to ‘treat’ paedophiles in the same way that methadone is used to treat heroin addiction (Hill,
2014). Though simulated pornography is treated differently across jurisdictions, 4 many have
now prohibited the manufacture and use of these devices through creative interpretations
of existing laws or by passing new ones, such as the CREEPER Act in the United States (Danaher,
2019).
As lifelike embodied robots become more common, and as they play more active roles in
society, it will be necessary to protect them not merely to reduce the risk of malfunction but
because the act of harming them will be regarded as a wrong in itself. The closest analogy
will, initially, be animal cruelty laws. This is, arguably, another manifestation of the android
fallacy — purchasing a lifelike robot and setting it on fire will cause more distress than
deleting its operating system. Moving forward, however, the ability of AI systems to
perceive pain and comprehend the prospect of non-existence may change that calculation
(Anshar & Williams, 2021; Ashrafian, 2017). 5
This raises the question of whether red lines should be established for AI research that
might bring about self-awareness — or the kind of superintelligence sometimes posited as a
potential existential threat to humanity (Bostrom, 2014). Though many experts have
advocated caution about the prospect of general AI, few had called for a halt to research in
the area until March 2023, when the Future of Life Institute issued an open letter — signed
by Elon Musk among others — calling for a six-month pause on the development of
generative AI, in the form of large language models ‘more powerful than GPT-4’ (Pause
Giant AI Experiments: An Open Letter, 2023), referring to the generative pre-trained
transformer chatbot developed by OpenAI. The letter received much coverage but did not
appear likely to result in an actual halt to research. Tellingly, no government has issued such
a call — though Italy did ban ChatGPT due to concerns about its use of personal data
(Satariano, 2023), and China announced restrictions on similar technology if it risked
upsetting the social and political order (China Releases Draft Measures for the Management
of Generative Artificial Intelligence Services, 2023).
As Bostrom and others have warned, there is a non-trivial risk that attempts to contain or
hobble general AI may in fact bring about the threat they are intended to avert. A
‘precautionary principle’ approach might be, therefore, to stop well short of such
capabilities. Yet general AI seems far enough beyond our present capacities that this would
be an excessive response if implemented today.
Limits on Outsourcing
Limiting the decisions that can be outsourced to AI is an area in which new rules are both
necessary and possible.
One approach is to restrict the use of AI for inherently governmental functions. There have
been occasional calls for a ban on government use of algorithms, typically in response to
actual or perceived failures in public sector decision-making. These include scandals over
automated programs that purported to identify benefit fraud in Australia (Doran, 2020) and
the Netherlands (Government’s Fraud Algorithm SyRI Breaks Human Rights, Privacy Law,
2020), and the Covid-19 university admissions debacle in Britain (Satariano, 2020).
Other jurisdictions have prohibited public agencies from using specific applications, such as
facial recognition. San Francisco made headlines by prohibiting its use by police and other
agencies in 2019, a move that was replicated in various US cities and the state of California
but not at the Federal level. As in the case of data protection, Washington has thus far failed
to enact broad legislation (despite several attempts) while Europe approached the same
question initially as an application of the GDPR and then incorporated a ban on real-time
remote biometric identification in publicly accessible spaces into the draft AI Act. China, for
its part, has far fewer restrictions on facial recognition — though the government has
acknowledged the need for greater guidance and there has been at least one (unsuccessful)
lawsuit (Lee, 2020).
Banning algorithms completely is unnecessary, not least because any definition might
include arithmetic and other basic functions that exercise no discretion. More importantly,
it misidentifies the problem. The issue is not that machines are making decisions but that
humans are abdicating responsibility for them. Public sector decisions exercising inherently
governmental functions are legitimate not because they are correct but because they are
capable of being held to account through a political or other process.
Such concerns activate the first two principles discussed at the start of this chapter: human
control and transparency. A more realistic and generalizable approach is to impose escalating
requirements for both in public sector decision-making. An
early example of this was Canada’s provisions on transparency of administrative decisions
(Directive on Automated Decision-Making, 2019). A similar approach was taken in New
Zealand’s Algorithm Charter (Algorithm Charter (NZ), 2020). Signed by two dozen
government agencies, the Charter included a matrix that moves from optional to mandatory
based on the probability and the severity of the impact on the ‘wellbeing of people’. Among its commitments are plain-language explanations of how algorithms inform significant decisions and the identification of a point of contact for public inquiries.
These are important steps, but insufficient. For such public sector decisions, it is not simply a
question of striking ‘the right balance’, as the Charter states, between accessing the power
of algorithms and maintaining the trust and confidence of citizens. A more basic
commitment would guarantee the means of challenging those decisions — not just legally,
in the case of decisions that violate the law, but also politically, by identifying human
decision-makers in positions of public trust who can be held to account through democratic
processes for their actions or inaction.
One of the most ambitious attempts at regulation of this space — still being debated at the
time of writing — is the EU draft AI Act. As written, it adopts an expansive definition of AI
and applies to all sectors except for the military. Intended to be horizontal legislation, it
would provide baseline rules applicable to all use-cases, with stricter obligations being
possible in sensitive areas (such as the medical sector). It also classifies AI applications by
risk: low-risk applications are not regulated at all, while escalating requirements for
assessment prior to release on the market apply to medium- and high-risk applications. As
indicated earlier, certain applications would be prohibited completely.
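The tiered logic described above can be summarized schematically. The mapping below is a simplification for illustration only and follows the text's own description of the draft Act rather than the legislation itself.

    # Schematic summary of the tiered approach described above. A
    # simplification for illustration only, not the text of the draft Act.
    OBLIGATIONS_BY_RISK = {
        "prohibited": "banned outright (e.g. social scoring, manipulative systems)",
        "high": "assessment required before release on the market",
        "medium": "lighter requirements, escalating with the assessed risk",
        "low": "not regulated under the Act",
    }

    def obligations(risk_level):
        return OBLIGATIONS_BY_RISK.get(
            risk_level, "unclassified: assess against the Act's criteria"
        )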
Optimists hope that the AI Act may enjoy the ‘Brussels effect’ and shape global AI policy, in
the way that the EU GDPR shaped data protection laws in many jurisdictions (Siegmann &
Anderljung, 2022). Critics have highlighted the extremely broad remit of the legislation, which
potentially extends to a wide range of technologies, as well as the vagueness of some of its key
proscriptions — such as whether recommendation algorithms and social media feeds might
be considered ‘manipulative’ (Veale & Zuiderveen Borgesius, 2021). Others have pointed to
the risks of general purpose AI and the need to regulate it, linked to the concerns raised
about large language models discussed earlier (Gebru et al., 2023).
Conclusion
If Asimov’s three laws had avoided or resolved all the ethical dilemmas of machine
intelligence, his literary career would have been brief. In fact, the very story (Asimov, 1942)
in which they were introduced focuses on a robot that is paralysed by a contradiction
between the second and third laws, resolved only by a human putting himself in harm’s way
to invoke the first. 6
The demand for new rules to deal with AI is often overstated. Ryan Abbott, for example, has
argued (2020, pp. 2-4) that the guiding principle for regulatory change should be AI legal
neutrality, meaning that the law should not discriminate at all between human and AI
behaviour. Though the rule is provocatively simple, its full import is quickly
abandoned: personality is not sought for AI systems, nor are the standards of AI (the
‘reasonable robots’ of the title) to be applied to human conduct. Rather, Abbott’s thesis
boils down to a case-by-case examination of different areas of AI activity to determine
whether specific sectors warrant change or not.
This is a sensible enough approach, but some new rules of general application will be
required, primarily to ensure that the first two ‘principles’ quoted at the start of this chapter —
human control and transparency — can be achieved. Human control requires limits on the
kinds of AI systems that can be developed. The precautionary principle offers a means of
thinking about such risks, though the clearest decisions can be made in bright-line moral
cases like lethal autonomous weapons. More nuanced limitations are required in the public
sector, not constraining the behaviour of AI systems but limiting the ability of public officials
to outsource decisions to them. On the question of transparency, accountability of
government officials also requires a limit on the use of opaque processes. Above and
beyond that, measures such as impact assessments, audits, and an AI ombudsperson could
mitigate some harms and assist in ensuring that others can be attributed back to legal
persons capable of being held to account.