Difficulty of Programming Robotics For Ethics
Fully autonomous robots with humanlike capabilities may still be some way off, largely the stuff of science fiction, but lawmakers, legal experts and manufacturers are already debating the ethical challenges involved in their production and use, as well as their legal status, their “legal personality”: ultimately, whether it is these machines or human beings who should bear responsibility for their actions. There are questions about whether, and to what extent, self-learning machines should take independent decisions involving ethical choices that have traditionally been the preserve of humans. At the extreme, for
example, can it be right for a machine to decide to kill an enemy combatant that it has identified without resort to
human agency? Or is the robot morally no different from a “brainless” weapon?
Around the world, militaries and arms manufacturers are testing systems that use artificial intelligence technology
to operate in swarms or choose targets independently. They could soon outperform existing military technology at
only a fraction of the cost. Is it morally acceptable, for example, for a pilotless drone not only to identify a potential enemy target but to obliterate it automatically, without human intervention?
Saudi Arabia has already gone a step further and granted citizenship to a robot, “Sophia”, albeit largely as a PR gimmick to launch a conference on AI. She is already campaigning for women’s rights (Twitter was ablaze with the fact that Sophia was not wearing the headscarf and abaya that human women are expected to wear in public).
Great strides have been made in recent years in the development of combat robots. The US military has deployed
ground robots, aerial robots, marine robots, stationary robots, and (reportedly) space robots. The robots are used
for both reconnaissance and fighting, and further rapid advances in their design and capabilities can be expected in
the years ahead.
One consequence of these advances is that robots will gain more autonomy, which means they will have to act in
uncertain situations without direct human instruction. That raises a large and thorny challenge: how do you
program a robot to be an ethical warrior?
Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception
that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated,
harking back to a time when computers were simpler and their programs could be written and understood by a
single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom
knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty,
since portions of large programs may interact in unexpected, untested ways. Furthermore, increasing complexity
may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity.
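To make that concrete, here is a toy sketch in plain Python (rules and numbers invented purely for illustration) of how two control modules, each sensible and correct on its own, can combine into behaviour that neither author wrote or anticipated:

# Toy sketch (hypothetical rules): two control modules that each behave
# sensibly in isolation but interact badly when composed.

def seek_goal(position, goal):
    """Step toward the goal."""
    return 1.0 if position < goal else -1.0

def avoid_obstacle(position, obstacle):
    """Step away from the obstacle if we get too close."""
    if abs(position - obstacle) < 2.0:
        return -1.0 if position <= obstacle else 1.0
    return 0.0

position, goal, obstacle = 0.0, 10.0, 5.0
for step in range(10):
    # The robot is driven by the sum of the two modules' outputs.
    command = seek_goal(position, goal) + avoid_obstacle(position, obstacle)
    position += command
    print(f"step {step}: position = {position:.1f}")

# Near the obstacle the two rules exactly cancel, so the robot stalls short
# of the goal -- an outcome neither module's author intended or tested for.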
Programming a self-driving car with ethical intentions, knowing in advance what the risks could be, means the public may be less willing to view the outcomes as accidents. In other words, as long as a human was driving responsibly, it is considered acceptable to say “that person just jumped out at me” and be excused for hitting them; AI algorithms don’t have that luxury.
If computers are supposed to react faster and more deliberately than we can in some situations, then how they are programmed matters.
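As a hedged illustration (a hypothetical rule, not any manufacturer’s actual planner), consider how even a simple collision response has to be written down, and can therefore be inspected, long before the situation arises:

# Hypothetical sketch: an explicit rule a self-driving system might apply when
# someone steps into the road. A human driver reacts in the moment; here the
# trade-off is decided in advance by a programmer.

from dataclasses import dataclass

@dataclass
class Situation:
    braking_stops_in_time: bool   # can we stop before reaching the pedestrian?
    swerve_lane_clear: bool       # is the adjacent lane free of people and cars?

def choose_action(s: Situation) -> str:
    if s.braking_stops_in_time:
        return "brake"
    if s.swerve_lane_clear:
        return "brake_and_swerve"
    # Neither option removes all risk: whatever is returned here is a value
    # judgement made ahead of time, not a split-second human reaction.
    return "brake"  # assumption for this sketch: hard braking is the least-bad default

print(choose_action(Situation(braking_stops_in_time=False, swerve_lane_clear=True)))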
We need a way to understand what AI algorithms are "thinking" when they act. How can you tell a person’s family, when that person has been harmed or killed by a robotic or AI intervention, ‘we don't know how this happened’? Accountability and transparency are therefore essential.
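One modest step in that direction, sketched below with invented field names, is to make every automated decision leave an auditable trace of what the system saw and which version of the software acted on it:

# Minimal sketch (hypothetical fields): record each automated decision together
# with its inputs and model version, so that "we don't know how this happened"
# is never the only available answer.

import json
import time

def log_decision(action, inputs, model_version, path="decision_log.jsonl"):
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,    # what the system saw
        "action": action,    # what it decided to do
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    action="brake",
    inputs={"obstacle_detected": True, "speed_kmh": 42, "confidence": 0.87},
    model_version="planner-2024.03",  # invented identifier for illustration
)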
Puzzling over why your self-driving car swerved around the dog but backed over the cat is not the only AI problem that calls for transparency. Biased AI algorithms can cause all kinds of problems.
Facial recognition systems may fail to recognize people of color because their training data did not include enough faces fitting that description.
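A simple audit, sketched below with invented numbers, is to compare error rates per group rather than reporting a single overall accuracy:

# Hedged sketch: measure the miss rate of a face detector separately for each
# group in the evaluation set. The data here is fabricated for illustration only.

from collections import defaultdict

# (group, was_face_detected) pairs from a hypothetical evaluation run
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])   # group -> [misses, total]
for group, detected in results:
    counts[group][1] += 1
    if not detected:
        counts[group][0] += 1

for group, (misses, total) in counts.items():
    print(f"{group}: miss rate {misses / total:.0%} over {total} images")

# A large gap between groups suggests the training data under-represents some
# faces, even when the headline accuracy looks acceptable.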
The ethics of robotic programming is probably one of the most fundamental points to consider. If the machine is built to serve a goal that is itself problematic, then ethical programming – the question of how it can more ethically advance that goal – sounds misguided.
That’s tricky, because much comes down to intent. A robot can be a force for good if it improves the quality of life of a society; using the same robot as an excuse to neglect people, or to cause harm, is the inverse. Like any enabling technology, from the kitchen knife to nuclear fusion, the tool itself isn’t good or bad – what matters is the intent of the person using it.