AI - Final Paper
INTRODUCTION
The concept of Artificial Intelligence has become an important notion that appears everywhere in our daily lives: in our phones with facial recognition, in the medical field, or even in games made for children.
However, some fields seem reluctant to accept the application of Artificial Intelligence because of the idea that algorithms lack a certain human “sensibility”.
The field that comes to mind first is Justice; having an interest in it, we can already notice how this discipline lags behind and has difficulties adapting to a world that keeps evolving day by day. So when we hear about the idea of Artificial Intelligence entering Justice, I think of the good side, that Justice could catch up thanks to algorithms, but also of the bad side, mostly around the missing “feelings”.
• Concept of Justice
Before going further with that concept, the notion of bias in Artificial Intelligence has to be explained; a bias is what we would define as a “tendency to favor or dislike a person or a thing, especially as a result of a preconceived opinion” 1, according to the Oxford English Dictionary. In that sense, bias is something that exists in every human being; it reflects the tastes and preferences of each individual.
I think it is important to bring up this notion before defining Justice because, as we know, a bias becomes problematic when it is morally objectionable.
So, first of all, I think it’s important to explain the concept of justice and how we can apply it in the Artificial
Intelligence field.
According to the Oxford English Dictionary, Justice is “Maintenance of what is just or right by the exercise of authority
or power; assignment of deserved reward or punishment; giving of due deserts” 2.
Justice is a very broad field that works case by case, because every person and every situation is different. A decision that applies to one case does not always apply to a similar case, since the circumstances, the motive, and the consequences of the act diverge; this is how justice has to be interpreted.
Justice has always existed in human history, under different forms and in all cultures.
That said, it is important to include these diverse versions of Justice in an Artificial Intelligence algorithm so that it can consider various ways of solving a case.
After defining Justice, we can now enter the subject of Artificial Intelligence in Justice with the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
To begin with, COMPAS is software used in United States courts to assess the risk of recidivism for an individual who has already been convicted. The risk scale was developed by considering the current age, the age at first arrest, the history of violence, and the history of noncompliance.
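To make the idea of such a risk scale more concrete, here is a minimal sketch in Python of how those four factors could be combined into a score. The weights, thresholds, and factor names are invented for illustration only; the actual COMPAS formula is proprietary and is not reproduced here.

```python
# Hypothetical recidivism-risk sketch: NOT the real COMPAS formula.
# Weights and thresholds are invented purely for illustration.

def risk_score(current_age, age_first_arrest, violence_count, noncompliance_count):
    """Combine the four factors mentioned above into a rough 1-10 scale."""
    score = 0.0
    score += max(0, 30 - current_age) * 0.10        # younger defendants score higher
    score += max(0, 25 - age_first_arrest) * 0.15   # earlier first arrest scores higher
    score += violence_count * 1.0                   # prior violent incidents
    score += noncompliance_count * 0.5              # e.g. missed court appearances
    return min(10, max(1, round(score)))            # clamp to a 1-10 scale

# Example: a 22-year-old first arrested at 17, one violent prior, two violations
print(risk_score(22, 17, 1, 2))  # -> 4 on this toy scale
```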
One of the first criticisms that came out about this algorithm was that its output depends entirely on its data, which means that if the data are morally objectionable, the results will be biased.
This first criticism happens to be the basis of a ProPublica investigation released in 2016, which revealed a contrast between black defendants and white defendants. Indeed, even though the actual risk of recidivism was the same, a racial bias was found: black defendants were disproportionately predicted to be at higher risk than white defendants; “Black defendants were 77 % more likely to be assigned higher risk scores than white defendants” 4.
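A simplified way to see where a figure like that comes from is to compare, for each group, how often defendants are labeled high risk. The Python sketch below uses made-up records, not ProPublica’s data, and it ignores the regression controls they applied; it only shows the kind of comparison involved.

```python
# Toy illustration of measuring a disparity in risk labels.
# Records are invented; ProPublica's actual analysis used real court data
# and a logistic regression controlling for age, sex, and prior crimes.

records = [
    # (group, labeled_high_risk, reoffended)
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False), ("white", False, False), ("white", True,  True),
    ("white", False, False), ("white", False, False), ("white", False, False),
]

def high_risk_rate(group):
    rows = [r for r in records if r[0] == group]
    return sum(1 for _, high, _ in rows if high) / len(rows)

for g in ("black", "white"):
    print(g, round(high_risk_rate(g), 2))
# If the rate of "high risk" labels differs between groups while actual
# reoffending rates are similar, the scoring treats the groups differently.
```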
The question now would be: what influence can this kind of algorithm have?
Firstly, it would impact the decisions made by “human” justice; since this system is still recent, judges can be lost while trying to understand how the artificial intelligence works, which could lead them to simply follow the results given out by the algorithm.
Then, the individuals who are registered as being at high risk of recidivism can be impacted in their daily lives because of a bias.
Subsequently, this racial bias contributes to a society that discriminates against black people.
The problem here is that the algorithm’s error rate should be the same regardless of race.
This is when the notion of fairness comes into account, as a way to reevaluate the fairness of the algorithm. It is actually what the authors of the investigation article propose, by taking into account equal false-positive rates and predictive parity.
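These two criteria can be written down very concretely: for each group, the false-positive rate is the share of people who did not reoffend but were still labeled high risk, while predictive parity asks that the precision of the “high risk” label (how often it turns out to be correct) is the same across groups. Here is a minimal sketch, again on invented counts rather than real COMPAS data:

```python
# Minimal fairness check on two groups, using invented confusion-matrix counts.
# TP: labeled high risk and reoffended; FP: labeled high risk but did not;
# FN and TN accordingly.

groups = {
    "black": {"TP": 30, "FP": 45, "FN": 15, "TN": 60},
    "white": {"TP": 25, "FP": 20, "FN": 20, "TN": 85},
}

for name, c in groups.items():
    fpr = c["FP"] / (c["FP"] + c["TN"])   # compared under "equal false-positive rates"
    ppv = c["TP"] / (c["TP"] + c["FP"])   # compared under "predictive parity"
    print(f"{name}: false-positive rate = {fpr:.2f}, precision of 'high risk' = {ppv:.2f}")

# Equal false-positive rates would require the first number to match across groups;
# predictive parity would require the second one to match.
```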
But does it really work? As an article in the MIT Technology Review 5 explains, it is difficult to achieve fairness when the problem comes from outside the algorithm: in some states, the police have a history of disproportionate arrests in which minorities are more often targeted.
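The tension can also be shown with a short calculation. If we write the precision of the “high risk” label as PPV = s*p / (s*p + f*(1 - p)), where p is a group’s underlying reoffense rate, s its true-positive rate and f its false-positive rate, then demanding the same PPV for two groups with different p forces their false-positive rates apart. The numbers below are invented, only to make the arithmetic visible:

```python
# Why equal false-positive rates and predictive parity conflict when
# underlying reoffense rates differ. All numbers are invented.

def fpr_needed_for_ppv(p, s, target_ppv):
    """Solve PPV = s*p / (s*p + f*(1-p)) for the false-positive rate f."""
    return s * p * (1 - target_ppv) / (target_ppv * (1 - p))

s = 0.7           # same true-positive rate assumed for both groups
target_ppv = 0.6  # demand the same precision of the "high risk" label

for group, p in [("group A", 0.5), ("group B", 0.3)]:   # different base rates
    print(group, "needs a false-positive rate of",
          round(fpr_needed_for_ppv(p, s, target_ppv), 2))
# -> roughly 0.47 versus 0.20: with different base rates, both fairness
#    criteria cannot be satisfied at the same time.
```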
At this rate, and from my point of view, the algorithm can make the same mistakes as a human judge, since that is what is already happening. Even if the creators of COMPAS defend their algorithm, this kind of racial discrimination can still be found in other artificial intelligence systems such as facial recognition.
But as we saw with the COMPAS case, this is not something that can be solved simply by creating another algorithm, since the bias comes from a real-life situation, even if we can still try with notions such as predictive parity.
Even if this idea of Artificial Intelligence in Justice is controversial, the algorithm does have some benefits.
Take the delays in Justice: they are well known in this field, whether because of the complexity of some cases or simply the overloading of the courts. Machine learning can be consistent and productive, which would help the courts by reducing the backlog.
Moreover, staying on the previous example, this leads to the concept of the absence of human emotions.
Justice, as defined before, is not only about applying rules but also about understanding and analyzing the situation of the individual. Judges are able to ask precise questions when they feel that something runs deeper than the case file, while the algorithm will just follow a pattern and lack an emotional understanding of the defendant.
Furthermore, an Artificial Intelligence will deliver a decision to close the case without thinking about the outcome for the individuals, such as the impact on the victim’s life or the culpability of the defendant.
The thing is that for machine learning, in this case an algorithm in Justice, we as human beings are treated as numbers, as data, which reinforces the lack of empathy.
It is also possible to come back to the COMPAS case to talk about the idea of morally objectionable data. In this case, the data reflect discrimination related to a racial bias. That bias also exists in “human” justice, but there it is not supposed to be automatic, as it is in an algorithm that is going to perpetuate it.
When talking about bias in AI, we have to focus on the ones that are morally objectionable. But bias can also compromise fairness, which can lead to prejudiced results.
Talking about fairness, it is another angle that seems interesting to address.
Fairness refers to the methods used to correct the bias caused in an algorithm; a machine-learning process can be judged unfair when the variables it relies on are sensitive ones such as ethnicity or gender. Because of that, even if an algorithm ends up being fair, a small difference is enough for the artificial intelligence to fall out of that fairness.
This is where, in the COMPAS case, a notion like “predictive parity” is mentioned.
The advantage of Artificial Intelligence with respect to emotions would be that the algorithm stays focused and keeps a certain uniformity, while human beings can be driven by their emotions and empathy in justice.
However, this kind of machine learning is not sensitive enough to replace humans in the justice system, since a “human” judge can form an opinion on the possible rehabilitation of a defendant by looking at their attitude, something that does not seem possible for an Artificial Intelligence.
After trying to balance both sides, we can also consider how artificial intelligence can be a support to the judge.
We can agree that the human decision-making process has its limits, which is why adding the knowledge of artificial intelligence can help to make a more accurate decision.
We can first talk about what defines artificial intelligence, which is data: thanks to its large capacity, machine learning can analyze a great deal of information in a short period of time, which would save a lot of time for an average human being.
Then, it is also possible to use the data to find a quick solution that helps the human judges.
Not only that, but AI-based decisions have to be reviewed by humans, so that they keep the final word and can compensate if needed. Machine learning can always give information that a human mind could not think of, but that does not mean that this solution is always right.
If we think about it, the algorithm will have more difficulty with anything related to emotion, but it is not the same when we turn to the forensic side. Artificial Intelligence is already used in this field to study evidence and ballistic elements.
Even with this argument, it is interesting to remember that artificial intelligence is still not capable of replacing the human part of the process. We have the example of the COMPAS case, where Artificial Intelligence was supposed to predict the risk of recidivism based on the defendant’s criminal factors and history, which are concrete facts, and was still not able to do this task properly.
When talking about Artificial Intelligence in Justice, it is inevitable to talk about responsibility; mentioning this idea opens up another reflection.
Indeed, this question exists in every field where machine learning is present. If an Artificial Intelligence makes a wrong judgment, who is responsible? This question is harder to answer when we know that the algorithm can lack transparency, which makes it difficult to find out more about the decision-making process and about what could have created a bias.
When thinking about this notion of responsibility around Artificial Intelligence, it tends to fall on the individuals who developed the algorithm, since they are the ones providing the data and having control over it.
The problem is that a bias existing in the data can be amplified by the system.
This is where human justice comes into account, since the Artificial Intelligence made a mistake while believing it gave out the right solution. The subject of responsibility in Law is also so diverse, and there are so many ways of deciding who is going to bear the consequences, that it seems to be something else Artificial Intelligence cannot deal with yet.
For example, in the COMPAS case, the judge couldn’t really understand the suggestion given out by the algorithm and
followed the sentence that came out. In that case, who would be responsible?
CONCLUSION
To conclude, I would like to start by answering my research question: no, I do not think Artificial Intelligence is ready to replace the Justice system we have now.
Even if we try to find a solution by creating new algorithms, for now it does not seem possible for machine learning to judge human cases, mostly because of the lack of emotions. Work also needs to be done on the data side, by creating better understanding and access.
This is why, for now, the use of Artificial Intelligence in the field of justice can only happen under the supervision of a human being; at least, that is the conclusion I reached. We can put machine learning into action by having it help analyze big data or propose diverse solutions to the jurist.
REFERENCES
1 Oxford English Dictionary – Definition of Bias (2021)
2 Oxford English Dictionary – Definition of Justice
3 Miller, D. (2023). “Justice”. In Zalta, E. N. & Nodelman, U. (eds), Stanford Encyclopedia of Philosophy (Spring 2023 Edition)
4 Larson, J., Mattu, S., Kirchner, L. & Angwin, J. (2016). “How We Analyzed the COMPAS Recidivism Algorithm”. ProPublica