AI Bias

This document discusses issues of bias in artificial intelligence. It notes that AI systems can reflect and even amplify the biases of their human creators due to being trained on biased data provided by people. Examples are given of AI systems exhibiting biases in areas like criminal risk assessment, language translation, and gender classification. The document outlines both short-term and long-term issues with AI and bias. It proposes methods for detecting and mitigating bias in training data and provides recommendations for developing more ethical and unbiased AI.

May 2018

AI bias

FRANCESCA ROSSI
AI Ethics Global Leader
Distinguished Research Staff Member
IBM Research AI
Professor of Computer Science
University of Padova

@frossi_t
Headlines about AI issues

•  Humans must colonize another planet in 100 years or face extinction
•  AI will either be the "best thing or worst thing", according to experts
•  Artificial Intelligence's White Guy Problem
•  We won't always know what AI is thinking – and that is scary
•  AI is quickly becoming as biased as we are
•  AI poses an existential risk to humanity
•  Talking Tech: Should we be afraid of robots?
Main issues/concerns in AI

Short-term issues:
– Bias
– Explainability
– Transparency
– Accountability
– Data responsibility
– Value alignment
– Ethics/morality

– Impact on jobs and society

Long-term issues:
– Singularity
– Superintelligence
– Off switch problem
Bias in AI

•  Bias: prejudice for or against something
   –  As a consequence of bias, one could behave unfairly to certain groups compared to others
•  Why should AI be biased?
   –  Trained on data provided by people, and people are biased
   –  Learning from examples and generalizing to situations never seen before
Example: Judicial system

•  The tool correctly predicts recidivism 61 percent of the time
•  African Americans are almost twice as likely as White Americans to be labeled a higher risk but not actually re-offend
•  The opposite mistake occurs among Whites
   –  They are much more likely than Blacks to be labeled lower risk but go on to commit other crimes

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
(thanks to Karthikeyan Natesan Ramamurthy)
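The disparity ProPublica reported is a gap in group-wise false positive and false negative rates, not in overall accuracy. A minimal sketch of computing those rates per group (the numbers below are invented toy data, not the COMPAS dataset):

```python
# Toy illustration of group-wise error rates for a risk-assessment tool.
# The records below are invented for illustration; they are NOT COMPAS data.

def error_rates(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)
    positives = sum(1 for _, actual in records if actual)
    return fp / negatives, fn / positives

# (predicted high risk, actually re-offended)
group_a = [(True, False)] * 45 + [(False, False)] * 55 + \
          [(True, True)] * 70 + [(False, True)] * 30
group_b = [(True, False)] * 23 + [(False, False)] * 77 + \
          [(True, True)] * 52 + [(False, True)] * 48

fpr_a, fnr_a = error_rates(group_a)
fpr_b, fnr_b = error_rates(group_b)
print(f"Group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")  # FPR=0.45, FNR=0.30
print(f"Group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")  # FPR=0.23, FNR=0.48
```

Here group A is wrongly labeled high risk almost twice as often (0.45 vs 0.23), while group B is more often wrongly labeled low risk (0.48 vs 0.30), even though both rates are computed from the same classifier: this is the asymmetry the slide describes.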
Example: Chatbot turned racist on Twitter

•  Microsoft's Tay chatbot
•  It learnt offensive language from internet trolls
Example: Language translation

•  English to Turkish: gendered English pronouns collapse into the gender-neutral Turkish pronoun "o"
•  Turkish to English: translating back, the system must guess a gender, e.g. "o bir doktor" becomes "he is a doctor" while "o bir hemşire" becomes "she is a nurse"
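A statistical translator resolves the ambiguous pronoun from co-occurrence counts in its training corpus, so the gender it outputs reflects the corpus, not the input. A toy sketch of that mechanism (the counts and occupations are invented, not taken from any real translation system):

```python
# Toy model of how a statistical translator resolves the gender-neutral
# Turkish pronoun "o" into English. Counts are invented for illustration.

# How often each (pronoun, occupation) pair appears in a hypothetical corpus.
corpus_counts = {
    ("he", "doctor"): 900, ("she", "doctor"): 100,
    ("he", "nurse"): 50,   ("she", "nurse"): 950,
}

def translate_o(occupation):
    """Pick the English pronoun that co-occurs most often with this occupation."""
    return max(("he", "she"), key=lambda p: corpus_counts.get((p, occupation), 0))

print(translate_o("doctor"))  # "he"  -- the gender was never in the input
print(translate_o("nurse"))   # "she" -- the corpus bias supplies it
```

The input sentence carries no gender at all; the bias enters purely through the corpus statistics used to break the tie.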
Example: Gender classification from pictures

•  Commercial visual recognition software has trouble classifying the gender of dark-skinned female faces
Not just data bias

[Diagram: a Sensors → Model → Actuators pipeline acting on the World. Annotations: sensors have limited capacity/precision; the model carries built-in assumptions; actuators have limited scope and reach; the world does not reflect human values.]

(thanks to Karthikeyan Natesan Ramamurthy)
Good and bad bias

•  Good bias:
–  A women's clothing store has only women’s clothes
–  Targeted marketing
–  A doctor who uses race as a factor while performing diagnosis because a
particular race has higher propensity for some diseases
•  Bad bias:
–  A loan is denied by an algorithm to someone from a protected group,
because the data used to train the algorithm is biased

(thanks to Karthikeyan Natesan Ramamurthy)


Detecting and mitigating bias in training datasets

Original dataset (may have traces of discrimination):
–  Predictive attributes
–  Protected attributes (race, etc.)
–  Decision history

Fairer dataset:
–  Cleansed of illegal discrimination according to the specified protected attributes
–  Other attributes left alone
–  Compliant training dataset for any downstream AI

"Optimized Pre-Processing for Discrimination Prevention," F. P. Calmon, D. Wei, B. Vinzamuri, K. N. Ramamurthy, and K. R. Varshney, NIPS, Dec. 2017.
[Figure: before pre-processing, denies outnumber approves among African-American applicants; after pre-processing, the proportions of approves and denies are close to equalized across the African-American and Caucasian populations (bar charts over default score). A classification tree learned from the original training data splits on race before deciding on default score and age.]

Bias detection and rating

Three-level rating framework:
•  Unbiased: similar to a declared unbiased distribution; can also compensate for biased input data
•  Following the data bias: biased only if the input data is biased
•  Biased: may introduce bias even if the input data is not biased

Test input-output behavior:
•  Generate biased/unbiased input data
•  Similarity measure for output data

Sequential compositionality:
•  Rate overall bias behavior by rating each component and then composing

Case study: translation services

[Bias Ratings for AI Services and Their Compositions, F. Rossi, B. Srivastava, AIES 2018]
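The three-level rating can be sketched as a black-box test: feed the service one biased and one unbiased input distribution and compare the output distributions. Everything below (the tolerance, the toy services, the balance measure) is an invented illustration, not the protocol from the AIES 2018 paper:

```python
def bias_score(outputs):
    """Fraction of 'he' outputs; 0.5 means a balanced output distribution."""
    return sum(1 for o in outputs if o == "he") / len(outputs)

def rate(service, biased_inputs, unbiased_inputs, tol=0.1):
    """Three-level bias rating from input-output behavior (toy version)."""
    on_unbiased = bias_score([service(x) for x in unbiased_inputs])
    on_biased = bias_score([service(x) for x in biased_inputs])
    if abs(on_unbiased - 0.5) <= tol and abs(on_biased - 0.5) <= tol:
        return "unbiased"             # balanced even when the input is biased
    if abs(on_unbiased - 0.5) <= tol:
        return "following data bias"  # biased only when the input is biased
    return "biased"                   # introduces bias on its own

# Toy pronoun "services":
echo = lambda x: x          # reproduces whatever bias the input has
always_he = lambda x: "he"  # introduces bias regardless of input

unbiased_inputs = ["he"] * 50 + ["she"] * 50
biased_inputs = ["he"] * 90 + ["she"] * 10

print(rate(echo, biased_inputs, unbiased_inputs))       # "following data bias"
print(rate(always_he, biased_inputs, unbiased_inputs))  # "biased"
```

Sequential compositionality would then rate a pipeline of such services by composing the per-component ratings, e.g. a chain is at best as well-rated as its worst component.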
Better humans

•  Understanding AI bias will help us recognize human bias
•  Bias-alerting AI tools to alert humans of unfair behavior
•  Multi-disciplinary, multi-gender, multi-stakeholder, multi-cultural approach
•  Trust
   –  Between humans and AI
   –  Between users and AI services
   –  Between clients and corporations deploying AI
   –  Between AI producers and those impacted by AI
Some material

IBM:
– Cognitive bias in Machine Learning, IBM Academy of Technology, 2018
– Learning to Trust AI systems, Guruduth Banavar, 2016 (IBM white paper)

Others:
•  Friedman, Batya, and Helen Nissenbaum. "Bias in computer systems." ACM Transactions on Information Systems (TOIS) 14.3 (1996): 330-347
   http://www.nyu.edu/projects/nissenbaum/papers/biasincomputers.pdf
•  Barocas, Solon, and Andrew D. Selbst. "Big data's disparate impact." Cal. L. Rev. 104 (2016): 671
   http://www.californialawreview.org/wp-content/uploads/2016/06/2Barocas-Selbst.pdf
