
Explainable AI - Introduction and Scope of Research

Pradeep T
Data Scientist, LTIMindtree
About Myself - Pradeep T
Data Scientist | Data science blogger | Artificial Intelligence speaker | AI mentor | PyPI contributor | Volunteer @ Kerala Police Cyberdome Data Science team

Education
B.Tech: Computer Science and Engineering, College of Engineering Poonjar
M.Tech: Computational Linguistics, Govt. Engineering College Sreekrishnapuram
Let me assume you have…
● Intermediate proficiency in the Python language
● Familiarity with technical terms in the AI domain
● Basic understanding of machine learning algorithms
● Minimal experience with machine learning model building
● Beginner-level experience in natural language processing
● Awareness of real-world machine learning use cases across different domains
● Working experience with the Google Colab tool
Agenda
● Introduction to XAI
● XAI Goals
● Black Box Models vs White Box Models
● Explainability vs Interpretability
● XAI Principles
● Types of XAI
● Approaches to Explainability
● Geometrical Interpretation of Prediction models
● Tools for Explainability
● Limitations of XAI
● Sample Demo
● Q&A
AI Evolution (over time)

● Symbolic AI: logic rules represent knowledge; no learning capability and poor handling of uncertainty.
● Statistical AI: statistical models for specific domains, trained on big data; no contextual learning and no explainability.
● Explainable AI: systems construct explanatory models; systems learn and reason with new tasks and situations.
Introduction to XAI - What is XAI?

● Explainable AI (also called Interpretable AI or Explainable Machine Learning) is artificial intelligence in which humans can understand the decisions or predictions made by the AI models.

● XAI is a set of processes and methods that help to interpret AI results.

● Explainable AI refers to methods and techniques in the application of artificial intelligence technology (AI) such that the results of the solution can be understood by human experts.
Introduction to XAI - What is XAI?

● Explainable artificial intelligence (XAI) is a powerful tool for answering critical "How?" and "Why?" questions about AI systems.

● These questions can be used to address rising ethical and legal concerns.

● So XAI is a necessary feature of trustworthy AI.

● It is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.
XAI - Goals

The basic goal of XAI is to describe in detail how or why ML models produce their predictions.

● Data scientists who build the model may benefit from understanding how it behaves, so they can modify parameters accordingly to improve the model's accuracy and fix prediction bugs.

● In many fields there exists a regulator who must validate the model before it can be used in production.

● People affected by the model may ask for the drivers of their prediction. A good explanation may generate more trust in machine learning.

● Reducing errors.
XAI - Goals

● The greater the confidence in AI models, the faster and more widely they can be deployed.
● Reduces model bias and improves fairness.
● Improves code confidence and compliance.
● Increases model performance.
● Improves model transparency.
● Enables informed decision making.
● Better accountability and safety.


Black Box Models vs White Box Models

● Black box model: a model that produces useful information without revealing any information about its internal workings.

● White box model: a model that can clearly explain how it behaves, how it produces predictions, and what the influencing variables are.

● XAI is an emerging research topic in machine learning aimed at unboxing how AI systems' black-box choices are made.
STORY - 1
STORY - 2
Interpretability in AI

● Interpretability has to do with how accurately a machine learning model can associate a cause with an effect.

● The cause and effect can be determined.

● If a model can take the inputs and routinely produce the same outputs, the model is interpretable.

"...If you overeat pasta at dinnertime and you always have trouble sleeping, the situation is interpretable..."

"...Loud noise accelerates hearing loss; the statement is interpretable..."
Explainability vs
Interpretability
XAI - Principles

To expand on the idea of what constitutes XAI, the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, defines four principles of explainable artificial intelligence. Those are:
● Explanation: An AI system should supply "evidence, support, or reasoning for each output."

● Meaningful: An AI system should provide explanations that its users can understand.

● Explanation accuracy: An explanation should accurately reflect the process the AI system used to arrive at the output.

● Knowledge limits: An AI system should operate only under the conditions it was designed for and not provide output when it lacks sufficient confidence in the result.
Types of XAI

● Explainable data: What data went into training a model? Why was that data chosen? How was fairness assessed? Was any effort made to remove bias?

● Explainable predictions: What features of a model were activated or used to reach a particular output?

● Explainable algorithms: What are the individual layers that make up the model, and how do they lead to the output or prediction?
Approaches to Explainability

● Transparent ML model: one which is understandable without further intervention.

● Post-hoc technique: use a black-box model and apply a post-hoc technique on top of it to explain its very complex behavior.
Approaches to Explainability - Transparent ML model results

● Feature summary statistic (e.g. feature importance, pairwise feature interaction strengths)
● Feature summary visualization (e.g. partial dependence plots)
● Model internals (e.g. learned weights)
● Data point (e.g. counterfactual explanations)
● Intrinsically interpretable model (e.g. decision tree) - see the sketch after this list
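As a hedged illustration of these result types (not part of the original slides), the following scikit-learn sketch fits an intrinsically interpretable decision tree on the bundled Iris dataset and prints its rules and feature importances; the dataset, tree depth, and library choices are assumptions made only for demonstration.

```python
# Minimal sketch (assumed example): an intrinsically interpretable model plus a
# feature summary statistic, using scikit-learn's bundled Iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The tree itself is the explanation: each prediction is a readable path of rules.
print(export_text(tree, feature_names=list(X.columns)))

# Feature summary statistic: impurity-based importance of each feature.
for name, score in zip(X.columns, tree.feature_importances_):
    print(f"{name}: {score:.3f}")
```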
Accuracy vs Interpretability
Approaches to Explainability - Post-Hoc techniques

● Explanation methods that work on top of complex black-box models.

● So anyone can have fun with the ML model he or she prefers.

● Post-hoc methods are partitioned along two main concepts:
  ○ Model Agnostic / Model Specific
  ○ Local / Global
Model Agnostic / Model Specific
Local vs Global Explainability

Global
● Shows to what extent each feature contributes to how the model makes its predictions over all of the data.
● Used to describe how the model works, based on inspection of model concepts.
● The key term here is "all predictions": global is an average across all predictions.

Local
● Also known as individual explainability.
● Somewhat self-explanatory, but local explainability helps answer the question, "for this particular example, why did the model make this particular decision?"
● Local explainability is required for getting to the root cause of a particular issue in production.

(A minimal sketch contrasting the two views follows below.)
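The sketch below is not from the slides; it contrasts the two views on a plain linear model, where both can be read from the coefficients. The diabetes dataset, the use of permutation importance for the global view, and the per-row coefficient-times-value decomposition for the local view are illustrative assumptions.

```python
# Hedged sketch (assumed example): global vs local explanations for a linear model.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Global explainability: average impact of each feature across the whole dataset.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, global_imp.importances_mean):
    print(f"global  {name}: {score:.3f}")

# Local explainability: why did THIS row get THIS prediction? For a linear model
# the contribution of each feature is simply coefficient * feature value.
row = X.iloc[0]
local_contrib = model.coef_ * row.values
for name, score in zip(X.columns, local_contrib):
    print(f"local   {name}: {score:.3f}")
print("prediction =", model.intercept_ + local_contrib.sum())
```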
Geometrical Interpretation of Prediction Models

● Each prediction model (be it machine learning, statistical, etc.) draws a prediction function f(x): given a set of x values, it returns the forecasted ŷ.

● We can turn almost any kind of information into numbers:
  ○ Images represented as pixel intensity values ranging from 0 to 255
  ○ Words and text embedded into latent semantic spaces of concepts
  ○ Categories encoded as dummies, etc.

● The objective of ML is to gain insight into how the Y variable changes depending on the values of the X variables - Y depends on the Xs.

● Each feature in the dataset can be viewed as one dimension of a geometric space, where p is the number of X features and the additional dimension is the Y variable. The observations of our dataset are points in this ℝᵖ⁺¹ space.

● The prediction function f(x) is the surface that best approximates all the points in the geometric space, and it represents the best guess about the causal relationship existing between Y and the Xs.
Geometrical Interpretation of Prediction Models - Parametric Models

● Standard statistical methods usually put constraints on the shape of f (e.g. Linear Regression requires a linear plane; Logistic Regression only allows f to be monotonic).

● These methods are called parametric because we choose the formula for f and just look for the best coefficients: for example, Linear Regression's f has the formula Y = β₀ + β₁X₁ + β₂X₂, and we select the best values for β₀, β₁, β₂.

● They are less powerful, because they are constrained in some way, but their advantage is their simplicity:
  ○ They create simple surfaces which can be expressed with an easy function.
  ○ They provide the formula for f; with it, we can understand very well how the model behaves.
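As a hedged sketch (not from the slides), here is what "getting the formula back" looks like with scikit-learn; the synthetic dataset and two-feature setup are assumptions for illustration only.

```python
# Minimal sketch (assumed example): a parametric model whose fitted formula
# f(x) = b0 + b1*x1 + b2*x2 can be read back directly from the coefficients.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=2, noise=5.0, random_state=0)
model = LinearRegression().fit(X, y)

b0 = model.intercept_
b1, b2 = model.coef_
print(f"f(x) = {b0:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2")  # the explicit formula for f
```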
Geometrical Interpretation of Prediction Models - Non-Parametric Models

● Complex ML models, however, are non-parametric in nature:
  ○ f has the freedom to fully modify its shape.
  ○ The shape will be wiggly.
  ○ Able to capture correlations and interactions between the X variables.
  ○ E.g. Gradient Boosting, Neural Networks, Random Forests.

● But the biggest strength also turns into the worst weak point: we do not get back a precise formula for f.

● When we have just 1 or 2 X variables we may draw f in the geometrical space, but when there are more we have no way to visualize the surface.
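A hedged counterpart to the parametric sketch above (again an assumed example, not from the slides): a random forest fits a flexible surface but returns no closed-form formula, so we can only query it or summarise it post hoc.

```python
# Hedged sketch (assumed example): a non-parametric model with no formula to print.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# There is no "Y = b0 + b1*X1 + ..." to read off; the model is thousands of split
# rules. We can still query the fitted surface point by point...
print(forest.predict(X[:3]))

# ...or summarise it post hoc, e.g. with impurity-based feature importances.
print(forest.feature_importances_)
```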
Geometrical Interpretation of Prediction Models

"...Any ML technique builds the best function f(x) to approximate our data, given the constraints. Some of them produce a simple f(x) and also return the formula, while the black-box ones create very complicated functions and do not provide the math formula..."
Tools for Explainability - 1. SHAP

● SHAP (SHapley Additive exPlanations) is a game-theory-based, model-agnostic approach/tool to explain the output of any ML model.

● It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.

● SHAP plots help you understand both the magnitude of a feature's impact (length of the bar) and its direction (colour).
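A minimal SHAP usage sketch, assuming the `shap` package and a scikit-learn random forest on the bundled diabetes dataset (the model and dataset are illustrative choices, not from the slides):

```python
# Hedged sketch (assumed setup): per-feature SHAP contributions for a tree model.
# Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)      # efficient Shapley values for tree models
shap_values = explainer.shap_values(X)     # one row of feature contributions per sample

# Summary (beeswarm) plot: length shows magnitude of impact, colour shows direction.
shap.summary_plot(shap_values, X)
```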
Tools for Explainability - 2. LIME

● Local Interpretable Model-agnostic Explanations.

● Gives a local linear approximation of the model's behaviour by creating local surrogate models which are trained to mimic the ML model's predictions locally.

● This surrogate model could be anything from GLMs to decision trees which try to capture how local importance may differ.

● Currently, LIME helps explain predictions for tabular data, images and text classifiers.
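A hedged LIME sketch for tabular data, assuming the `lime` package and a scikit-learn classifier on the Iris dataset (illustrative choices, not from the slides):

```python
# Hedged sketch (assumed setup): explaining a single tabular prediction with LIME.
# Requires: pip install lime scikit-learn
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance and fits a local linear surrogate around it.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, local weight) pairs for this prediction
```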
Tools for Explainability - Other tools

● Shapash
  ○ Uses a SHAP or LIME backend to compute contributions. Shapash relies on the different steps necessary to build a ML model to make the results understandable.
● ExplainerDashboard
  ○ An extensive and engaging interactive dashboard library to explain ML models across various spectrums and methodologies.
● Dalex
  ○ Provides wrappers around various ML frameworks. These wrappers can then be explored and compared with a collection of local and global explainers.
● DeepLIFT
  ○ Compares the activation of each neuron to its "reference activation" and assigns contribution scores according to the difference.
● Explainable Boosting Machines (EBM)
● ELI5
● Alibi
● InterpretML
● AIX360
● Skater
Limitations of XAI

● Limited depth of explanation
  ○ Explanations are limited to how, but not why.
  ○ This matters for training, validation and analysis.

● Behavior and explanations are separate
  ○ Requires an additional step in AI development.
  ○ Explanations must be kept in sync with behavior.

● XAI can cover for invalid behavior (bogus explanations)
  ○ Explanations can seem valid even when the underlying behavior is based on irrelevant factors.
Limitations of XAI

● In a regulated industry, it can be tough to avoid unconscious bias.

● Current XAI techniques are limited and might not be one-size-fits-all.

● XAI has a different context in various use cases.


Sample Demo
Google’s StylEx
Thank You..
For further details
Contact
Pradeep .T
Data scientist
LTIMindtree

Mobile : 8086877459
Email : [email protected]
Linkedin : https://www.linkedin.com/in/pradeep-t-ab670888/
Github : https://github.com/pradeepdev-1995/
Twitter : https://twitter.com/pradeepdev_01
