Keynote: Explainable AI
Anne Coull
UNSW
October 2021
Page 1
Anne is a courageous, committed, and strategic leader with a track record of leading
high-performing teams to streamline business processes and turn around failing programs.
She applies her deep knowledge and experience in Program Management, Cyber Security, SDLC, ITSM, Lean
Operational Excellence, Agile, and Organisational Change to deliver Business, Cultural, and Technology
Transformations at scale.
Dedicated to continuous learning and research, Anne is co-founder of Women in Cyber Security (WiCyS) Australia, a
member of the UNSW School of Engineering & Information Technology (SEIT) External Advisory Committee, an active
contributor to the development of technical research papers, and a conference presenter for the International Academy,
Research and Industry Association (IARIA). Topics include: Four testing types core to informed ICT governance for
cyber-resilient systems; How much cyber security is enough; Most Essential of Eight; and Explainable AI.
Page 2
Machine Learning
“Field of study that gives computers the ability to learn without being explicitly programmed.”
(Arthur Samuel, 1959)
Well-posed learning problem
Narrow AI: machines are good at learning a single narrow task, e.g. consumer internet, Google, self-driving cars.
General AI: good at multiple things; still science fiction (“take over the world”).
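To make Samuel's definition concrete, here is a minimal sketch (my own illustration, not from the cited lecture): the rule y = 2x is never written into the program; it is estimated from example data.

```python
# Example (input, output) pairs; the rule y = 2x is never hard-coded.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

# Least-squares estimate of the slope of a line through the origin.
slope = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

print(f"learned rule: y = {slope:.1f} * x")        # learned rule: y = 2.0 * x
print(f"prediction for x = 10: {slope * 10:.1f}")  # prediction for x = 10: 20.0
```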
Page 3 ([19])
Input → AI → Output
Page 4 ([18])
Availability of Training Data
Generalisability across Data Sources
Acceptance by those affected
([19])
Page 5
Change Management: What is in it for me (WIIFM)?
• What problem is this AI solving?
• Will this help, hinder, or confuse me?
• How will this affect my job and those I work with?
[Figure: a classifier learns from labelled training data and outputs “This is a cat” with probability P = .92]
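How a “P = .92” comes about: a minimal sketch (the labels and logit values are invented for illustration, not from a real trained model) of turning a classifier's raw scores into class probabilities with a softmax.

```python
import math

def softmax(logits):
    """Turn raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores from a hypothetical trained image classifier.
labels = ["cat", "dog", "wolf"]
logits = [3.7, 0.7, 0.3]

probs = softmax(logits)
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"This is a {labels[best]} (P = {probs[best]:.2f})")  # This is a cat (P = 0.92)
```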
Page 11 ([1][15][16])
Deep Learning Neural Network + modified deep learning techniques:
Decision: “This is a dog. It is not a wolf.”
Explanation: “Because it has short fur, narrow tail, round paws, a rounded nose, dark eyes and a round rump.”
([5][6][23])
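A toy sketch of the decision-plus-explanation idea above. The features and evidence weights are invented for illustration; a real XAI system would derive them from the modified deep-learning model.

```python
# Invented per-feature evidence scores from a hypothetical interpretable
# dog-vs-wolf classifier: positive values favour "dog", negative favour "wolf".
evidence = {
    "short fur": 0.8,
    "narrow tail": 0.6,
    "round paws": 0.5,
    "rounded nose": 0.4,
    "dark eyes": 0.3,
    "round rump": 0.3,
    "pointed ears": -0.2,
}

score = sum(evidence.values())
decision, rejected = ("dog", "wolf") if score > 0 else ("wolf", "dog")

# The explanation is simply the positive evidence, strongest first.
reasons = [f for f, v in sorted(evidence.items(), key=lambda kv: -kv[1]) if v > 0]
print(f"This is a {decision}. It is not a {rejected}.")
print("Because it has " + ", ".join(reasons[:-1]) + f" and a {reasons[-1]}.")
```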
Page 12
• Opaque model induction: inferring an interpretable, approximate model from the black box
• Human-centred explanation interface
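A minimal sketch of model induction, assuming scikit-learn is available: a random forest stands in for the opaque model, and a shallow decision tree is induced to mimic its predictions, giving human-readable rules for the explanation interface.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# 1. The opaque model: accurate, but hard for a person to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Model induction: fit an interpretable surrogate to the black box's
#    *predictions*, rather than to the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. The surrogate's rules are raw material for a human-centred explanation interface.
print(export_text(surrogate, feature_names=load_iris().feature_names))
print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
```

The depth cap trades fidelity for readability: a deeper surrogate tracks the black box more closely, but its rules become harder for a person to follow.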
Page 13 ([1][6])
• I understand why
• I understand why not
• I know when you’ll succeed

Task or Request → Decision / Recommendation, with the decision process explained, including specific cases and reasons for rejections
Page 14 ([5][6])
Measures of explanation effectiveness:
• User Satisfaction: based on explanation clarity and helpfulness, as measured by the user.
• Model Understanding: does the user understand the overall model and its individual decisions, its strengths and weaknesses, and how predictable its decisioning is? Are there work-arounds for known weaknesses?
• Trustworthiness: is the model trustworthy and appropriate for future use?
• Task & Decisioning Performance: does the AI explanation improve the user’s decision? Does the user understand the AI decisioning?
• Self-Correctability & Improvement: does the model identify and correct its errors? Does it undergo continuous training?
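One way to operationalise these five measures (a sketch of my own, not prescribed by [6]) is a simple per-user scoring rubric:

```python
from dataclasses import dataclass, astuple
from statistics import mean

@dataclass
class ExplanationScorecard:
    """One user's 1-5 ratings against the five effectiveness measures above."""
    user_satisfaction: int
    model_understanding: int
    trustworthiness: int
    task_performance: int
    self_correctability: int

    def overall(self) -> float:
        return mean(astuple(self))

# Example: a single evaluator's ratings for one model/explanation pair.
card = ExplanationScorecard(4, 3, 4, 5, 2)
print(f"overall effectiveness: {card.overall():.1f} / 5")
```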
Page 15 ([6])
“Life is by definition unpredictable. It is impossible for programmers to
anticipate every problematic or surprising situation that might arise,
which means existing ML systems remain susceptible to failures as they
encounter the irregularities and unpredictability of real-world
circumstances.” (Hava Siegelmann, DARPA)
Page 16
[1] A. Arrieta et al., “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”, arXiv:1910.10045v1 [cs.AI], 22 Oct 2019, accessed March 2021.
[2] A. Bleicher, “Demystifying the Black Box That Is AI: Humans are increasingly entrusting our security, health and safety to ‘black box’ intelligent machines”, Scientific American, August 2017, available from: https://www.scientificamerican.com/article/demystifying-the-black-box-that-is-ai/, accessed September 2021.
[3] DARPA, “Researchers Selected to Develop Novel Approaches to Lifelong Machine Learning”, DARPA, May 7, 2018, available from: http://ein.iconnect007.com/index.php/article/110412/researchers-selected-to-develop-novel-approaches-to-lifelong-machine-learning/110415/?skin=ein, accessed September 2021.
[4] R. Guidotti et al., “A survey of methods for explaining black box models”, ACM Computing Surveys, 51(5), 2018, pp. 93:1-93:42, accessed September 2021.
[5] D. Gunning, “Explainable Artificial Intelligence (XAI)”, DARPA/I2O, National Security Archive, 2017, available from: https://ia803105.us.archive.org/17/items/5794867-National-Security-Archive-David-Gunning-DARPA/5794867-National-Security-Archive-David-Gunning-DARPA.pdf, accessed September 2021.
[6] D. Gunning, “Explainable artificial intelligence (XAI)”, Technical Report, Defense Advanced Research Projects Agency (DARPA) (2017), accessed March 2021.
[7] M. I. Jordan, “Artificial Intelligence—The Revolution Hasn’t Happened Yet”, Harvard Data Science Review, 1(1), 2019, available from: https://doi.org/10.1162/99608f92.f06c6e61, accessed March 2021.
[8] M I. Jordan, “Stop calling everything Artificial Intelligence”, IEEE Spectrum March 2021, available from: https://spectrum.ieee.org/stop-calling-everything-ai-machinelearning-pioneer-says, accessed March 2021.
[9] D. Kahneman, “Thinking, Fast and Slow”, Penguin Press, ISBN: 9780141033570, 2 July 2012.
[10] A. Korchi et al., “Machine Learning and Deep Learning Revolutionize Artificial Intelligence”, International Journal of Scientific & Engineering Research, Volume 10, Issue 9, September 2019, p. 1536, ISSN 2229-5518, accessed September 2021.
[11] T. Kulesza et al. “Principles of Explanatory Debugging to Personalize Interactive Machine Learning”. IUI 2015, Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 126-137).
[12] B. Lake et al., “Human-level concept learning through probabilistic program induction”, Science, 2015, available from: https://www.cs.cmu.edu/~rsalakhu/papers/LakeEtAl2015Science.pdf, accessed September 2021.
[13] G. Lawton, “The future of trust must be built on data transparency”, techtarget.com, March 2021, available from: https://searchcio.techtarget.com/feature/The-future-of-trust-must-be-built-on-data-transparency?track=NL-1808&ad=938015&asrc=EM_NLN_151269842&utm_medium=EM&utm_source=NLN&utm_campaign=20210310_The+future+of+trust+must+be+built+on+data+transparency, accessed September 2021.
[14] G. Lawton, “4 explainable AI techniques for machine learning models”, techtarget.com, April 2020, available from: https://searchenterpriseai.techtarget.com/feature/How-to-achieve-explainability-in-AI-models, accessed March 2021.
[15] B. Letham et al., “Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model”, The Annals of Applied Statistics, 9(3), 2015, pp. 1350-1371.
[16] Y. Ming, “A survey on visualization for explainable classifiers”, 2017, available from: https://cse.hkust.edu.hk/~huamin/explainable_AI_yao.pdf, accessed September 2021.
[17] A. Ng, “The state of Artificial Intelligence”, MIT Technology Review EmTech, September 2017, available from: https://www.youtube.com/watch?v=NKpuX_yzdYs, accessed September 2021.
[18] A. Ng, “Artificial Intelligence for everyone (part 1) – complete tutorial”, March 2019, available from: https://www.youtube.com/watch?v=zOI6Oll1Zrg, accessed September 2021.
[19] A. Ng, “CS229 – Machine Learning: Lecture 1 – the motivation and applications of machine learning”, Stanford Engineering Everywhere, Stanford University, April 2020, available from: https://see.stanford.edu/Course/CS229/47, accessed September 2021.
[20] A. Ng, “Bridging AI’s proof-of-concept to production gap”, Stanford University Human-Centred Artificial Intelligence Seminar, September 2020, available from: https://www.youtube.com/watch?v=tsPuVAMaADY, accessed September 2021.
[21] D. Snyder et al., “Improving the Cybersecurity of U.S. Air Force Military Systems Throughout their Life Cycles”, Library of Congress Control Number: 2015952790, ISBN: 978-0-8330-8900-7, RAND Corporation, Santa Monica, Calif., 2015.
[22] D. Spiegelhalter, “Should We Trust Algorithms?”, Harvard Data Science Review, 2(1), 2020, available from: https://doi.org/10.1162/99608f92.cb91a35a, accessed March 2021.
[23] 3Blue1Brown, “But what is a neural network?”, 2017, available from: https://www.youtube.com/watch?v=aircAruvnKk, accessed September 2021.
Page 17
What are your experiences with classic AI and explainable AI?
Page 18
Explainable AI
Page 19