A Guide to AI
for Smarter Nonprofits
Dr. Cori Faklaris
University of North Carolina at Charlotte, College of Computing and Informatics
Presentation to United Way of Greater Charlotte, April 18, 2024
Cori
Faklaris
● Assistant Professor and Director of the
Security and Privacy Experiences (SPEX)
research group, Dept. of Software and Information
Systems, College of Computing and Informatics
○ Ph.D., Human-Computer Interaction, School of
Computer Science, Carnegie Mellon University
● Human Factors / Psychology focus on
Cybersecurity, Privacy, AI/ML
● Past career in news + design, social media
● Past freelance/consultancy business
2
cfaklari@charlotte.edu Page 2
Key takeaways from today’s presentation
• AI provides you with “infinite interns.”
• Give people permission & guardrails to learn what works with
these “interns” and what doesn’t.
• Create a roadmap for adding in more AI to assist nonprofit work,
along with strategies for bias mitigation
3
Overview of Generative AI
Adapted from a 2023 talk & course materials
4
When you hear ‘AI,’ think ‘statistical pattern-matching’
• Oracle describes AI this way:
[Artificial Intelligence] has become a catchall
term for applications that perform complex tasks
that once required human input, such as
communicating with customers online or playing
chess.
The term is often used interchangeably with …
machine learning (ML) and deep learning.
Text from What is Artificial Intelligence (AI)? Oracle, n.d. Retrieved May 16, 2023 from https://www.oracle.com/artificial-intelligence/what-is-ai/
Image from Pattern Recognition. GeeksforGeeks. Retrieved May 16, 2023 from https://www.geeksforgeeks.org/pattern-recognition-introduction/
The data is “tokenized” (= made
into “chunks” of words, punctuation
marks, pixels, etc.) during this
process - remember this for later
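The tokenization step can be sketched in a few lines of Python. This toy word-level splitter is an illustration only; production models use subword schemes such as byte-pair encoding, so the actual "chunks" differ:

```python
import re

def toy_tokenize(text):
    # Split into word and punctuation "chunks" -- a toy stand-in for the
    # subword (e.g., byte-pair encoding) tokenizers that real models use.
    return re.findall(r"\w+|[^\w\s]", text)

toy_tokenize("Nonprofits, meet your new interns!")
# ['Nonprofits', ',', 'meet', 'your', 'new', 'interns', '!']
```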
‘Automation’ and ‘AI’ are related, but separate
Automation
6
• Repetitive tasks
• Does not learn over time
• Aims to mimic human activity, but not
necessarily human
cognition/intelligence
• Follows instructions
• Does not necessarily use data outside
of what is required for the
self-contained tasks it is programmed
to do
Artificial Intelligence
• Dynamic tasks, extrapolation
• Learns over time
• Aims to mimic some aspects of human
cognition/intelligence
• Evolves its own instructions
● Uses data to become “smart”
Slide and animations courtesy of Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications
As long as they have enough data, AI models can now
generate part or all of a creative work.
This includes business functions such as reading and
writing documents, creating a table or figure to
summarize data, programming, and drafting
presentations or training (ahem).
How Generative AI works (admittedly oversimplified)
The system generates text or images using its previously built model of the
statistical distributions of tokens (= “chunks” of words, punctuation marks,
pixels, etc.) created from its very large training dataset.
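The "statistical distributions of tokens" idea can be illustrated with a toy bigram model: count which token follows which in the training text, then sample from those counts. LLMs do this at vastly larger scale and with far richer context, but the sketch shows why output is plausible-looking rather than guaranteed-true:

```python
import random
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Build a tiny statistical model: how often each token follows another.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=3, seed=0):
    # Sample each next token in proportion to its observed frequency.
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return out

model = train_bigram("the donor thanked the donor".split())
generate(model, "the")  # ['the', 'donor', 'thanked', 'the']
```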
Image from Pattern Recognition. GeeksforGeeks. Retrieved May 16, 2023 from https://www.geeksforgeeks.org/pattern-recognition-introduction/
Murray Shanahan. 2022. Talking About Large Language Models. arXiv [cs.CL]. Retrieved from https://arxiv.org/abs/2212.03551
Bea Stollnitz. How generative language models work. Retrieved May 10, 2023 from https://bea.stollnitz.com/blog/how-gpt-works/
How Generative AI works (admittedly oversimplified)
It might make mistakes or “hallucinate” based on the limitations of its
process, but the output still might look like what you wanted.
Ted Chiang’s analogy = “unreliable photocopier” or a “blurry JPEG”
Ted Chiang. 2023. ChatGPT Is a Blurry JPEG of the Web. The New Yorker. Retrieved May 10, 2023 from https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
Murray Shanahan. 2022. Talking About Large Language Models. arXiv [cs.CL]. Retrieved from https://arxiv.org/abs/2212.03551
Bea Stollnitz. How generative language models work. Retrieved May 10, 2023 from https://bea.stollnitz.com/blog/how-gpt-works/
Learn how to instruct AI models via ‘prompts’
• One significant factor in the quality of a generative AI’s output is
the "prompt," or instructions that the user gives to the AI to begin
the interaction.
• For best results, the prompt should be specific, detailed, and
concise.
• It should give the LLM a persona or role to play, and a goal.
10
Rebekah Carter. 2023. How to Talk to an LLM: Prompt Engineering for Beginners. UC Today. Retrieved March 25, 2024 from https://www.uctoday.com/unified-communications/how-to-talk-to-an-llm-llm-prompt-engineering-for-beginners
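As a sketch, a prompt following this advice (persona first, then goal, then specific details) might be assembled like so; the persona and task shown are hypothetical examples, not prescribed wording:

```python
def build_prompt(persona, goal, details):
    # Persona first, then the goal, then concrete details and constraints --
    # specific, detailed, and concise.
    return f"You are {persona}. Your goal: {goal}. Details: {details}"

build_prompt(
    "an experienced grant writer at a food-security nonprofit",
    "draft a 150-word needs statement for a county health grant",
    "plain language; audience is the review committee; cite our 2023 meal counts",
)
```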
Issues with AI overtrust + biases
Adapted from a 2023 talk & course materials
11
Human biases inevitably creep into human designs
Better Off Ted: “Racial Sensitivity.” Clip titled “Racist Sensors” via YouTube. Retrieved April 9, 2024. More info at https://www.imdb.com/title/tt1346402/
Overtrust in AI statistical pattern matching - why?
• Going on autopilot
• Rationalizing observed failures (eg
at least the system sees White people!)
• Perceiving low risk
• Social pressure/conformity
• Being told to trust the system
• Seeing others trust the system
• Individual differences
• Experts will notice when something seems off and be able to respond; non-experts won’t (unless/until outcomes are very bad)
“illustration of "trust" spilling over a dam” | DALL-E
13
Adapted from Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications
Lots of data means lots of ways for errors and biases to creep in
Human biases (whether explicit or implicit) can creep in at any point in the
AI data pipeline, but they often start with the variety of data collected.
Metika Sikka. 2021. The Human Bias-Accuracy Trade-off. Towards Data Science. Retrieved April 9, 2024 from https://towardsdatascience.com/the-human-bias-accuracy-trade-off-ad95e3c612a9
Questions to ask along the data pipeline:
• Are genders, ethnicities, classes, regions represented fairly in data?
• Was data collection skewed (eg data about jobs gathered mostly from U.S. men)?
• How was the data prepped?
• Do the training + test datasets meet benchmarks? Did any humans audit them for biases?
• Is this system even worth building? What are foreseeable risks to people?
• Is it possible to explain what it did? How should we interpret the results?
• Can we rely on results in total or in part? Which part?
• What is the real-world impact? What if the real-world context changes?
14
Non-zero chance that AI failures occur - How to cope?
• Avoid failing early IN PUBLIC
• Early periods of low reliability damage trust more than late periods of low reliability (Desai et al., 2013)
• Bigger problem for incumbents (eg Google) than for newcomers (eg OpenAI)
• Prioritize safety [= absence of unreasonable probability + severity of harms]
• People prioritize personal safety over financial cost (Adubor et al., 2017)
• For typical end user, computing safety includes data security and privacy guarantees
• Be socially vulnerable
• Apologizing and explaining the reason for the failure can improve rapport
• Pratfall effect
• People adapt faster than you think to new technologies & situations (eg mobile live-streaming video)
• We know that people over-trust robots even after they have demonstrated failure and made odd requests! (Salem et
al. 2015, Morales et al. 2019)
15
Adapted from Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications
‘AI Bill of Rights’ proposes needed human safeguards
• Safe and Effective Systems - You should be protected from unsafe or ineffective
systems.
• Algorithmic Discrimination Protections - You should not face discrimination by
algorithms, and systems should be used and designed in an equitable way.
• Data Privacy - You should be protected from abusive data practices via built-in
protections and have agency over how data about you is used.
• Notice and Explanation - You should know that automation or AI is being used
and understand how and why it contributes to outcomes that impact you.
• Human Alternatives, Consideration, and Fallback - You should be able to opt
out, where appropriate, and have access to a person who can quickly consider
and remedy problems you encounter.
16
Advice* on Using AI for Nonprofits
*Valid for 2024 … might be invalidated as the tech improves :-)
17
“AI gives you infinite interns.”
18
Benedict Evans, Technology Analyst
Benedict Evans. 2024. AI, and Everything Else. Retrieved from https://www.ben-evans.com/presentations
Reframe AI as ‘infinite interns’ available to work
19
Reasoning?
● Tell it what you want and
trust it to do it without you?
● Use one to instruct and
supervise another?
● Have it act as your “agent”?
● Limited by inability to
create new knowledge, lack
of persistent memory of
task context over time
Pattern Extrapolation
● Writing code
● Brainstorming
● Auto-suggest text
● Manipulate images
● Limited to the
examples that it has
already seen (and
those examples may
have errors or biases!)
Synthesis & Summary
● Get a summary &
analysis of big dataset
● Ask it questions
● Combine existing
images into new ones
● Limited by trust that
you’d give an “intern” to
access lots of valuable,
confidential data
Like any eager-to-please intern, the AI will always
give you an answer, an output, SOMETHING.
Whether that SOMETHING is actually what you
wanted, makes logical or practical sense, or is
trustworthy and unbiased, is up to YOU to judge!
“Fast, Cheap, or Good Quality – Pick Two” for AI
21
Cheap: Open source or public free version vs. buying new
Fast: Use what’s been built vs. configure brand new tools
Good: OpenAI paid models, Mistral, ???
Pick Fast + Cheap for now to explore use cases
• Start using “free” or
low-cost AI in small doses so
that people get used to it
and play around with it.
• Schedule an internal review
for x months away to discuss
these low-stakes
experiments & fill out a
roadmap to add in paid AI.
22
Move toward Cheap + Good with bias mitigation
• Oversample or overweight
data from historically
disadvantaged groups
• Delegate 2-3 people with
diverse backgrounds to audit
data, AI outputs for biases
• May need a data scientist or
anti-bias tools to help (eg
IBM Research’s AIF360)
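Overweighting can be as simple as inverse-frequency weights, sketched below. Dedicated toolkits such as AIF360 implement more principled versions (e.g., reweighing against a protected attribute); this is only to show the idea:

```python
from collections import Counter

def balance_weights(records, group_key):
    # Give each record a weight inversely proportional to its group's
    # frequency, so every group carries equal total weight in aggregates.
    counts = Counter(r[group_key] for r in records)
    total, n_groups = len(records), len(counts)
    return [total / (n_groups * counts[r[group_key]]) for r in records]

rows = [{"region": "urban"}] * 3 + [{"region": "rural"}]
balance_weights(rows, "region")
# the three urban rows weigh 4/6 each; the lone rural row weighs 4/2 = 2.0
```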
23
Goal is Fast + Good to develop HUMAN potential
• You won’t be able to reduce headcount much (AI only gives intern-level quality now, and might not improve a lot)
• Use people to be the AI
supervisors and make
strategic decisions based on
AI + human generated info
• Provide/subsidize AI services
24
Nonprofits would benefit from a UW AI facility
Provide an AI model (or many to choose from), but build your own interface that includes prompt templates for specific tasks, and pretrained personas too.
25
Prompt templates:
○ All docs
- Fundraising
- Operations
- Personnel
○ Spreadsheets
- Budgeting
- Calendars
○ Coding help
○ Web search
Advanced:
○ Model + Parameters
○ License Key
○ API Key
What do you want to do?
Director persona | Grant writer persona | Outreach persona | HR persona
Wireframe based on TypingMind AI home page from April 7, 2024 at https://www.typingmind.com/
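A minimal sketch of the template-plus-persona idea behind such an interface (all names here are hypothetical placeholders, not a real product’s API):

```python
PERSONAS = {
    "grant_writer": "You are a grant writer for a community nonprofit.",
    "outreach": "You are an outreach coordinator writing for the general public.",
}

TEMPLATES = {
    "fundraising_letter": "Draft a fundraising appeal about {topic} for {audience}.",
    "budget_summary": "Summarize this budget, flagging line items over {threshold}.",
}

def fill_prompt(persona, task, **fields):
    # Combine a pretrained-persona preamble with a filled task template;
    # the interface would send the result to whichever model it wraps.
    return PERSONAS[persona] + " " + TEMPLATES[task].format(**fields)

fill_prompt("grant_writer", "fundraising_letter",
            topic="food security", audience="lapsed donors")
```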
Summary from today’s presentation
• For nonprofit workplaces, think of AI as ‘infinite interns.’
• What would you trust an intern to do?
• What could they get wrong? (Biases, errors, discrimination, etc.)
• Give people permission & guardrails to learn what works with
these “interns” and what doesn’t.
• Start using “free” or low-cost AI in small doses so that people get used to
it and play around with it BEFORE rolling something out publicly
• Create a roadmap for adding in more AI to assist nonprofit work,
along with strategies for bias mitigation
26
What questions do you have?
● AI provides you with “infinite interns.”
● Give people permission & guardrails to learn what works
with these “interns” and what doesn’t.
● Create a roadmap for adding in more AI to assist nonprofit
work, along with strategies for bias mitigation
cfaklari@charlotte.edu
Extra Slides - Can Use if Time
28
Examples of publicly available Generative AI tools
Crowdsourced list of available AI tools: https://bit.ly/UsefulLLMs
Employees need AI guidelines or a Use Policy
• I include the following in my course syllabus this semester:
In this course, students are permitted to use tools such as Stable Diffusion, DALL-E,
ChatGPT, and BingChat. In general, permitted use of such tools is consistent with
permitted use of non-AI assistants such as Grammarly, templating tools such as
Canva, or images or text sourced from the internet or others’ files. No student may
submit an assignment or work on an exam as their own that is entirely generated by
means of an AI tool. If students use an AI tool or other creative tool to generate, draft,
create, or compose any portion of any assignment, they must (a) credit the tool, and (b)
identify what part of the work is from the AI tool and what is from themselves.
Students are responsible for identifying and removing any factual errors, biases,
and/or fake references that are introduced into their work through use of the AI tool.
30
Future $$$ - in-house AI vs. ‘front door’ to vendor
• Can build your own AI server & deploy many models, plus give users ability to fine-tune outputs …
Screenshot: https://github.com/Lightning-AI/pytorch-lightning
31
Future $$$ - in-house AI vs. ‘front door’ to vendor
• … or host a “wrapper” around a paid service such as OpenAI’s ChatGPT + add guidance …
Screenshot: https://genai.umich.edu/
32
Future $$$ - in-house AI vs. ‘front door’ to vendor
• … or provide ChatGPT, but build your own interface that includes prompt templates for specific tasks, maybe pretrained personas too?
33

More Related Content

PDF
The Research Blueprint: Excelling in Data science, Data Analysis and AI
Loreta Jugu
 
PPTX
Fact vs. Fiction: How Innovations in AI Will Intersect with Recruitment in th...
CareerBuilder
 
PDF
EDW 2015 cognitive computing panel session
Steve Ardire
 
PPTX
Ethical AI - Open Compliance Summit 2020
Debmalya Biswas
 
PDF
AI in Business: Opportunities & Challenges
Dr. Tathagat Varma
 
PPTX
[Seminar] 200731 Hyeonwook Lee
ivaderivader
 
PPTX
20240104 HICSS Panel on AI and Legal Ethical 20240103 v7.pptx
home
 
PDF
Explainable Artificial Intelligence (XAI): Precepts, Methods, and Opportuniti...
pinkukumarimfp1980
 
The Research Blueprint: Excelling in Data science, Data Analysis and AI
Loreta Jugu
 
Fact vs. Fiction: How Innovations in AI Will Intersect with Recruitment in th...
CareerBuilder
 
EDW 2015 cognitive computing panel session
Steve Ardire
 
Ethical AI - Open Compliance Summit 2020
Debmalya Biswas
 
AI in Business: Opportunities & Challenges
Dr. Tathagat Varma
 
[Seminar] 200731 Hyeonwook Lee
ivaderivader
 
20240104 HICSS Panel on AI and Legal Ethical 20240103 v7.pptx
home
 
Explainable Artificial Intelligence (XAI): Precepts, Methods, and Opportuniti...
pinkukumarimfp1980
 

Similar to A Guide to AI for Smarter Nonprofits - Dr. Cori Faklaris, UNC Charlotte (20)

PDF
Understanding Blackbox AI: Unlocking the Secrets of Complex Machine
ommprakashm78
 
PPTX
[DSC Europe 23] Shahab Anbarjafari - Generative AI: Impact of Responsible AI
DataScienceConferenc1
 
PDF
Data sci sd-11.6.17
Thinkful
 
PPTX
Future of data science as a profession
Jose Quesada
 
PDF
Uncharted Together- Navigating AI's New Frontiers in Libraries
Brian Pichman
 
PPTX
Response & Safe AI at Summer School of AI at IIITH
IIIT Hyderabad
 
PDF
AI/Data Analytics (AIDA): Key concepts, examples & risks
Simon Buckingham Shum
 
PPTX
Tessella Consulting
Tessella
 
PDF
Online course 6 14 2017
vaxelrod
 
PDF
Getstarteddssd12717sd
Thinkful
 
PDF
Artificial Intelligence Role in Modern Science Aims, Merits, Risks and Its Ap...
ijtsrd
 
PDF
“Responsible AI: Tools and Frameworks for Developing AI Solutions,” a Present...
Edge AI and Vision Alliance
 
PDF
Generative Artificial Intelligence and Academic Integrity - LIR HEAnet User G...
Thomas Lancaster
 
PDF
RAPIDE
Tessella
 
PDF
AI for Educators - Integrating AI in the Classrooms
Premsankar Chakkingal
 
PDF
Onboarding AI & Machine Learning
Brian Pichman
 
PDF
Generative AI based assessment to engage students critical skills in the Huma...
Andiswa Mfengu
 
PPTX
TechnologyinManagement-Kashif Zafar.pptx
Asif987363
 
PDF
Snowforce 2017 Keynote - Peter Coffee
Peter Coffee
 
PPTX
Scholarly Publishing in an AI World
Juliet Kaplan
 
Understanding Blackbox AI: Unlocking the Secrets of Complex Machine
ommprakashm78
 
[DSC Europe 23] Shahab Anbarjafari - Generative AI: Impact of Responsible AI
DataScienceConferenc1
 
Data sci sd-11.6.17
Thinkful
 
Future of data science as a profession
Jose Quesada
 
Uncharted Together- Navigating AI's New Frontiers in Libraries
Brian Pichman
 
Response & Safe AI at Summer School of AI at IIITH
IIIT Hyderabad
 
AI/Data Analytics (AIDA): Key concepts, examples & risks
Simon Buckingham Shum
 
Tessella Consulting
Tessella
 
Online course 6 14 2017
vaxelrod
 
Getstarteddssd12717sd
Thinkful
 
Artificial Intelligence Role in Modern Science Aims, Merits, Risks and Its Ap...
ijtsrd
 
“Responsible AI: Tools and Frameworks for Developing AI Solutions,” a Present...
Edge AI and Vision Alliance
 
Generative Artificial Intelligence and Academic Integrity - LIR HEAnet User G...
Thomas Lancaster
 
RAPIDE
Tessella
 
AI for Educators - Integrating AI in the Classrooms
Premsankar Chakkingal
 
Onboarding AI & Machine Learning
Brian Pichman
 
Generative AI based assessment to engage students critical skills in the Huma...
Andiswa Mfengu
 
TechnologyinManagement-Kashif Zafar.pptx
Asif987363
 
Snowforce 2017 Keynote - Peter Coffee
Peter Coffee
 
Scholarly Publishing in an AI World
Juliet Kaplan
 
Ad

More from Cori Faklaris (20)

PDF
Understanding and Mitigating SMiShing Vulnerability: Insights from U.S. Surve...
Cori Faklaris
 
PPTX
Connecting Attitudes and Social Influences with Designs for Usable Security a...
Cori Faklaris
 
PPTX
Human Factors at the Grid Edge
Cori Faklaris
 
PDF
An Introduction to Generative AI
Cori Faklaris
 
PDF
Components of a Model of Cybersecurity Behavior Adoption
Cori Faklaris
 
PPTX
Behavior Change Using Social Influences
Cori Faklaris
 
PDF
Designing for Usable Security and Privacy
Cori Faklaris
 
PPTX
How can we boost 'cyber health' ? Psychometrics, social appeals and tools for...
Cori Faklaris
 
PDF
A Self-Report Measure of End-User Security Attitudes (SA-6)
Cori Faklaris
 
PDF
Reframing Usable Privacy + Security to Design for 'Cyber Health'
Cori Faklaris
 
PPTX
Social Cybersecurity: Ideas for Nudging Secure Behaviors Through Social Influ...
Cori Faklaris
 
PDF
Share & Share Alike? An Exploration of Secure Behaviors in Romantic Relations...
Cori Faklaris
 
PDF
Reframing Organizational Cybersecurity to Design for “Cyber Health”
Cori Faklaris
 
PDF
Social Media Best Practices - CMU Fall 2017
Cori Faklaris
 
PPT
If You Are Going To Skydive, You Need a Parachute: Navigating the World of H...
Cori Faklaris
 
PPTX
"Visualizing Email Content": Article discussion slides
Cori Faklaris
 
PPTX
Together: An app to foster community for young urbanites
Cori Faklaris
 
PPTX
The State of E-Discovery as Social Media Goes Mobile
Cori Faklaris
 
PPT
5 ideas for paying for college as an adult returning student
Cori Faklaris
 
PPTX
Social media boot camp: "HeyCori"'s tips for successful engagement online
Cori Faklaris
 
Understanding and Mitigating SMiShing Vulnerability: Insights from U.S. Surve...
Cori Faklaris
 
Connecting Attitudes and Social Influences with Designs for Usable Security a...
Cori Faklaris
 
Human Factors at the Grid Edge
Cori Faklaris
 
An Introduction to Generative AI
Cori Faklaris
 
Components of a Model of Cybersecurity Behavior Adoption
Cori Faklaris
 
Behavior Change Using Social Influences
Cori Faklaris
 
Designing for Usable Security and Privacy
Cori Faklaris
 
How can we boost 'cyber health' ? Psychometrics, social appeals and tools for...
Cori Faklaris
 
A Self-Report Measure of End-User Security Attitudes (SA-6)
Cori Faklaris
 
Reframing Usable Privacy + Security to Design for 'Cyber Health'
Cori Faklaris
 
Social Cybersecurity: Ideas for Nudging Secure Behaviors Through Social Influ...
Cori Faklaris
 
Share & Share Alike? An Exploration of Secure Behaviors in Romantic Relations...
Cori Faklaris
 
Reframing Organizational Cybersecurity to Design for “Cyber Health”
Cori Faklaris
 
Social Media Best Practices - CMU Fall 2017
Cori Faklaris
 
If You Are Going To Skydive, You Need a Parachute: Navigating the World of H...
Cori Faklaris
 
"Visualizing Email Content": Article discussion slides
Cori Faklaris
 
Together: An app to foster community for young urbanites
Cori Faklaris
 
The State of E-Discovery as Social Media Goes Mobile
Cori Faklaris
 
5 ideas for paying for college as an adult returning student
Cori Faklaris
 
Social media boot camp: "HeyCori"'s tips for successful engagement online
Cori Faklaris
 
Ad

Recently uploaded (20)

PPTX
Virtuosity Award presentation for leaders.pptx
jlong12
 
PPTX
5th week.pptxazqzqxqxqqxwxwxwxwsxwswxwxwxw
EmanEssa14
 
PPTX
dawsoncitycommunityrollingadsJuly30_25.pptx
pmenzies
 
PPTX
Parliament_of_India_Presentationisakwaysgood.pptx
rawatsharukh19
 
PPTX
Culture_Presentation_Abdul_Rafay_With_Images.pptx
ehsanejaz57
 
DOCX
RRB Technician Syllabus for Technician Gr I Signal.docx
Sitamarhi Institute of Technology
 
PDF
Chemistry_Chemical_Reactions_and_Equations_Class_Notes_WARRIOR_SERIES Copy Co...
SUNILCHUG
 
PPTX
The quick brown fox jumps over the lazy dog
YohannesGetachew16
 
PDF
About The Hindu Society of North Carolin
paragdighe3
 
PPTX
原版丹佛大学毕业证文凭DU学生证购买在线制作本科文凭
sw6vvn9s
 
PPT
lecture_20_anxsacAFAERVedcdvrvVatomy.ppt
BALQISNURAZIZAH1
 
PPTX
National-National Spoil Your Dog Day (1).pptx
recouti384
 
PPTX
adklsfjslkfjnadnsm csc klajdkldjlkamto LCA - Lecture2.pptx
prateekradhakrishn
 
PPTX
DFARS Part 245 - Government Property DOD DFARS
JSchaus & Associates
 
PPTX
pppppppppppppppppppppppppPIR Bothoan.pptx
RHUCARAMORANCatandua
 
PDF
Beyond Free Rides: A Multi-State Assessment of Women's Bus Fare Subsidy Schem...
rheakaran2
 
PPTX
里贾纳大学毕业证含金量如何?全球顶尖学府的“黄金通行证”学历认证
w7gqk0ya
 
PDF
PROJECT : Nirbighna----From भीड To भरोसा
Rishab Acharya
 
PPTX
学位成绩单修改休斯顿大学毕业证(UH毕业证书)文凭证书原版制作购买毕业证流程
asp9i3c
 
PDF
Review on Rythu Bazars preparea a ppt by visual effects
ChiefExecutiveOffice17
 
Virtuosity Award presentation for leaders.pptx
jlong12
 
5th week.pptxazqzqxqxqqxwxwxwxwsxwswxwxwxw
EmanEssa14
 
dawsoncitycommunityrollingadsJuly30_25.pptx
pmenzies
 
Parliament_of_India_Presentationisakwaysgood.pptx
rawatsharukh19
 
Culture_Presentation_Abdul_Rafay_With_Images.pptx
ehsanejaz57
 
RRB Technician Syllabus for Technician Gr I Signal.docx
Sitamarhi Institute of Technology
 
Chemistry_Chemical_Reactions_and_Equations_Class_Notes_WARRIOR_SERIES Copy Co...
SUNILCHUG
 
The quick brown fox jumps over the lazy dog
YohannesGetachew16
 
About The Hindu Society of North Carolin
paragdighe3
 
原版丹佛大学毕业证文凭DU学生证购买在线制作本科文凭
sw6vvn9s
 
lecture_20_anxsacAFAERVedcdvrvVatomy.ppt
BALQISNURAZIZAH1
 
National-National Spoil Your Dog Day (1).pptx
recouti384
 
adklsfjslkfjnadnsm csc klajdkldjlkamto LCA - Lecture2.pptx
prateekradhakrishn
 
DFARS Part 245 - Government Property DOD DFARS
JSchaus & Associates
 
pppppppppppppppppppppppppPIR Bothoan.pptx
RHUCARAMORANCatandua
 
Beyond Free Rides: A Multi-State Assessment of Women's Bus Fare Subsidy Schem...
rheakaran2
 
里贾纳大学毕业证含金量如何?全球顶尖学府的“黄金通行证”学历认证
w7gqk0ya
 
PROJECT : Nirbighna----From भीड To भरोसा
Rishab Acharya
 
学位成绩单修改休斯顿大学毕业证(UH毕业证书)文凭证书原版制作购买毕业证流程
asp9i3c
 
Review on Rythu Bazars preparea a ppt by visual effects
ChiefExecutiveOffice17
 

A Guide to AI for Smarter Nonprofits - Dr. Cori Faklaris, UNC Charlotte

  • 1. A Guide to AI for Smarter Nonprofits Dr. Cori Faklaris University of North Carolina at Charlotte, College of Computing and Informatics Presentation to United Way of Greater Charlotte, April 18, 2024
  • 2. Cori Faklaris ● Assistant Professor and Director of the Security and Privacy Experiences (SPEX) research group, Dept. of Software and Information Systems, College of Computing and Informatics ○ Ph.D., Human-Computer Interaction, School of Computer Science, Carnegie Mellon University ● Human Factors / Psychology focus on Cybersecurity, Privacy, AI/ML ● Past career in news + design, social media ● Past freelance/consultancy business 2 [email protected] Page 2
  • 3. Key takeaways from today’s presentation • AI provides you with “infinite interns.” • Give people permission & guardrails to learn what works with these “interns” and what doesn’t. • Create a roadmap for adding in more AI to assist nonprofit work, along with strategies for bias mitigation 3
  • 4. Overview of Generative AI Adapted from a 2023 talk & course materials 4
  • 5. When you hear ‘AI,’ think ‘statistical pattern-matching’ • Oracle describes AI this way: [Artificial Intelligence] has become a catchall term for applications that perform complex tasks that once required human input, such as communicating with customers online or playing chess. The term is often used interchangeably with … machine learning (ML) and deep learning. Text from What is Artificial Intelligence (AI)? Oracle, n.d. Retrieved May 16, 2023 from https://www.oracle.com/artificial-intelligence/what-is-ai/ Image from Pattern Recognition. GeeksforGeeks. Retrieved May 16, 2023 from https://www.geeksforgeeks.org/pattern-recognition-introduction/ The data is “tokenized” (= made into “chunks” of words, punctuation marks, pixels, etc.) during this process - remember this for later
  • 6. ‘Automation’ and ‘AI’ are related, but separate Automation 6 • Repetitive tasks • Does not learn over time • Aims to mimic human activity, but not necessarily human cognition/intelligence • Follows instructions • Does not necessarily use data outside of what is required for the self-contained tasks it is programmed to do Artificial Intelligence • Dynamic tasks, extrapolation • Learns over time • Aims to mimic some aspects of human cognition/intelligence • Evolves its own instructions • Uses to become “smart” Slide and animations courtesy of Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications [email protected] Page 6
  • 7. ‘Automation’ and ‘AI’ are related, but separate Automation 7 • Repetitive tasks • Does not learn over time • Aims to mimic human activity, but not necessarily human cognition/intelligence • Follows instructions • Does not necessarily use data outside of what is required for the self-contained tasks it is programmed to do Artificial Intelligence • Dynamic tasks, extrapolation • Learns over time • Aims to mimic some aspects of human cognition/intelligence • Evolves its own instructions • Uses to become “smart” As long as they have enough data, AI models now can generate part or all of a creative work. This includes business functions such as reading and writing documents, creating a table or figure to summarize data, programming, and drafting presentations or training (ahem).
  • 8. How Generative AI works (admittedly oversimplified) The system generates text or images using its previously built model of the statistical distributions of tokens (= “chunks” of words, punctuation marks, pixels, etc.) created from its very large training dataset. Image from Pattern Recognition. GeeksforGeeks. Retrieved May 16, 2023 from https://www.geeksforgeeks.org/pattern-recognition-introduction/ Murray Shanahan. 2022. Talking About Large Language Models. arXiv [cs.CL]. Retrieved from http://arxiv.org/abs/2212.03551 Bea Stollnitz. How generative language models work. Retrieved May 10, 2023 from https://bea.stollnitz.com/blog/how-gpt-works/ Doc Chat Image [email protected] Page 8
  • 9. How Generative AI works (admittedly oversimplified) It might make mistakes or “hallucinate” based on the limitations of its process, but the output still might look like what you wanted. Ted Chiang’s analogy = “unreliable photocopier” or a “blurry JPEG” Ted Chiang. 2023. ChatGPT Is a Blurry JPEG of the Web. The New Yorker. Retrieved May 10, 2023 from https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web Murray Shanahan. 2022. Talking About Large Language Models. arXiv [cs.CL]. Retrieved from http://arxiv.org/abs/2212.03551 Bea Stollnitz. How generative language models work. Retrieved May 10, 2023 from https://bea.stollnitz.com/blog/how-gpt-works/ Doc Chat Image [email protected] Page 9
  • 10. Learn how to instruct AI models via ‘prompts’ • One significant factor in the quality of a generative AI’s output is the "prompt," or instructions that the user gives to the AI to begin the interaction. • For best results, the prompt should be specific, detailed, and concise. • It should give the LLM a persona or role to play, and a goal. 10 Rebekah Carter. 2023. How to Talk to an LLM: Prompt Engineering for Beginners. UC Today. Retrieved March 25, 2024 from https://www.uctoday.com/unified-communications/how-to-talk-to-an-llm-llm-prompt-engineering-for-beginners
  • 11. Issues with AI overtrust + biases Adapted from a 2023 talk & course materials 11
  • 12. Human biases inevitably creep into human designs Better Off Ted: “Racial Sensitivity.” Clip titled “Racist Sensors” via YouTube. Retrieved April 9, 2024. More info at https://www.imdb.com/title/tt1346402/
  • 13. Overtrust in AI statistical pattern matching - why? • Going on autopilot • Rationalizing observed failures (eg at least the system sees White people!) • Perceiving low risk • Social pressure/conformity • Being told to trust the system • Seeing others trust the system • Individual differences • Experts will notice when something seems off and be able to respond; non-experts won’t (unless/until outcomes are very bad) “illustration of "trust" spilling over a dam” | DALL-E Adapted from Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications
  • 14. Lots of data means lots of errors, biases can creep in Human biases (whether explicit or implicit) can creep in at any point in the AI data pipeline - but often start with the variety of data collected. Metika Sikka. 2021. The Human Bias-Accuracy Trade-off. Towards Data Science. Retrieved April 9, 2024 from https://towardsdatascience.com/the-human-bias-accuracy-trade-off-ad95e3c612a9 Are genders, ethnicities, classes, regions represented fairly in data? Was data collection skewed (eg data about jobs gathered mostly from U.S. men)? How was the data prepped? Do the training + test datasets meet benchmarks? Did any humans audit them for biases? Is this system even worth building? What are foreseeable risks to people? Is it possible to explain what it did? How should we interpret the results? Can we rely on results in total or in part? Which part? What is the real-world impact? What if the real-world context changes?
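One of the audit questions above — "are groups represented fairly in the data?" — can be checked with a simple count before any model is trained. This is an illustrative sketch; the records, the field name, and the 30% threshold are invented for the example.

```python
from collections import Counter

# Invented sample records; in practice this would be your real dataset.
records = [
    {"region": "urban"}, {"region": "urban"}, {"region": "urban"},
    {"region": "urban"}, {"region": "rural"},
]

# Count each group's share of the data and flag small ones for review.
counts = Counter(r["region"] for r in records)
total = len(records)
flagged = [g for g, n in counts.items() if n / total < 0.30]

for group, n in counts.items():
    print(f"{group}: {n / total:.0%}")
print("Possibly under-represented:", flagged)
```

A check this simple won't catch every bias in the pipeline, but it makes the "was data collection skewed?" question concrete enough for a non-specialist to ask.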
  • 15. Non-zero chance that AI failures occur - How to cope? • Avoid failing early IN PUBLIC • Early periods of low reliability damage trust more than late periods of low reliability (Desai et al., 2013) • Bigger problem for incumbents (eg Google) than for newcomers (eg OpenAI) • Prioritize safety [= absence of unreasonable probability + severity of harms] • People prioritize personal safety over financial cost (Adubor et al., 2017) • For typical end user, computing safety includes data security and privacy guarantees • Be socially vulnerable • Apologizing and explaining the reason for the failure can improve rapport • Pratfall effect • People adapt faster than you think to new technologies & situations (eg mobile live-streaming video) • We know that people over-trust robots even after they have demonstrated failure and made odd requests! (Salem et al. 2015, Morales et al. 2019) Adapted from Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications
  • 16. ‘AI Bill of Rights’ proposes needed human safeguards • Safe and Effective Systems - You should be protected from unsafe or ineffective systems. • Algorithmic Discrimination Protections - You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. • Data Privacy - You should be protected from abusive data practices via built-in protections and have agency over how data about you is used. • Notice and Explanation - You should know that automation or AI is being used and understand how and why it contributes to outcomes that impact you. • Human Alternatives, Consideration, and Fallback - You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
  • 17. Advice* on Using AI for Nonprofits *Valid for 2024 … might be invalidated as the tech improves :-)
  • 18. “AI gives you infinite interns.” Benedict Evans, Technology Analyst Benedict Evans. 2024. AI, and Everything Else. Retrieved from https://www.ben-evans.com/presentations
  • 20. Reframe AI as ‘infinite interns’ available to work Reasoning? ● Tell it what you want and trust it to do it without you? ● Use one to instruct and supervise another? ● Have it act as your “agent”? ● Limited by inability to create new knowledge, lack of persistent memory of task context over time Pattern Extrapolation ● Writing code ● Brainstorming ● Auto-suggest text ● Manipulate images ● Limited to the examples that it has already seen (and those examples may have errors or biases!) Synthesis & Summary ● Get a summary & analysis of big dataset ● Ask it questions ● Combine existing images into new ones ● Limited by trust that you’d give an “intern” to access lots of valuable, confidential data Like any eager-to-please intern, the AI will always give you an answer, an output, SOMETHING. Whether that SOMETHING is actually what you wanted, makes logical or practical sense, or is trustworthy and unbiased, is up to YOU to judge!
  • 21. “Fast, Cheap, or Good Quality – Pick Two” for AI Cheap Open source or public free version vs. buying new Fast Use what’s been built vs. configure brand new tools Good OpenAI paid models, Mistral, ???
  • 22. Pick Fast + Cheap for now to explore use cases • Start using “free” or low-cost AI in small doses so that people get used to it and play around with it. • Schedule an internal review for x months away to discuss these low-stakes experiments & fill out a roadmap to add in paid AI. Cheap Open source or public free version vs. buying new Fast Use what’s been built vs. configure brand new tools Good OpenAI paid models, Mistral, ???
  • 23. Move toward Cheap + Good with bias mitigation • Oversample or overweight data from historically disadvantaged groups • Designate 2-3 people with diverse backgrounds to audit data, AI outputs for biases • May need a data scientist or anti-bias tools to help (eg IBM Research’s AIF360) Cheap Open source or public free version vs. buying new Fast Use what’s been built vs. configure brand new tools Good OpenAI paid models, Mistral, ???
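The "oversample" idea above can be shown in a minimal sketch: resample records from under-represented groups until every group matches the largest one. A real project would lean on a toolkit such as AIF360 rather than hand-rolled code; the data and field name here are invented for illustration.

```python
import random

def oversample(records, group_key, rng):
    """Balance groups by duplicating randomly chosen minority records."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Duplicate randomly chosen members to reach the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Invented, lopsided dataset: 8 records from group A, 2 from group B.
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
balanced = oversample(data, "group", random.Random(0))
```

Note the trade-off this slide's human-audit advice addresses: duplicated records can amplify any errors already present in the minority group's data, so oversampling is a complement to auditing, not a substitute.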
  • 24. Goal is Fast + Good to develop HUMAN potential • Don’t expect AI to let you cut headcount much (it delivers only intern-level quality for now, and may not improve dramatically) • Use people as the AI supervisors who make strategic decisions based on AI- and human-generated info • Provide/subsidize AI services Cheap Open source or public free version vs. buying new Fast Use what’s been built vs. configure brand new tools Good OpenAI paid models, Mistral, ???
  • 25. Nonprofits would benefit from a UW AI facility Provide an AI model (or many to choose from), but build your own interface that includes prompt templates for specific tasks, pretrained personas too Prompt templates: ○ All docs - Fundraising - Operations - Personnel ○ Spreadsheets - Budgeting - Calendars ○ Coding help ○ Web search Advanced: ○ Model + Parameters ○ License Key ○ API Key What do you want to do? Director persona Grant writer persona Outreach persona HR persona Wireframe based on TypingMind AI home page from April 7, 2024 at https://www.typingmind.com/
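One way to sketch the templates-plus-personas interface from this wireframe in code. The persona texts and template fields are invented placeholders, not any real product's API; the point is only that a persona and a fill-in-the-blanks task template can be combined into one prompt.

```python
# Pretrained personas the interface would offer (placeholder text).
PERSONAS = {
    "grant_writer": "You are an experienced nonprofit grant writer.",
    "outreach": "You are a community outreach coordinator.",
}

# Task-specific prompt templates with fill-in fields (invented examples).
TEMPLATES = {
    "fundraising_letter": (
        "Draft a fundraising letter for {campaign}, aimed at {audience}, "
        "about {word_count} words long."
    ),
}

def build_prompt(persona, template, **fields):
    """Combine a pretrained persona with a filled-in task template."""
    return PERSONAS[persona] + "\n\n" + TEMPLATES[template].format(**fields)
```

For example, `build_prompt("grant_writer", "fundraising_letter", campaign="spring appeal", audience="lapsed donors", word_count=300)` yields a complete, specific prompt — so staff get the benefits of good prompting without writing prompts from scratch.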
  • 26. Summary from today’s presentation • For nonprofit workplaces, think of AI as ‘infinite interns.’ • What would you trust an intern to do? • What could they get wrong? (Biases, errors, discrimination, etc.) • Give people permission & guardrails to learn what works with these “interns” and what doesn’t. • Start using “free” or low-cost AI in small doses so that people get used to it and play around with it BEFORE rolling something out publicly. • Create a roadmap for adding in more AI to assist nonprofit work, along with strategies for bias mitigation
  • 27. What questions do you have? ● AI provides you with “infinite interns.” ● Give people permission & guardrails to learn what works with these “interns” and what doesn’t. ● Create a roadmap for adding in more AI to assist nonprofit work, along with strategies for bias mitigation cfaklari@charlotte.edu
  • 28. Extra Slides - Can Use if Time
  • 29. Examples of publicly available Generative AI tools Crowdsourced list of available AI tools: https://bit.ly/UsefulLLMs
  • 30. Employees need AI guidelines or a Use Policy • I include the following in my course syllabus this semester: In this course, students are permitted to use tools such as Stable Diffusion, DALL-E, ChatGPT, and BingChat. In general, permitted use of such tools is consistent with permitted use of non-AI assistants such as Grammarly, templating tools such as Canva, or images or text sourced from the internet or others’ files. No student may submit an assignment or work on an exam as their own that is entirely generated by means of an AI tool. If students use an AI tool or other creative tool to generate, draft, create, or compose any portion of any assignment, they must (a) credit the tool, and (b) identify what part of the work is from the AI tool and what is from themselves. Students are responsible for identifying and removing any factual errors, biases, and/or fake references that are introduced into their work through use of the AI tool.
  • 31. Future $$$ - in-house AI vs. ‘front door’ to vendor • Can build your own AI server & deploy many models, plus give users ability to fine-tune outputs … Screenshot: https://github.com/Lightning-AI/pytorch-lightning
  • 32. Future $$$ - in-house AI vs. ‘front door’ to vendor • … or host a “wrapper” around a paid service such as OpenAI’s ChatGPT + add guidance … Screenshot: https://genai.umich.edu/
  • 33. Future $$$ - in-house AI vs. ‘front door’ to vendor • … or provide ChatGPT, but build your own interface that includes prompt templates for specific tasks, maybe pretrained personas too? Prompt templates: ○ All docs - Fundraising - Operations - Personnel ○ Spreadsheets - Budgeting - Calendars ○ Coding help ○ Web search Advanced: ○ License Key ○ API Key What do you want to do? Director persona Grant writer persona Outreach persona HR persona Wireframe based on TypingMind AI home page from April 7, 2024 at https://www.typingmind.com/
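The "wrapper + guidance" option in the slides above can be sketched minimally: every request forwarded to the vendor's chat API first gets the organization's guidance prepended as a system message. The guidance text is an invented placeholder, and the role/content dicts follow the common chat-completion message convention rather than any one vendor's exact API.

```python
# Placeholder organizational guidance -- each nonprofit would write its own.
ORG_GUIDANCE = (
    "You assist staff at a nonprofit. Never include client personal data "
    "in outputs, and flag any request that involves confidential records."
)

def wrap_request(user_message):
    """Build the message list the wrapper would forward to the vendor API."""
    return [
        {"role": "system", "content": ORG_GUIDANCE},
        {"role": "user", "content": user_message},
    ]
```

Because the wrapper sits between staff and the vendor, it is also the natural place to log usage, enforce the employee Use Policy, and swap vendors later without retraining users.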