A Guide to AI for Smarter Nonprofits - Dr. Cori Faklaris, UNC Charlotte

CoriFaklaris 238 views 33 slides Jun 07, 2024

About This Presentation

Working with data is a challenge for many organizations. Nonprofits in particular may need to collect and analyze sensitive, incomplete, and/or biased historical data about people. In this talk, Dr. Cori Faklaris of UNC Charlotte provides an overview of current AI capabilities and weaknesses to cons...


Slide Content

A Guide to AI
for Smarter Nonprofits
Dr. Cori Faklaris
University of North Carolina at Charlotte, College of Computing and Informatics
Presentation to United Way of Greater Charlotte, April 18, 2024

Cori Faklaris
https://spexlab.org
https://corifaklaris.com

●Assistant Professor and Director of the
Security and Privacy Experiences (SPEX)
research group, Dept. of Software and Information
Systems, College of Computing and Informatics
○Ph.D., Human-Computer Interaction, School of
Computer Science, Carnegie Mellon University
●Human Factors / Psychology focus on
Cybersecurity, Privacy, AI/ML
●Past career in news + design, social media
●Past freelance/consultancy business
2 [email protected] Page 2

Key takeaways from today’s presentation
•AI provides you with “infinite interns.”
•Give people permission & guardrails to learn what works with
these “interns” and what doesn’t.
•Create a roadmap for adding more AI to assist nonprofit work, along with strategies for bias mitigation.

Overview of Generative AI
Adapted from a 2023 talk & course materials

When you hear ‘AI,’ think ‘statistical pattern-matching’
•Oracle describes AI this way:
[Artificial Intelligence] has become a catchall
term for applications that perform complex tasks
that once required human input, such as
communicating with customers online or playing
chess.
The term is often used interchangeably with …
machine learning (ML) and deep learning.
Text from What is Artificial Intelligence (AI)? Oracle, n.d. Retrieved May 16, 2023 from https://www.oracle.com/artificial-intelligence/what-is-ai/
Image from Pattern Recognition. GeeksforGeeks. Retrieved May 16, 2023 from https://www.geeksforgeeks.org/pattern-recognition-introduction/
The data is “tokenized” (made into “chunks” of words, punctuation marks, pixels, etc.) during this process; remember this for later.
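As an illustration of the tokenizing described above, here is a toy sketch. This is not a real tokenizer; production models use learned subword vocabularies (e.g., byte-pair encoding), but the idea of chunking text is the same:

```python
# Illustrative only: a toy "tokenizer" that chunks text into words and
# standalone punctuation marks, loosely mimicking how real tokenizers
# break input into pieces a model can count and predict.
import re

def toy_tokenize(text):
    # \w+ matches runs of word characters; [^\w\s] matches lone punctuation
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("Nonprofits collect sensitive data.")
# → ['Nonprofits', 'collect', 'sensitive', 'data', '.']
```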

‘Automation’ and ‘AI’ are related, but separate
Automation
•Repetitive tasks
•Does not learn over time
•Aims to mimic human activity, but not
necessarily human
cognition/intelligence
•Follows instructions
•Does not necessarily use data outside
of what is required for the
self-contained tasks it is programmed
to do
Artificial Intelligence

•Dynamic tasks, extrapolation
•Learns over time
•Aims to mimic some aspects of human
cognition/intelligence
•Evolves its own instructions
•Uses data to become “smart”
Slide and animations courtesy of Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications
[email protected] Page 6

As long as they have enough data, AI models can now generate part or all of a creative work.

This includes business functions such as reading and
writing documents, creating a table or figure to
summarize data, programming, and drafting
presentations or training (ahem).

How Generative AI works (admittedly oversimplified)
The system generates text or images using its previously built model of the
statistical distributions of tokens (= “chunks” of words, punctuation marks,
pixels, etc.) created from its very large training dataset.

Image from Pattern Recognition. GeeksforGeeks. Retrieved May 16, 2023 from https://www.geeksforgeeks.org/pattern-recognition-introduction/
Murray Shanahan. 2022. Talking About Large Language Models. arXiv [cs.CL]. Retrieved from http://arxiv.org/abs/2212.03551
Bea Stollnitz. How generative language models work. Retrieved May 10, 2023 from https://bea.stollnitz.com/blog/how-gpt-works/

How Generative AI works (admittedly oversimplified)
It might make mistakes or “hallucinate” based on the limitations of its
process, but the output still might look like what you wanted.
Ted Chiang’s analogy = “unreliable photocopier” or a “blurry JPEG”

Ted Chiang. 2023. ChatGPT Is a Blurry JPEG of the Web. The New Yorker. Retrieved May 10, 2023 from https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
Murray Shanahan. 2022. Talking About Large Language Models. arXiv [cs.CL]. Retrieved from http://arxiv.org/abs/2212.03551
Bea Stollnitz. How generative language models work. Retrieved May 10, 2023 from https://bea.stollnitz.com/blog/how-gpt-works/

Learn how to instruct AI models via ‘prompts’
•One significant factor in the quality of a generative AI’s output is
the "prompt," or instructions that the user gives to the AI to begin
the interaction.
•For best results, the prompt should be specific, detailed, and
concise.
•It should give the LLM a persona or role to play, and a goal.
Rebekah Carter. 2023. How to Talk to an LLM: Prompt Engineering for Beginners. UC Today. Retrieved March 25, 2024 from https://www.uctoday.com/unified-communications/how-to-talk-to-an-llm-llm-prompt-engineering-for-beginners
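One way to put that advice into practice is a reusable prompt template. The persona, goal, and wording below are invented examples for illustration:

```python
# A hypothetical prompt template illustrating the advice above: give the
# model a persona, a goal, and specific, detailed, concise instructions.
def build_prompt(persona, goal, details):
    return (
        f"You are {persona}.\n"
        f"Your goal: {goal}\n"
        f"Instructions: {details}\n"
        "Keep the response under 200 words."
    )

prompt = build_prompt(
    persona="a grant writer for a small food-security nonprofit",
    goal="draft the opening paragraph of a grant proposal",
    details="Emphasize measurable community impact; avoid jargon.",
)
```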

Issues with AI overtrust + biases
Adapted from a 2023 talk & course materials

Human biases inevitably creep into human designs
[email protected] Page 12
Better Off Ted: “Racial Sensitivity.” Clip titled “Racist Sensors” via YouTube. Retrieved April 9, 2024. More info at https://www.imdb.com/title/tt1346402/

Overtrust in AI statistical pattern matching - why?
•Going on autopilot
•Rationalizing observed failures (e.g., at least the system sees White people!)
•Perceiving low risk
•Social pressure/conformity
•Being told to trust the system
•Seeing others trust the system
•Individual differences
•Experts will notice when something seems
off and be able to respond; non-experts
won’t (unless/until outcomes are very bad)
“illustration of "trust" spilling over a dam” | DALL-E
Adapted from Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications

Lots of data means lots of errors; biases can creep in
Human biases (whether explicit or implicit) can creep in at any point in the AI data pipeline, but this often starts with the variety of data collected.
Metika Sikka. 2021. The Human Bias-Accuracy Trade-off. Towards Data Science. Retrieved April 9, 2024 from https://towardsdatascience.com/the-human-bias-accuracy-trade-off-ad95e3c612a9
Questions to ask at each stage of the pipeline:
•Is this system even worth building? What are foreseeable risks to people?
•Are genders, ethnicities, classes, and regions represented fairly in the data?
•Was data collection skewed (e.g., data about jobs gathered mostly from U.S. men)?
•How was the data prepped?
•Do the training + test datasets meet benchmarks? Did any humans audit them for biases?
•Is it possible to explain what the system did? How should we interpret the results?
•Can we rely on results in total or in part? Which part?
•What is the real-world impact? What if the real-world context changes?

Non-zero chance that AI failures occur - How to cope?
•Avoid failing early IN PUBLIC
•Early periods of low reliability damage trust more than late periods of low reliability (Desai et al., 2013)
•Bigger problem for incumbents (e.g., Google) than for newcomers (e.g., OpenAI)
•Prioritize safety [= absence of unreasonable probability + severity of harms]
•People prioritize personal safety over financial cost (Adubor et al., 2017)
•For typical end user, computing safety includes data security and privacy guarantees
•Be socially vulnerable
•Apologizing and explaining the reason for the failure can improve rapport
•Pratfall effect
•People adapt faster than you think to new technologies & situations (e.g., mobile live-streaming video)
•We know that people over-trust robots even after they have demonstrated failure and made odd requests! (Salem et
al. 2015, Morales et al. 2019)
Adapted from Dr. Samantha Reig, University of Massachusetts - Lowell, 2022 private communications

‘AI Bill of Rights’ proposes needed human safeguards
•Safe and Effective Systems - You should be protected from unsafe or ineffective
systems.
•Algorithmic Discrimination Protections - You should not face discrimination by
algorithms, and systems should be used and designed in an equitable way.
•Data Privacy - You should be protected from abusive data practices via built-in
protections and have agency over how data about you is used.
•Notice and Explanation - You should know that automation or AI is being used
and understand how and why it contributes to outcomes that impact you.
•Human Alternatives, Consideration, and Fallback - You should be able to opt
out, where appropriate, and have access to a person who can quickly consider
and remedy problems you encounter.

Advice* on Using AI for Nonprofits
*Valid for 2024 … might be invalidated as the tech improves :-)

“AI gives you infinite interns.”
Benedict Evans, Technology Analyst
Benedict Evans. 2024. AI, and Everything Else. Retrieved from https://www.ben-evans.com/presentations

Reframe AI as ‘infinite interns’ available to work
Reasoning?
●Tell it what you want and
trust it to do it without you?
●Use one to instruct and
supervise another?
●Have it act as your “agent”?
●Limited by inability to
create new knowledge, lack
of persistent memory of
task context over time
Pattern Extrapolation
●Writing code
●Brainstorming
●Auto-suggest text
●Manipulate images
●Limited to the
examples that it has
already seen (and
those examples may
have errors or biases!)
Synthesis & Summary
●Get a summary &
analysis of big dataset
●Ask it questions
●Combine existing
images into new ones
●Limited by trust that
you’d give an “intern” to
access lots of valuable,
confidential data

Like any eager-to-please intern, the AI will always
give you an answer, an output, SOMETHING.

Whether that SOMETHING is actually what you
wanted, makes logical or practical sense, or is
trustworthy and unbiased, is up to YOU to judge!

“Fast, Cheap, or Good Quality – Pick Two” for AI
•Cheap: open source or a public free version vs. buying new
•Fast: use what’s been built vs. configuring brand-new tools
•Good: OpenAI paid models, Mistral, ???

Pick Fast + Cheap for now to explore use cases
•Start using “free” or
low-cost AI in small doses so
that people get used to it
and play around with it.
•Schedule an internal review
for x months away to discuss
these low-stakes
experiments & fill out a
roadmap to add in paid AI.

Move toward Cheap + Good with bias mitigation
•Oversample or overweight
data from historically
disadvantaged groups
•Delegate 2-3 people with
diverse backgrounds to audit
data, AI outputs for biases
•May need a data scientist or anti-bias tools to help (e.g., IBM Research’s AIF360)
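The oversampling tactic above can be sketched in a few lines. The group labels and record counts here are invented for illustration; real audits should use domain-appropriate categories and tools such as AIF360:

```python
# A minimal sketch of oversampling: duplicate records from
# underrepresented groups until every group is equally represented.
import random

# Hypothetical dataset where group "B" is underrepresented
records = [{"group": "A"}] * 8 + [{"group": "B"}] * 2

def oversample(records, group_key="group", rng=random):
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        # Draw extra copies at random until this group reaches the target
        balanced.extend(rng.choices(rows, k=target - len(rows)))
    return balanced

balanced = oversample(records)  # each group now contributes 8 records
```

Oversampling only rebalances representation; it cannot fix labels or measurements that were biased when collected, which is why the human audit step above still matters.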

Goal is Fast + Good to develop HUMAN potential
•You won’t be able to reduce headcount much (AI only gives intern-level quality now and might not improve a lot)
•Use people to be the AI
supervisors and make
strategic decisions based on
AI + human generated info
•Provide/subsidize AI services

Nonprofits would benefit from a UW AI facility
Provide an AI model (or many to choose from), but build your own interface that includes prompt templates for specific tasks, plus pretrained personas.
Prompt templates:
○ All docs
-Fundraising
-Operations
-Personnel
○ Spreadsheets
-Budgeting
-Calendars
○ Coding help
○ Web search

Advanced:
○ Model + Parameters
○ License Key
○ API Key
What do you want to do?
Director
persona
Grant writer
persona
Outreach
persona
HR
persona
Wireframe based on TypingMind AI home page from April 7, 2024 at https://www.typingmind.com/
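The wireframe above could be backed by a configuration as simple as the sketch below. Every template, persona, and name here is illustrative, not taken from TypingMind or any real product:

```python
# A hypothetical configuration for a shared nonprofit AI "front door":
# prompt templates keyed by task area, plus pretrained persona preambles.
PROMPT_TEMPLATES = {
    "fundraising": "Draft a donor appeal letter for {campaign}.",
    "budgeting": "Summarize this budget spreadsheet: {data}",
    "coding": "Explain and suggest a fix for this code: {snippet}",
}

PERSONAS = {
    "grant_writer": "You are an experienced nonprofit grant writer.",
    "outreach": "You are a community outreach coordinator.",
}

def make_prompt(task, persona, **fields):
    # Combine a persona preamble with a filled-in task template
    return PERSONAS[persona] + "\n" + PROMPT_TEMPLATES[task].format(**fields)
```

Centralizing templates this way lets a United Way-style facility encode good prompting habits once, so individual nonprofits don't each have to learn prompt engineering from scratch.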

Summary from today’s presentation
•For nonprofit workplaces, think of AI as ‘infinite interns.’
•What would you trust an intern to do?
•What could they get wrong? (Biases, errors, discrimination, etc.)
•Give people permission & guardrails to learn what works with
these “interns” and what doesn’t.
•Start using “free” or low-cost AI in small doses so that people get used to
it and play around with it BEFORE rolling something out publicly
•Create a roadmap for adding more AI to assist nonprofit work, along with strategies for bias mitigation.

What questions do you have?
●AI provides you with “infinite interns.”
●Give people permission & guardrails to learn what works
with these “interns” and what doesn’t.
●Create a roadmap for adding more AI to assist nonprofit work, along with strategies for bias mitigation
cfaklari@charlotte.edu

Extra Slides - Can Use if Time

Examples of publicly available Generative AI tools
Crowdsourced list of
available AI tools:
https://bit.ly/UsefulLLMs

Employees need AI guidelines or a Use Policy
•I include the following in my course syllabus this semester:
In this course, students are permitted to use tools such as Stable Diffusion, DALL-E,
ChatGPT, and BingChat. In general, permitted use of such tools is consistent with
permitted use of non-AI assistants such as Grammarly, templating tools such as
Canva, or images or text sourced from the internet or others’ files. No student may
submit an assignment or work on an exam as their own that is entirely generated by
means of an AI tool. If students use an AI tool or other creative tool to generate, draft,
create, or compose any portion of any assignment, they must (a) credit the tool, and (b)
identify what part of the work is from the AI tool and what is from themselves.
Students are responsible for identifying and removing any factual errors, biases,
and/or fake references that are introduced into their work through use of the AI tool.

Future $$$ - in-house AI vs. ‘front door’ to vendor
•Can build your own AI server & deploy many models, plus give users the ability to fine-tune outputs …
Screenshot: https://github.com/Lightning-AI/pytorch-lightning

Future $$$ - in-house AI vs. ‘front door’ to vendor
•… or host a “wrapper” around a paid service such as OpenAI’s ChatGPT + add guidance …
Screenshot: https://genai.umich.edu/

Future $$$ - in-house AI vs. ‘front door’ to vendor
•… or provide ChatGPT, but build your own interface that includes prompt templates for specific tasks, maybe pretrained personas too?
Prompt templates:
○ All docs
-Fundraising
-Operations
-Personnel
○ Spreadsheets
-Budgeting
-Calendars
○ Coding help
○ Web search


Advanced:
○ License Key
○ API Key
What do you want to do?
Director
persona
Grant writer
persona
Outreach
persona
HR
persona
Wireframe based on TypingMind AI home page from April 7, 2024 at https://www.typingmind.com/