AI Explanations as Two-Way Experiences, Led by Users

DesignforContext 236 views 46 slides Jun 30, 2024

About This Presentation

In human communication, explanations serve to increase understanding, overcome communication barriers, and build trust. They are, in most cases, dialogues. In computer science, AI explanations (“XAI”) map how an AI system expresses underlying logic, algorithmic processing, and data sources that ...


Slide Content

AI’s explanations as two-way experiences, led by users
#UXPA24
25 June 2024 © Duane Degler
AI’s explanations
as two-way experiences,
led by users
User Experience
Professionals Association
Duane Degler
[email protected]
https://d4c.link/UXPA24-XAI

LET’S START WITH: WHAT IS AI’S SCOPE?

“I think that discussions of this technology
become much clearer when we replace
the term ‘AI’ with the word ‘automation’.”
AI REFRAMED: Prof. Emily M. Bender
AI in the Workplace: New Crisis or Longstanding Challenge?
Opening remarks for Congressional roundtable, Dr. Emily M. Bender, 9/28/2023.
Text & video: https://medium.com/@emilymenonbender/opening-remarks-on-ai-in-the-workplace-new-crisis-or-longstanding-challenge-eb81d1bee9f
Types of AI applications:
1. Automatic decision systems
2. Different kinds of automated classification
3. Recommender systems – what to promote
4. Automating access to human labor
5. Automation of translation between forms
6. Synthetic media machines (Generative AI)

AI Appears In Many Projects, and Many Guises
for example…
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Search indexing
Translate / transcribe
Monitor content
Copyright / rights watchdogs

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
GPS / Maps
Voice assistants
Type-ahead
Image processing

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Voice assistants
Vacuuming
Thermostats
Surveillance

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Lane assist
Cruise control
Maintenance
(semi-) autonomous
driving

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Customer service / CX
Performance reviews
HR resume
assessment

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Risk assessment
Mortgage valuation
Stock monitoring
Gambling

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Diagnosis
Health record analysis
Surgery support
Robots in hallways

AI Appears In Many Projects, and Many Guises
Society
Your car
Your finances
Your phone
Your internet
Your home
Your health
Your job
Face recognition
Community policing
Robotic warehouses
Deep fakes

AI In Brief
EXPERTISE
Content
(articulated expertise)
Contexts
(framed expertise)
MATH
Supervised
(trained)
Unsupervised
(self-learning)
MODELS
Human-AI teaming
Trained data patterns
Discovered patterns

EXPLAINING “EXPLANATIONS”

●Two-way communication
●Usually iterative – progressing
to mutual understanding
●Communication involves
●Language
●Non-verbal cues
●Shared contexts
●Effective explanation
fosters trust
HUMAN < > HUMAN explanation
Human Human

●UX research, design, and
code logic anticipates specific
needed messages, such as
●Instructions
●Error states
●Confirmations
Non-AI COMPUTER < > HUMAN explanation
Human Static machine AI machine
●User interaction provides
direct response and/or
context to an application
●Confirmations
●Choices
●Contexts & information

●Describing *
●Elaboration on outputs
●Source data/content
●Algorithm logic
●Weightings
●In some cases, rules
governing its processing
●Methods *
●Language
●Visualizations
Comp-Sci View of AI > HUMAN Explanation (“XAI”)
Human – Static machine – AI machine
* The aim, not always the
reality in AI applications
Note: In the period 2021-2022, academic literature on XAI increased about 55-75%.

●A historical perspective of explainable AI, Confalonieri, et al., 9/21/2020
●A useful paper outlining and assessing effectiveness of types of explanations,
aiming to guide ways that machines could provide effective explanations.
https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1391
●Explainable AI, but explainable to whom? Gerlings, et al., 6/10/2021
●Healthcare case study: comprehension of healthcare information by different user communities
(various health patients/practitioners, as well as developers, subject matter experts, healthcare
decision-makers), and how that affects XAI design. http://arxiv.org/abs/2106.05568
●TED: Teaching AI to Explain its Decisions, Hind, et al., 1/27/2019
●The paper states: “Unlike existing methods, it does not attempt to probe the reasoning process of a
model. Instead, it seeks to replicate the reasoning process of a human domain user.”
https://doi.org/10.1145/3306618.3314273
●A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods,
Speith, et al., 6/20/2022
●Assessing structure and implementability of various XAI taxonomies.
https://doi.org/10.1145/3531146.3534639
Examples of Papers with Insights Into XAI UX

Designing for responsible
explanations in Human-AI Teaming?
Let’s consider…
●Humility
●Transparency
●AI eliciting contexts and needs
●Reflecting back user instructions,
intentions and contexts
●Multi-modal interactions
●User control in the experience
●Recognizable implications
●Reducing risk, misunderstanding
“HUMAN-AI TEAMING” Explanation
Human – Static machine – AI machine
This term is often preferred by the HCAI community.

AN INITIAL FRAMEWORK
FOR EXPLANATIONS
A thinking tool for Human-AI explanations.
A framework-in-progress.

Characterize Explanations for 2-way Communication
Human – Static machine – AI machine
A framework-in-progress, a thinking
tool for Human-AI explanations…
As you do this work, your user
requirements and domain will
refine your categories.

Types of Explanations in the Framework
9 Categories
Static,
Rule-Based
Dynamic,
Interactive,
Complex

Types of Explanations in the Framework
1. Instructions
2. Error / Warning Messages
3. Multilingual
4. Substantive
5. Contextual
6. Multi-cultural
7. Equitable
8. Relational
9. Evolving

Instructions
Explanations of application/website actions that are based on known
functions. These explanations help users understand the operation and
sequential steps required by the application.
H/M: When a user performs an action, instructional explanation builds
awareness of process and requirements, which should aid memory
H: In requirements gathering, people explain work processes and policies,
which are then encoded as rules within a system
M: The machine can monitor user actions, including sequence and data
accuracy, identifying patterns needing explanation
Actions / Challenges

Error / Warning Messages
Explanations of an anomaly, based on an application's internal rules. Error
messages describe system or user error. They ideally explain why (help users
understand rules) and give next steps.
H/M: Match explanation level to mutual prior interactions/experience
M: Highlight both the location and the nature of errors, focusing user attention
and prompting for action
H/M: Increase 2-way communication when it may be unclear whether there is
actually an error, rather than an incorrect rule or model
M: Increase explanation depth, quality, if errors are identified as repeated,
or user is “stuck”
H: Explanation to elaborate on mental model (particularly for assumed rules)
M: Elicit more user context about task/end goal, to provide different levels
and types of information as explanation
Actions / Challenges
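The escalation idea above (increase explanation depth when the same error repeats or the user is “stuck”) can be sketched as a simple lookup over pre-written explanation levels. This is an illustrative sketch only; the error code and templates are hypothetical, not from the talk.

```python
def explanation_for(error_code, repeat_count, templates):
    """Return a progressively deeper explanation as the same error repeats.
    `templates` maps an error code to explanations ordered shallow -> deep."""
    levels = templates[error_code]
    return levels[min(repeat_count, len(levels) - 1)]

# Hypothetical error code and explanation levels, for illustration only.
templates = {
    "date_format": [
        "Please check the date format.",
        "Dates must use YYYY-MM-DD, e.g. 2024-06-25.",
        "This field accepts only ISO 8601 dates (YYYY-MM-DD) because "
        "downstream scheduling rules parse that format. Try 2024-06-25.",
    ],
}

print(explanation_for("date_format", 0, templates))  # first attempt: brief
print(explanation_for("date_format", 4, templates))  # user stuck: deepest level
```

In a real system the repeat count would come from session history, and the deepest level might hand off to richer 2-way elicitation rather than more text.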

Multilingual
Explanations that are available in the user’s preferred language, and
translations are accurate although some terms might not translate exactly.
M: Languages have different lengths and typography, so depending on the length of
explanations there may be space considerations that affect comprehension
M: Identify the languages that are available, and the source language of the information
used to support an explanation; set expectations when an explanation is confusing
H/M: In some cases, words or concepts may not be available in the language of choice;
users may need to (or decide to) seek information in another language (e.g. “Use
the English because there isn’t an Italian equivalent”)
M: Recognize homographs, where the same word may have two meanings, but not in all
cultures, and maybe not in the language that is in use at the moment
M: Accents may matter, and could reduce user comprehension speed in spoken situations
H/M: Some terms may convey a different sense of urgency, severity or importance in a
particular language, which could impact understanding
Actions / Challenges
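The fallback case above (“Use the English because there isn’t an Italian equivalent”) can be sketched as a lookup that falls back to the source language and flags that it did so, letting the UI set expectations. The catalog structure and keys here are hypothetical, for illustration only.

```python
def localized_explanation(key, lang, catalog, fallback="en"):
    """Return the explanation in the user's preferred language, falling back
    to the source language when no translation exists. The second return
    value flags the fallback so the UI can tell the user what happened."""
    entry = catalog[key]
    if lang in entry:
        return entry[lang], False
    return entry[fallback], True

# Hypothetical message catalog, for illustration.
catalog = {
    "quota_exceeded": {
        "en": "Storage quota exceeded.",
        "it": "Quota di archiviazione superata.",
    },
}

text, fell_back = localized_explanation("quota_exceeded", "de", catalog)
print(text, fell_back)  # English text, flagged as a fallback
```

A production system would also carry the source language of the underlying information, per the slide, so the explanation can say where its content came from.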

Substantive
Evidence, "provability." Focusing on subject-specific information, describing
reliability. Possibly referring to AI scoring (the output's fit with different
"feature" categorizations).
H/M: With traditional search, individual results are given as evidence (with human review
= effort), but the ranking algorithm is not transparent; then, human user selections
are not “explainable” to the machine, for it to refine its acquisition
M: Visualizing the information space to clarify dominant subject areas and distribution
M: Subject/sources authority, theories, methodologies, available materials/research
M: Layers of information, such as definitions, “what is…” guidance, typical questions
M: Information scoring, pointing out contrasts/volatility
H/M: How best to explain uncertainty or likely bias?
H/M: Is it useful for AI to ask for next steps? Will users request next steps for evidence?
H: Elaboration or change expressed in their information scope/need
Actions / Challenges

Contextual
Users offer information about contexts that affect their needs. Machine
explanation reflects back understanding of user goals, tasks, experience –
and clarifies limitations, info depth, etc.
M: Elicit/request relevant contexts for an interaction
H/M: Share dimensions that influence decisions; such as people involved, additional
medical/health conditions, diagnostics from sensors/other machines
H/M: How might additional context(s) affect rules/algorithms?
M: Confirm (explain) understanding of contexts and what matters to the task being done
by human and machine; confirm understanding and impact of any context changes
M: Offer the appropriate information –and only that information –to fit the context
H: How to learn how various context aspects affect algorithms and information
acquisition?
M: How can the machine effectively/efficiently elicit contexts?
H/M: In medical situations, comparing and if needed combining diagnostic models
Actions / Challenges
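The “confirm (explain) understanding of contexts” step above amounts to reflecting the machine’s model of the user’s situation back to them for correction before acting on it. A minimal sketch, with illustrative field names:

```python
def reflect_context(context):
    """Reflect the machine's understanding of the user's stated context
    back to them for confirmation before acting on it."""
    parts = [f"{field.replace('_', ' ')}: {value}"
             for field, value in context.items()]
    return ("Here is what I understood about your situation "
            "(please correct anything wrong):\n- " + "\n- ".join(parts))

# Illustrative context a user might have supplied during elicitation.
message = reflect_context({
    "goal": "compare treatment options",
    "experience": "first-time patient",
    "constraint": "appointment in two weeks",
})
print(message)
```

The point of the reflection is the 2-way loop: the user's corrections become new context, which should in turn change what the system explains and how.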

Multi-Cultural
The degree to which machines can reflect cultural awareness in
explanations. Framing explanations to be user-centered (align with intent)
and societally-centered (enhance trust).
M: How to identify cues of particular cultural alignment?
M: How general or specific should explanations be? Should they be culturally influenced?
Is it possible to not influence culturally? How to be transparent and sensitive?
H/M: What evidence in context expressions reflect a user’s cultural expectations? How
much iterative user profiling is needed to assess this?
H/M: “Reading the Air” – a Japanese phrase for sensing and understanding the cultural
expectations of other parties in an interaction; being attuned and sensitive to them
M: Setting the right tone: Authoritative, with humility. Help users balance confidence in
the machine, confidence in their judgment
H/M: Level of detail that is welcome, or burdening, or interpreted in unintended ways
Actions / Challenges

Equitable
Explanations avoid inherent biases that can lead to unfair treatment of
individuals. Different contextual elements, when combined, could create
unexpected concerns among users.
M: Unmoderated tone, such as using declarative language in chat, gives a false sense of
authority or certainty from an AI system, which risks disempowering people
H/M: Some language “tones” will affect people differently, which risks self-questioning by
the user, or even possibly by the AI
M: Consider what communication aspects influence the power dynamics or receptivity of
the user, such as timing of delivering an explanation, pace at which it is delivered
(fast or slow), interruption or over-talking as part of turn-taking in conversation
M: Communicating in ways that might be perceived as dismissive of the recipient (such as
a style perceived as “mansplaining”)
H: Having the feeling of “missing out” because information is excluded or described as not
relevant to the user (e.g. patients not getting content seen by doctors/nurses)
M: Language, vocabulary, tone can affect sense of “us” or “other”
Actions / Challenges

Relational
Human-AI interaction now, and in future, will be longitudinal. Knowledge of
each other, previous interactions, and changing experience/expectations
need to be expressed and explained.
H/M: Longitudinal use (multiple sessions over time) relies on familiarity (and a sense of
memory/history) between the user and system. An expectation of updating
understandings should be in any model
H/M: How should changes in learning, experience, intents, contexts, models, and the
information space be shared?
H: In diagnostic decision support, the ongoing condition and treatments will be evolving.
This requires very current, shared contextual information
H/M: In conversations, are multiple parties involved over time?
M: How “familiar” should a machine be? Can there be a move from low-context to
high-context communication? Should language use/form change with knowledge?
H: Accurate, consistent explanations engender trust. What else is key to support trust?
Actions / Challenges

Evolving
Throughout the life of an AI system, it evolves as data changes, interactions
are refined, models vary, and human expectations change. Evolution must be
continually monitored/explained.
M: Routine diagnostics and proactive AI explanations can signal evolution in the info
space, internal models, or human uses
H: Developers, data scientists and UX must validate and explain any type of change,
however small, in a system
M: Certain types of data benefit from visualization: trend data, scoring information
features, statistics of human use/ responses, problems with human request input
H/M: Iteration can lead to evolution – how does an AI system explain internal ecosystem
iterations or model drift?
H: Can competing models (between different internal agents) be identified? How would
agents provide explanations/evidence?
M: How are potential paradigm shifts explained when emerging evidence begins to
diverge from “known facts”?
Actions / Challenges

Types of Explanations to Train for XAI (UX involved)
1. Instructions
2. Error / Warning Messages
3. Multilingual
4. Substantive
5. Contextual
6. Multi-cultural
7. Equitable
8. Relational
9. Evolving

THE ELEPHANT IN THE ROOM ?

Where AI’s Focus can Support Explanation
EXPERTISE
Content
(articulated expertise)
Contexts
(framed expertise)
MATH
Supervised
(trained)
Unsupervised
(self-learning)
MODELS
Human-AI teaming
Trained data patterns
Discovered patterns

The Challenge
EXPERTISE
Content
(articulated expertise)
Contexts
(framed expertise)
MATH
Supervised
(trained)
Unsupervised
(self-learning)
MODELS
Human-AI teaming
Trained data patterns
Discovered patterns
This is not the way LLMs/Generative AI are trained.

●Generative AI uses probabilities in a statistical engine for word choices (“tokens”)
●Type-ahead on steroids
●Word choices are based on word pattern frequency, from example content
harvested from Large amounts of Language, shaped into a Model
●The Internet
●Word choices are based on language from the prompt that seeks the information
LLMs and Generative AI… the Elephant in the Room
Air Canada offers reduced bereavement fares… submit your ticket … within 90 days
“…our Bereavement policy does not allow refunds for travel that has already happened.”
and the reality:
https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/
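The “type-ahead on steroids” point can be made concrete with a toy version of the same mechanism: pick each next word in proportion to how often it followed the previous word in example text. Real LLMs condition on long token contexts with learned weights rather than raw bigram counts; this sketch only illustrates the statistical, frequency-driven character of the word choices.

```python
import random
from collections import Counter, defaultdict

# Tiny example corpus standing in for "large amounts of language".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a one-word "prompt".
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The sampler has no notion of truth, only of which word patterns occurred; that is why fluent output can still assert a policy that does not exist, as in the Air Canada case.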

The Focus of LLMs and Generative AI
EXPERTISE
Content
(articulated expertise)
Contexts
(framed expertise)
MATH
Supervised
(trained)
Unsupervised
(self-learning)
MODELS
Human-AI teaming
Trained data patterns
Discovered patterns

●LLMs can be effective when used for initial NLP (Natural Language Processing)
●BUT – they are only able to recognize user context and AI analytical processes…
IF – they have been trained specifically for those purposes
LLMs and Generative AI… the Elephant in the Room
What about using “Retrieval Augmented Generation” (RAG) ?
What about “Knowledge Graphs” (KG) + RAG ?
Even RAG + KG can’t explain – in context – reliably
●All Explanatory Contexts must be in your source documents – are they?
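To make the RAG point concrete: retrieval only prepends passages found in your source documents to the prompt, so explanatory context that was never written down there cannot reach the model. A minimal sketch, with naive word-overlap retrieval standing in for an embedding-based retriever and illustrative documents:

```python
import re

def tokens(text):
    """Lowercase word set; a crude stand-in for an embedding model."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q = tokens(query)
    ranked = sorted(documents, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Prepend retrieved passages so the model answers from them. If the
    needed explanatory context is not in the documents, it never appears."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative policy snippets (not Air Canada's actual documents).
docs = [
    "Bereavement fares must be requested before travel.",
    "Refunds are not available after travel has happened.",
    "Pets may travel in the cabin for a fee.",
]
print(build_prompt("bereavement refund after travel happened", docs))
```

Note what the sketch cannot do: if the documents omit the contexts users actually ask about, retrieval surfaces the nearest-sounding passage instead, and the generated "explanation" drifts from policy.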

Challenges:
●Declarative language
●Verbosity
●Potential for
●Lack of consistency (different answers for the same situations)
●Degraded performance or “Model Collapse”
And broadly:
●Environmental challenges
●Societal challenges
…even if these last items are not your work focus, their subtext affects TRUST
LLMs and Generative AI… the Elephant in the Room

HOW CAN WE TRAIN AI
FOR 2-WAY EXPLANATIONS?

Training AI is a Deliberate Design & Data Act
EXPERTISE
Content
(articulated expertise)
Contexts
(framed expertise)
MATH
Supervised
(trained)
Unsupervised
(self-learning)
MODELS
Human-AI teaming
Trained data patterns
Discovered patterns

UX Must Be a Continuous Process
Startup, design, initial training
Adoption, maturing
Visioning
Research
Design
Training
Use
Feedback
Evolution through use

Time Cycles
Rich Context Models
Continuous, evolutionary practices
Additional Design
Models:
Collaboration /
Action Framework
AI design, method explorations
D. Degler, C. Smith, R. Evanhoe
2021-2024
https://d4c.link/IAC21
https://d4c.link/IAC22

As you do this work, your
user requirements and domain
will refine your categories.
Train AI for Explanations with 2-way Communication
Human – Static machine – AI machine

AI’s explanations
as two-way experiences,
led by users
User Experience Professionals
Association Conference
Duane Degler
[email protected]
https://d4c.link/UXPA24