Charting Our Course: Information Professionals as AI Navigators
bpichman
About This Presentation
As we reflect on our collaborative journey into the discussions at SLA, this closing session focuses on how special libraries can collectively shape an inclusive, innovative future. Join Brian Pichman again as we finish our map of the AI frontier for building partnerships and interdisciplinary teams to drive projects that align with the specialized missions of our libraries. This talk emphasizes the power of collaboration in amplifying the positive impacts of AI, from enhancing user engagement to analyzing data and research. When you leave the beautiful University of Rhode Island campus, take with you actionable steps for fostering a culture of continuous collaboration and innovation, ensuring that our libraries remain at the forefront of shaping the AI-laden frontiers.
Size: 5.72 MB
Language: en
Added: Aug 28, 2024
Slides: 33 pages
Slide Content
Charting Our Course: Information Professionals as AI Navigators
Brian Pichman
How will AI show up in our world?
Copyright and Privacy:
• Evaluate the data available to the AI model, considering its sources and the impact of sourcing methods on model outputs.
• Ensure your ‘text and data-mining addendum’ clearly outlines the terms and conditions for AI usage in your work.
Advice and Expertise:
• Identify the various AI tools available and their optimal applications.
• Guide patrons toward services that incorporate robust risk management practices.
Advocacy and Policy:
• Address the dual nature of social media, which promotes open collaboration and spreads misinformation. Teach good epistemic practices to navigate this landscape.
• Recognize the complexities in the Open Access movement, where information overload coexists with access issues, and strategize accordingly.
Use of AI in publishing
Different publishers take different positions; common variants include:
• AI cannot be an author. Use of AI must be disclosed. No use of AI to create or edit images. No use in review or editing.
• AI cannot be an author, nor can AI-authored publications be cited. Use of AI must be disclosed, including the full prompts used. No use of AI to create images without permission. No use of AI in review.
• AI use must be disclosed. AI can be used with a detailed description in the Methods section.
https://www.linkedin.com/in/williamgunn/
Privacy concerns
We also use data from versions of ChatGPT and DALL·E for individuals. Data from ChatGPT Team, ChatGPT Enterprise, and the API Platform (after March 1, 2023) isn't used for training our models.
We will not train our models on any Materials that are not publicly available, except in two circumstances: if you provide Feedback to us and if your Materials are flagged for trust and safety review.
Gemini Apps use your past conversations, location, and related info to generate a response. Google uses conversations (as well as feedback and related data) from Gemini Apps users to improve Google products (such as the generative machine-learning models that power Gemini Apps). Human review is a necessary step of the model improvement process. Through their review, rating, and rewrites, humans help enable quality improvements of generative machine-learning models like the ones that power Gemini Apps.
https://www.linkedin.com/in/williamgunn/
Risk management plans
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to test systems before release, collaborate with government and academia, invest in cybersecurity, build watermarking systems, publicly disclose capabilities, and research bias and privacy issues.
OpenAI:
• Safety Team for existing models
• Preparedness Framework for frontier models
• Assess and evaluate capabilities in persuasion, cybersecurity, CBRN threats, and autonomous replication
• Superalignment for AGI/ASI: use AI to help align AGI
Anthropic: Responsible Scaling Policy. The plan outlines safety levels (ASL 1-5) and details plans to detect capabilities that have advanced to the next level and to decide whether and how the model should be deployed.
Google mostly talks about cybersecurity and their research.
Microsoft has a template for individual teams to design their own plans.
Amazon has a set of tools to allow model builders to specify topics to be avoided and to understand how a dataset might lead to biased or unexpected outputs.
https://www.linkedin.com/in/williamgunn/
Ethical AI
• Definition: AI systems that adhere to agreed ethical principles ensuring fairness, transparency, and accountability.
• Ethical AI preserves user trust and protects against harmful biases.
• Core Principles: Transparency, justice & fairness, non-maleficence, responsibility, and privacy.
• Common Ethical Issues: Bias in AI algorithms, privacy concerns, and decision-making transparency.
• Relevance to Libraries: How can these issues affect library services, such as personalized recommendations and digital collections management?
Why does this matter?
• User Trust: Maintaining user trust is paramount for library services.
• Social Responsibility: Libraries have a duty to promote inclusivity, prevent discrimination, and be sources of truth.
Five Core Ethical Principles
• Transparency: Making AI decisions understandable to users.
• Justice & Fairness: Ensuring AI systems do not perpetuate inequalities.
• Non-maleficence: Preventing harm to users by AI decisions.
• Responsibility: Accountability for AI impacts.
• Privacy: Safeguarding user data.
Developing Our Own AI For Library Use
Opportunities
• Personalization / User Preferences
• Efficient Data Management
• Improved Accessibility
Risks
• Privacy issues: who owns the data?
• Bias
• Transparency of AI decisions
Mitigating Risks
• Audit how the AI is working: review responses, test the system, etc.
• Diversified Data Sets: diverse dataset training and involving community feedback
• Data Minimization
• Encryption
• Transparent Data Policies
• Explainable AI (XAI): A type of AI that provides insights into AI decision processes.
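For context on the "Explainable AI" and "audit" points above, here is a minimal Python sketch (not from the presentation) of one explainability technique: fit a simple, interpretable model and print its per-feature weights so staff can see what drives a recommendation. The feature names and data are invented for illustration.

# Minimal, hypothetical explainability sketch: inspect per-feature weights
# of an interpretable model so staff can audit recommendation behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy patron-interaction features: [visits_per_month, avg_checkouts, programs_attended]
X = np.array([[4, 6, 1], [1, 0, 0], [8, 12, 3], [2, 1, 0], [6, 9, 2], [0, 1, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = patron engaged with the recommended program

model = LogisticRegression().fit(X, y)

# Per-feature weights a human can inspect; larger magnitude = more influence.
for name, coef in zip(["visits_per_month", "avg_checkouts", "programs_attended"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")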
Case Studies – Anonymized Data
• New York Public Library uses anonymized data to improve services without compromising individual privacy.
• https://www.nypl.org/press/new-york-public-library-announces-participation-department-commerce-consortium-dedicated-ai
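As an illustration of the anonymization idea, the hedged sketch below pseudonymizes patron IDs with a salted hash and reports only aggregate counts. The record layout and salt handling are assumptions for illustration, not NYPL's actual pipeline.

# Hypothetical anonymization step: replace patron identifiers with salted
# hashes so usage can be linked without storing real IDs, then release only aggregates.
import hashlib
from collections import Counter

SALT = b"rotate-this-secret"  # in practice, keep the salt out of source control

records = [
    {"patron_id": "P-1001", "branch": "Main", "item_type": "ebook"},
    {"patron_id": "P-1002", "branch": "Main", "item_type": "print"},
    {"patron_id": "P-1001", "branch": "West", "item_type": "ebook"},
]

def anonymize(patron_id: str) -> str:
    # One-way pseudonym; the same patron maps to the same token across visits.
    return hashlib.sha256(SALT + patron_id.encode()).hexdigest()[:12]

anonymized = [{**r, "patron_id": anonymize(r["patron_id"])} for r in records]

# Only aggregate counts leave the analysis step, e.g. checkouts by item type.
print(Counter(r["item_type"] for r in anonymized))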
Case Study – AI Bias
• A healthcare algorithm used by hospitals to prioritize patient care needs
• The algorithm inaccurately concluded that Black patients were healthier than equally sick White patients because it used healthcare costs as a proxy for health needs
• Black patients historically spend less on healthcare, so the algorithm discriminated against them, prioritizing White patients for further care
• https://www.nature.com/articles/d41586-019-03228-6
• https://www.science.org/doi/full/10.1126/science.aax2342
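The toy Python snippet below (invented numbers, not the study's data) shows the mechanism this case study describes: ranking by a spending proxy deprioritizes a patient who is sicker but has historically spent less, while ranking by a direct measure of need does not.

# Toy illustration of the proxy problem: Patient B is sicker but has spent less,
# so a "risk score" built on spending deprioritizes them.
patients = [
    {"name": "Patient A", "chronic_conditions": 2, "past_spending": 9000},
    {"name": "Patient B", "chronic_conditions": 4, "past_spending": 4000},
]

by_spending = sorted(patients, key=lambda p: p["past_spending"], reverse=True)   # the biased proxy
by_need = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)  # a direct measure of need

print("Prioritized by spending proxy:", [p["name"] for p in by_spending])  # Patient A first
print("Prioritized by measured need: ", [p["name"] for p in by_need])      # Patient B first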
Case Study – Privacy Violation
• The Facebook and Cambridge Analytica incident serves as a potent example of AI-related privacy violations
• Data from millions of Facebook users were harvested without consent and used for political advertising, highlighting significant privacy breaches
So… where do we go from here?
Selecting AI Tools
• Assess Needs: Identify library services that could benefit from AI enhancement.
• Vendor Evaluation: Choose AI vendors that adhere to ethical standards.
• Community Feedback: Involve library users and staff in the selection process.
Data Collection and AI Models
• Data ethics tells us to respect user privacy and obtain consent.
• Help ensure diversity by representing all community segments.
• Design (or choose) an AI system that is transparent, where you can see how it responds to questions, how the model is trained, and what data sets it's using.
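One lightweight way to make the "how is it trained and what data is it using" question answerable is to publish a simple model/data card alongside the system. The sketch below is a hypothetical example; every field value is a placeholder, not a real deployment.

# Hypothetical model/data card for a library AI system; values are placeholders.
import json

model_card = {
    "system": "Library Q&A assistant (example)",
    "model": "vendor or open-weights model name",
    "training_data": ["catalog records", "public program descriptions"],
    "excluded_data": ["patron accounts", "circulation histories"],
    "consent": "patrons opt in before any chat transcript is retained",
    "review_cadence": "quarterly bias and accuracy audit",
}

print(json.dumps(model_card, indent=2))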
Grow and Monitor
• Conduct pilot tests to evaluate AI performance before making it live for everyone.
• Use a feedback process to refine the responses.
• Monitor the impact and effectiveness: does it cause an increase in program usage, circulation, and community members helped?
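A minimal sketch of the pilot-and-feedback loop above: log each AI answer with a rating, then review simple aggregates before a wider rollout. The CSV file name and fields are assumptions for illustration.

# Logs rated interactions during a pilot and prints simple aggregates for staff review.
import csv
import datetime
import statistics
from pathlib import Path

LOG = Path("ai_pilot_feedback.csv")

def log_interaction(question: str, answer: str, rating: int) -> None:
    # Append one rated interaction (rating 1-5) to the pilot log.
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "question", "answer", "rating"])
        writer.writerow([datetime.datetime.now().isoformat(), question, answer, rating])

def review() -> None:
    # Mean rating plus a count of low-rated answers to follow up on manually.
    with LOG.open() as f:
        rows = list(csv.DictReader(f))
    ratings = [int(r["rating"]) for r in rows]
    print(f"{len(rows)} interactions, mean rating {statistics.mean(ratings):.2f}")
    print(f"{sum(r <= 2 for r in ratings)} answers flagged for manual review")

log_interaction("When does the Main branch open?", "9 a.m. Monday through Saturday.", 5)
review()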
Community Involvement
• Explain the role of AI in library services to users.
• Adapt strategies to identify ways to improve the AI tool.
• Host sessions to educate people about the use of AI.
• If you are offering tools that use AI to the community, you will also want to teach them ethical use of AI (using it so it doesn't cause harm to others or themselves).
Policies
Considerations for Policy Writing
• Establish core ethical principles specific to library needs; this could be safety and inclusion.
• Determine how you will review and update these policies.
• Use clear, accessible language to ensure all stakeholders understand the policies.
• Define roles and responsibilities for enforcement and oversight.
• Schedule regular policy reviews to adapt to new AI developments and community needs.
Great Example
https://www.seattle.gov/tech/data-privacy/the-citys-responsible-use-of-artificial-intelligence
Training
• Provide comprehensive training on AI ethics and its importance.
• Launch campaigns to educate users on how AI is used in the library and its benefits.
• Promote transparency by making AI policies and practices accessible to the public.
• Actively seek input from the community on AI use in library services.
Recapping
• It is important to be surgical in your approach to using AI, whether you're developing it or purchasing a solution.
• The more communication you have around what is being done, the better the outcome and usage will be.
AI Model Collapse / AI Degradation
• Over time, a model can get oversaturated with "nonsense" data.
• Best solution: use an LLM with RAG (retrieval-augmented generation).
• Purge and refresh the LLM based on use.
• RAG data doesn't change, but LLM data can with user input.
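To make the LLM-plus-RAG idea concrete, here is a deliberately simplified Python sketch: the retrieval corpus stays a fixed, curated source of truth that user conversations are never written into, while the generative step (stubbed out here) can be refreshed or swapped without touching that corpus. The overlap-based scoring and the generate() stub are illustrations, not a production pipeline.

# Minimal RAG sketch: retrieve from a fixed corpus, then hand passages to a generator.
CORPUS = [  # curated documents; user conversations are never written back here
    "The makerspace is open Tuesday through Saturday, 10am to 6pm.",
    "3D printing costs 10 cents per gram of filament.",
    "Laser cutter training is required before first use.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank corpus passages by naive word overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda doc: len(q_words & set(doc.lower().split())), reverse=True)
    return ranked[:k]

def generate(question: str, passages: list[str]) -> str:
    # Stand-in for an LLM call; a real system would send question + passages as the prompt.
    return f"Based on library records: {passages[0]}"

question = "When is the makerspace open?"
print(generate(question, retrieve(question)))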