Navigating the Misinformation Minefield: The Role of Higher Education in the Era of Generative AI

Mark Carrigan, 28 slides, May 18, 2024
About This Presentation

In an age where generative AI is becoming increasingly sophisticated, the potential for fraud and misinformation has reached unprecedented levels. This keynote will begin with a personal case study exploring how the speaker became the target of a generative AI scam, highlighting the convincing nature...


Slide Content

Navigating the Misinformation Minefield: The Role of Higher Education in the Era of Generative AI Mark Carrigan [email protected]

Part 1 A personal experience of attempted fraud

Oh no! Am I getting sued? 😬

Hey wait, I know what a DMCA notice looks like 🤔
- Their e-mail wasn't written as a legal document
- DMCA notices go to the web host, not the content producer
- The image in question was from a royalty-free image site
- They're asking me to insert a URL into my post
- The legal section of the letter is nonsense
- The firm and their lawyers have no digital footprint
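The checklist on this slide can be read as a simple scoring heuristic: each red flag observed in a takedown e-mail raises the likelihood that it is a scam. A minimal toy sketch (not the speaker's actual method; the flag names and threshold are illustrative assumptions):

```python
# Toy sketch: encoding the slide's DMCA red-flag checklist as a scorer.
# Flag identifiers are invented for illustration; the descriptions come
# from the slide itself.

RED_FLAGS = {
    "not_legal_register": "The e-mail isn't written as a legal document",
    "sent_to_author_not_host": "DMCA notices go to the web host, not the content producer",
    "image_was_royalty_free": "The disputed image came from a royalty-free site",
    "asks_for_backlink": "They ask you to insert a URL into your post",
    "legal_text_is_nonsense": "The legal section of the letter is nonsense",
    "no_digital_footprint": "The firm and its lawyers have no digital footprint",
}

def score_notice(observed_flags: set) -> tuple:
    """Return (red-flag count, matching descriptions) for a notice."""
    hits = [desc for flag, desc in RED_FLAGS.items() if flag in observed_flags]
    return len(hits), hits

# The case study in Part 1 trips every flag: any score above one or two
# should prompt scepticism before complying with the request.
count, reasons = score_notice(set(RED_FLAGS))
```

The point of sketching it this way is that no single flag is decisive (a genuine notice might be clumsily written), but the flags compound: a backlink request combined with a firm that has no digital footprint is the signature of blackhat SEO rather than a legal dispute.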

A specialist in 'corporate legal matters'

The domain name was registered in Iceland in December 2023

There's no Anna Brown registered to practice law in Massachusetts...

Have you noticed how they’re all looking perfectly at the camera?

Part 2 The threat to knowledge

What's going on here?
- Blackhat SEO involves using unethical and/or illegal means to increase the search engine visibility of clients.
- GAI makes it possible to produce a website and supporting materials to legitimise these speculative claims in minutes.
- The costs of undertaking this sort of activity effectively (time, expertise, money, etc.) have shrunk rapidly.
- This is disinformation: strategic deception intended to promote political or commercial goals. As opposed to misinformation: incorrect or misleading information not produced in a knowing or strategic way.

It’s not hard to get around the safeguards

Hamilton, Pierce & Lee: Where Tradition Meets Innovation Welcome to Hamilton, Pierce & Lee, a premier law firm nestled in the heart of Boston, Massachusetts. With a storied history dating back over a century, our firm has established itself as a beacon of legal excellence, tradition, and innovation. Our team of distinguished attorneys is dedicated to providing top-tier legal services across a broad spectrum of practice areas, including corporate law, intellectual property, litigation, real estate, and more.

Even if the safeguards were foolproof
- There are still open-source GAI tools which can be run locally with minimal expertise.
- In fact, ChatGPT can give you instructions and even write you bespoke Python code to support you in doing this.
- The means of disinformation are now freely available to anyone who is willing to spend even a small amount of time learning to act on them.
- What does this mean for education? What does this mean for society?

Huge advances in text-to-video coming rapidly

Part 3 The Challenge for Education  

The next phase of post-truth society
- The line between truth and falsehood is becoming increasingly blurred: fake evidence can be made to seem real, and real evidence can be explained away as fake.
- This leads to a breakdown of trust in representations. How do we know what to believe?
- Social media has already led to a post-truth environment in which there's no longer a consensus on factual matters.
- The rapid uptake of synthetic media (text, audio, video, images) is likely to radically intensify those problems in the digital public sphere.

Higher education has a crucial role to play
- There are forms of digital literacy which protect against these challenges: understanding generative AI, recognising red flags, evaluating sources and analysing claims.
- Higher education generates expertise (through research) and experts (through teaching), leaving it crucial in a post-truth world (Harrison and Luckett 2019).
- This is why academics need to understand how to navigate the challenges and opportunities provided by generative AI.

Unfortunately, generative AI is already contributing to declining standards in academic publishing...
- There are existing problems with a 'publish or perish' culture and the rise of 'predatory publishers' which target anxious academics.
- But when academics face sustained pressures on their time and energy, it can be tempting to cut corners using the newfound capacities of generative AI.

The problem is fundamentally a matter of trust
- At the heart of post-truth is a breakdown of trust in expert knowledge: experts are seen as partial and self-interested.
- It's impractical to suggest we ban generative AI use by academics (unenforceable and self-defeating), which means we have to regulate its use.
- By developing shared standards for reflexive and accountable use of generative AI, we can retain trust in the knowledge we produce. This helps fortify trust in wider society.
- Responsible practice in higher education can't solve these problems, but it can ensure we retain the social value of the knowledge we produce.

What does this mean in practice? Academics should be trained in responsible use of generative AI:
- Using it as a thinking tool and administrative assistant, rather than a system which will do writing and analysis for you
- Reporting on your use of generative AI in publications and research projects
- Ensuring AI literacy, so the fundamental limitations of this technology are widely understood, e.g. it will never be fully reliable, particularly for factual information
- Developing communities of practice in which colleagues can share and reflect on their use of these systems

We need to be ready for continual development of the technology

Questions