Thinking about AI and health misinformation - for social media influencers

purnatt1 401 views 34 slides Aug 29, 2024

About This Presentation

Artificial Intelligence (#AI) is rapidly growing and transforming various sectors, including health communication, with the potential to combat misinformation at an unprecedented scale.

In health campaigns, AI can amplify public health messaging by targeting specific audiences with tailored ...


Slide Content

TINA D PURNAT
Prajna Leadership Fellow and DrPH Student, TH Chan School of Public Health, Harvard University
Fellow of the Australasian Institute of Digital Health
tinapurnat.com
Unless credited, images generated by ChatGPT 4o.
Thinking about AI and health misinformation (the social media influencer edition)
This presentation © 2024 by Tina D Purnat is licensed under Attribution-NonCommercial-ShareAlike 4.0 International.

How you can use this slide deck

Thanks for your interest in this topic. I developed this deck to support public health efforts and have made it available for others to use as well. I have made every effort to acknowledge sources of information and slides adapted from other people. You are welcome to adapt the slide deck under the license below; please properly credit the work of others that you use.

This presentation © 2024 by Tina D Purnat is licensed under Attribution-NonCommercial-ShareAlike 4.0 International.

You are free to:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material
The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:
- Attribution: you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- NonCommercial: you may not use the material for commercial purposes.
- ShareAlike: if you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions: you may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

Creator: https://www.linkedin.com/in/tinadpurnat/
Work is published at: https://tinapurnat.com


PUBLIC HEALTH AND THE CHALLENGE OF THE INFORMATION ENVIRONMENT

Examples:
- Industry marketing and influencers hijack health-conscious communities
- A well-funded and well-organized "anti" movement counters health advice and politicizes health
- Healthcare professionals undermine evidence-based health advice
- Health fraud, scams, and deceptive marketing exploit vulnerabilities within the information environment
- Trusted tech platform services misdirect and mislead people searching for health information
- When health workers are harassed, doxxed, and attacked, it is an alarm that something has gone very wrong between community and health system
- Communities with multiple vulnerabilities may make health choices based on low-quality information

For examples of challenges and possible actions, see: tinapurnat.com/blog

AI strengthening public health communication

Use cases and tools:
- Automating analysis of social media: Talkwalker, Pulsar, Meltwater, ...
- Generating text, images and video
- Testing for readability/understandability
- Drafting translations, editing, synthesizing information
- Making content more accessible (e.g. alt text, descriptions)
- Brainstorming/organizational tools
  (Tools for the five use cases above: ChatGPT, Claude, Poe, Bard, Copilot, ...; DALL·E 3, Midjourney, Stable Diffusion, Adobe Firefly, ...; Canva, Miro, Mentimeter, ...)
- Offering more tailored content/responses to users (e.g. chatbots): many commercial offerings

...but all come with limitations and biases.
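Readability testing is one use case where a deterministic formula can complement (or sanity-check) an LLM's judgment. A minimal sketch of the classic Flesch Reading Ease score; the syllable counter is a rough heuristic and the sample sentences are invented for illustration:

```python
import re

def count_syllables(word: str) -> int:
    """Rough English syllable count: runs of vowels, minus a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores (up to roughly 100) mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

plain = "Wash your hands. It helps stop germs."
jargon = "Prophylactic interventions necessitate adherence to standardized hygienic protocols."
assert flesch_reading_ease(plain) > flesch_reading_ease(jargon)
```

A fixed formula like this is transparent and repeatable, which makes it a useful cross-check on AI-rewritten health messages before publishing.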

AI eroding public health communication

...all the same tools can be either misinterpreted or misused. The use cases and tools are the same as on the previous slide: automated social media analysis; generating text, images and video; readability testing; draft translations and editing; accessibility aids; brainstorming tools; and tailored chatbot responses.

Using AI for tracking your social media analytics is different from how a Ministry of Health may track health misinformation.

Public health approach:
- conservative, human rights-based approach
- understanding questions, concerns, information voids and narratives circulating in a community of focus or about a particular topic
- focuses on health risk reduction instead of marketing/social reputation management
- asks who is saying what, about whom, where, and when
- main competitor: more compelling content, including misinformation
- measure of success: widespread distribution and uptake of public health messaging (compared to misinformation)

Social influencer approach:
- creative and unique voice
- creating and amplifying content on topics of interest
- focuses on marketing and engagement
- main competitor: other influencers
- measure of success: more engagement, more followers, more paid advertising/sponsorships

AI tools sometimes spit out irrelevant analyses that are not useful for public health action: sentiment analysis, reach metrics, hashtag tracking, and generalized top narratives. The reason these don't make sense is that they were built for commercial brand promotion, and they use data that's easiest to obtain and easiest to analyze.

See:
https://researchworld.com/articles/10-challenges-of-sentiment-analysis-and-how-to-overcome-them-part-1
https://www.wired.com/story/death-of-truth-misinformation-advertising/
https://arxiv.org/html/2402.10230v1
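The brand-analytics mismatch can be seen in miniature. Lexicon-based sentiment scorers of the kind commercial dashboards were built on count polarity words without context; a toy sketch (the word lists and example posts are invented for illustration, and real tools are far more elaborate, but they share the failure mode):

```python
# A minimal bag-of-words sentiment scorer: +1 per "positive" word,
# -1 per "negative" word. Illustrative only.
POSITIVE = {"safe", "effective", "great", "trust"}
NEGATIVE = {"dangerous", "scam", "fear", "hoax"}

def naive_sentiment(text: str) -> int:
    words = text.lower().replace("!", " ").replace(".", " ").replace("?", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

# A genuine safety question scores "negative"...
assert naive_sentiment("Is this vaccine dangerous for my child?") < 0
# ...while sarcasm scores "positive". Neither score tells public health
# what the community actually needs to hear.
assert naive_sentiment("Oh sure, totally safe and effective. Trust them!") > 0
```

This is why a sentiment dashboard can look healthy while a community's real questions and concerns go unanswered.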

Reminder: Data that is available for AI-assisted analysis is biased and incomplete.
- Most social media platforms are walled gardens. Since the COVID-19 pandemic, data available from these platforms has been further restricted.
- The information environment is changing rapidly. Users have dispersed across new social media platforms, many of which don't have data sharing and governance policies in place for health.
- Dashboards of socio-behavioral indicators that are more than 3 weeks old are less useful for action.
- Sufficient subnational or population- or community-specific analysis is rarely available to understand health behaviors.
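The three-week freshness rule of thumb can be enforced mechanically once indicator data does arrive. A minimal sketch, with hypothetical field names, topics, and dates:

```python
from datetime import date, timedelta

def actionable(indicators, today, max_age_days=21):
    """Keep only indicator readings recent enough to act on.
    The 21-day default follows the '3 weeks' rule of thumb above."""
    cutoff = today - timedelta(days=max_age_days)
    return [row for row in indicators if row["observed"] >= cutoff]

readings = [
    {"topic": "vaccine questions", "observed": date(2024, 8, 25)},
    {"topic": "mpox rumors",       "observed": date(2024, 7, 1)},  # stale
]
fresh = actionable(readings, today=date(2024, 8, 29))
assert [r["topic"] for r in fresh] == ["vaccine questions"]
```

Filtering stale readings out of a dashboard is a small guardrail against acting on a picture of the information environment that no longer exists.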

Ask yourself: when was the last time you changed someone's mind with a simple message? Avoid the temptation to use AI to create and push out generic health messages or content, or to rely on a "magic" message bank.

Case example: mpox as a health topic and the ways AI and content creation tools are misused. This is not what mpox sample kits look like! (And this image is not of a vaccine, either!)

Case example: mpox as a health topic and the ways AI and content creation tools are misused This image doesn’t match the headline, doesn’t explain how to protect yourself, and can invite confusion. Just using a WHO or UNICEF image doesn’t mean that you are communicating properly.

Case example: mpox as a health topic and the ways AI and content creation tools are misused

AI lets you take shortcuts, which can help speed up processes. However, AI is an unreliable source of content and analysis for expert-driven communication.

AI used in social media and digital spaces is not going to improve health services. Providing more accurate information alone is not going to improve trust in the health system.

Health communication must be fully aligned with health service delivery and experience, and with health guidance.

PULL (people want something from the system): demand for health information; demand for health services and products.
PUSH (the system wants something from the people): adherence to health guidance.

See: https://www.linkedin.com/pulse/demand-promotion-trust-trouble-information-tina-d-purnat-cw3of/

Even if we can do some tasks faster by using technology, this doesn't let us off the hook: health communications make an implicit promise about the quality, accessibility, affordability, and acceptability of health guidance, health services, and products.

https://substack.com/home/post/p-145099430
https://www.statnews.com/2024/05/09/h5n1-communication-didnt-federal-government-learn-anything-from-covid/
https://www.washingtonpost.com/travel/2024/02/18/air-canada-airline-chatbot-ruling/
https://www.ualberta.ca/folio/2020/01/virtual-assistants-provide-disappointing-advice-when-asked-for-first-aid-emergency-information-study.html

Key action #1: Providing credible, accurate health information is the basis for any other health communication intervention.

Key action #1: Advise health partners on ensuring the basics first
- web site is up to date, accessible, and search engine optimized
- web site is crosslinked from other reputable sources, such as county, state, or jurisdiction-specific web sites
- people with questions can reach a live human if they need to
- editorial policy and content moderation policy across social media channels
- point of contact for the media and for inquiries
- FAQ section on the web site
- content available in multiple formats and languages
- SOPs developed on how to respond to and address urgent questions, concerns, information voids and mis/disinformation

Key action #2: You have more latitude for addressing health misinformation than the Ministry of Health. https://xkcd.com/386/

Only a few categories of health misinformation narratives can be directly addressed by a Ministry of Health.

Can be addressed by a Ministry of Health:
- Development, provision and access of healthcare services and products
- Safety, efficacy and necessity of diagnostics, therapeutics and vaccines

Narratives a Ministry of Health cannot easily address directly:
- Political and economic motives
- Conspiracy theories
- Liberty and freedom
- Morality and religion

Adapted from: FirstDraft https://firstdraftnews.org/wp-content/uploads/2020/11/FirstDraft_Underthesurface_Fullreport_Final.pdf?x21167

Key action #3: Focus on meeting demand for health information, not running after misinformation.

"Generative AI makes it easier to create misinformation, which could increase the supply of misinformation. However, it is not because there is more misinformation that people will necessarily consume more of it. Instead, we argue here that the consumption of misinformation is mostly limited by demand and not by supply."

https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/

Key action #4: The ethical considerations for social media influencers who talk about health go beyond responsible use of AI
- Avoid stigmatizing language and images
- Contextualize the risk of a particular health behavior in ways that people can understand
- Avoid copying and pasting messages or PSAs you receive and sharing them with your audience without tailoring them first
- Have a clear content moderation and audience engagement policy for yourself (e.g. do you respond to direct messages, to comments in public, etc.)
- Transparently label any AI-generated content you use
- Regularly review your social analytics to identify effective content
- Consult healthcare professionals to ensure any health messages are accurate (don't rely on just GPT!)
- Use positive message framing for your content
- Share your audience engagement insights with your MoH so it can take action (on health information and services!)
- Avoid conflicts of interest with partners and sponsors that you are working with in the health area

Thank you very much! [email protected] Resources for infodemic managers

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2805756

https://dx.doi.org/10.2196/60678 “These results indicate that ChatGPT's ability to classify misinformation is negatively impacted when role-playing social identities, highlighting the complexity of integrating human biases and perspectives in LLMs. This points to the need for human oversight and fact checking in the use of LLMs for misinformation detection. Further research is needed to understand how LLMs weigh social identities in prompt-based tasks and to explore their application in different cultural contexts and information domains.”

Recommended reading and listening:
https://issues.org/to-fix-health-misinformation-think-beyond-fact-checking/
https://issues.org/misunderstanding-misinformation-wardle/
https://www.cjr.org/special_report/truth-pollution-disinformation.php

Familiarize yourself with the current scientific debate about social inoculation and prebunking: https://www.conspicuouscognition.com/p/misinformation-poses-a-smaller-threat versus https://www.nature.com/articles/d41586-024-01587-3

Some limitations of using generative AI (GPTs) in analysis or generation of messages in health communication
- The recency of an LLM's pre-trained dataset is a notable limitation to its overall effectiveness and accuracy. In emergencies, because of changes in policy or guidance (and their specificity to countries) or evolution in evidence, pre-training datasets on health topics may inadvertently yield inaccurate or decontextualized health information, especially on health questions, concerns, or disease epidemiology that change relatively quickly over time.
- What is considered accurate health information is not a mere fact check; it is specific to national guidelines, the population in question, and the context.
- GPTs don't pick up humor or sarcasm very well.
- Because GPT is based on a corpus of English-language data and predominantly (white, male) Western sources, it won't represent perspectives from other places where less data is available.
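The recency limitation can be partly managed in workflows by tracking which guidance topics changed after a model's pre-training cutoff and routing those to a human reviewer. A minimal sketch; the cutoff date, topics, and update dates are all hypothetical, for illustration only:

```python
from datetime import date

# Hypothetical model cutoff and guidance-update log, for illustration.
MODEL_CUTOFF = date(2023, 10, 1)

guidance_updates = {
    "mpox vaccination eligibility": date(2024, 8, 14),  # changed after cutoff
    "hand hygiene basics": date(2019, 5, 2),            # stable, pre-cutoff
}

def needs_human_review(topic: str) -> bool:
    """Flag topics whose official guidance changed after the model's
    pre-training cutoff (or whose update history is unknown): the model
    cannot know about those changes, so a human must check its output."""
    updated = guidance_updates.get(topic)
    return updated is None or updated > MODEL_CUTOFF

assert needs_human_review("mpox vaccination eligibility")
assert not needs_human_review("hand hygiene basics")
```

Defaulting unknown topics to human review is the conservative choice: it treats the model's knowledge as stale until proven current, which matches how quickly health guidance can change during an emergency.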