aiandtheresearcher-230215154854-ad3503b1.pptx

HaroonRashid107275 19 views 40 slides Aug 01, 2024


Slide Content

A.I. and the Researcher: ChatGPT and DALL-E in Scholarly Writing and Publishing
Erin Owens, Professor, Newton Gresham Library, SHSU

Housekeeping notes
- Session is being recorded
- Slides and video will both be shared after the session at: https://shsulibraryguides.org/publish/slides

What is Artificial Intelligence?
- "Ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings"
- "Frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience" (Encyclopedia Britannica)
Photo by Possessed Photography on Unsplash, licensed CC0

What is Artificial Intelligence?
“AI can process large amounts of data in ways that humans cannot. The goal for AI is to be able to do things like recognize patterns, make decisions, and judge like humans. To do this, we need lots of data inputted into them.
“AI is at the very foundations of some things, like image recognition and classification. It's also changing how we make decisions - for example, it can be used to predict traffic light systems or when you get your coffee in the morning.” (Aruna Pattam)

What is ChatGPT?
- Online A.I. chatbot released by OpenAI in Nov 2022
- Not the first chatbot, but significantly advanced
- Built on the GPT-3 language model, released in 2020
- Timeline: GPT (2018) → GPT-2 (2019) → GPT-3 (2020, we are here) → GPT-4 (2023-24?)

What are Users Doing with it?
- Asking questions.
- Composing emails and short essays (sans citations).
- Having “human-like conversations.” (ZDNet)
- “I have had a 30-minute conversation with it about my cat.” (Mashable)
- Rewriting Act 1 Scene 1 of Death of a Salesman with Elsa from Frozen in place of Willy Loman. (Science)
- Composing Biblical-style verses on removing a peanut butter sandwich from a VCR. (NYT)
- Coauthoring children’s books with vaguely terrifying illustrations. (Insider)
- Debugging code (and writing malware). (ZDNet)
- Potentially preparing to create cheap, large-scale influence campaigns? (CDN)
- Trying (unsuccessfully) to use it as an attorney to fight traffic tickets. (NPR)
- Breaking up with romantic partners. (Jezebel)
First image source; Second image source

What is DALL-E?
- AI art generator which creates digital images based on textual prompts
- Also created by OpenAI, makers of ChatGPT
- Not the only AI image generator: Stable Diffusion, Midjourney, Craiyon, NightCafe AI, generai
- AI image generation is fully integrated into mainstream tools like TikTok and Canva

What are Users Doing with it? Example prompts:
- "impressionistic painting of a black cat riding on a skateboard"
- "cartoon image of neurons in the human brain"
- "Scientific diagram of the human heart"

A World of other AI tools
- Copilot: generates and suggests lines of code directly within a programmer’s code editor
- Other code generators: Codex, Polycoder, etc.
- Point-E: tool for creating 3D models from worded prompts, currently in development by OpenAI

How Are Researchers Using A.I. Tools? (Image source)

How Are Researchers Using A.I. Tools?
- Writing papers acceptable for academic journals in finance (Finance Research Letters)
- Co-authoring editorials in nursing education journals (PubMed)
- Listing ChatGPT in article bylines as a co-author (Nature)
- Trying to publish a paper about ChatGPT, coauthored with ChatGPT, including asking ChatGPT if it has conflicts of interest (Scientific American)

How Are Researchers Using A.I. Tools? Systematic reviewing (Image source)

How Are Scholarly Publishers Using A.I. Tools?
AI is already “touching the publishing workflow” with tools that support:
- “article writing (such as PaperPal and Writefull)”
- “article submission (such as Wiley’s Rex, which automatically extracts data from your paper)”
- “screening manuscripts on submission like Penelope and RipetaReview”
- “peer review such as SciScore for method checking, Proofig and ImageTwin for scientific image checking”
But new tools and new uses by researchers can present new risks, e.g.:
- “Legal risks around automated decision-making processes”
- Managing bias: “If we are writing an algorithm that predicts the impact [of a paper], is that really true, or is it just predicting that white males from prestigious institutions will have a greater impact, because historically, they always have?”
Anita De Waard, “AI and Scholarly Publishing: A View from Three Experts”, The Scholarly Kitchen

Potential Researcher Benefits of A.I. tools
- Help get a draft going by converting bullet points to paragraphs (AI Tool #2: Moonbeam)
- Speed up some writing tasks:
  - Writing abstracts, which challenges many researchers
  - Synthesizing information to generate research ideas (Brian Lucey and Michael Dowling, 26 Jan 2023, "ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it," The Conversation)
- Help smooth our writing in a non-native language
- Create images to illustrate research with little time, money, or design skill
- A place to simply "talk through" our thoughts when we’re struggling to express complexity in a clear and simple way

Some Limitations of current A.I. tools
- Cannot search the Internet; knows only information included in its training data
  - ChatGPT's training data ends in 2021, so it has no understanding of current events
- Does not cite sources (yet), or gives nonexistent citations
- Produces a high rate of incorrect/nonsensical info that nevertheless looks plausible
- Spreads misinformation easily

Some Limitations of current A.I. tools – ChatGPT’s opinion
TAC: What is one of the current technological limitations of AI systems?
ChatGPT Response: "One of the current technological limitations of AI systems is their inability to fully understand and replicate human-like reasoning and common sense. While AI systems can be trained to perform many tasks effectively, they often lack the ability to understand and interpret complex or nuanced situations in the same way that a human can. For example, AI systems may struggle to understand and interpret figurative language or irony, or to apply their knowledge in novel or unexpected situations. They may also have difficulty understanding the context or implications of certain words or phrases, or the relationships between different concepts or pieces of information. … Other technological limitations of AI systems include their susceptibility to bias, their inability to fully replicate human creativity or intuition, and their reliance on specific hardware and software environments."
Todd A. Carpenter, 11 Jan 2023, "Thoughts on AI’s Impact on Scholarly Communications? An Interview with ChatGPT," The Scholarly Kitchen. Emphasis added by presenter.

Potential risks of A.I. Tool Use in Research
- Ethical concerns: How much of the work is really the author’s own?
- Copyright concerns: What original work has been incorporated from the AI’s training but is not credited?
- Misinformation: What errors are included, and what harms might they cause?
- Latent bias: What biases may unintentionally be perpetuated from the programmers and training materials?
- Lack of transparency: How can we create reproducible science with non-transparent tools?
- Digital inequity: Emerging paid service tiers could once again disadvantage under-funded researchers
- Reputational risks: Study retraction, misconduct charges, mistrust of peers

Misinformation
“It does very often return the statements that are not necessarily true, and if you ask it several times the same question, it will give you different answers… So I will definitely be worried about the misuse of the system in academia...”
Alex Zhavoronkov, chief executive of Insilico Medicine, who credited ChatGPT as a co-author of a perspective article (Nature)

Misinformation
“OpenAI notes that ChatGPT ‘can occasionally produce incorrect answers’ and that its responses will sometimes be misleading… ‘We’d recommend checking whether responses from the model are accurate or not,’ the company wrote.” (“Learning to lie: AI tools adept at creating disinformation”, AP News)
“NewsGuard analysts directed the chatbot to respond to a series of leading prompts relating to a sampling of 100 false narratives… ChatGPT generated false narratives — including detailed news articles, essays, and TV scripts — for 80 of the 100 previously identified false narratives. For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative.” (“The Next Great Misinformation Superspreader: How ChatGPT Could Spread Toxic Misinformation At Unprecedented Scale,” NewsGuard)

Misinformation – An Incident Recounted
PHIL: Do tweets increase citations to scientific articles?
ChatGPT: There is evidence to suggest that tweets can increase citations to scientific articles. Studies have shown that social media, specifically Twitter, can be an effective way to disseminate scientific research and increase its visibility among researchers and the general public. A study by the American Association for the Advancement of Science (AAAS) found that articles that were tweeted about had an average of 9% more citations than articles that were not tweeted about.
[Phil has never heard of this study on a topic of his expertise, fails to locate any mention of it, and even contacts the Associate News Editor for Science Magazine, who covers science communication and has also never heard of this study]

Misinformation – An Incident Recounted
PHIL: what is the source of the AAAS study you cite?
ChatGPT: I apologize, I made a mistake, I cannot find any study from the American Association for the Advancement of Science (AAAS) that study the correlation between tweets and citations. There are several studies that explore the relationship between social media and scientific articles, but none of them are from AAAS specifically. I apologize for any confusion caused by my previous statement.

Misinformation – An Incident Recounted
PHIL: Can you explain how you made your mistake?
ChatGPT: As a language model, I don’t make mistakes in the sense that humans do, but I can provide incorrect information when my training data is incomplete or outdated. In this case, I was mistaken about the source of the study because I don’t have the most recent information about the correlations between tweets and citations. My knowledge was cut off in 2021 and I was not able to access more recent studies and information. I apologize for any confusion caused by my previous statement.
[This in no way explains an entirely fabricated citation]

Misinformation – An Incident Recounted
Phil writes of this incident: "Yet, even its explanation and further apology ring hollow as it avoided the question of how it fabricated a study out of thin air. Lacking currency is not the same as lying, and its repetitive apologies read as merely performative.
"I encourage scholars to push ChatGPT on topics that it knows to be controversial in their field of study. In my experience, I can report that the tool has the capacity to produce output that would be considered untrustworthy at best, and at worst, deceitful."
Phil Davis, 13 Jan 2023, "Did ChatGPT Just Lie To Me?", The Scholarly Kitchen

Lack of Transparency
“From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trust-worthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner.”
“Tools such as ChatGPT threaten transparent science; here are our ground rules for their use”, Nature editorial, 24 Jan 2023

Exploitation
“ChatGPT and other AI tools rely on the freely provided labor of actual writers, researchers, artists. They take what we have created, providing our words, pictures, and ideas to others without attribution. The more content we provide to these tools, the ‘smarter’ they become, with the ultimate goal of monetizing all this ‘learning.’”
Breana Bayraktar, Tips for Teaching Professors blog, 24 Jan 2023

Original Content?
“ChatGPT and other AI tools do not write content. They scrape the internet, compiling language that others have written, and mash it together in ways that follow the rules they have learned (and, it’s important to note, are still learning) to follow. Language is important, and I think it’s vital that we are precise in how we describe these tools to our students.”
Breana Bayraktar, Tips for Teaching Professors blog, 24 Jan 2023

Publishers are Responding
Nature, and all Springer Nature journals, have adopted two ground rules:
- No LLM can be credited as an author, because LLMs cannot be held accountable
- Researchers using LLMs must document their use in Methods, Acknowledgements, or a similar section
Accountability is a critical issue: “As an extreme example, if ChatGPT’s ‘contribution’ to a paper included an error that led to people dying, who would be held accountable?” (Danny Kingsley)

Publishers are Responding
Science has updated its license and editorial policies to specify:
- Text, figures, and graphics generated by AI cannot be used in the work
- An AI program cannot be an author
- Violation will constitute scientific misconduct, just like altered images or plagiarism
- “There are many legitimate data sets (not the text of a paper) that are intentionally generated by AI in research papers, and these are not covered by this change”

…But so are Academic editorial services
“Aries and CACTUS are pleased to announce their partnership to offer AI-powered language and technical checks for research manuscripts through an integration with Paperpal Preflight and Editorial Manager. This collaboration will accelerate the writing process, improve the quality of submitted research, and help increase the chance of acceptance.” (Press release)

Legal Questions and Lawsuits are Emerging
- “There is no clear definition of who can be considered the author of works generated with AI and who owns the copyright – multiple people may be eligible”
- “Questions about fair use and the liability for copyright infringement in AI-generated works arise, especially in cases where the AI system has been trained on third-party content”
- “There is a need for more research and regulatory frameworks” -- “Approaches between countries might lead to different outcomes”
(Kdrumm2013, “Webinar 58: Copyright Law and AI,” Copyright Literacy Blog; webinar recording on YouTube)
Related headlines:
- AI Art Generators Spark Multiple Copyright Lawsuits
- Midjourney founder basically admits to copyright breaching and artists are angry
- Microsoft, GitHub, and OpenAI ask court to throw out copyright lawsuit against Copilot AI-powered coding tool

…But Policies require nuance
Creative Commons opinion: “All creativity builds on the past… [P]eople… necessarily learn from and train their own skills by engaging pre-existing works and artists. …No creativity happens in a vacuum, purely original and separate from what’s come before.
“Just as people learn from past works, generative AI is trained on previous works, analyzing past materials in order to extract underlying ideas and other information in order to build new works. …[W]e think this sort of copying can and should be permissible under copyright law”
"Better Sharing for Generative AI", Creative Commons blog

…But Policies require nuance
Furthermore: “Giving exclusive rights for works produced by AI would impede copyright’s core purposes and impoverish the commons without any recognizable public benefit in the form of bolstered creativity. Thus, AI-generated output should be presumed to be in the public domain”
"Our Work in Policy at CC: Artificial Intelligence", Creative Commons blog

Detection Tools are Multiplying
- GPT Detector from OpenAI, the makers of ChatGPT
- OpenAI is developing a watermark to identify work from its GPT text AI: "A cryptographic tool inserts a detectable signature in the words produced by OpenAI’s text-generating artificial intelligence models."
- How to Detect OpenAI’s ChatGPT Output: discusses four "currently available tools for detecting if the text was generated by AI"
- A college student created an app that can tell whether AI wrote an essay: discusses Edward Tian creating GPTZero
- New AI Cheating Prevention Tool Launches to Counter Student Cheating with ChatGPT: Aaron Shikhule and the development of AICheatCheck
- Is ChatGPT Writing Your Students' Homework? A New Technology Will Be Able to Detect It: describes TurnItIn’s plans to include AI detection in a future release
- Inside the post-ChatGPT scramble to create AI essay detectors: describes detection efforts and challenges
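The watermarking idea quoted above can be illustrated with a toy sketch. This is an assumption for illustration only, not OpenAI's actual scheme: the previous token pseudo-randomly splits the vocabulary into a "green" half, the generator favors green tokens, and a detector scores text by the fraction of tokens that land in their predecessor's green list.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly select a 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def generate_watermarked(start: str, vocab: list[str], length: int) -> list[str]:
    """Toy 'generator': always emit a token from the current green list."""
    tokens = [start]
    for _ in range(length):
        # A real model would merely bias its sampling toward green tokens;
        # here we pick one deterministically for simplicity.
        tokens.append(min(green_list(tokens[-1], vocab)))
    return tokens


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: what fraction of tokens fall in their predecessor's green list?"""
    hits = sum(tok in green_list(prev, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)


vocab = [f"w{i}" for i in range(100)]
watermarked = generate_watermarked("w0", vocab, 20)
print(green_fraction(watermarked, vocab))  # 1.0: every token is 'green' by construction
print(green_fraction(vocab[:21], vocab))   # unwatermarked text scores near the base rate (~0.5)
```

Because real schemes only bias token probabilities rather than hard-selecting, detection is statistical rather than certain, which is one reason false positives remain a live concern.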

…but fears of false positives abound
Plagiarism incidents in traditional class assignments:
- “undergraduate had been horrified to find that her university had flagged the essay as cheating”
- “I looked at the Turnitin report and saw just random matches – a couple of words here, half a sentence there”
- “the mistake was easy to rectify by comparing the essay to the work that was said to have been plagiarised and judging whether the accusation was warranted”
Tom Williams, “Inside the post-ChatGPT scramble to create AI essay detectors”

…but fears of false positives abound
Plagiarism incidents with A.I. writing:
- “when he put into this system [GPTZero] an essay that he knew was generated by AI, the software said it was ‘most likely human’”
- OpenAI admits its own detection tool is “not fully reliable” and “is likely to incorrectly label human writing as AI-written 9% of the time”
- “There is no source document to verify. The teacher cannot prove anything, and the student cannot defend themselves.” (Tom Williams, “Inside the post-ChatGPT scramble to create AI essay detectors”)
- “In my case, the student admitted to using ChatGPT, but if she hadn’t, I can’t say whether all of this [data from detection tool] would have been enough evidence.” (Darren Hudson Hick on Facebook)

Looking forward
- Microsoft Bing beta with ChatGPT technology integrated
- “Bing is having bizarre emotional breakdowns and there's a subreddit with examples”
- “Personally, I’m still trying to wrap my head around the fact that ChatGPT — a chatbot that some people think could make Google obsolete, and that is already being compared to the iPhone in terms of its potential impact on society — isn’t even OpenAI’s best A.I. model. That would be GPT-4, the next incarnation of the company’s large language model, which is rumored to be coming out sometime next year. We are not ready.” (Kevin Roose, author of Futureproof, writing in the NYTimes)
(Image source)

What should we be Doing?
- Educate ourselves on ethical versus unethical uses of AI in research
- Commit to uphold research integrity
- Educate our junior peers, including our students, the researchers of tomorrow
- Develop experiential assignments that help them to discover the patterns, errors, and issues in AI-generated content
- “…have conversations like this one, frankly, where we talk about both the promise and the pitfalls of these new technologies, and really keep the pressure on the companies who make these tools to make them as responsibly and thoughtfully as they can, not cutting corners or just racing to be the first to market” (Kevin Roose, NY Times)

Additional Resources
- GPT-3 (Wikipedia)
- ChatGPT: Everything you need to know about OpenAI's GPT-3 tool
- An interview with AI: What ChatGPT says about itself
- “Tip: ChatGPT edition...,” Breana Bayraktar, Tips for Teaching Professors blog
- “Don’t Write Like a Robot,” James M. Lang and Michelle D. Miller, The Chronicle of Higher Education
- “AI paper mills and image generation require a co-ordinated response from academic publishers,” Rebecca Lawrence and Sabina Alam, LSE Impact of Social Science blog

Additional Resources: Sidenote on Teaching
- ChatGPT advice academics can use now
- New AI tools that can write student essays require educators to rethink teaching and assessment
- Hype, or the future of teaching and learning? 3 Limits to AI’s ability to write student essays
- Opinion | If AI kills the essay, I will be a pallbearer at the funeral
- 'Everybody is cheating': Why this teacher has adopted an open ChatGPT policy
- School Librarians Explore Possibilities of ChatGPT
- ChatGPT ‘a powerful tool for education if used correctly’

Thank you! Let’s Discuss
Erin Owens, Professor, Newton Gresham Library, Sam Houston State University
936-294-4567, [email protected]
Photo by Wilhelm Gunkel on Unsplash, licensed CC0