Reducing Hallucination in Generative AI using ML Technique
Overview
Acharya stands as a beacon of excellence in higher education,
boasting a legacy of academic distinction since its establishment
in 1990. We offer a transformative educational experience,
fostering holistic development, nurturing innovation and
providing world-class facilities to ensure an enriching journey for
our students.
11 Institutions, Infinite Possibilities
We provide 100+ programs across 50 academic streams.
VISVESVARAYA TECHNOLOGICAL UNIVERSITY
Belagavi - 590 018
Reducing Hallucination in
Generative AI using ML Technique
Presented By:
Md Arham 1AY22CD027
Moseen K. 1AY22CD030
Nischal P. 1AY22CD032
Yatish S. Naik 1AY22CD061
Under the Guidance of
Prof. Adarsha S P
Assistant Professor
Contents
• Abstract
• Introduction
• Problem Statement
• Project Objectives
• Team Introduction & Roles
• Literature Survey
• Methodology and Approach
• Expected Outcomes
• Conclusions
Abstract
• This project examines Large Language Models (LLMs), which generate human-like text but struggle with hallucinations, i.e. producing misleading or ungrounded content. This issue poses risks in critical fields such as healthcare, finance, and law. Unlike traditional AI systems, LLMs are trained on vast amounts of online data, which makes them prone to biases and misinformation.
Introduction
• This project aims to explore and categorize existing techniques for mitigating hallucinations in LLMs. Methods such as Retrieval-Augmented Generation (RAG), knowledge retrieval, CoNLI, and Chain-of-Verification (CoVe) have been developed to improve factual consistency and reduce hallucination (a minimal sketch of the CoVe idea appears after this list).
• Additionally, this study introduces a structured survey of hallucination mitigation strategies, organized by dataset usage, task-specific approaches, feedback mechanisms, and retriever types. By analyzing current solutions and their limitations, this research lays a foundation for future advancements in making LLMs more accurate, scalable, and reliable for real-world applications.
• The resulting model can help LLMs hallucinate less and prioritize grounded information, reducing the time lost to unintended results and improving overall output quality.
• In this presentation, we walk through the problem statement, the methodology we plan to use, the expected results, and the conclusions drawn from this project.
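Since CoVe is one of the mitigation methods named above, the following is a minimal sketch of the Chain-of-Verification idea. The `llm(prompt)` helper is hypothetical and stands in for whichever model backend the project adopts; the prompts and steps are illustrative, not the published algorithm.

```python
# Minimal sketch of the Chain-of-Verification (CoVe) idea.
# `llm` is a hypothetical helper that sends a prompt to an LLM backend
# and returns its text response.

def chain_of_verification(question: str, llm) -> str:
    # 1. Draft an initial (possibly hallucinated) answer.
    draft = llm(f"Answer the question:\n{question}")

    # 2. Plan short fact-checking questions that probe the draft's claims.
    plan = llm(
        "List short fact-checking questions, one per line, that would "
        f"verify the claims in this answer:\n{draft}"
    )
    verification_questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft do not leak into the checks.
    evidence = "\n".join(f"Q: {q}\nA: {llm(q)}" for q in verification_questions)

    # 4. Revise the draft, keeping only claims supported by the checks.
    return llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verification results:\n{evidence}\n"
        "Rewrite the answer, keeping only claims supported by the verification results."
    )
```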
Problem Statement
• Accuracy and Reliability – LLMs generate text that appears factual but often contains misleading or unverified information, making them unreliable for critical applications like healthcare, finance, and law.
• Lack of Grounded Knowledge – LLMs struggle to differentiate between factual and fictional information, often extrapolating or fabricating responses without proper verification.
• Context Misinterpretation – Ambiguous or complex prompts can lead LLMs to misinterpret queries, producing responses that do not align with the intended meaning.
• User Trust and Safety – Hallucinations in AI-generated content can erode user trust and lead to harmful consequences, especially when used in decision-making processes.
• Feedback and Continuous Learning – Current approaches lack robust feedback mechanisms that allow LLMs to learn from mistakes and reduce hallucinations over time.
Project Objectives
• The primary goal is to create a model that helps reduce hallucinations produced by generative AI.
• Implement feedback-driven learning techniques that allow LLMs to recognize and correct hallucinations over time, improving their factual accuracy and reliability.
• Ensure ethical and safe AI usage by addressing concerns related to misinformation, bias, and decision-making risks in LLM-generated content, building trust in AI-driven solutions.
Team Introduction
Nischal P.
1AY22CD032
Md Arham
1AY22CD027
Moseen K.
1AY22CD030
Yathish S. Naik
1AY22CD061
Literature Survey
1. Paper Title: A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
   Authors: S. M. Towhidul Islam Tonmoy, S. M. Mehedi Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, Amitava Das
   Technical Ideas / Algorithms: Decompose-and-Query framework (D&Q), Chain of Natural Language Inference (CoNLI), Knowledge Retrieval
   Shortfalls / Disadvantages: Dataset bias, computational costs, scalability issues, ethical and safety concerns
2. Paper Title: Reducing Hallucination in Structured Outputs via Retrieval-Augmented Generation
   Authors: Patrice Béchard, Orlando Marquez Ayala
   Technical Ideas / Algorithms: Dense retrieval, retriever training, greedy decoding
   Shortfalls / Disadvantages: Hallucination risks still present, dependence on retriever quality, complexity of user requirements
Methodology and Approach
Several approaches have been proposed to tackle hallucination in LLMs (minimal illustrative sketches of each follow the list below):
1. Retrieval-Augmented Generation (RAG): This method enhances the factual accuracy of
LLM-generated content by incorporating external knowledge sources. Lewis et al. (2021)
proposed RAG, which integrates document retrieval with text generation, ensuring that
responses are grounded in reliable sources.
2. Self-Refinement Mechanisms: Techniques such as Think While Effectively Articulating
Knowledge (TWEAK) use hypothesis verification to rank and refine generated responses,
reducing hallucination by promoting fact-checking within the generation process.
3. Supervised Fine-Tuning (SFT): By training LLMs on high-quality, labeled datasets,
models can be optimized to prioritize factual correctness. Various studies have explored
knowledge injection techniques, where domain-specific data is embedded into weaker
LLMs to enhance accuracy.
4. Post-Generation Correction: Techniques such as High Entropy Word Spotting and
Replacement (Rawte et al., 2023) focus on identifying and replacing unreliable segments
of text within model-generated responses.
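To make approach 1 concrete, here is a minimal retrieval-augmented generation sketch in Python. The `search(query)` retriever and `llm(prompt)` client are hypothetical stand-ins for whatever index or API the project adopts; this illustrates the grounding step rather than reproducing the exact pipeline of Lewis et al.

```python
# Minimal sketch of retrieval-augmented generation (approach 1).
# `search` and `llm` are hypothetical stand-ins for a real retriever
# (dense index, web search API, ...) and an LLM client.

def rag_answer(question: str, search, llm, k: int = 3) -> str:
    # Retrieve the top-k passages most relevant to the question.
    passages = search(question)[:k]

    # Ground the model in the retrieved evidence and allow it to abstain
    # instead of guessing when the evidence is insufficient.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite sources as [1], [2], ... and reply 'I don't know' if the "
        "sources do not contain the answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```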
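Approach 2 can be approximated by sampling several candidate outputs and keeping the one best supported by the evidence. This is a simplified sketch of the verify-and-rank idea, not the published TWEAK algorithm; `generate_candidates` and the NLI-style `verifier` (returning a support score in [0, 1]) are assumptions.

```python
# Simplified sketch of hypothesis-verification reranking (approach 2).
# `generate_candidates` samples several answers from the LLM; `verifier`
# is a hypothetical NLI-style scorer returning how strongly the evidence
# supports a candidate (0.0 = contradicted, 1.0 = fully supported).

def rerank_by_verification(question, evidence, generate_candidates, verifier, n=5):
    candidates = generate_candidates(question, n=n)

    # Score each candidate against the evidence and keep the best supported.
    scored = sorted(
        ((verifier(evidence, cand), cand) for cand in candidates),
        key=lambda pair: pair[0],
        reverse=True,
    )
    best_score, best_candidate = scored[0]

    # Optionally abstain when even the best candidate is weakly supported.
    if best_score < 0.5:
        return "Insufficient evidence to answer reliably."
    return best_candidate
```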
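Approach 3 depends mostly on data quality, but the training loop itself can be small. The sketch below uses the Hugging Face transformers and datasets libraries under stated assumptions: the base model gpt2, the file factual_qa.jsonl (records with "prompt" and "answer" fields), and the hyperparameters are all placeholders, not the project's actual configuration.

```python
# Sketch of supervised fine-tuning on curated factual QA pairs (approach 3).
# Model name, file name, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# factual_qa.jsonl is a hypothetical file of {"prompt": ..., "answer": ...} pairs.
dataset = load_dataset("json", data_files="factual_qa.jsonl")["train"]

def tokenize(example):
    text = example["prompt"] + "\n" + example["answer"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False yields standard next-token (causal) language-modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```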
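For approach 4, one practical proxy for "unreliable segments" is token-level uncertainty. The sketch below assumes the decoding API can return, for each generated token, probabilities over its top alternatives (many LLM APIs expose per-token log-probabilities); the entropy threshold is illustrative.

```python
import math

# Sketch of post-generation correction via uncertain-token spotting (approach 4).
# `tokens` are the generated tokens; `top_probs[i]` is a list of probabilities
# over the top alternatives considered at position i (illustrative format).

def flag_uncertain_tokens(tokens, top_probs, entropy_threshold=2.0):
    flagged = []
    for position, (token, probs) in enumerate(zip(tokens, top_probs)):
        # Shannon entropy over the alternatives: high entropy means the model
        # was unsure at this position, which often correlates with hallucination.
        entropy = -sum(p * math.log2(p) for p in probs if p > 0)
        if entropy > entropy_threshold:
            flagged.append((position, token, entropy))
    return flagged

# Flagged spans can then be re-checked against retrieved evidence or rewritten
# by a verification prompt instead of being shown to the user as-is.
```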
Flow Diagram
Figure: Human Input flows to the Search API (which returns a Search Response) and to the Prompt Manager, which assembles the Prompt; the LLM consumes the prompt and produces the Final Output.
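Read as code, the flow diagram maps onto a small pipeline. The component names below (PromptManager, search_api, llm) are illustrative stand-ins for the boxes in the diagram, not a specific library.

```python
# Sketch of the flow diagram: Human Input -> Search API + Prompt Manager -> LLM -> Final Output.
# All components are hypothetical stand-ins for the project's search backend and model.

class PromptManager:
    """Combines the user's input with the search response into a single prompt."""

    template = (
        "Use the search results below to answer the user.\n"
        "Search results:\n{results}\n\nUser: {question}\nAssistant:"
    )

    def build(self, question: str, results: str) -> str:
        return self.template.format(results=results, question=question)


def run_pipeline(human_input: str, search_api, llm) -> str:
    search_response = search_api(human_input)                     # Search API -> Search Response
    prompt = PromptManager().build(human_input, search_response)  # Prompt Manager -> Prompt
    return llm(prompt)                                            # LLM -> Final Output
```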
Diagram to Understand Retrieval-Augmented
Generation (RAG)
Screenshots
Example of an LLM hallucinating
Workplan and Attendance
Milestones
1. Literature Review & Problem Identification
2. Dataset Selection & Benchmarking Framework
3. Implementation of Existing Mitigation Techniques
   a. Implement Retrieval-Augmented Generation (RAG), self-refinement, and supervised fine-tuning.
   b. Compare their effectiveness in reducing hallucinations.
4. Testing & Performance Evaluation
   a. Run experiments on real-world case studies such as medical records, legal documents, and customer support systems.
   b. Assess improvements in hallucination reduction and reliability (see the evaluation sketch after this list).
5. Documentation & Final Report
   a. Compile findings into a research paper or technical report.
   b. Discuss future directions for hallucination mitigation in LLMs.
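For milestone 4, comparing techniques requires a common measure of how often outputs are unsupported by reference evidence. The harness below is an illustrative sketch; `is_supported` (an NLI model, string matching against gold evidence, or human judgment) and the example format are assumptions.

```python
# Illustrative evaluation harness for milestone 4: compare mitigation techniques
# by their hallucination rate on a shared test set.
# `is_supported` is a hypothetical grounding check (NLI model, string matching
# against gold evidence, or a human-in-the-loop judgment).

def hallucination_rate(system, examples, is_supported) -> float:
    """`system` maps a question to an answer; each example has 'question' and 'evidence' fields."""
    unsupported = 0
    for example in examples:
        answer = system(example["question"])
        if not is_supported(answer, example["evidence"]):
            unsupported += 1
    return unsupported / len(examples)

# Usage: run the same examples through the baseline LLM, the RAG pipeline, and the
# fine-tuned model, then compare their hallucination rates to quantify the reduction.
```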
Risk | Potential Impact | Mitigation Strategy
Dataset Bias | Hallucination mitigation techniques may be influenced by biases in training data. | Use diverse and well-curated datasets; perform bias detection and correction.
Computational Costs | High resource consumption for running advanced models like RAG. | Optimize models for efficiency, use cloud-based solutions, and leverage pre-trained LLMs.
Evaluation Complexity | Difficulty in measuring hallucination rates and benchmarking techniques. | Develop standardized evaluation metrics and use human-in-the-loop validation.
Scalability Issues | Some mitigation techniques may not generalize well across different applications. | Design scalable frameworks; test on multiple domains before deployment.
Ethical & Safety Concerns | LLMs might still generate misleading content, impacting real-world users. | Implement feedback mechanisms, ensure AI transparency, and enforce responsible AI practices.
Integration Challenges | Combining multiple techniques may lead to inconsistencies or inefficiencies. | Carefully design hybrid approaches with modular architectures.
Expected Outcomes
• Comprehensive understanding of hallucinations in LLMs: a detailed study of the causes, types, and impact of hallucinations across different applications.
• Comparative analysis of techniques such as Retrieval-Augmented Generation (RAG), self-refinement, supervised fine-tuning, and post-generation correction to determine their effectiveness.
• A well-documented research study that serves as a reference for future work on hallucination mitigation in LLMs, guiding the development of more robust and trustworthy AI systems.
Conclusions
• This project highlights the critical issue of hallucinations in Large Language Models (LLMs) and their impact across various domains. It examines recent advancements in hallucination detection and mitigation, emphasizing the importance of addressing this challenge in models such as GPT-4, Gemini 2.0, and Claude 3.5. Understanding and analyzing these techniques can help ground outputs in verifiable facts and reduce hallucination, so that LLMs deliver more factual and accurate results.
Q&A