251027_Thien_Labseminar_SG-RAG: Multi-Hop Question Answering With Large Language Models Through Knowledge Graphs.pptx
About This Presentation
SG-RAG: Multi-Hop Question Answering With Large Language Models Through Knowledge Graphs
Size: 1.41 MB
Language: en
Added: Oct 27, 2025
Slides: 9
Slide Content
" SG-RAG: Multi-Hop Question Answering With Large Language Models Through Knowledge Graphs" Saleh et al., ICNLSP 2024. Thien Nguyen Network Science Lab Dept. of Artificial Intelligence The Catholic University of Korea E-mail: [email protected]
1. Introduction
Challenges with LLMs and RAG:
- LLMs (e.g., GPT-3, Llama) hallucinate on domain-specific questions.
- Standard RAG (Retrieval-Augmented Generation) helps with single-hop questions but fails on multi-hop ones.
Goal: accurate multi-hop QA that uses structured KGs to provide precise context.
Problem statement: accurately answer n-hop questions Q about a domain D from a KG.
2. Methodology
Two steps (a minimal Python sketch follows this slide):
- Subgraph Retrieval: use a Cypher query to fetch relevant subgraphs from the KG, then transform them into textual triplets.
- Response Generation: prompt the LLM with the triplets as context plus the question.
Zero-shot: no training; the method exploits the KG's structure.
Key innovation: grouping triplets by subgraph preserves their order and relations, avoiding LLM confusion.
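The deck itself contains no code; the following is a minimal Python sketch of this two-step pipeline, assuming a Neo4j-hosted KG reached via the official neo4j driver and an OpenAI-style chat API. CYPHER_TEMPLATE, retrieve_subgraphs, and answer are illustrative names, not the authors' implementation.

# Sketch of SG-RAG's two steps: (1) retrieve subgraphs via Cypher and
# textualize them as grouped triplets, (2) prompt the LLM zero-shot.
# Assumed infrastructure: a Neo4j KG and an OpenAI-style chat endpoint.
from neo4j import GraphDatabase
from openai import OpenAI

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
client = OpenAI()

# Illustrative template; the paper relies on manually written Cypher templates.
CYPHER_TEMPLATE = "MATCH p = (a {name: $entity})-[*1..2]-(b) RETURN p"

def retrieve_subgraphs(entity):
    """Step 1: fetch matching subgraphs and textualize each edge as a
    direction-preserving (subject, relation, object) triplet."""
    subgraphs = []
    with driver.session() as session:
        for record in session.run(CYPHER_TEMPLATE, entity=entity):
            path = record["p"]
            # One group of triplets per matched subgraph, so the LLM
            # can see which triplets belong together.
            triplets = [(r.start_node["name"], r.type, r.end_node["name"])
                        for r in path.relationships]
            subgraphs.append(triplets)
    return subgraphs

def answer(question, entity):
    """Step 2: prompt the LLM zero-shot with grouped triplets as context."""
    groups = retrieve_subgraphs(entity)
    context = "\n\n".join(
        "\n".join(f"({s}, {rel}, {o})" for s, rel, o in group)
        for group in groups)
    prompt = (f"Answer the question using only the triplets below.\n\n"
              f"{context}\n\nQuestion: {question}")
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content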
2. Methodology
Subgraph Retrieval (a hypothetical Cypher example follows this slide):
- Querying the KG: generate a Cypher query from the question.
- Transformation to text: convert each edge to a (Subject, Relation, Object) triplet, preserving direction.
- Grouping: keep triplets grouped by subgraph (e.g., 4 triplets in two groups for a 2-hop example).
Benefit: captures multi-hop relations structurally, unlike semantic RAG.
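To make the grouping concrete, here is a hypothetical 2-hop example in Cypher, the query language the method uses. The movie-domain question and entities are invented for illustration; they are not the paper's example.

// Hypothetical 2-hop question: "Who directed the movies that Tom Hanks acted in?"
MATCH (a:Actor {name: "Tom Hanks"})-[:ACTED_IN]->(m:Movie)<-[:DIRECTED]-(d:Director)
RETURN a.name, m.title, d.name

// Each matched path becomes one group of direction-preserving triplets,
// e.g., 4 triplets in two groups:
// Group 1: (Tom Hanks, ACTED_IN, Forrest Gump), (Robert Zemeckis, DIRECTED, Forrest Gump)
// Group 2: (Tom Hanks, ACTED_IN, Saving Private Ryan), (Steven Spielberg, DIRECTED, Saving Private Ryan)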
8. Experiments and Results
Evaluation metric (notation):
- q: the input question
- Y = y1, y2, ..., ym: the gold answer
- Y' = y'1, y'2, ..., y'n: the generated response
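The metric's formula did not survive extraction; only the symbol definitions remain. A common instantiation for comparing Y and Y' token by token is token-level precision/recall/F1, sketched below in Python as an assumption rather than the paper's confirmed metric.

from collections import Counter

def token_f1(gold, pred):
    """Token-level F1 between the gold answer Y and the generated Y'
    (assumed metric; the slide's own formula was not extracted)."""
    gold_tokens = gold.lower().split()
    pred_tokens = pred.lower().split()
    # Count tokens shared between the two bags of words.
    overlap = sum((Counter(gold_tokens) & Counter(pred_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)  # correct tokens among generated
    recall = overlap / len(gold_tokens)     # gold tokens recovered
    return 2 * precision * recall / (precision + recall)

# Example: token_f1("Steven Spielberg", "Spielberg directed it") ≈ 0.4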
9. Conclusions
Summary:
- SG-RAG leverages KG structure, via subgraphs and grouped triplets, for accurate multi-hop QA.
- It reduces hallucinations by providing precise, relational context.
Limitations and future work:
- Relies on manually written Cypher templates; future work: fine-tune an LLM for automatic Cypher generation.
- GPT-4 was tested on only a small set; extend to larger sets and compare with Graph-CoT.