251027_Thien_Labseminar_SG-RAG: Multi-Hop Question Answering With Large Language Models Through Knowledge Graphs.pptx


About This Presentation

SG-RAG: Multi-Hop Question Answering With Large Language Models Through Knowledge Graphs


Slide Content

" SG-RAG: Multi-Hop Question Answering With Large Language Models Through Knowledge Graphs" Saleh et al., ICNLSP 2024. Thien Nguyen Network Science Lab Dept. of Artificial Intelligence The Catholic University of Korea E-mail: [email protected]

1. Introduction: Challenges with LLMs and RAG. LLMs (e.g., GPT-3, Llama) hallucinate on domain-specific questions. Standard RAG (Retrieval-Augmented Generation) helps with single-hop questions but fails on multi-hop ones. Goal: accurate multi-hop QA that uses structured knowledge graphs (KGs) to provide precise context. Problem: answer n-hop questions Q about a domain D from its KG accurately.

2. Methodology: Two Steps. Subgraph Retrieval: use a Cypher query to fetch the relevant subgraphs from the KG, then transform them into textual triplets. Response Generation: prompt the LLM with the triplets as context plus the question. Zero-shot: no training; the method exploits the KG structure directly. Key innovation: triplets are grouped by subgraph to preserve their order and relations, avoiding LLM confusion. (A minimal sketch of the two-step flow follows.)
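The sketch below wires the two steps together in Python; it is an illustration, not the authors' code. The names run_cypher and call_llm are hypothetical stand-ins for a KG client (e.g., a Neo4j session) and an LLM API, and the prompt wording is assumed.

    from typing import Callable, List, Tuple

    Triplet = Tuple[str, str, str]  # (subject, relation, object), direction preserved

    def sg_rag(question: str,
               cypher_query: str,                                # templated per question type
               run_cypher: Callable[[str], List[List[Triplet]]], # hypothetical KG client
               call_llm: Callable[[str], str]) -> str:           # hypothetical LLM client
        # Step 1: Subgraph Retrieval. Each record returned by the query is one
        # subgraph, kept as its own group so its triplets stay together in order.
        subgraphs = run_cypher(cypher_query)
        groups = ["\n".join(f"({s}, {r}, {o})" for s, r, o in sub) for sub in subgraphs]

        # Step 2: Response Generation. Zero-shot prompt: the grouped triplets as
        # context plus the original question; no model training involved.
        prompt = "Context:\n" + "\n\n".join(groups) + f"\n\nQuestion: {question}\nAnswer:"
        return call_llm(prompt)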

2. Methodology: Subgraph Retrieval. Querying the KG: generate a Cypher query from the question. Transformation to text: convert each edge into a (Subject, Relation, Object) triplet, preserving edge direction. Group the triplets by subgraph (e.g., the 2-hop example yields 4 triplets in two groups). Benefit: captures multi-hop relations structurally, unlike semantic (embedding-based) RAG. (A sketch of this step follows.)
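A sketch of the retrieval step, assuming the official neo4j Python driver's Path API. The MATCH pattern, the $entity parameter, and the name property are illustrative stand-ins for the paper's manually written query templates.

    # Illustrative 2-hop Cypher template; the paper writes such queries manually.
    CYPHER_2HOP = """
    MATCH p = (a)-[r1]->(b)-[r2]->(c)
    WHERE a.name = $entity
    RETURN p
    """

    def path_to_triplets(path):
        # Convert one matched path (one subgraph) into direction-preserving
        # (Subject, Relation, Object) triplets via the neo4j Path API.
        return [(rel.start_node["name"], rel.type, rel.end_node["name"])
                for rel in path.relationships]

    def group_by_subgraph(paths):
        # One group per subgraph: each 2-hop match yields 2 triplets, so two
        # matched paths give 4 triplets in two groups, as in the slide's example.
        return [path_to_triplets(p) for p in paths]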

2. Methodology: LLM Prompting. Context: the grouped triplets from retrieval. Question: the input query. (A possible prompt layout is sketched below.)
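The slide only names the two prompt parts, so the instruction wording below is an assumption; the point is that each subgraph is rendered as its own block so the LLM sees which triplets belong together.

    def build_prompt(triplet_groups, question: str) -> str:
        # Render each subgraph as a labelled block of (S, R, O) triplets.
        blocks = []
        for i, group in enumerate(triplet_groups, 1):
            lines = "\n".join(f"({s}, {r}, {o})" for s, r, o in group)
            blocks.append(f"Subgraph {i}:\n{lines}")
        return ("Answer the question using only the knowledge-graph triplets below.\n\n"
                + "\n\n".join(blocks)
                + f"\n\nQuestion: {question}\nAnswer:")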

8. Experiments and Results: Evaluation Metric. q: the input question. Y = y_1, y_2, ..., y_m: the gold answer. Y' = y'_1, y'_2, ..., y'_n: the generated response.
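The slide defines the token sequences Y and Y' but does not show the formula; a common choice for QA in this setup is token-level F1, sketched below as an assumption rather than the paper's exact metric.

    from collections import Counter

    def token_f1(gold, pred):
        # Token overlap between gold answer Y and generated response Y'.
        overlap = sum((Counter(gold) & Counter(pred)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred)  # fraction of Y' found in Y
        recall = overlap / len(gold)     # fraction of Y covered by Y'
        return 2 * precision * recall / (precision + recall)

    # e.g., token_f1(["quentin", "tarantino"], ["tarantino"]) -> 2/3
    # (precision = 1.0, recall = 0.5)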

9. Conclusions: Summary. SG-RAG leverages KG structure via subgraphs and triplets for accurate multi-hop QA, and reduces hallucinations by providing precise, relational context. Limitations: Cypher templates are written manually; future work is to fine-tune an LLM to auto-generate them. Evaluation used a small GPT-4 test set; future work is to extend to larger sets and compare against Graph-CoT.