250908_Thien_Labseminar_G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable Recommendation.pptx
Li et al. "G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable Recommendation." Proceedings of the ACM on Web Conference 2025.
Thien Nguyen, Network Science Lab, Dept. of Artificial Intelligence, The Catholic University of Korea. E-mail: [email protected]
1. Introduction
G-Refer: a framework that integrates graph retrieval with LLMs to enhance explainable recommendation, improving transparency and trustworthiness by generating personalized, human-understandable explanations.

G-Refer contributions:
- G-Refer framework, a novel approach combining:
  - Hybrid graph retrieval for structural and semantic CF signals.
  - Knowledge pruning to filter irrelevant data.
  - Retrieval-Augmented Fine-Tuning (RAFT) for enhanced LLM performance.
- Methodology:
  - Multi-granularity retrievers (path-level and node-level).
  - GNNs such as R-GCN for embeddings.
  - Path retrieval formulated as a link prediction problem.
- Evaluation: superior explainability and stability on datasets such as Amazon-books, Yelp, and Google-reviews.

Challenges of recommendation systems:
- Extracting collaborative filtering (CF) information from complex user-item interaction graphs.
- Integrating implicit CF data with LLMs, due to the modality gap between graph structures and natural-language explanations.
=> A method is needed to effectively extract CF signals and generate interpretable, personalized explanations.
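The slide notes that path retrieval is formulated as a link prediction problem over GNN embeddings. A minimal sketch of that idea, using a DistMult-style decoder (a common pairing with R-GCN, assumed here rather than taken from the paper) with random placeholder embeddings:

```python
import numpy as np

# Link prediction sketch: score a (user, relation, item) triple as
# h_u^T diag(r) h_i. The DistMult decoder and the random embeddings
# below are illustrative assumptions, not the paper's exact scorer.
def link_score(h_u, h_i, r_diag):
    """Higher score => the user-item edge is more plausible."""
    return float(np.sum(h_u * r_diag * h_i))

rng = np.random.default_rng(0)
h_u, h_i, r_diag = rng.normal(size=(3, 16))
score = link_score(h_u, h_i, r_diag)
```

In this formulation, candidate paths can be ranked by how strongly their constituent edges are predicted, which is what makes a shortest-path search over edge importance meaningful.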
2. Methodology
Core components:
- Hybrid graph retrieval:
  - Path-level retriever: identifies the k most significant paths in user-item graphs using GNNs (e.g., R-GCN) and Dijkstra's algorithm, capturing structural CF signals.
  - Node-level retriever: uses a dual-encoder architecture to compute semantic similarities and retrieve relevant nodes.
- Knowledge pruning: filters redundant or irrelevant data so the LLM focuses on relevant CF information.
- Retrieval-Augmented Fine-Tuning (RAFT): fine-tunes LLMs on the pruned dataset to improve explanation quality.
Key feature: bridges the modality gap between graphs and natural language, ensuring human-readable explanations.
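The node-level retriever's dual-encoder idea can be sketched as: embed the query and candidate node texts, then rank nodes by cosine similarity. The bag-of-random-token-vectors "encoder" below is a stand-in assumption for a trained text encoder, kept only so the ranking logic is runnable:

```python
import numpy as np

# Toy encoder: sum a random vector per token, then L2-normalize.
# A real dual encoder would use two trained networks; this stand-in
# just makes the similarity ranking below concrete.
def encode(texts, dim=32, seed=0):
    rng = np.random.default_rng(seed)
    vocab = {}
    def vec(tok):
        if tok not in vocab:
            vocab[tok] = rng.normal(size=dim)
        return vocab[tok]
    embs = []
    for t in texts:
        v = np.sum([vec(tok) for tok in t.lower().split()], axis=0)
        n = np.linalg.norm(v)
        embs.append(v / n if n else v)
    return np.stack(embs)

def top_k_nodes(query, node_texts, k=2):
    embs = encode([query] + node_texts)  # shared vocab for query + nodes
    q, nodes = embs[0], embs[1:]
    sims = nodes @ q  # cosine similarity (all rows are unit vectors)
    order = np.argsort(-sims)[:k]
    return [(node_texts[i], float(sims[i])) for i in order]
```

Swapping the toy encoder for real query/node encoders leaves `top_k_nodes` unchanged, which is the practical appeal of the dual-encoder design.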
2. Methodology
- GNN: R-GCN.
- m-core pruning: a recursive algorithm that eliminates nodes with degree less than m.
- Explanation path retrieval: leverages PaGE-Link to perform mask learning on the trained GNN; based on the learned mask, Dijkstra's shortest-path algorithm retrieves explanation paths.
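The m-core pruning step above can be sketched directly: repeatedly remove nodes whose degree has fallen below m until every remaining node has degree >= m. The adjacency dict is a toy graph for illustration:

```python
# m-core pruning sketch: recursive low-degree node elimination.
def m_core(adj, m):
    adj = {u: set(vs) for u, vs in adj.items()}  # copy; input untouched
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < m:
                for v in adj.pop(u):       # remove u and all its edges
                    if v in adj:
                        adj[v].discard(u)  # neighbors may drop below m next pass
                changed = True
    return adj

# Triangle a-b-c with a pendant node d: the 2-core drops d, keeps the triangle.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
core = m_core(graph, 2)
```

Note the recursion is implicit in the outer loop: deleting one node can push a neighbor under the threshold, so passes repeat until a fixed point.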
2. Methodology
Retrieval-Augmented Fine-Tuning (RAFT), notation:
- D_prune: the pruned training set (after knowledge pruning).
- θ: parameters associated with the LoRA model.
- (u, i): a user-item pair with profiles b_u and c_i.
- K(u,i): the retrieved knowledge for (u, i).
- ⊕: concatenation.
- Explain(u,i): the ground-truth explanation for (u, i).
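The slide preserves the notation but not the objective itself. Under standard LoRA-based instruction tuning, the RAFT loss plausibly takes the usual negative log-likelihood form over the pruned set (a reconstruction from the listed symbols, not quoted from the paper):

```latex
\min_{\theta}\;\mathcal{L}(\theta)
  = -\sum_{(u,i)\,\in\,\mathcal{D}_{\mathrm{prune}}}
    \log P_{\theta}\!\big(\,\mathrm{Explain}(u,i)\;\big|\;b_u \oplus c_i \oplus \mathcal{K}(u,i)\,\big)
```

That is, the LoRA-adapted LLM is trained to generate the ground-truth explanation conditioned on the concatenated user profile, item profile, and pruned retrieved knowledge.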
3. Experiments and Results
4. Conclusions
G-Refer advances explainable recommendation by integrating GNN-based graph retrieval with LLMs, addressing challenges in CF extraction and modality gaps.
Limitations:
- Limited exploration of diverse graph types or user interaction patterns.
- Pruning may discard potentially useful information.
- Scalability to extremely large graphs is not fully addressed.
Future directions:
- Expand to diverse graph-based recommendation scenarios.
- Enhance pruning strategies for better information retention.
- Explore dynamic retrieval mechanisms for real-time applications.