251027 HW Lab Seminar: Pre-training Graph Neural Networks on Molecules by Using Subgraph-Conditioned Graph Information Bottleneck
thanhdowork
11 slides
Oct 27, 2025
Size: 1.41 MB
Language: en
Slide Content
Kim Hyunwoo, Network Science Lab, The Catholic University of Korea. E-mail: [email protected]
Pre-training Graph Neural Networks on Molecules by Using Subgraph-Conditioned Graph Information Bottleneck (Hoang et al., AAAI'25)
Outline
- Introduction
  - Pre-training Methods for Molecular Learning
  - Limitations of Previous Work
- Method
  - Graph Compression
  - Graph Core and Subgraph Interaction
  - Model Optimization
- Experiment
- Conclusion
Introduction: Pre-training Methods for Molecular Learning
In recent years, graph representation learning has shown strong performance on molecular learning, but practical limitations remain. Deep learning methods usually require large amounts of data, and compared with domains such as CV and NLP, molecular learning has difficulty building datasets. First, producing molecular data requires wet-lab experiments. Second, if the target property changes, a model trained for a different property cannot predict it. For these reasons, pre-training methods have become an important research area.
Introduction: Limitations of Previous Work
Previous work falls into three categories. First, node-level tasks: predict a masked atom. Second, subgraph-level tasks: predict a masked subgraph. Third, contrastive learning tasks: contrast two augmented views of the same graph. However, all three paradigms still suffer from generalization issues.
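To make the node-level paradigm concrete, here is a minimal sketch of masked-atom prediction. This is not the paper's code: the toy encoder, dimensions, and variable names are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy masked-atom prediction: hide one atom's features, predict its type.
# The MLP stand-in for a GNN encoder and all sizes are assumptions.
NUM_ATOM_TYPES, HIDDEN = 10, 16

encoder = nn.Sequential(nn.Linear(NUM_ATOM_TYPES, HIDDEN), nn.ReLU())
atom_head = nn.Linear(HIDDEN, NUM_ATOM_TYPES)  # predicts the masked atom type

x = torch.eye(NUM_ATOM_TYPES)[torch.randint(0, NUM_ATOM_TYPES, (5,))]  # 5 atoms, one-hot
labels = x.argmax(dim=1)

mask = torch.zeros(5, dtype=torch.bool)
mask[2] = True                 # choose one atom to mask
x_masked = x.clone()
x_masked[mask] = 0.0           # zero out its input features

logits = atom_head(encoder(x_masked))
loss = nn.functional.cross_entropy(logits[mask], labels[mask])
```

A subgraph-level task follows the same pattern with a masked neighborhood instead of a single atom.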
Method: Overview
Method: Graph Compression
To find the core graph of a molecule, the S-CGIB model compresses the graph structure by masking node representations.
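One common way to realize this kind of IB-style compression is a learned per-node gate that keeps some node representations and pushes the rest toward noise. The sketch below shows that idea only; the gate design, noise scheme, and names are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Soft node masking for graph compression (illustrative, not the paper's code):
# a gate scores each node; low-scoring nodes are replaced by feature-matched noise.
HIDDEN = 16
gate = nn.Sequential(nn.Linear(HIDDEN, 1), nn.Sigmoid())  # per-node keep probability

h = torch.randn(7, HIDDEN)                  # node representations from a GNN encoder
p_keep = gate(h)                            # (7, 1), in (0, 1)
noise = torch.randn_like(h) * h.std() + h.mean()   # noise matched to feature statistics
h_compressed = p_keep * h + (1 - p_keep) * noise   # kept nodes survive, rest are masked
```

Nodes with high keep probability form the candidate core graph; the noise replacement is what makes the compression differentiable.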
Method: Graph Core and Subgraph Interaction
A molecule is composed of important substructures, namely functional groups. To exploit functional-group information, the model extracts k-hop subgraphs as candidates. During training, these candidates interact with the core graph through an attention mechanism.
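The extraction-then-attention step can be sketched as follows. This is a minimal assumption-laden illustration: a dense adjacency matrix, mean pooling per subgraph, and single-head dot-product attention stand in for whatever the paper actually uses.

```python
import torch

# Extract k-hop subgraph candidates, then let a core-graph embedding attend
# over them (illustrative sketch; shapes and pooling are assumptions).

def k_hop_nodes(adj: torch.Tensor, center: int, k: int) -> torch.Tensor:
    """Boolean mask of nodes within k hops of `center` (adj: dense 0/1 matrix)."""
    reach = torch.zeros(adj.size(0), dtype=torch.bool)
    reach[center] = True
    for _ in range(k):
        reach = reach | (adj[reach] > 0).any(dim=0)
    return reach

n, hidden = 6, 16
adj = torch.tensor([[0, 1, 0, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [0, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 0],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 0, 1, 0]], dtype=torch.float)  # a 6-node path graph
h = torch.randn(n, hidden)                                   # node embeddings

# One candidate per atom: mean-pool its 2-hop subgraph.
cands = torch.stack([h[k_hop_nodes(adj, i, 2)].mean(dim=0) for i in range(n)])

# Core-graph embedding attends over the candidates (scaled dot-product).
core = h.mean(dim=0, keepdim=True)            # (1, hidden)
scores = core @ cands.T / hidden ** 0.5       # (1, n)
attn = scores.softmax(dim=-1)
fused = attn @ cands                          # (1, hidden) core-subgraph summary
```

The attention weights indicate which candidate subgraphs (functional-group-like neighborhoods) matter most for the core graph.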
Method: Model Optimization
To capture information effectively, the authors train with an information bottleneck objective. There are two further objectives: contrastive learning and reconstruction. Contrastive learning makes representations distinctive across molecules, while reconstruction drives the model to learn transferable representations.
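A combined objective of this shape can be sketched as below. The compression penalty (a Bernoulli KL toward a small keep-rate prior), the InfoNCE contrastive term, the MSE reconstruction, and all weights and shapes are generic assumptions, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

# Illustrative joint objective: IB compression + contrastive + reconstruction.
# Batch of 8 molecules with 16-dim embeddings; every choice here is an assumption.
z1 = F.normalize(torch.randn(8, 16), dim=1)        # view-1 graph embeddings
z2 = F.normalize(torch.randn(8, 16), dim=1)        # view-2 graph embeddings
p_keep = torch.rand(8, 1).clamp(1e-4, 1 - 1e-4)    # node-keep probabilities
x, x_rec = torch.randn(8, 16), torch.randn(8, 16)  # inputs and their reconstruction

# IB penalty: KL(Bernoulli(p_keep) || Bernoulli(r)) pushes toward compression.
r = 0.5
ib = (p_keep * (p_keep / r).log()
      + (1 - p_keep) * ((1 - p_keep) / (1 - r)).log()).mean()

# InfoNCE: the matched molecule across views is the positive, the rest negatives.
logits = z1 @ z2.T / 0.1
contrastive = F.cross_entropy(logits, torch.arange(8))

# Reconstruction: recover the input features from the representation.
recon = F.mse_loss(x_rec, x)

loss = ib + contrastive + recon
```

In practice the three terms would be weighted by hyperparameters; equal weights here are purely for illustration.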