251027_HW_LabSeminar[Pre-training Graph Neural Networks on Molecules by Using Subgraph-Conditioned Graph Information Bottleneck].pptx

thanhdowork, 11 slides, Oct 27, 2025

About This Presentation

Pre-training Graph Neural Networks on Molecules by Using Subgraph-Conditioned Graph Information Bottleneck


Slide Content

Kim Hyunwoo, Network Science Lab, The Catholic University of Korea. E-mail: [email protected]. Pre-training Graph Neural Networks on Molecules by Using Subgraph-Conditioned Graph Information Bottleneck (Hoang et al., AAAI '25)

Outline
- Introduction: pre-training methods for molecular learning; limitations of previous work
- Method: graph compression; graph-core and subgraph interaction; model optimization
- Experiments
- Conclusion

Introduction: Pre-training Methods for Molecular Learning
In recent years, graph representation learning has shown strong performance on molecular learning, but practical limitations remain. Deep learning methods usually require large amounts of data, and compared with domains such as CV and NLP, building molecular datasets is difficult: first, labeling molecular data requires wet-lab experiments; second, a model trained for one target property cannot predict a different one. For these reasons, pre-training methods have become an important research area.

Introduction: Limitations of Previous Work
Previous work falls into three categories. First, node-level tasks, which predict masked atoms. Second, subgraph-level tasks, which predict masked subgraphs. Third, contrastive learning tasks, which contrast two augmented views of a graph. However, all three paradigms still suffer from generalization issues.
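The node-level pretext task above can be sketched as follows. This is a minimal toy illustration of masked-atom prediction, not the paper's implementation: a molecule is just a list of atom symbols, and `mask_atoms` (a hypothetical helper) hides a random fraction of them and records the ground-truth types the model would be trained to recover.

```python
import random

def mask_atoms(atom_types, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Toy masked-atom pretext task: corrupt random atoms and return
    (corrupted sequence, {index: true atom type}) as prediction targets.
    All names here are illustrative, not from S-CGIB."""
    rng = random.Random(seed)
    n = len(atom_types)
    k = max(1, int(n * mask_rate))          # at least one masked atom
    masked_idx = rng.sample(range(n), k)
    corrupted = list(atom_types)
    targets = {}
    for i in masked_idx:
        targets[i] = corrupted[i]           # ground truth to predict
        corrupted[i] = mask_token           # hide it from the encoder
    return corrupted, targets

corrupted, targets = mask_atoms(["C", "C", "O", "N", "C", "C"])
```

A subgraph-level task follows the same shape, masking a connected set of atoms instead of independent ones.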

Method: Overview

Method: Graph Compression
To find the core graph of a molecule, the S-CGIB model compresses the graph structure by masking node representations.
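The idea of compression by masking node representations can be sketched with soft gates. This is an assumption-laden illustration (the gate parameterization and `noise` background value are mine, not the paper's): each node gets a learnable gate probability, and low-gate nodes are pushed toward an uninformative vector, which is the usual mechanism behind information-bottleneck-style graph compression.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def compress_nodes(node_feats, gate_logits, noise=0.0):
    """Soft-mask node representations: gate p_i in (0, 1) keeps a fraction
    of each node's features and replaces the rest with a background value.
    Nodes with low gates are compressed away; high-gate nodes survive as
    the 'core'. Illustrative sketch, not the S-CGIB implementation."""
    compressed = []
    for feats, logit in zip(node_feats, gate_logits):
        p = sigmoid(logit)
        compressed.append([p * f + (1.0 - p) * noise for f in feats])
    return compressed

core = compress_nodes([[1.0, 2.0], [3.0, 4.0]], gate_logits=[10.0, -10.0])
```

In training, the gate logits would be produced by the encoder and regularized so that only core-graph nodes keep high gates.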

Method: Graph Core and Subgraph Interaction
Molecules are composed of important substructures, namely functional groups. To exploit functional-group information, the model extracts k-hop subgraphs as candidates. During training, these candidates interact with the core graph through an attention mechanism.
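The two steps above, extracting k-hop candidate subgraphs and letting them attend to the core, can be sketched in a few lines. This is a simplified stand-in: BFS over an adjacency dict for the k-hop neighborhood, and plain dot-product attention between a core vector and candidate vectors (the actual attention parameterization in the paper may differ).

```python
import math
from collections import deque

def k_hop_subgraph(adj, center, k):
    """Return the sorted node set within k hops of `center`.
    `adj` maps node -> list of neighbors (BFS with depth cutoff)."""
    dist = {center: 0}
    q = deque([center])
    while q:
        u = q.popleft()
        if dist[u] == k:
            continue                      # do not expand past k hops
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sorted(dist)

def attend(core_vec, cand_vecs):
    """Dot-product attention of the core representation over candidate
    subgraph representations; returns softmax weights over candidates."""
    scores = [sum(c * x for c, x in zip(core_vec, v)) for v in cand_vecs]
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

For example, on a 5-atom chain `0-1-2-3-4`, the 1-hop subgraph around atom 2 is `{1, 2, 3}`; attention weights then score how strongly each such candidate aligns with the compressed core representation.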

Method: Model Optimization
To capture information effectively, the authors train with an information bottleneck objective. Two further objectives are added: contrastive learning, which makes representations of different molecules distinctive, and reconstruction, which encourages the model to learn transferable representations.
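The combined objective can be sketched as a weighted sum of the terms the slide lists. The weights `beta`, `lam_c`, `lam_r` and the InfoNCE-style contrastive term below are generic illustrations of this family of objectives, not the paper's exact formulation or coefficients.

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.5):
    """Toy InfoNCE-style contrastive term:
    -log( exp(s+/tau) / (exp(s+/tau) + sum_j exp(s_j/tau)) ).
    Lower when the positive pair is more similar than the negatives."""
    num = math.exp(sim_pos / tau)
    den = num + sum(math.exp(s / tau) for s in sim_negs)
    return -math.log(num / den)

def total_loss(task_loss, compression_loss, contrastive_loss, recon_loss,
               beta=0.1, lam_c=1.0, lam_r=1.0):
    """Hedged sketch of an IB-style pre-training objective:
    prediction term + beta * compression (the bottleneck penalty)
    + weighted contrastive and reconstruction terms.
    All weights here are illustrative placeholders."""
    return (task_loss
            + beta * compression_loss
            + lam_c * contrastive_loss
            + lam_r * recon_loss)
```

The bottleneck trade-off lives in `beta`: a larger value compresses more aggressively, keeping only the subgraph-conditioned core information, while the contrastive and reconstruction terms keep the representation discriminative and transferable.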

Experiment

Experiment (cont.)