Unveiling the unseen potential of graph learning through MLPs: Effective graph learners using propagation-embracing MLPs

Yong Min Shin, Won Yong Shin

Research output: Contribution to journal › Article › peer-review

Abstract

Recent studies have attempted to use multilayer perceptrons (MLPs) to solve semi-supervised node classification on graphs by training a student MLP via knowledge distillation (KD) from a teacher graph neural network (GNN). While previous studies have focused mostly on training the student MLP by matching the output probability distributions of the teacher and student models during KD, how to inject structural information into the student MLP in an explicit and interpretable manner has not been systematically studied. Inspired by GNNs that separate feature transformation T and propagation Π, we reframe the KD process as enabling the student MLP to explicitly learn both T and Π. Although this can be achieved by applying the inverse propagation Π⁻¹ before distillation from the teacher GNN, doing so incurs a high computational cost from large matrix multiplications during training. To solve this problem, we propose Propagate & Distill (P&D), which propagates the output of the teacher GNN before KD and can be interpreted as an approximation of the inverse propagation Π⁻¹. Through comprehensive evaluations on real-world benchmark datasets, we demonstrate the effectiveness of P&D by showing a further performance boost of the student MLP.
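The abstract describes propagating the teacher GNN's outputs over the graph before distillation. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' implementation): it smooths the teacher's soft labels with a personalized-PageRank-style propagation, which approximates applying the inverse propagation operator to the distillation targets. The function name, the use of NumPy, and the hyperparameters `num_hops` and `alpha` are assumptions made for illustration only.

```python
import numpy as np

def propagate_teacher_outputs(adj, teacher_probs, num_hops=10, alpha=0.1):
    """Propagate teacher GNN soft labels over the graph before distillation.

    Hypothetical sketch of the "Propagate" step: repeated APPNP-style
    smoothing of the teacher outputs, which approximates the effect of
    the inverse propagation on the distillation targets.

    adj           : (n, n) dense adjacency matrix of the graph
    teacher_probs : (n, c) teacher output probability distributions
    """
    # Symmetrically normalize the adjacency matrix with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    a_hat = adj + np.eye(adj.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]

    z = teacher_probs.copy()
    for _ in range(num_hops):
        # Mix the propagated signal with the original teacher targets.
        z = (1.0 - alpha) * (a_norm @ z) + alpha * teacher_probs
    return z
```

Under this reading, the student MLP would then be trained with a standard KD objective (e.g., KL divergence) against the propagated targets, together with cross-entropy on the labeled nodes; the exact loss formulation and hyperparameters used in the paper may differ.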

Original language: English
Article number: 112297
Journal: Knowledge-Based Systems
Volume: 301
DOIs
Publication status: Published - 2024 Oct 9

Bibliographical note

Publisher Copyright:
© 2024 Elsevier B.V.

All Science Journal Classification (ASJC) codes

  • Software
  • Management Information Systems
  • Information Systems and Management
  • Artificial Intelligence
