graph neural network, graph pooling, information loss
The pooling operation is used in graph classification tasks to leverage the hierarchical structures preserved in the data and to reduce computational complexity. However, pooling shrinks the graph and discards detail, so existing pooling methods may lose features that are key to classification. In this work, we propose a residual convolutional graph neural network to tackle the problem of losing key classification features. Our contributions are threefold: (1) Unlike existing methods, we propose a new strategy for computing sorting values and verify their importance for graph classification. Our strategy uses not only a node's own features but also those of its neighbors to evaluate its importance accurately. (2) We design a new graph convolutional layer architecture with a residual connection. By feeding discarded features back into the network, we reduce the probability of losing features critical for graph classification. (3) We propose a new method for graph-level representation. Messages are aggregated separately for each node; different attention levels are then assigned to the nodes, which are merged into a graph-level representation that retains structural and classification-critical information. Our experiments show that our method achieves state-of-the-art results on multiple graph classification benchmarks.
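The idea in contribution (1), scoring each node from its own features together with its neighbors' features and then keeping the top-scoring nodes, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of neighbor-aware top-k pooling in general, not the paper's exact formula; the functions `sorting_scores` and `topk_pool` and the mean-neighbor aggregation are hypothetical choices made for clarity.

```python
import numpy as np

def sorting_scores(adj, x, w_self, w_neigh):
    # Hypothetical scoring: combine a node's own features with the
    # mean of its neighbors' features, each through a learned projection.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid division by zero
    neigh = (adj @ x) / deg                           # mean neighbor features
    return x @ w_self + neigh @ w_neigh               # one scalar score per node

def topk_pool(adj, x, scores, ratio=0.5):
    # Keep the ratio * N highest-scoring nodes; slice features and adjacency.
    k = max(1, int(ratio * x.shape[0]))
    idx = np.argsort(-scores.ravel())[:k]
    return x[idx], adj[np.ix_(idx, idx)], idx

# Toy 4-node graph (a 4-cycle) with 3-dimensional node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
x = rng.standard_normal((4, 3))
w_self = rng.standard_normal((3, 1))
w_neigh = rng.standard_normal((3, 1))

s = sorting_scores(adj, x, w_self, w_neigh)
x_pooled, adj_pooled, kept = topk_pool(adj, x, s, ratio=0.5)
```

In a residual variant as described in contribution (2), the features of the nodes *not* in `kept` would be fed back into later layers rather than discarded outright.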
Duan, Yutai; Wang, Jianming; Ma, Haoran; and Sun, Yukuan
"Residual Convolutional Graph Neural Network with Subgraph Attention Pooling,"
Tsinghua Science and Technology: Vol. 27: Iss. 4, Article 1.
Available at: https://dc.tsinghuajournals.com/tsinghua-science-and-technology/vol27/iss4/1