Academic Journal

Entropy-aware self-training for graph convolutional networks.

Bibliographic Details
Title: Entropy-aware self-training for graph convolutional networks.
Authors: Zhao, Gongpei; Wang, Tao (twang@bjtu.edu.cn); Li, Yidong; Jin, Yi; Lang, Congyan
Source: Neurocomputing. Nov 2021, Vol. 464, p. 394-407. 14p.
Subject Terms: *BOOSTING algorithms, *RANDOM walks, *ALGORITHMS
Abstract:
• An entropy-aggregation layer is proposed to strengthen the reasoning ability of GCN.
• A checking part based on self-training is introduced to enhance node classification.
• Extensive experiments and analyses validate the superiority of ES-GCN.
Recently, graph convolutional networks (GCNs) have achieved significant success in many graph-based learning tasks, especially node classification, due to their excellent ability in representation learning. Nevertheless, it remains challenging for GCN models to obtain satisfying predictions on graphs where only a few nodes have known labels. In this paper, we propose a novel entropy-aware self-training algorithm to boost semi-supervised node classification on graphs with little supervised information. First, an entropy-aggregation layer is developed to strengthen the reasoning ability of GCN models; to the best of our knowledge, this is the first work to combine entropy-based random walk theory with GCN design. Furthermore, we propose an ingenious checking part that adds new nodes as supervision after each training round to enhance node prediction. In particular, the checking part is designed based on aggregated features, which is demonstrated to be more effective than previous methods and boosts node classification significantly. The proposed algorithm is validated on six public benchmarks against several state-of-the-art baseline algorithms, and the results illustrate its excellent performance. [ABSTRACT FROM AUTHOR]
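To make the self-training idea in the abstract concrete, the following is a minimal sketch of an entropy-based pseudo-label checking step, assuming a trained model's softmax outputs and a normalized adjacency matrix are given. The function names (aggregate_predictions, select_pseudo_labels) and the selection budget k are illustrative assumptions, not the authors' ES-GCN implementation.

# Minimal sketch: after a training round, smooth class probabilities
# over the graph, rank unlabeled nodes by entropy of the aggregated
# prediction, and add the most confident ones as pseudo-labels.
import numpy as np

def normalize_adjacency(adj: np.ndarray) -> np.ndarray:
    """Symmetrically normalize A + I, as in standard GCN propagation."""
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def aggregate_predictions(adj_norm: np.ndarray, probs: np.ndarray) -> np.ndarray:
    """Aggregate per-node class probabilities over neighborhoods, so
    the confidence check uses aggregated features rather than raw
    per-node outputs."""
    agg = adj_norm @ probs
    return agg / agg.sum(axis=1, keepdims=True)  # renormalize rows

def select_pseudo_labels(adj_norm, probs, unlabeled, k):
    """Pick the k unlabeled nodes whose aggregated prediction has the
    lowest entropy; return their indices and hard pseudo-labels."""
    agg = aggregate_predictions(adj_norm, probs)
    entropy = -(agg * np.log(agg + 1e-12)).sum(axis=1)
    order = sorted(unlabeled, key=lambda i: entropy[i])
    chosen = order[:k]
    return chosen, agg[chosen].argmax(axis=1)

# Toy usage: 5 nodes in a path graph, 3 classes, random predictions.
adj = np.array([[0, 1, 0, 0, 0],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
probs = np.random.default_rng(0).dirichlet(np.ones(3), size=5)
nodes, labels = select_pseudo_labels(normalize_adjacency(adj),
                                     probs, unlabeled=[2, 3, 4], k=2)
print(nodes, labels)  # nodes to add to the training set next round

In a full self-training loop, the chosen nodes and their pseudo-labels would be appended to the supervised set and the GCN retrained, repeating until the budget of unlabeled nodes is exhausted or performance converges.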
Database: Academic Search Index
Description
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2021.08.092