SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations

Bibliographic Details
Title: SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations
Authors: Wu, Qitian, Zhao, Wentao, Yang, Chenxiao, Zhang, Hengrui, Nie, Fan, Jiang, Haitian, Bian, Yatao, Yan, Junchi
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Social and Information Networks
Description: Learning representations on large graphs is a long-standing challenge due to the interdependence among massive numbers of data points. Transformers, an emerging class of foundation encoders for graph-structured data, have shown promising performance on small graphs thanks to their global attention, which captures all-pair influence beyond neighboring nodes. Even so, existing approaches tend to inherit the spirit of Transformers in language and vision tasks and embrace complicated models built by stacking deep multi-head attention layers. In this paper, we demonstrate that even a single-layer attention can deliver surprisingly competitive performance across node property prediction benchmarks whose node counts range from the thousands to the billions. This encourages us to rethink the design philosophy for Transformers on large graphs, where global attention is a computational overhead that hinders scalability. We frame the proposed scheme as Simplified Graph Transformers (SGFormer), which is empowered by a simple attention model that can efficiently propagate information among arbitrary nodes in one layer. SGFormer requires no positional encodings, feature/graph pre-processing, or augmented losses. Empirically, SGFormer successfully scales to the web-scale graph ogbn-papers100M and yields up to 141x inference acceleration over SOTA Transformers on medium-sized graphs. Beyond the current results, we believe the proposed methodology itself illuminates a new technical path of independent interest for building Transformers on large graphs.
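The description centers on a single-layer attention that propagates information among arbitrary nodes at linear cost. The snippet below is a minimal, hypothetical sketch of that idea using a standard linear-attention kernel trick; it is not the authors' exact formulation, and the class name OneLayerLinearAttention and the ELU-based feature map are illustrative assumptions (the official implementation is at the GitHub link in the Comment field).

    # Minimal sketch (assumption, not the paper's exact attention) of a
    # single-layer all-pair attention with cost linear in the number of nodes.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OneLayerLinearAttention(nn.Module):
        def __init__(self, in_dim: int, hidden_dim: int):
            super().__init__()
            self.q = nn.Linear(in_dim, hidden_dim)
            self.k = nn.Linear(in_dim, hidden_dim)
            self.v = nn.Linear(in_dim, hidden_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: [N, in_dim] node features; N can be very large.
            # ELU+1 keeps the attention weights positive without a softmax.
            q = F.elu(self.q(x)) + 1.0              # [N, d]
            k = F.elu(self.k(x)) + 1.0              # [N, d]
            v = self.v(x)                           # [N, d]
            # Reordering (Q K^T) V into Q (K^T V) avoids the N x N matrix,
            # so all-pair propagation costs O(N d^2) instead of O(N^2 d).
            kv = k.t() @ v                          # [d, d]
            num = q @ kv                            # [N, d]
            denom = q @ k.sum(dim=0).unsqueeze(-1)  # [N, 1], strictly positive
            return num / denom

    # Example: global propagation over 10,000 nodes in one pass.
    x = torch.randn(10_000, 128)
    out = OneLayerLinearAttention(128, 64)(x)
    print(out.shape)  # torch.Size([10000, 64])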
Comment: Accepted to NeurIPS 2023; the code is available at https://github.com/qitianwu/SGFormer
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2306.10759
Accession Number: edsarx.2306.10759
Database: arXiv