Petar Veličković [email protected], Guillem Cucurull [email protected], Arantxa Casanova [email protected], Adriana Romero [email protected], Pietro Liò [email protected], Yoshua Bengio — Department of Computer Science and Technology; Montréal Institute for Learning Algorithms (2017)
This paper introduces Graph Attention Networks (GATs), novel neural network architectures designed for graph-structured data. The core innovation is the use of masked self-attention layers to address shortcomings of earlier graph convolutional approaches. GATs let each node attend over its neighborhood and implicitly assign different weights to different neighbors, without requiring costly matrix operations or prior knowledge of the full graph structure. The authors validate the model on four established benchmarks: the Cora, Citeseer, and Pubmed citation networks (transductive) and a protein-protein interaction (PPI) dataset (inductive), achieving or matching state-of-the-art results across both settings. Key properties of GATs include computational efficiency, applicability to graphs of varying structure, including graphs unseen during training, and improved interpretability through the learned attention weights.
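To make the neighborhood-attention mechanism concrete, below is a minimal, illustrative sketch of a single-head graph attention layer in PyTorch. This is not the authors' reference implementation: the class name GraphAttentionLayer, the dense (N, N) adjacency-matrix formulation, and the initialization choices are assumptions made here for brevity, and self-loops are assumed to already be present in the adjacency matrix.

```python
# Minimal sketch of a single-head graph attention layer (illustrative, not the authors' code).
# Assumed shapes: h is (N, F_in) node features; adj is an (N, N) {0,1} adjacency
# matrix with self-loops already added.

import torch
import torch.nn as nn


class GraphAttentionLayer(nn.Module):
    def __init__(self, f_in: int, f_out: int, negative_slope: float = 0.2):
        super().__init__()
        self.W = nn.Linear(f_in, f_out, bias=False)    # shared linear transform W
        self.a = nn.Parameter(torch.empty(2 * f_out))  # attention vector a
        nn.init.xavier_uniform_(self.W.weight)
        nn.init.normal_(self.a, std=0.1)
        self.leaky_relu = nn.LeakyReLU(negative_slope)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        Wh = self.W(h)                                  # (N, F_out)
        # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed densely for illustration
        src = Wh @ self.a[: Wh.size(1)]                 # contribution of node i, shape (N,)
        dst = Wh @ self.a[Wh.size(1):]                  # contribution of node j, shape (N,)
        e = self.leaky_relu(src.unsqueeze(1) + dst.unsqueeze(0))  # (N, N)
        # Masked softmax: only neighbors (and self) receive non-zero attention
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                 # attention coefficients per node
        return alpha @ Wh                               # weighted neighborhood aggregation


# Tiny usage example on a 3-node graph with self-loops
if __name__ == "__main__":
    adj = torch.tensor([[1., 1., 0.],
                        [1., 1., 1.],
                        [0., 1., 1.]])
    h = torch.randn(3, 5)
    layer = GraphAttentionLayer(f_in=5, f_out=8)
    print(layer(h, adj).shape)  # torch.Size([3, 8])
```

The masking step is what makes the attention "graph-aware": non-neighbors are assigned zero weight after the softmax, so each output is a learned, weighted average of a node's own neighborhood only.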
This paper employs the following methods: Graph Attention Networks (GATs), masked (neighborhood-restricted) self-attention.
The following datasets were used in this research: Cora, Citeseer, Pubmed, and a protein-protein interaction (PPI) dataset.
The authors identified the following limitations: