Abstract
For automotive applications, the Graph Attention Network (GAT) is a widely used architecture for incorporating the relational information of a traffic scenario during feature embedding. As shown in this work, however, one of the most popular GAT realizations, namely GATv2, has potential pitfalls that hinder optimal parameter learning. Proper optimization is particularly problematic for small and sparse graph structures. To overcome these limitations, this work proposes architectural modifications of GATv2. Controlled experiments show that the proposed model adaptations improve prediction performance in a node-level regression task and make the model more robust to parameter initialization. This work aims at a better understanding of the attention mechanism and analyzes its interpretability with respect to identifying causal importance.
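For context, the GATv2 attention mechanism the abstract refers to scores each neighbor j of a node i as e_ij = a^T LeakyReLU(W [h_i || h_j]) and normalizes the scores with a softmax. The following is a minimal NumPy sketch of that scoring step, not the authors' implementation; the parameter shapes and names (`W`, `a`) are illustrative assumptions.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    # Standard leaky ReLU as used in GAT/GATv2 attention.
    return np.where(x > 0, x, slope * x)

def gatv2_attention(h_i, neighbors, W, a, slope=0.2):
    """Sketch of GATv2 attention coefficients for one node.

    h_i:       feature vector of the center node, shape (f,)
    neighbors: list of neighbor feature vectors, each shape (f,)
    W:         learned weight matrix, shape (d, 2*f)  (illustrative)
    a:         learned attention vector, shape (d,)   (illustrative)

    GATv2 applies the nonlinearity *before* the attention vector a,
    i.e. e_ij = a^T LeakyReLU(W [h_i || h_j]), which is what makes
    the attention "dynamic" compared to the original GAT.
    """
    scores = np.array([
        a @ leaky_relu(W @ np.concatenate([h_i, h_j]), slope)
        for h_j in neighbors
    ])
    # Softmax over the neighborhood yields the attention coefficients.
    z = np.exp(scores - scores.max())
    return z / z.sum()
```

Because the softmax normalizes over each (possibly very small) neighborhood, small and sparse graphs leave the mechanism with few scores to contrast, which is the regime the paper identifies as problematic.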
URL
https://arxiv.org/abs/2305.16196