Abstract
Graphs are ubiquitous, and learning on graphs has become a cornerstone of the artificial intelligence and data mining communities. Unlike pixel grids in images or sequential structures in language, graphs exhibit an inherently non-Euclidean structure with complex interactions among objects. This paper argues that Riemannian geometry provides a principled and necessary foundation for graph representation learning, and that Riemannian graph learning should be viewed as a unifying paradigm rather than a collection of isolated techniques. While recent studies have explored the integration of graph learning and Riemannian geometry, most existing approaches are limited to a narrow class of manifolds, particularly hyperbolic spaces, and often adopt extrinsic manifold formulations. We contend that the central mission of Riemannian graph learning is to endow graph neural networks with intrinsic manifold structures, which remains underexplored. To advance this perspective, we identify key conceptual and methodological gaps in existing approaches and outline a structured research agenda along three dimensions: manifold type, neural architecture, and learning paradigm. We further discuss open challenges, theoretical foundations, and promising directions that are critical for unlocking the full potential of Riemannian graph learning. This paper aims to provide a coherent viewpoint and to stimulate broader exploration of Riemannian geometry as a foundational framework for future graph learning research.
URL
https://arxiv.org/abs/2602.10982