Paper Reading AI Learner

RiemannGL: Riemannian Geometry Changes Graph Deep Learning

2026-02-11 16:10:53
Li Sun, Qiqi Wan, Suyang Zhou, Zhenhao Huang, Philip S. Yu

Abstract

Graphs are ubiquitous, and learning on graphs has become a cornerstone in the artificial intelligence and data mining communities. Unlike pixel grids in images or sequential structures in language, graphs exhibit an inherently non-Euclidean structure with complex interactions among objects. This paper argues that Riemannian geometry provides a principled and necessary foundation for graph representation learning, and that Riemannian graph learning should be viewed as a unifying paradigm rather than a collection of isolated techniques. While recent studies have explored the integration of graph learning and Riemannian geometry, most existing approaches are limited to a narrow class of manifolds, particularly hyperbolic spaces, and often adopt extrinsic manifold formulations. We contend that the central mission of Riemannian graph learning is to endow graph neural networks with intrinsic manifold structures, which remains underexplored. To advance this perspective, we identify key conceptual and methodological gaps in existing approaches and outline a structured research agenda along three dimensions: manifold type, neural architecture, and learning paradigm. We further discuss open challenges, theoretical foundations, and promising directions that are critical for unlocking the full potential of Riemannian graph learning. This paper aims to provide a coherent viewpoint and to stimulate broader exploration of Riemannian geometry as a foundational framework for future graph learning research.

URL

https://arxiv.org/abs/2602.10982

PDF

https://arxiv.org/pdf/2602.10982.pdf
