Paper Reading AI Learner

A Survey of Quantized Graph Representation Learning: Connecting Graph Structures with Large Language Models

2025-02-02 05:57:34
Qika Lin, Zhen Peng, Kaize Shi, Kai He, Yiming Xu, Erik Cambria, Mengling Feng

Abstract

Recent years have witnessed rapid advances in graph representation learning, with continuous embeddings emerging as the dominant paradigm. However, such methods face issues of parameter efficiency, interpretability, and robustness. Quantized Graph Representation (QGR) learning, which represents graph structure with discrete codes rather than conventional continuous embeddings, has therefore attracted growing interest. Because its representation form is analogous to natural language, QGR can also integrate graph structures seamlessly with large language models (LLMs). As this emerging paradigm is still in its infancy yet holds significant promise, we undertake this thorough survey to promote its rapid development. We first present the background of general quantization methods and their merits. We then provide an in-depth review of current QGR studies from the perspectives of quantization strategies, training objectives, distinctive designs, knowledge graph quantization, and applications. We further explore strategies for code dependence learning and for integration with LLMs. Finally, we discuss open issues and outline future directions, aiming to provide a comprehensive picture of QGR and inspire future research.
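To make the core idea concrete: one common quantization strategy of the kind this survey covers is a VQ-VAE-style codebook, where each node's continuous embedding is snapped to its nearest codebook entry and the resulting code index serves as the discrete, token-like representation. Below is a minimal sketch in PyTorch; the class name `VectorQuantizer` and the hyperparameters (`num_codes`, `dim`, `beta`) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Illustrative VQ layer: maps continuous node embeddings to discrete codes.
    A sketch of the general technique, not the paper's specific method."""

    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight, as in VQ-VAE

    def forward(self, z):
        # z: (num_nodes, dim) continuous embeddings, e.g. from a GNN encoder.
        # Squared Euclidean distance from every node to every codebook entry.
        d = (z.pow(2).sum(1, keepdim=True)
             - 2 * z @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        codes = d.argmin(dim=1)      # one discrete code index per node
        z_q = self.codebook(codes)   # quantized embeddings

        # Codebook loss pulls entries toward encoder outputs; commitment
        # loss keeps the encoder close to its assigned codes.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())

        # Straight-through estimator: gradients bypass the argmin.
        z_q = z + (z_q - z).detach()
        return z_q, codes, loss

# Usage: quantize stand-in embeddings for a toy graph of 10 nodes.
vq = VectorQuantizer(num_codes=512, dim=64)
z = torch.randn(10, 64)
z_q, codes, loss = vq(z)
print(codes)  # e.g. tensor([ 17, 402, ...]) -- a discrete "token" per node
```

Because each node is reduced to an integer code from a shared vocabulary, the representation is compact (one index instead of a dense vector) and naturally token-shaped, which is what makes the integration with LLMs discussed in the abstract plausible.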

URL

https://arxiv.org/abs/2502.00681

PDF

https://arxiv.org/pdf/2502.00681.pdf

