Paper Reading AI Learner

MemAdapter: Fast Alignment across Agent Memory Paradigms via Generative Subgraph Retrieval

2026-02-09 08:09:25
Xin Zhang, Kailai Yang, Chenyue Li, Hao Li, Qiyu Wei, Jun'ichi Tsujii, Sophia Ananiadou

Abstract

The memory mechanism is a core component of LLM-based agents, enabling reasoning and knowledge discovery over long-horizon contexts. Existing agent memory systems are typically designed within isolated paradigms (e.g., explicit, parametric, or latent memory) with tightly coupled retrieval methods that hinder cross-paradigm generalization and fusion. In this work, we take a first step toward unifying heterogeneous memory paradigms within a single memory system. We propose MemAdapter, a memory retrieval framework that enables fast alignment across agent memory paradigms. MemAdapter adopts a two-stage training strategy: (1) training a generative subgraph retriever on the unified memory space, and (2) adapting the retriever to unseen memory paradigms by training a lightweight alignment module through contrastive learning. This design improves the flexibility of memory retrieval and substantially reduces alignment cost across paradigms. Comprehensive experiments on three public evaluation benchmarks demonstrate that the generative subgraph retriever consistently outperforms five strong agent memory systems across three memory paradigms and agent model scales. Notably, MemAdapter completes cross-paradigm alignment within 13 minutes on a single GPU, achieving superior performance over the original memory retrievers with less than 5% of the training compute. Furthermore, MemAdapter enables effective zero-shot fusion across memory paradigms, highlighting its potential as a plug-and-play solution for agent memory systems.
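The abstract does not include implementation details, but stage (2) — a lightweight alignment module trained with a contrastive objective to map an unseen memory paradigm's embeddings into the retriever's space — can be illustrated with a minimal NumPy sketch. The single-linear-layer adapter, the InfoNCE-style loss, and all names below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.07):
    """InfoNCE-style contrastive loss: row i of `queries` and row i of
    `keys` form a positive pair; all other rows act as negatives."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = (q @ k.T) / temperature            # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

class AlignmentModule:
    """Hypothetical lightweight adapter: a single linear projection from
    a new paradigm's embedding space into the frozen retriever's space."""
    def __init__(self, dim_in, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.02, size=(dim_in, dim_out))

    def __call__(self, x):
        return x @ self.W

# Usage sketch: align 32-d memory embeddings to a 16-d retriever space.
rng = np.random.default_rng(1)
adapter = AlignmentModule(dim_in=32, dim_out=16)
mem_embeddings = rng.normal(size=(8, 32))        # unseen-paradigm memories
retriever_targets = rng.normal(size=(8, 16))     # retriever-space anchors
loss = info_nce_loss(adapter(mem_embeddings), retriever_targets)
```

Because only the small projection is trained while the generative subgraph retriever stays frozen, this kind of adapter is consistent with the paper's claim of cheap, minutes-scale cross-paradigm alignment.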

URL

https://arxiv.org/abs/2602.08369

PDF

https://arxiv.org/pdf/2602.08369.pdf

