Abstract
The memory mechanism is a core component of LLM-based agents, enabling reasoning and knowledge discovery over long-horizon contexts. Existing agent memory systems are typically designed within isolated paradigms (e.g., explicit, parametric, or latent memory), with tightly coupled retrieval methods that hinder cross-paradigm generalization and fusion. In this work, we take a first step toward unifying heterogeneous memory paradigms within a single memory system. We propose MemAdapter, a memory retrieval framework that enables fast alignment across agent memory paradigms. MemAdapter adopts a two-stage training strategy: (1) training a generative subgraph retriever over the unified memory space, and (2) adapting the retriever to unseen memory paradigms by training a lightweight alignment module through contrastive learning. This design improves the flexibility of memory retrieval and substantially reduces the cost of alignment across paradigms. Comprehensive experiments on three public evaluation benchmarks demonstrate that the generative subgraph retriever consistently outperforms five strong agent memory systems across three memory paradigms and multiple agent model scales. Notably, MemAdapter completes cross-paradigm alignment within 13 minutes on a single GPU, achieving superior performance over the original memory retrievers with less than 5% of their training compute. Furthermore, MemAdapter enables effective zero-shot fusion across memory paradigms, highlighting its potential as a plug-and-play solution for agent memory systems.
URL
https://arxiv.org/abs/2602.08369