Abstract
Camouflaged object detection (COD) primarily focuses on learning subtle yet discriminative representations from complex scenes. Existing methods predominantly follow a parametric feedforward architecture based on static visual representation modeling. However, they lack explicit mechanisms for acquiring historical context, limiting their adaptability and effectiveness in challenging camouflage scenes. In this paper, we propose a recall-augmented COD architecture, namely RetroMem, which dynamically modulates camouflage pattern perception and inference by integrating relevant historical knowledge into the process. Specifically, RetroMem employs a two-stage training paradigm consisting of a learning stage and a recall stage to construct, update, and utilize memory representations effectively. During the learning stage, we design a dense multi-scale adapter (DMA) to improve the pretrained encoder's capability to capture rich multi-scale visual information with very few trainable parameters, thereby providing a foundation for inference. In the recall stage, we propose a dynamic memory mechanism (DMM) and an inference pattern reconstruction (IPR). These components fully leverage the latent relationships between learned knowledge and the current sample context to reconstruct the inference of camouflage patterns, thereby significantly improving the model's understanding of camouflage scenes. Extensive experiments on several widely used datasets demonstrate that our RetroMem significantly outperforms existing state-of-the-art methods.
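The abstract does not specify how the dynamic memory mechanism is implemented, but the general recall idea it describes (retrieving relevant historical representations and blending them into the current sample's context) can be sketched as a toy key-value memory. Everything below is an assumption for illustration only: the class name `DynamicMemory`, the `write`/`recall` interface, the cosine-similarity retrieval, and the fixed blend weight are all hypothetical, not RetroMem's actual design.

```python
import math

def cosine(a, b):
    # cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class DynamicMemory:
    """Toy key-value memory: stores feature vectors during a 'learning'
    stage and, at 'recall' time, blends the top-k most similar stored
    vectors into the query. Purely illustrative -- not the paper's
    mechanism, whose details are not given in the abstract."""

    def __init__(self, top_k=2, blend=0.5):
        self.entries = []      # historical feature vectors
        self.top_k = top_k
        self.blend = blend     # weight given to recalled context

    def write(self, feature):
        self.entries.append(list(feature))

    def recall(self, query):
        if not self.entries:
            return list(query)
        scored = sorted(self.entries,
                        key=lambda e: cosine(query, e), reverse=True)
        top = scored[: self.top_k]
        # average the retrieved vectors, then mix with the query
        avg = [sum(vals) / len(top) for vals in zip(*top)]
        return [(1 - self.blend) * q + self.blend * a
                for q, a in zip(query, avg)]

mem = DynamicMemory()
mem.write([1.0, 0.0])
mem.write([0.0, 1.0])
out = mem.recall([0.9, 0.1])   # query blended with recalled context
```

In a real model the stored entries would be learned deep features and the blend would itself be predicted, but the sketch conveys the two-stage write-then-recall flow the abstract describes.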
URL
https://arxiv.org/abs/2506.15244