Abstract
Previous visual object tracking methods employ image-feature regression models or coordinate autoregression models for bounding-box prediction. Image-feature regression methods depend heavily on matching results and do not exploit positional priors, while autoregressive methods can only be trained on the bounding boxes available in the training set, which may lead to suboptimal performance on unseen test data. Inspired by diffusion models, in which denoising learning enhances robustness to unseen data, we add noise to bounding boxes to generate noisy boxes for training, thereby improving model robustness on test data. We thus propose a new paradigm that formulates visual object tracking as a denoising learning process. However, tracking algorithms are usually required to run in real time, and directly applying a diffusion model to object tracking would severely impair tracking speed. We therefore decompose the denoising process into the individual denoising blocks within a single model, rather than running the model multiple times, and summarize the proposed paradigm as an in-model latent denoising learning process. Specifically, we propose a denoising Vision Transformer (ViT) composed of multiple denoising blocks, into each of which template and search embeddings are projected as conditions. Each denoising block removes part of the noise in a predicted bounding box, and the stacked blocks cooperate to accomplish the whole denoising process. Subsequently, we refine the denoised bounding box using image features and trajectory information, and further employ trajectory memory and visual memory to improve tracking stability. Experimental results validate the effectiveness of our approach, which achieves competitive performance on several challenging datasets.
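The abstract does not specify the exact noise schedule used to generate noisy boxes for training. A minimal sketch of diffusion-style box noising, assuming `(cx, cy, w, h)` box parameterization, a Gaussian perturbation scaled by box size, and a hypothetical `noise_scale` hyperparameter, might look like:

```python
import torch

def add_box_noise(gt_boxes: torch.Tensor, noise_scale: float = 0.4) -> torch.Tensor:
    """Perturb ground-truth boxes in (cx, cy, w, h) format with Gaussian noise,
    producing noisy boxes that a denoising model learns to map back to the
    ground truth. gt_boxes: tensor of shape (..., 4)."""
    noise = torch.randn_like(gt_boxes) * noise_scale
    # Shift centers proportionally to box size; jitter width/height
    # multiplicatively, clamped so boxes keep positive extent.
    cx = gt_boxes[..., 0] + noise[..., 0] * gt_boxes[..., 2]
    cy = gt_boxes[..., 1] + noise[..., 1] * gt_boxes[..., 3]
    w = gt_boxes[..., 2] * (1.0 + noise[..., 2]).clamp(min=0.1)
    h = gt_boxes[..., 3] * (1.0 + noise[..., 3]).clamp(min=0.1)
    return torch.stack([cx, cy, w, h], dim=-1)
```

During training, the model would receive such noisy boxes and regress the clean box, analogously to how diffusion models are supervised to remove injected noise.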
URL
https://arxiv.org/abs/2501.02467