Paper Reading AI Learner

DeTrack: In-model Latent Denoising Learning for Visual Object Tracking

2025-01-05 07:28:50
Xinyu Zhou, Jinglun Li, Lingyi Hong, Kaixun Jiang, Pinxue Guo, Weifeng Ge, Wenqiang Zhang

Abstract

Previous visual object tracking methods employ image-feature regression models or coordinate autoregression models for bounding box prediction. Image-feature regression methods depend heavily on matching results and do not exploit positional priors, while autoregressive approaches can only be trained on the bounding boxes available in the training set, potentially resulting in suboptimal performance on unseen test data. Inspired by diffusion models, in which denoising learning enhances a model's robustness to unseen data, we introduce noise to bounding boxes to generate noisy boxes for training, thereby improving model robustness on test data. We propose a new paradigm that formulates visual object tracking as a denoising learning process. However, tracking algorithms are usually required to run in real time, and directly applying a diffusion model to object tracking would severely impair tracking speed. We therefore decompose the denoising learning process into the denoising blocks within a single model, rather than running the model multiple times, and summarize the proposed paradigm as an in-model latent denoising learning process. Specifically, we propose a denoising Vision Transformer (ViT) composed of multiple denoising blocks. Template and search embeddings are projected into every denoising block as conditions. Each denoising block is responsible for removing part of the noise from a predicted bounding box, and multiple stacked denoising blocks cooperate to accomplish the whole denoising process. Subsequently, we utilize image features and trajectory information to refine the denoised bounding box. We also employ trajectory memory and visual memory to improve tracking stability. Experimental results validate the effectiveness of our approach, which achieves competitive performance on several challenging datasets.
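
As a concrete illustration of the noisy-box training described above, here is a minimal sketch assuming a DDPM-style forward corruption with a linear beta schedule and boxes in normalized (cx, cy, w, h) format; the paper's actual noise schedule and box parameterization are not given in the abstract, so all hyperparameters below are hypothetical.

```python
import torch

def make_noisy_boxes(gt_boxes: torch.Tensor, t: torch.Tensor,
                     betas: torch.Tensor) -> torch.Tensor:
    """Corrupt ground-truth boxes (cx, cy, w, h in [0, 1]) with Gaussian
    noise, DDPM-style: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1)  # signal fraction at step t
    signal = gt_boxes * 2.0 - 1.0                 # rescale boxes to [-1, 1]
    noise = torch.randn_like(signal)
    noisy = a_bar.sqrt() * signal + (1.0 - a_bar).sqrt() * noise
    return (noisy.clamp(-1.0, 1.0) + 1.0) / 2.0   # back to [0, 1]

# Usage: corrupt a batch of boxes at random noise levels during training.
betas = torch.linspace(1e-4, 0.02, 1000)          # assumed linear schedule
boxes = torch.rand(4, 4)                          # (cx, cy, w, h), batch of 4
t = torch.randint(0, 1000, (4,))
noisy_boxes = make_noisy_boxes(boxes, t, betas)
```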
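
The in-model denoising ViT itself might be sketched as below: each block refines a latent box token conditioned on template and search embeddings via cross-attention, so a single forward pass through the stacked blocks replaces the multi-step sampling loop of a standard diffusion model. The block layout, dimensions, and conditioning mechanism are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DenoisingBlock(nn.Module):
    """One in-model denoising step: refine a noisy box token,
    conditioned on template/search embeddings via cross-attention."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, box_emb, cond):
        # box_emb: (B, 1, D) noisy box token; cond: (B, N, D) template+search tokens.
        q = self.n1(box_emb)
        x = box_emb + self.self_attn(q, q, q)[0]
        x = x + self.cross_attn(self.n2(x), cond, cond)[0]  # inject visual condition
        return x + self.mlp(self.n3(x))

class DenoisingViT(nn.Module):
    """Stack of denoising blocks; each block removes part of the box noise,
    so one forward pass performs the whole denoising trajectory."""
    def __init__(self, depth: int = 6, dim: int = 256):
        super().__init__()
        self.blocks = nn.ModuleList(DenoisingBlock(dim) for _ in range(depth))
        self.box_in = nn.Linear(4, dim)    # embed (cx, cy, w, h) into latent space
        self.box_out = nn.Linear(dim, 4)   # decode back to box coordinates

    def forward(self, noisy_box, cond):
        x = self.box_in(noisy_box).unsqueeze(1)
        for blk in self.blocks:            # progressive in-model denoising
            x = blk(x, cond)
        return self.box_out(x.squeeze(1))

# Usage with hypothetical shapes: 4 noisy boxes, 320 conditioning tokens.
model = DenoisingViT()
cond = torch.randn(4, 320, 256)            # assumed template+search embeddings
pred_boxes = model(torch.rand(4, 4), cond) # denoised boxes in one forward pass
```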

URL

https://arxiv.org/abs/2501.02467

PDF

https://arxiv.org/pdf/2501.02467.pdf

