Paper Reading AI Learner

Moving Object Segmentation: All You Need Is SAM

2024-04-18 17:59:53
Junyu Xie, Charig Yang, Weidi Xie, Andrew Zisserman

Abstract

The objective of this paper is motion segmentation -- discovering and segmenting the moving objects in a video. This is a much-studied area with numerous careful, and sometimes complex, approaches and training schemes including: self-supervised learning, learning from synthetic datasets, object-centric representations, amodal representations, and many more. Our interest in this paper is to determine if the Segment Anything model (SAM) can contribute to this task. We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM with the ability of flow to discover and group moving objects. In the first model, we adapt SAM to take optical flow, rather than RGB, as an input. In the second, SAM takes RGB as an input, and flow is used as a segmentation prompt. These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin in both single and multi-object benchmarks. We also extend these frame-level segmentations to sequence-level segmentations that maintain object identity. Again, this simple model outperforms previous methods on multiple video object segmentation benchmarks.
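The second model uses optical flow to generate segmentation prompts for SAM. Below is a minimal sketch of that idea, not the paper's actual implementation: fast-moving pixels are located in the flow field and their centroid is used as a point prompt. The function name, the threshold heuristic, and the single-object assumption are illustrative choices of this sketch, not details from the paper.

```python
import numpy as np

def flow_to_point_prompt(flow, thresh_ratio=0.5):
    """Derive a single point prompt from an optical-flow field.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    Returns the (x, y) centroid of pixels whose flow magnitude exceeds
    thresh_ratio * max magnitude, or None if nothing moves.
    (Hypothetical helper; the paper's prompting scheme may differ.)
    """
    mag = np.linalg.norm(flow, axis=-1)          # per-pixel motion magnitude
    if mag.max() == 0:
        return None
    mask = mag > thresh_ratio * mag.max()        # keep fast-moving pixels
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())    # centroid as (x, y)
```

Such a point could then be passed to SAM on the RGB frame, e.g. via the `segment-anything` library's `SamPredictor.predict(point_coords=..., point_labels=...)` interface, to produce a mask for the moving object.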

URL

https://arxiv.org/abs/2404.12389

PDF

https://arxiv.org/pdf/2404.12389.pdf

