Fast video object segmentation with Spatio-Temporal GANs

2019-03-28 17:45:09
Sergi Caelles, Albert Pumarola, Francesc Moreno-Noguer, Alberto Sanfeliu, Luc Van Gool

Abstract

Learning descriptive spatio-temporal object models from data is paramount for the task of semi-supervised video object segmentation. Most existing approaches rely on models that estimate the segmentation mask from a reference mask in the first frame (sometimes aided by optical flow or the previous mask). These models, however, are prone to fail under rapid appearance changes or occlusions because of their limited modelling of the temporal component. On the other hand, very recent approaches learn long-term features with a convolutional LSTM to leverage the information from all previous video frames. Even though these models achieve better temporal representations, they still have to be fine-tuned for every new video sequence. In this paper, we present an intermediate solution and devise a novel GAN architecture, FaSTGAN, to learn spatio-temporal object models over finite temporal windows. To achieve this, we concentrate all the heavy computational load in the training phase, with two critics that enforce spatial and temporal mask consistency over the last K frames. At test time, we only use a relatively light regressor, which reduces the inference time considerably. As a result, our approach combines high resilience to sudden geometric and photometric object changes with efficiency at test time (no need for fine-tuning or post-processing). We demonstrate that the accuracy of our method is on par with state-of-the-art techniques on the challenging YouTube-VOS and DAVIS datasets, while running at 32 fps, about 4x faster than the closest competitor.
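
To make the setup described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a FaSTGAN-style training step: a light mask regressor predicts masks over a window of the last K frames, a spatial critic judges individual (frame, mask) pairs, and a temporal critic judges the stack of K masks for temporal consistency. All network shapes, the window size K, the loss weighting, and the helper names (MaskRegressor, SpatialCritic, TemporalCritic, training_step) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a FaSTGAN-style training step (not the authors' code).
# Assumptions: a light mask regressor G predicting the mask for a frame given
# the frame and a reference mask, a spatial critic D_s on single (frame, mask)
# pairs, and a temporal critic D_t on stacks of the last K masks.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 5  # temporal window size (hypothetical value)

class MaskRegressor(nn.Module):
    """Light generator: frame (3 ch) + reference mask (1 ch) -> mask logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, frame, ref_mask):
        return self.net(torch.cat([frame, ref_mask], dim=1))

class SpatialCritic(nn.Module):
    """Scores a single (frame, mask) pair for spatial plausibility."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, frame, mask):
        return self.net(torch.cat([frame, mask], dim=1))

class TemporalCritic(nn.Module):
    """Scores a window of K masks for temporal consistency using 3D convs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, (3, 4, 4), stride=(1, 2, 2), padding=(1, 1, 1)), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, mask_window):  # (B, 1, K, H, W)
        return self.net(mask_window)

def training_step(G, D_s, D_t, frames, ref_mask, gt_masks):
    """frames: (B, K, 3, H, W); ref_mask: (B, 1, H, W); gt_masks: (B, K, 1, H, W)."""
    B, K_, _, H, W = frames.shape
    # Predict a mask for every frame in the window from the reference mask.
    pred = torch.stack([torch.sigmoid(G(frames[:, t], ref_mask)) for t in range(K_)], dim=1)

    # Critic losses: ground-truth masks are "real", predicted masks are "fake".
    real_s = D_s(frames[:, -1], gt_masks[:, -1])
    fake_s = D_s(frames[:, -1], pred[:, -1].detach())
    real_t = D_t(gt_masks.transpose(1, 2))        # (B, 1, K, H, W)
    fake_t = D_t(pred.transpose(1, 2).detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_s, torch.ones_like(real_s)) +
              F.binary_cross_entropy_with_logits(fake_s, torch.zeros_like(fake_s)) +
              F.binary_cross_entropy_with_logits(real_t, torch.ones_like(real_t)) +
              F.binary_cross_entropy_with_logits(fake_t, torch.zeros_like(fake_t)))

    # Regressor loss: supervised mask loss plus fooling both critics.
    adv_s = F.binary_cross_entropy_with_logits(D_s(frames[:, -1], pred[:, -1]),
                                               torch.ones_like(real_s))
    adv_t = F.binary_cross_entropy_with_logits(D_t(pred.transpose(1, 2)),
                                               torch.ones_like(real_t))
    g_loss = F.binary_cross_entropy(pred, gt_masks) + 0.1 * (adv_s + adv_t)
    return d_loss, g_loss
```

In this sketch both critics exist only during training; at test time only the regressor would be run frame by frame, which is what keeps inference light in the approach the abstract describes.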

URL

https://arxiv.org/abs/1903.12161

PDF

https://arxiv.org/pdf/1903.12161.pdf

