Paper Reading AI Learner

AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures

2019-05-30 17:51:03
Michael S. Ryoo, AJ Piergiovanni, Mingxing Tan, Anelia Angelova

Abstract

Learning to represent videos is a very challenging task, both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to a third dimension (using a limited number of space-time modules such as 3D convolutions) or by introducing a handcrafted two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream space-time convolutional blocks connected to each other, and propose an approach for automatically finding neural architectures with better connectivity for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. The search covers architectures that combine representations abstracting different input types (i.e., RGB and optical flow) at multiple temporal resolutions, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a large margin.
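The core idea in the abstract, connection-weight learning guiding an evolutionary search over multi-stream connectivity, can be sketched in a toy form: each block fuses its incoming streams with softmax-normalized learned weights, and a mutation step drops connections with low learned weights more often. This is a minimal illustrative sketch, not the paper's implementation; the scalar "activations", function names, and the specific drop rule are assumptions.

```python
import math
import random

def softmax(ws):
    # Numerically stable softmax over a list of connection weights.
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

def combine_streams(streams, weights):
    """A block fuses its incoming streams by a softmax-weighted sum."""
    probs = softmax(weights)
    return sum(p * s for p, s in zip(probs, streams))

def mutate(weights, rng):
    """Evolution step: drop one incoming connection, preferring those the
    learned weights deem least useful (lower weight -> higher drop chance)."""
    drop_probs = softmax([-w for w in weights])
    r, acc = rng.random(), 0.0
    for i, p in enumerate(drop_probs):
        acc += p
        if r <= acc:
            return weights[:i] + weights[i + 1:]
    return weights[:-1]

rng = random.Random(0)
streams = [0.2, 0.9, 0.4]   # e.g., RGB, optical flow, and a lower-frame-rate stream
weights = [0.1, 2.0, 0.5]   # learned connection weights for this block
fused = combine_streams(streams, weights)
pruned = mutate(weights, rng)
```

Here `fused` always lies between the smallest and largest incoming activation (it is a convex combination), and `pruned` has one fewer connection, with the low-weight connections the most likely to have been removed.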

URL

https://arxiv.org/abs/1905.13209

PDF

https://arxiv.org/pdf/1905.13209.pdf

