Abstract
In this work we introduce a time- and memory-efficient method for structured prediction that couples neuron decisions across both space and time. We show that exact and efficient inference on a densely connected spatio-temporal graph is possible by capitalizing on recent advances in deep Gaussian Conditional Random Fields (GCRFs). Our method, called VideoGCRF, (a) is efficient, (b) has a unique global minimum, and (c) can be trained end-to-end alongside contemporary deep networks for video understanding. We experiment with multiple connectivity patterns in the temporal domain and present empirical improvements over strong baselines on the tasks of both semantic and instance segmentation of videos.
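The abstract's claim of exact inference with a unique global minimum follows from the Gaussian CRF formulation: the energy is a quadratic in the output variables, so when its precision matrix is positive definite, the MAP solution is the unique solution of one linear system. The toy sketch below illustrates this property in NumPy; the matrix construction, sizes, and energy form are illustrative assumptions, not the paper's actual spatio-temporal model or code.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a Gaussian CRF
# poses prediction as minimizing the quadratic energy
#   E(x) = 0.5 * x^T (I + A) x - b^T x,
# where b plays the role of unary scores from a CNN and A encodes
# pairwise (e.g. spatio-temporal) couplings. If (I + A) is positive
# definite, E has a unique global minimum, obtained exactly by solving
# the linear system (I + A) x = b -- no iterative approximate inference.

rng = np.random.default_rng(0)
n = 6  # toy number of coupled variables

# Build A as a scaled Gram matrix so it is positive semi-definite,
# which guarantees I + A is positive definite.
L = rng.standard_normal((n, n))
A = 0.1 * (L @ L.T)
b = rng.standard_normal(n)

# Exact MAP inference: solve the normal equations.
x_map = np.linalg.solve(np.eye(n) + A, b)

def energy(v):
    """Quadratic GCRF energy for a candidate output v."""
    return 0.5 * v @ (np.eye(n) + A) @ v - b @ v
```

Because the system is linear, the same solve can be batched and backpropagated through, which is what makes end-to-end training alongside a deep network tractable.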
URL
https://arxiv.org/abs/1807.03148