This paper proposes a method to gain extra supervision via multi-task learning for multi-modal video question answering. Multi-modal video question answering is an important task that aims at the joint understanding of vision and language. However, establishing a large-scale dataset for multi-modal video question answering is expensive, and the existing benchmarks are too small to provide sufficient supervision. To overcome this challenge, this paper proposes a multi-task learning method composed of three main components: (1) a multi-modal video question answering network that answers the question based on both video and subtitle features, (2) a temporal retrieval network that predicts the time span in the video clip from which the question was generated, and (3) a modality alignment network that solves a metric learning problem to find the correct association between the video and subtitle modalities. By simultaneously solving these related auxiliary tasks with hierarchically shared intermediate layers, extra synergistic supervision is provided. Motivated by curriculum learning, multi-task ratio scheduling is proposed so that easier tasks are learned earlier, setting an inductive bias at the beginning of training. Experiments on the publicly available TVQA dataset show state-of-the-art results, and ablation studies are conducted to demonstrate statistical validity.
https://arxiv.org/abs/1905.13540
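To make the setup above concrete, here is a minimal PyTorch sketch of the three-headed architecture and a curriculum-style task-ratio schedule. All dimensions, the fusion of video and subtitle features into `fused_feats`, and the exact schedule are illustrative assumptions, not the paper's specification.

```python
import torch.nn as nn

class MultiTaskVideoQA(nn.Module):
    """Hierarchically shared encoder with three task-specific heads."""
    def __init__(self, feat_dim=512, hidden=256, num_answers=5):
        super().__init__()
        # Shared layers over fused video + subtitle features (assumed precomputed).
        self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.qa_head = nn.Linear(hidden, num_answers)   # (1) answer classification
        self.temporal_head = nn.Linear(hidden, 2)       # (2) start/end of the question's span
        self.align_head = nn.Linear(hidden, hidden)     # (3) embedding for modality alignment

    def forward(self, fused_feats):
        h = self.shared(fused_feats)
        return self.qa_head(h), self.temporal_head(h), self.align_head(h)

def task_ratio(epoch, total_epochs):
    """Curriculum-style schedule (an assumption): weight the auxiliary tasks
    heavily at first to set an inductive bias, then decay toward QA only."""
    aux = max(0.0, 1.0 - epoch / (0.5 * total_epochs))
    return 1.0, aux, aux  # weights for (QA, temporal retrieval, alignment) losses
```

Training would then minimize `w_qa * L_qa + w_t * L_temporal + w_a * L_align`, with the weights drawn from `task_ratio` at each epoch.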
Many prediction tasks, especially in computer vision, are often inherently ambiguous. For example, the output of semantic segmentation may depend on the scale one is looking at, and image saliency or video summarization is often user- or context-dependent. Arguably, in such scenarios, exploiting instance-specific evidence, such as scale or user context, can help resolve the underlying ambiguity and lead to improved predictions. While the existing literature has considered incorporating such evidence in classical models such as probabilistic graphical models (PGMs), there is limited (or no) prior work on this problem in the context of deep neural network (DNN) models. In this paper, we present a generic multi-task learning (MTL) based framework which handles the evidence as the output of one or more secondary tasks, while modeling the original problem as the primary task of interest. Our training phase is identical to the one used by standard MTL architectures. During prediction, we back-propagate the loss on the secondary task(s) so that the network weights are re-adjusted to match the evidence. An early-stopping or two-norm based regularizer ensures the weights do not deviate significantly from those learned originally. Implementations in two specific scenarios, (a) predicting semantic segmentation given image-level tags and (b) predicting instance-level segmentation given a text description of the image, clearly demonstrate the effectiveness of our proposed approach.
https://arxiv.org/abs/1811.09796
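A hedged sketch of the prediction-time step described above, in PyTorch: a few gradient steps on the secondary-task loss re-adjust the weights to match the observed evidence, while a two-norm penalty keeps them near the originally learned values. `model` is assumed to return `(primary, secondary)` outputs; `secondary_loss` and `evidence` are placeholders for the reader's task.

```python
import torch

def adapt_to_evidence(model, x, evidence, secondary_loss,
                      steps=10, lr=1e-3, reg=1e-2):
    # Snapshot the originally learned weights for the two-norm regularizer.
    original = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        _, secondary_out = model(x)
        loss = secondary_loss(secondary_out, evidence)
        # Penalize deviation from the original weights.
        for n, p in model.named_parameters():
            loss = loss + reg * (p - original[n]).pow(2).sum()
        loss.backward()
        opt.step()
    # Primary prediction after the weights have been matched to the evidence.
    with torch.no_grad():
        primary_out, _ = model(x)
    return primary_out
```

In this sketch, the early-stopping variant simply corresponds to keeping `steps` small instead of (or in addition to) the explicit penalty.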
Morphological analysis is an important first step in downstream tasks like machine translation and dependency parsing of morphologically rich languages (MRLs), such as those belonging to the Indo-Aryan and Dravidian families. However, the ambiguities introduced by the recombination of morphemes, which constructs several possible inflections for a word, make the prediction of syntactic traits a notoriously complicated task for MRLs. We propose a character-level neural morphological analyzer, the Multi-Task Deep Morphological Analyzer (MT-DMA), based on multi-task learning of word-level tag markers for Hindi. To show the portability of our system to other related languages, we also present results on Urdu. MT-DMA predicts the complete set of morphological tags for words of Indo-Aryan languages: part-of-speech (POS), gender (G), number (N), person (P), case (C), the tense-aspect-modality (TAM) marker, as well as the lemma (L), by jointly learning all of these in a single end-to-end framework. We show the effectiveness of training such deep neural networks by simultaneously optimizing multiple loss functions and sharing initial parameters for context-aware morphological analysis. Our model outperforms the state-of-the-art analyzers for Hindi and Urdu. By exploring a set of character-level features in phonological space, optimized for each tag through a multi-objective genetic algorithm and coupled with effective training strategies, our model establishes a new state-of-the-art accuracy on all seven tasks for both languages. MT-DMA is publicly accessible at http://35.154.251.44/.
https://arxiv.org/abs/1811.08619
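As a rough illustration of the joint tagging setup, the sketch below shares a character-level encoder across seven classification heads and sums their losses. Vocabulary sizes, dimensions, and the pooling step are assumptions, not MT-DMA's actual configuration.

```python
import torch.nn as nn

# Illustrative tag inventories for the seven jointly learned tasks.
TAG_SIZES = {"POS": 30, "G": 4, "N": 3, "P": 4, "C": 9, "TAM": 40, "L": 5000}

class MorphTaggerSketch(nn.Module):
    def __init__(self, n_chars=128, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        # Shared character-level encoder (initial parameters shared across tasks).
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.heads = nn.ModuleDict(
            {t: nn.Linear(2 * hidden, n) for t, n in TAG_SIZES.items()})

    def forward(self, char_ids):
        h, _ = self.encoder(self.emb(char_ids))
        word_repr = h.mean(dim=1)  # pool character states into one word vector
        return {t: head(word_repr) for t, head in self.heads.items()}

# Joint training sums per-task cross-entropies:
# loss = sum(F.cross_entropy(logits[t], gold[t]) for t in TAG_SIZES)
```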
We address the visual relocalization problem of predicting the location and camera orientation, or 6-DOF pose, of a given input scene. We propose a method based on how humans determine their location using visible landmarks. We define anchor points uniformly across the route map and propose a deep learning architecture which predicts the most relevant anchor point present in the scene as well as the relative offsets with respect to it. The relevant anchor point need not be the anchor point nearest to the ground-truth location, as that one might not be visible from the given pose. Hence we propose a multi-task loss function which discovers the relevant anchor point without needing ground truth for it. We validate the effectiveness of our approach by experimenting on Cambridge Landmarks (large-scale outdoor scenes) as well as 7 Scenes (indoor scenes) using various CNN feature extractors. Our method improves the median error on both indoor and outdoor localization datasets compared to the previous best deep learning model, PoseNet (with geometric re-projection loss), using the same feature extractor. In the specific case of the Street scene, we improve the median localization error by over 8 m.
https://arxiv.org/abs/1811.04370
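One plausible reading of such a loss, sketched below in PyTorch, discovers the relevant anchor as the one whose predicted offset best explains the ground-truth location, then supervises both heads against it. This formulation is an illustrative guess, not the paper's exact objective.

```python
import torch.nn.functional as F

def anchor_loss(anchor_logits, pred_offsets, anchors, gt_xyz):
    """anchor_logits: (B, K); pred_offsets: (B, K, 3);
    anchors: (K, 3) fixed positions; gt_xyz: (B, 3) ground-truth location."""
    # True offset from every anchor to the ground-truth location.
    true_offsets = gt_xyz.unsqueeze(1) - anchors.unsqueeze(0)   # (B, K, 3)
    offset_err = (pred_offsets - true_offsets).norm(dim=-1)     # (B, K)
    # Discover the relevant anchor: no ground-truth anchor label needed.
    relevant = offset_err.argmin(dim=1)                         # (B,)
    cls_loss = F.cross_entropy(anchor_logits, relevant)
    reg_loss = offset_err.gather(1, relevant.unsqueeze(1)).mean()
    return cls_loss + reg_loss
```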
In Automatic Speech Recognition, it is still challenging to learn useful intermediate representations when using high-level (or abstract) target units such as words. Character- or phoneme-based systems tend to outperform word-based systems unless thousands of hours of training data are available. In this paper, we first show how hierarchical multi-task training can encourage the formation of useful intermediate representations. We achieve this by performing Connectionist Temporal Classification at different levels of the network with targets of different granularity. Our model thus makes predictions at multiple scales of granularity for the same input. On the standard 300-hour Switchboard training setup, our hierarchical multi-task architecture improves over single-task architectures with the same number of parameters. Our model obtains a 14.0% Word Error Rate on the Eval2000 Switchboard subset without any decoder or language model, outperforming the current state of the art among acoustic-to-word models.
https://arxiv.org/abs/1807.07104
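A minimal sketch of the hierarchical multi-task CTC idea: a lower block is supervised with fine-grained targets (e.g., characters) and an upper block with coarser targets (e.g., words), each through its own CTC loss. Layer counts, sizes, and vocabularies here are illustrative assumptions.

```python
import torch.nn as nn

class HierarchicalCTC(nn.Module):
    def __init__(self, feat_dim=80, hidden=320, n_chars=50, n_words=10000):
        super().__init__()
        self.lower = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.upper = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.char_out = nn.Linear(hidden, n_chars + 1)   # +1 for the CTC blank
        self.word_out = nn.Linear(hidden, n_words + 1)

    def forward(self, feats):
        h_low, _ = self.lower(feats)    # intermediate, fine-grained level
        h_high, _ = self.upper(h_low)   # top, coarse-grained level
        # Log-probs shaped (T, B, V), as nn.CTCLoss expects.
        return (self.char_out(h_low).log_softmax(-1).transpose(0, 1),
                self.word_out(h_high).log_softmax(-1).transpose(0, 1))
```

Training would compute `nn.CTCLoss` on each output against targets of the matching granularity and sum the two, so the intermediate layer is explicitly pushed toward a useful fine-grained representation.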