Abstract
We propose a novel multi-task learning architecture that allows the learning of task-specific, feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules learn task-specific features from the global features, whilst still allowing features to be shared across different tasks. The architecture can be trained end-to-end, can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image prediction and image classification tasks. We show that our architecture achieves state-of-the-art performance in multi-task learning compared to existing methods, and is also less sensitive to the choice of weighting scheme in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.
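The core mechanism described above — a per-task soft-attention module that gates a shared global feature pool into task-specific features — can be sketched roughly as follows. This is a minimal, hypothetical NumPy illustration (the class name `TaskAttention`, the 1x1 channel-mixing weight, and the shapes are assumptions for clarity, not the paper's exact implementation, which uses convolutional attention blocks at multiple depths of the shared network):

```python
import numpy as np

def sigmoid(x):
    # Element-wise logistic function; keeps the attention mask in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

class TaskAttention:
    """Hypothetical sketch of one per-task soft-attention module.

    A learned gate produces a soft mask over the shared (global) features,
    and the task-specific features are the element-wise product of the mask
    with those shared features.
    """
    def __init__(self, channels, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for a learned 1x1 convolution over channels.
        self.w = rng.standard_normal((channels, channels)) * 0.1

    def __call__(self, shared_feats):
        # shared_feats: array of shape (channels, H, W) from the shared pool.
        c, h, w = shared_feats.shape
        flat = shared_feats.reshape(c, -1)      # (C, H*W)
        mask = sigmoid(self.w @ flat)           # soft attention mask in (0, 1)
        return (mask * flat).reshape(c, h, w)   # task-specific features

# Two tasks read the SAME shared features through their OWN attention modules,
# so features are shared globally but attended to task-specifically.
shared = np.ones((4, 8, 8))
task_a = TaskAttention(4, seed=1)(shared)
task_b = TaskAttention(4, seed=2)(shared)
```

Because each task owns only a small gating module while the backbone is shared, adding a task adds few parameters — the parameter-efficiency claim in the abstract.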
URL
https://arxiv.org/abs/1803.10704