Abstract
This paper proposes a GeneraLIst encoder-Decoder (GLID) pre-training method for better handling various downstream computer vision tasks. While self-supervised pre-training approaches, e.g., Masked Autoencoder, have shown success in transfer learning, task-specific sub-architectures must still be appended for different downstream tasks, and these components cannot enjoy the benefits of large-scale pre-training. GLID overcomes this challenge by allowing the pre-trained generalist encoder-decoder to be fine-tuned on various vision tasks with minimal task-specific architecture modifications. In the GLID training scheme, both the pre-training pretext task and the downstream tasks are modeled as "query-to-answer" problems. We pre-train a task-agnostic encoder-decoder with query-mask pairs. During fine-tuning, GLID retains the pre-trained encoder-decoder and queries, replacing only the topmost linear transformation layer with a task-specific linear head. This minimizes the pretrain-finetune architecture inconsistency and enables the pre-trained model to better adapt to downstream tasks. GLID achieves competitive performance on various vision tasks, including object detection, image segmentation, pose estimation, and depth estimation, outperforming or matching specialist models such as Mask2Former, DETR, ViTPose, and BinsFormer.
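To make the "query-to-answer" scheme concrete, here is a minimal PyTorch sketch of the idea described above: a shared encoder-decoder with learnable queries, where only the topmost linear layer is swapped at fine-tuning time. All module choices, layer sizes, and the helper name `to_downstream` are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class GLIDSketch(nn.Module):
    """Sketch of a generalist encoder-decoder with a swappable linear head.

    The encoder, decoder, and queries are shared across pre-training and all
    downstream tasks; only `head` changes per task. Sizes are assumptions.
    """

    def __init__(self, dim=256, num_queries=100, out_dim=768):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=6,
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=6,
        )
        # Task-agnostic learnable queries, kept through fine-tuning.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        # Topmost linear layer: predicts reconstruction targets in pre-training.
        self.head = nn.Linear(dim, out_dim)

    def forward(self, tokens):
        # tokens: (batch, seq_len, dim) patch embeddings of the input image.
        memory = self.encoder(tokens)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        answers = self.decoder(q, memory)   # each query produces an "answer"
        return self.head(answers)

def to_downstream(model, task_out_dim):
    """Keep the pre-trained encoder-decoder and queries; replace only the
    topmost linear layer with a task-specific linear head."""
    model.head = nn.Linear(model.head.in_features, task_out_dim)
    return model
```

Under this reading, fine-tuning for, say, a classification-style task would call `to_downstream(model, num_classes)` and train end to end, so the pre-trained weights are reused everywhere except the final projection.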
URL
https://arxiv.org/abs/2404.07603