CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting and Anchor Pre-Matching

2023-03-23 07:13:57
Xiaoshi Wu, Feng Zhu, Rui Zhao, Hongsheng Li

Abstract

Open-vocabulary detection (OVD) is an object detection task that aims to detect objects from novel categories beyond the base categories on which the detector is trained. Recent OVD methods rely on large-scale visual-language pre-trained models, such as CLIP, for recognizing novel objects. We identify two core obstacles that need to be tackled when incorporating these models into detector training: (1) the distribution mismatch that occurs when applying a VL model trained on whole images to region recognition tasks; (2) the difficulty of localizing objects of unseen classes. To overcome these obstacles, we propose CORA, a DETR-style framework that adapts CLIP for Open-vocabulary detection with Region prompting and Anchor pre-matching. Region prompting mitigates the whole-to-region distribution gap by prompting the region features of the CLIP-based region classifier. Anchor pre-matching helps learn generalizable object localization through a class-aware matching mechanism. We evaluate CORA on the COCO OVD benchmark, where it achieves 41.7 AP50 on novel classes, outperforming the previous SOTA by 2.4 AP50 without resorting to extra training data. When extra training data is available, we train CORA$^+$ on both ground-truth base-category annotations and additional pseudo bounding box labels computed by CORA. CORA$^+$ achieves 43.1 AP50 on the COCO OVD benchmark and 28.1 box APr on the LVIS OVD benchmark.
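The abstract names two mechanisms: region prompting (adapting RoI-pooled CLIP features for region-level classification against text embeddings) and anchor pre-matching (a class-aware assignment of anchors to categories before matching). The sketch below illustrates both ideas in PyTorch-style code, based only on the abstract's description; all module names, tensor shapes, and the assignment logic are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of region prompting and anchor pre-matching, assuming
# a frozen CLIP backbone and RoIAlign-pooled region features. Shapes,
# names, and matching details are illustrative, not CORA's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionPromptedClassifier(nn.Module):
    """Classify region crops of a CLIP feature map by adding a learnable
    prompt to the pooled features (region prompting), mitigating the
    whole-image-to-region distribution gap."""
    def __init__(self, feat_dim=1024, roi_size=7):
        super().__init__()
        # One learnable prompt per spatial position of the pooled region.
        self.region_prompt = nn.Parameter(
            torch.zeros(1, feat_dim, roi_size, roi_size))

    def forward(self, roi_feats, text_embeds):
        # roi_feats: (N, C, S, S) RoIAlign-pooled CLIP feature-map crops
        # text_embeds: (K, C) CLIP text embeddings of category names
        prompted = roi_feats + self.region_prompt        # region prompting
        region_embeds = prompted.mean(dim=(2, 3))        # pool to (N, C)
        region_embeds = F.normalize(region_embeds, dim=-1)
        text_embeds = F.normalize(text_embeds, dim=-1)
        return region_embeds @ text_embeds.t()           # (N, K) similarities

def anchor_pre_match(anchor_logits, gt_labels):
    """Anchor pre-matching (schematic): each anchor is pre-assigned the
    category the region classifier scores highest; only anchors whose
    pre-matched class appears among the ground-truth labels remain
    candidates for the subsequent class-aware matching."""
    pre_matched = anchor_logits.argmax(dim=-1)           # (N,) class per anchor
    candidate_mask = torch.isin(pre_matched, gt_labels)  # class-aware filter
    return pre_matched, candidate_mask
```

Because the class decision is made from CLIP similarities before localization, anchors only compete to localize objects of their pre-matched category, which is how the abstract motivates generalizable localization for unseen classes.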

URL

https://arxiv.org/abs/2303.13076

PDF

https://arxiv.org/pdf/2303.13076.pdf
