Paper Reading AI Learner

Video Annotator: A framework for efficiently building video classifiers using vision-language models and active learning

2024-02-09 17:19:05
Amir Ziai, Aneesh Vartakavi

Abstract

High-quality and consistent annotations are fundamental to the successful development of robust machine learning models. Traditional data annotation methods are resource-intensive and inefficient, often leading to a reliance on third-party annotators who are not domain experts. Hard samples, which are usually the most informative for model training, tend to be difficult to label accurately and consistently without business context. These can arise unpredictably during the annotation process, requiring a variable number of iterations and rounds of feedback, leading to unforeseen expenses and time commitments to guarantee quality. We posit that more direct involvement of domain experts, using a human-in-the-loop system, can resolve many of these practical challenges. We propose a novel framework we call Video Annotator (VA) for annotating, managing, and iterating on video classification datasets. Our approach offers a new paradigm for an end-user-centered model development process, enhancing the efficiency, usability, and effectiveness of video classifiers. Uniquely, VA allows for a continuous annotation process, seamlessly integrating data collection and model training. We leverage the zero-shot capabilities of vision-language foundation models combined with active learning techniques, and demonstrate that VA enables the efficient creation of high-quality models. VA achieves a median 6.8 point improvement in Average Precision relative to the most competitive baseline across a wide-ranging assortment of tasks. We release a dataset with 153k labels across 56 video understanding tasks annotated by three professional video editors using VA, and also release code to replicate our experiments at: this http URL.
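The loop the abstract describes (zero-shot bootstrapping with vision-language embeddings, followed by rounds of active learning on expert labels) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for real CLIP-style video and text embeddings, a tiny logistic-regression model stands in for the paper's classifier, and simulated ground truth stands in for the expert annotators.

```python
# Sketch of a VA-style loop: zero-shot scoring in a shared video-text
# embedding space, then iterative active learning on (simulated) labels.
# All embeddings and the ground truth here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Pretend embeddings: 500 video clips and one text prompt in a 64-d space.
clip_embs = rng.normal(size=(500, 64))
prompt_emb = rng.normal(size=(1, 64))

# Step 1: zero-shot bootstrap -- rank clips by similarity to the text
# prompt so annotators see likely positives and negatives first.
zero_shot = cosine(clip_embs, prompt_emb).ravel()
ranked = np.argsort(-zero_shot)

# Hidden ground truth for simulation only (in VA, experts supply labels).
true_labels = (zero_shot + rng.normal(scale=0.5, size=500) > 0.1).astype(float)

def fit_logreg(X, y, steps=500, lr=0.5):
    """Plain gradient-descent logistic regression (toy classifier)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Step 2: seed labels from the extremes of the zero-shot ranking.
labeled = list(ranked[:10]) + list(ranked[-10:])
for _ in range(5):  # active-learning rounds
    w, b = fit_logreg(clip_embs[labeled], true_labels[labeled])
    p = 1.0 / (1.0 + np.exp(-(clip_embs @ w + b)))
    # Step 3: uncertainty sampling -- query the clips the current model
    # is least sure about; experts would label these in the VA UI.
    unlabeled = np.setdiff1d(np.arange(500), labeled)
    query = unlabeled[np.argsort(np.abs(p[unlabeled] - 0.5))[:10]]
    labeled.extend(query)

print(f"{len(labeled)} clips labeled after 5 rounds")
```

The key design point mirrored from the abstract is that labeling and model training interleave continuously: each round, the model retrains on the expert-labeled pool and then nominates the next batch of hard, informative samples.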

Abstract (translated)

High-quality and consistent annotations are fundamental to successfully developing robust machine learning models. Traditional data annotation methods are resource-intensive and inefficient, often leading to a reliance on third-party annotators who are not domain experts. Hard samples, which are usually the most informative for model training, are difficult to label accurately and consistently without business context. Such challenges can arise unexpectedly during the annotation process, requiring repeated iteration and feedback, leading to unforeseen costs and time commitments to guarantee quality. We propose a novel framework called Video Annotator (VA) for annotating, managing, and iterating on video classification datasets. Our approach offers a new paradigm for a user-centered model development process, improving the efficiency, usability, and effectiveness of video classifiers. Notably, VA enables a continuous annotation process that seamlessly integrates data collection and model training. We combine the zero-shot capabilities of vision-language foundation models with active learning techniques, and demonstrate that VA enables the efficient creation of high-quality models. VA achieves a median 6.8-point improvement in Average Precision over the most competitive baseline across a wide range of tasks. We release a dataset with 153k labels across 56 video understanding tasks, annotated by three professional video editors using VA, and also release code to replicate our experiments at: this http URL.

URL

https://arxiv.org/abs/2402.06560

PDF

https://arxiv.org/pdf/2402.06560.pdf
