A Control-Centric Benchmark for Video Prediction

2023-04-26 17:59:45
Stephen Tian, Chelsea Finn, Jiajun Wu

Abstract

Video is a promising source of knowledge for embodied agents to learn models of the world's dynamics. Large deep networks have become increasingly effective at modeling complex video data in a self-supervised manner, as evaluated by metrics based on human perceptual similarity or pixel-wise comparison. However, it remains unclear whether current metrics are accurate indicators of performance on downstream tasks. We find empirically that for planning robotic manipulation, existing metrics can be unreliable at predicting execution success. To address this, we propose a benchmark for action-conditioned video prediction in the form of a control benchmark that evaluates a given model for simulated robotic manipulation through sampling-based planning. Our benchmark, Video Prediction for Visual Planning ($VP^2$), includes simulated environments with 11 task categories and 310 task instance definitions, a full planning implementation, and training datasets containing scripted interaction trajectories for each task category. A central design goal of our benchmark is to expose a simple interface -- a single forward prediction call -- so it is straightforward to evaluate almost any action-conditioned video prediction model. We then leverage our benchmark to study the effects of scaling model size, quantity of training data, and model ensembling by analyzing five highly performant video prediction models, finding that while scale can improve perceptual quality when modeling visually diverse settings, other attributes such as uncertainty awareness can also aid planning performance.
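
The abstract's central design point, a single forward prediction call as the model interface driven by a sampling-based planner, is easy to picture in code. Below is a minimal Python sketch of that pattern; the function names (`predict`, `random_shooting_plan`), array shapes, and the pixel-wise cost are illustrative assumptions, not the benchmark's actual API.

```python
import numpy as np

def predict(context_frames: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Hypothetical single forward-prediction call (assumed interface).

    context_frames: (N, T_ctx, H, W, C) conditioning frames
    actions:        (N, T_pred, A) candidate action sequences
    returns:        (N, T_pred, H, W, C) predicted future frames
    """
    raise NotImplementedError("wrap an action-conditioned video model here")

def random_shooting_plan(context_frames, goal_image, horizon=10,
                         n_samples=200, action_dim=4, rng=None):
    """Toy sampling-based planner: sample action sequences, roll them out
    through the model, and keep the sequence whose final predicted frame
    is closest to the goal image."""
    rng = np.random.default_rng() if rng is None else rng
    # Tile the context so every sampled action sequence shares it.
    batch = np.repeat(context_frames[None], n_samples, axis=0)
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    rollouts = predict(batch, actions)  # (N, T_pred, H, W, C)
    # Pixel-wise MSE to the goal; a real planning cost could instead be
    # a learned or perceptual distance.
    costs = np.mean((rollouts[:, -1] - goal_image) ** 2, axis=(1, 2, 3))
    return actions[np.argmin(costs)]
```

Because the planner only ever calls `predict`, any model mapping (frames, actions) to future frames can be dropped in, which is the property the benchmark's interface is designed to exploit.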

URL

https://arxiv.org/abs/2304.13723

PDF

https://arxiv.org/pdf/2304.13723.pdf

