Abstract
Specifying reward signals that allow agents to learn complex behaviors is a long-standing challenge in reinforcement learning. A promising approach is to extract preferences for behaviors from unlabeled videos, which are widely available on the internet. We present Video Prediction Rewards (VIPER), an algorithm that leverages pretrained video prediction models as action-free reward signals for reinforcement learning. Specifically, we first train an autoregressive transformer on expert videos and then use the video prediction likelihoods as reward signals for a reinforcement learning agent. VIPER enables expert-level control without programmatic task rewards across a wide range of DMC, Atari, and RLBench tasks. Moreover, generalization of the video prediction model allows us to derive rewards for an out-of-distribution environment where no expert data is available, enabling cross-embodiment generalization for tabletop manipulation. We see our work as a starting point for scalable reward specification from unlabeled videos that will benefit from the rapid advances in generative modeling. Source code and datasets are available on the project website: this https URL
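The core mechanism described above, using the log-likelihood that a pretrained video model assigns to the agent's observed frames as a per-step reward, can be sketched as follows. This is an illustrative toy under stated assumptions, not the paper's implementation: `toy_log_likelihood` is a hypothetical stand-in for a real autoregressive transformer's next-frame log-probability.

```python
import numpy as np

def toy_log_likelihood(context, next_frame):
    """Stand-in for an autoregressive video model's log p(x_t | x_<t).

    Scores a frame by a Gaussian log-density (up to a constant) around
    the mean of the context frames, mimicking a model that assigns
    higher likelihood to 'expert-like' continuations. In VIPER this
    would come from a pretrained transformer, not this heuristic.
    """
    prediction = np.mean(context, axis=0)
    return -0.5 * np.sum((next_frame - prediction) ** 2)

def viper_style_rewards(frames):
    """Reward for each transition: log-likelihood of the next frame
    given all previous frames under the (toy) video model."""
    return [
        toy_log_likelihood(frames[: t + 1], frames[t + 1])
        for t in range(len(frames) - 1)
    ]

# A smooth trajectory resembling the model's training data scores
# higher total reward than a noisy, out-of-distribution one.
rng = np.random.default_rng(0)
smooth = [np.full((4, 4), float(t)) for t in range(5)]
noisy = [rng.normal(size=(4, 4)) * 10.0 for _ in range(5)]
print(sum(viper_style_rewards(smooth)), sum(viper_style_rewards(noisy)))
```

The key design point this illustrates is that the reward requires no action labels or programmatic task specification: only observations are scored, so any environment rendering comparable frames can reuse the same learned reward.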
URL
https://arxiv.org/abs/2305.14343