Abstract
Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model. While previously limited to predicting missing parts of an input, we explore how to generalize the JEPA prediction task to a broader set of corruptions. We introduce Image World Models (IWM), an approach that goes beyond masked image modeling and learns to predict the effect of global photometric transformations in latent space. We study the recipe for learning performant IWMs and show that it relies on three key aspects: conditioning, prediction difficulty, and capacity. Additionally, we show that the predictive world model learned by IWM can be adapted through finetuning to solve diverse tasks; a finetuned IWM world model matches or surpasses the performance of previous self-supervised methods. Finally, we show that learning with an IWM allows one to control the abstraction level of the learned representations, learning invariant representations as in contrastive methods, or equivariant representations as in masked image modeling.
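To make the training setup concrete, below is a minimal sketch of an IWM-style training step, assuming a JEPA-like pipeline with a context encoder, an EMA target encoder, and a predictor conditioned on the parameters of the applied photometric transformation. The module names, the toy encoders, the transformation embedding, and the loss choice are illustrative assumptions for the sketch, not the authors' exact implementation.

# Minimal IWM-style sketch (assumed details, not the paper's exact code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class IWMSketch(nn.Module):
    def __init__(self, dim=256, transform_dim=8):
        super().__init__()
        # Context encoder processes the corrupted (transformed/masked) view.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        # Target encoder is an EMA copy of the context encoder (no gradients).
        self.target_encoder = copy.deepcopy(self.encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Predictor is conditioned on the transformation parameters
        # (e.g., color-jitter strengths): the "conditioning" aspect.
        self.predictor = nn.Sequential(
            nn.Linear(dim + transform_dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, corrupted_view, clean_view, transform_params):
        z_context = self.encoder(corrupted_view)
        with torch.no_grad():
            z_target = self.target_encoder(clean_view)
        # Predict the clean-view representation from the corrupted one,
        # given the parameters of the applied photometric transformation.
        z_pred = self.predictor(torch.cat([z_context, transform_params], dim=-1))
        return F.smooth_l1_loss(z_pred, z_target)

    @torch.no_grad()
    def update_target(self, momentum=0.996):
        # EMA update of the target encoder, as is typical in JEPA-style training.
        for p, p_t in zip(self.encoder.parameters(), self.target_encoder.parameters()):
            p_t.mul_(momentum).add_(p, alpha=1.0 - momentum)

# Usage with random tensors standing in for a batch of 32x32 RGB images.
model = IWMSketch()
x_corrupted = torch.randn(4, 3, 32, 32)
x_clean = torch.randn(4, 3, 32, 32)
t_params = torch.randn(4, 8)  # encoded transformation parameters (assumed format)
loss = model(x_corrupted, x_clean, t_params)
loss.backward()
model.update_target()

Dropping the conditioning on transform_params would push the predictor toward invariant representations, while keeping it encourages equivariance, which is the control over abstraction level described above.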
URL
https://arxiv.org/abs/2403.00504