Abstract
We approach the goal of applying meta-learning to self-supervised masked autoencoders (MAEs) for spatiotemporal learning in three steps. Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures. We therefore test spatiotemporal learning with a meta-learning architecture alone, a representation learning architecture alone, and an architecture that combines representation learning with meta-learning. We use the Memory-Augmented Neural Network (MANN) architecture to bring meta-learning into our framework. Specifically, we first experiment with a pre-trained MAE fine-tuned on our small-scale spatiotemporal dataset for video reconstruction tasks. Next, we experiment with training an MAE encoder and attaching a classification head for action classification tasks. Finally, we experiment with fine-tuning a pre-trained MAE with a MANN backbone for action classification tasks.
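As a rough illustration of the second experiment (a trained MAE encoder feeding a classification head), the following minimal sketch shows the shape of that pipeline. All module names, weight shapes, and the toy linear encoder are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(patches, W_enc):
    # Stand-in for a frozen pre-trained MAE encoder (assumption: a simple
    # linear projection with mean pooling over patch tokens).
    return np.tanh(patches @ W_enc).mean(axis=1)  # (batch, embed_dim)

def classification_head(features, W_head):
    # Lightweight linear head producing per-class logits for action labels.
    return features @ W_head  # (batch, num_classes)

batch, num_patches, patch_dim, embed_dim, num_classes = 4, 16, 32, 8, 5
W_enc = rng.normal(size=(patch_dim, embed_dim))      # "pre-trained", frozen
W_head = rng.normal(size=(embed_dim, num_classes))   # trainable head

patches = rng.normal(size=(batch, num_patches, patch_dim))
logits = classification_head(encoder(patches, W_enc), W_head)
print(logits.shape)  # (4, 5)
```

In the actual setup the encoder would be a transformer over spatiotemporal patch tokens; only the head (and optionally the encoder, when fine-tuning) would receive gradient updates.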
URL
https://arxiv.org/abs/2308.01916