Abstract
We present a novel approach for action recognition in UAV videos. Our formulation is designed to handle occlusion and viewpoint changes caused by UAV movement. We use the concept of mutual information to compute and align the regions corresponding to human action or motion in the temporal domain. This enables our recognition model to learn from the key features associated with the motion. We also propose a novel frame sampling method that uses joint mutual information to acquire the most informative frame sequence in UAV videos. We have integrated our approach with X3D and evaluated the performance on multiple datasets. In practice, we achieve an 18.9% improvement in Top-1 accuracy over current state-of-the-art methods on UAV-Human (Li et al., 2021), a 7.3% improvement on Drone-Action (Perera et al., 2019), and a 7.16% improvement on NEC Drones (Choi et al., 2020). We will release the code at the time of publication.
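The abstract does not specify how the joint-mutual-information frame sampler works, so the following is only an illustrative sketch of the general idea of mutual-information-based frame selection: each frame is scored by the mutual information between its intensity histogram and a reference (here, the clip's mean frame), and the highest-scoring frames are kept. All function names and the choice of reference are hypothetical, not the authors' method.

```python
# Illustrative sketch only; not the paper's actual sampler.
import numpy as np

def mutual_information(a, b, bins=16):
    """Estimate MI between two grayscale images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # skip zero cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def sample_informative_frames(frames, k):
    """Return indices (in temporal order) of the k highest-MI frames
    relative to the clip's mean frame (a hypothetical reference choice)."""
    reference = frames.mean(axis=0)
    scores = [mutual_information(f, reference) for f in frames]
    return sorted(np.argsort(scores)[-k:])

# Usage: pick 8 of 30 synthetic 32x32 "frames".
rng = np.random.default_rng(0)
frames = rng.random((30, 32, 32))
idx = sample_informative_frames(frames, k=8)
print(idx)
```

A real implementation would presumably score *joint* mutual information over frame subsets rather than per-frame scores against a fixed reference, but the greedy top-k version above keeps the sketch short.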
URL
https://arxiv.org/abs/2303.02575