Abstract
This work explores the performance of a large video understanding foundation model on the downstream task of human fall detection in untrimmed video, leveraging a pretrained vision transformer for multi-class action detection with the classes "Fall", "Lying", and "Other/Activities of daily living (ADL)". A method for temporal action localization that relies on a simple cut-up of untrimmed videos into short clips is demonstrated. The methodology includes a preprocessing pipeline that converts datasets with timestamped action annotations into labeled datasets of short action clips, together with simple and effective clip-sampling strategies. The proposed method is evaluated empirically on the publicly available High-Quality Fall Simulation Dataset (HQFSD). The experimental results validate the proposed pipeline: it is promising for real-time application, and falls are detected at the video level with a state-of-the-art F1 score of 0.96 on HQFSD under the given experimental settings. The source code will be made available on GitHub.
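The clip-labeling step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the clip length, the label names, the "ADL" default for unannotated time, and the (start, end, label) annotation format are all assumptions.

```python
# Hypothetical sketch: cut an untrimmed video timeline into fixed-length
# clips and label each clip from timestamped action annotations.
CLIP_LEN = 2.0  # assumed clip length in seconds

def label_clips(video_duration, annotations, clip_len=CLIP_LEN):
    """Split [0, video_duration) into clips and assign each a class label.

    annotations: list of (start_s, end_s, label) tuples, e.g. label in
    {"Fall", "Lying"}; clips whose midpoint falls outside every annotated
    interval default to "ADL" (assumed convention).
    """
    clips = []
    t = 0.0
    while t < video_duration:
        start, end = t, min(t + clip_len, video_duration)
        mid = (start + end) / 2.0
        label = "ADL"
        for a_start, a_end, a_label in annotations:
            if a_start <= mid < a_end:
                label = a_label
                break
        clips.append((start, end, label))
        t += clip_len
    return clips

# Example: a 6 s video with a fall annotated between seconds 2 and 4.
clips = label_clips(6.0, [(2.0, 4.0, "Fall")])
# → [(0.0, 2.0, "ADL"), (2.0, 4.0, "Fall"), (4.0, 6.0, "ADL")]
```

Labeling by the clip midpoint is one simple way to resolve clips that straddle an annotation boundary; an overlap-ratio threshold would be an equally plausible choice.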
URL
https://arxiv.org/abs/2401.16280