Abstract
For the temporal action localization task on the ActivityNet-1.3 dataset, we propose to locate the temporal boundaries of each action and predict its action class in untrimmed videos. We first apply VideoSwinTransformer as a feature extractor to extract different features. Then we apply a unified network following Faster-TAD to obtain proposals and semantic labels simultaneously. Finally, we ensemble the results of different temporal action detection models, which complement one another. Faster-TAD simplifies the pipeline of TAD and achieves remarkable performance, obtaining results comparable to those of multi-step approaches.
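The abstract does not specify how the outputs of the different detection models are combined. Below is a minimal sketch of one plausible fusion rule, assuming proposal-level ensembling by temporal IoU: same-class proposals from different models that overlap strongly are clustered, and their boundaries and scores are averaged. The function names, data layout, and the 0.7 tIoU threshold are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of proposal-level ensembling for temporal action
# detection. The fusion rule here (temporal-IoU clustering with
# score-weighted boundary averaging) is an assumption for illustration;
# the paper does not state its actual ensemble method.

def t_iou(a, b):
    """Temporal IoU between two segments given as (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def ensemble_proposals(model_outputs, iou_thr=0.7):
    """Fuse proposals from several models.

    model_outputs: list of per-model lists of
                   (start, end, score, label) tuples.
    Returns fused proposals sorted by score, highest first.
    """
    # Pool all proposals, then greedily cluster around the
    # highest-scoring remaining proposal.
    pool = sorted((p for out in model_outputs for p in out),
                  key=lambda p: p[2], reverse=True)
    fused = []
    while pool:
        seed = pool.pop(0)
        cluster, rest = [seed], []
        for p in pool:
            # Group same-class proposals that overlap the seed.
            if p[3] == seed[3] and t_iou(p[:2], seed[:2]) >= iou_thr:
                cluster.append(p)
            else:
                rest.append(p)
        pool = rest
        # Score-weighted average of boundaries, mean of scores.
        w = sum(p[2] for p in cluster)
        start = sum(p[0] * p[2] for p in cluster) / w
        end = sum(p[1] * p[2] for p in cluster) / w
        score = sum(p[2] for p in cluster) / len(cluster)
        fused.append((start, end, score, seed[3]))
    return sorted(fused, key=lambda p: p[2], reverse=True)

# Example: outputs of two models on one video (times in seconds).
model_a = [(3.0, 9.5, 0.90, "long_jump"), (20.0, 25.0, 0.40, "long_jump")]
model_b = [(3.2, 9.8, 0.85, "long_jump")]
print(ensemble_proposals([model_a, model_b]))
```

Score-weighted boundary averaging is one common choice in detection ensembles because it tends to be more stable than keeping only the single highest-scoring segment from a cluster.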
URL
https://arxiv.org/abs/2411.00883