Abstract
Video domain generalization aims to learn video classification models that generalize to unseen target domains by training on a source domain. A critical challenge in video domain generalization is avoiding heavy reliance on domain-specific cues extracted from the source domain when recognizing target videos. To this end, we propose to perceive diverse spatial-temporal cues in videos, aiming to discover potential domain-invariant cues in addition to domain-specific ones. We contribute a novel model named Spatial-Temporal Diversification Network (STDN), which improves cue diversity along both the spatial and temporal dimensions of video data. First, STDN discovers various types of spatial cues within individual frames via spatial grouping. Then, STDN explicitly models spatial-temporal dependencies between video contents at multiple space-time scales via spatial-temporal relation modeling. Extensive experiments on three benchmarks of different types demonstrate the effectiveness and versatility of our approach.
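The two components described above, spatial grouping within frames and multi-scale temporal relation modeling, can be illustrated with a minimal NumPy sketch. All operators here (random prototypes standing in for learned group embeddings, window-difference features standing in for relation modeling) are hypothetical simplifications for intuition, not the paper's actual layers:

```python
import numpy as np

def spatial_grouping(frame_feats, num_groups, seed=0):
    # frame_feats: (P, D) patch features of one frame.
    # Soft-assign patches to group prototypes (random here; learned in
    # practice) and pool per group to get one cue vector per group.
    rng = np.random.default_rng(seed)
    prototypes = rng.standard_normal((num_groups, frame_feats.shape[1]))
    logits = frame_feats @ prototypes.T                 # (P, G) similarities
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over groups
    return weights.T @ frame_feats / weights.sum(axis=0)[:, None]  # (G, D)

def multiscale_temporal_relations(group_feats, scales=(2, 4)):
    # group_feats: (T, G, D) grouped cues over T frames.
    # For each temporal scale s, average over windows of s frames and take
    # differences between consecutive windows as a toy relation feature.
    T, G, D = group_feats.shape
    relations = []
    for s in scales:
        n = T // s
        windows = group_feats[: n * s].reshape(n, s, G, D).mean(axis=1)
        relations.append(windows[1:] - windows[:-1])    # (n - 1, G, D)
    return relations

# Toy video: 8 frames, 16 patches per frame, 32-dim patch features.
video = np.random.default_rng(1).standard_normal((8, 16, 32))
grouped = np.stack([spatial_grouping(f, num_groups=4) for f in video])
rels = multiscale_temporal_relations(grouped, scales=(2, 4))
print(grouped.shape, [r.shape for r in rels])
# → (8, 4, 32) [(3, 4, 32), (1, 4, 32)]
```

The sketch shows only the data flow: per-frame features are diversified into several grouped cues, and cross-time relations are extracted at more than one temporal scale.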
URL
https://arxiv.org/abs/2310.17942