Abstract
Sports analytics benefits from recent advances in machine learning, providing a competitive advantage for teams or individuals. One important task in this context is measuring the performance of individual players to provide reports and logs for subsequent analysis. In sports events such as basketball, this involves re-identifying players during a match, either across multiple camera viewpoints or from a single camera viewpoint at different times. In this work, we investigate whether the outstanding zero-shot performance of pre-trained CLIP models can be transferred to the domain of player re-identification. For this purpose, we reformulate CLIP's contrastive language-to-image pre-training as a contrastive image-to-image training approach using the InfoNCE loss as the training objective. Unlike previous work, our approach is entirely class-agnostic and benefits from large-scale pre-training. With a fine-tuned CLIP ViT-L/14 model, we achieve 98.44% mAP on the MMSports 2022 Player Re-Identification challenge. Furthermore, we show that CLIP Vision Transformers already possess strong OCR capabilities, identifying useful player features such as shirt numbers in a zero-shot manner without any fine-tuning on the dataset. By applying the Score-CAM algorithm, we visualise the most important image regions that our fine-tuned model attends to when computing the similarity score between two images of a player.
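The contrastive image-to-image objective named in the abstract can be sketched as follows. This is a minimal NumPy illustration of a symmetric InfoNCE loss over paired player crops, not the authors' actual implementation; the batch construction, the symmetric two-direction averaging, and the temperature value are assumptions for illustration.

```python
import numpy as np

def info_nce_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE over two batches of image embeddings.

    emb_a[i] and emb_b[i] are embeddings of two crops of the same player
    (a positive pair); all other pairings within the batch act as negatives.
    """
    # L2-normalise so the dot product equals cosine similarity
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature      # (N, N) similarity matrix
    idx = np.arange(len(a))             # positives lie on the diagonal

    def cross_entropy(l):
        # numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average the a→b and b→a directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

When the paired embeddings agree, the diagonal dominates each softmax row and the loss is small; mismatched pairs drive it up, which is what pushes crops of the same player together in embedding space.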
URL
https://arxiv.org/abs/2303.11855