MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark

2024-03-29 15:08:37
Sanghyun Woo, Kwanyong Park, Inkyu Shin, Myungchul Kim, In So Kweon

Abstract

Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras. This task has practical applications in various fields, such as visual surveillance, crowd behavior analysis, and anomaly detection. However, due to the difficulty and cost of collecting and labeling data, existing datasets for this task are either synthetically generated or artificially constructed within a controlled camera network setting, which limits their ability to model real-world dynamics and generalize to diverse camera configurations. To address this issue, we present MTMMC, a real-world, large-scale dataset that includes long video sequences captured by 16 multi-modal cameras in two environments (campus and factory) across varied time, weather, and season conditions. This dataset provides a challenging testbed for studying multi-camera tracking under diverse real-world complexities and includes an additional input modality of spatially aligned and temporally synchronized RGB and thermal cameras, which enhances the accuracy of multi-camera tracking. MTMMC is a superset of existing datasets, benefiting independent fields such as person detection, re-identification, and multiple object tracking. We provide baselines and new learning setups on this dataset and set reference scores for future studies. The datasets, models, and test server will be made publicly available.
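As an illustration of the task the abstract describes, multi-target multi-camera tracking is commonly reduced to associating per-camera tracks by appearance similarity. The following minimal NumPy sketch (a hypothetical illustration, not MTMMC's actual baseline) greedily pairs tracks from two cameras using cosine similarity of re-identification embeddings:

```python
import numpy as np

def associate_tracks(emb_a, emb_b, threshold=0.7):
    """Greedily associate tracks across two cameras.

    emb_a: (N, D) array of appearance embeddings from camera A
    emb_b: (M, D) array of appearance embeddings from camera B
    Returns a list of (i, j) index pairs whose cosine similarity
    exceeds `threshold`, with each track matched at most once.
    """
    # L2-normalize so the dot product equals cosine similarity
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T  # (N, M) pairwise cosine similarities

    matches, used_a, used_b = [], set(), set()
    # visit candidate pairs from most to least similar
    order = np.argsort(-sim, axis=None)
    for i, j in zip(*np.unravel_index(order, sim.shape)):
        if sim[i, j] < threshold:
            break  # remaining pairs are all below threshold
        if i in used_a or j in used_b:
            continue  # each track participates in at most one match
        matches.append((int(i), int(j)))
        used_a.add(i)
        used_b.add(j)
    return matches
```

In practice, greedy matching is often replaced by optimal assignment (e.g. the Hungarian algorithm), and multi-modal datasets like MTMMC allow the embeddings themselves to fuse RGB and thermal cues.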


URL

https://arxiv.org/abs/2403.20225

PDF

https://arxiv.org/pdf/2403.20225.pdf

