Paper Reading AI Learner

4D Panoptic Scene Graph Generation

2024-05-16 17:56:55
Jingkang Yang, Jun Cen, Wenxuan Peng, Shuai Liu, Fangzhou Hong, Xiangtai Li, Kaiyang Zhou, Qifeng Chen, Ziwei Liu

Abstract

We are living in a three-dimensional space while moving forward through a fourth dimension: time. To allow artificial intelligence to develop a comprehensive understanding of such a 4D environment, we introduce 4D Panoptic Scene Graph (PSG-4D), a new representation that bridges the raw visual data perceived in a dynamic 4D world and high-level visual understanding. Specifically, PSG-4D abstracts rich 4D sensory data into nodes, which represent entities with precise location and status information, and edges, which capture the temporal relations. To facilitate research in this new area, we build a richly annotated PSG-4D dataset consisting of 3K RGB-D videos with a total of 1M frames, each of which is labeled with 4D panoptic segmentation masks as well as fine-grained, dynamic scene graphs. To solve PSG-4D, we propose PSG4DFormer, a Transformer-based model that can predict panoptic segmentation masks, track masks along the time axis, and generate the corresponding scene graphs via a relation component. Extensive experiments on the new dataset show that our method can serve as a strong baseline for future research on PSG-4D. Finally, we provide a real-world application example to demonstrate how we can achieve dynamic scene understanding by integrating a large language model into our PSG-4D system.
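To make the representation concrete, the sketch below models a PSG-4D-style scene graph in Python: nodes are tracked entities, and edges are predicates grounded to a span of frames. This is a minimal illustration under assumed names (`Node`, `Relation`, `PSG4DGraph`, `relations_at`) — it is not the paper's actual data format or code.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An entity tracked through the 4D (3D space + time) scene.

    In the full PSG-4D setting a node also carries per-frame panoptic
    segmentation masks and 3D locations; here we keep only identity.
    """
    node_id: int
    category: str            # e.g. "person", "cup"

@dataclass
class Relation:
    """A temporally grounded edge between two nodes."""
    subject_id: int
    object_id: int
    predicate: str           # e.g. "holding"
    frame_span: tuple        # (start_frame, end_frame), inclusive

@dataclass
class PSG4DGraph:
    nodes: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_relation(self, rel):
        self.relations.append(rel)

    def relations_at(self, frame):
        """Scene-graph edges active at a given frame index."""
        return [r for r in self.relations
                if r.frame_span[0] <= frame <= r.frame_span[1]]

# Example: a person holds a cup from frame 10 to frame 50.
g = PSG4DGraph()
g.add_node(Node(0, "person"))
g.add_node(Node(1, "cup"))
g.add_relation(Relation(0, 1, "holding", (10, 50)))
print([r.predicate for r in g.relations_at(30)])   # -> ['holding']
print(g.relations_at(5))                           # -> []
```

Querying the graph at a single frame recovers an ordinary (3D) panoptic scene graph; the time-indexed spans are what make the representation 4D.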


URL

https://arxiv.org/abs/2405.10305

PDF

https://arxiv.org/pdf/2405.10305.pdf
