Paper Reading AI Learner

Extended Object Tracking and Classification based on Linear Splines

2024-10-31 17:46:54
Matteo Tesori, Giorgio Battistelli, Luigi Chisci

Abstract

This paper introduces a framework based on linear splines for 2-dimensional extended object tracking and classification. Unlike state-of-the-art models, linear splines can represent extended objects whose contour is an arbitrarily complex curve. An exact likelihood is derived for the case in which noisy measurements can be scattered from any point on the contour of the extended object, while an approximate Monte Carlo likelihood is provided for the case in which scattering points can lie anywhere on the object surface, i.e., either inside or on the contour. Exploiting this likelihood to measure how well the observed data fit a given shape, a suitable estimator is developed. The proposed estimator models the extended object in terms of a kinematic state, which provides the object's position and orientation, and a shape vector, which characterizes the object's contour and surface. The kinematic state is estimated via a nonlinear Kalman filter, while the shape vector is estimated via a Bayesian classifier, so that classification is implicitly solved during shape estimation. Numerical experiments are provided to assess the effectiveness of the proposed estimator in comparison with state-of-the-art extended object estimators.
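To make the measurement model concrete, below is a minimal Python sketch of the surface-scattering case described in the abstract: the object contour is a closed linear spline (a polygon of knots), the kinematic state supplies position and heading, and the likelihood of a set of measurements is approximated by Monte Carlo, averaging a Gaussian noise density over scattering points drawn uniformly on the object surface. This is a sketch under stated assumptions, not the paper's implementation: the function names (transform_contour, mc_surface_likelihood, etc.), the Gaussian noise model, and the uniform-scattering assumption are illustrative choices.

    # Minimal sketch of a linear-spline (polygonal) contour and a Monte Carlo
    # surface-scattering likelihood. Illustrative only; not the paper's code.
    import numpy as np

    def transform_contour(knots, position, heading):
        """Rotate and translate the spline knots from object to world frame."""
        c, s = np.cos(heading), np.sin(heading)
        R = np.array([[c, -s], [s, c]])
        return knots @ R.T + position

    def point_in_polygon(p, knots):
        """Even-odd ray-casting test for a closed polygonal contour."""
        x, y = p
        inside = False
        n = len(knots)
        for i in range(n):
            x1, y1 = knots[i]
            x2, y2 = knots[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def sample_surface_points(knots, n_samples, rng):
        """Draw points uniformly over the object surface by rejection sampling
        in the contour's bounding box (assumed scattering model)."""
        lo, hi = knots.min(axis=0), knots.max(axis=0)
        samples = []
        while len(samples) < n_samples:
            p = rng.uniform(lo, hi)
            if point_in_polygon(p, knots):
                samples.append(p)
        return np.array(samples)

    def mc_surface_likelihood(measurements, knots, noise_cov, n_samples=500, rng=None):
        """Monte Carlo approximation of p(measurements | shape): for each
        measurement, average the 2-D Gaussian noise density over scattering
        points sampled uniformly on the object surface, then take the product
        over measurements (conditional independence assumed)."""
        rng = rng or np.random.default_rng(0)
        pts = sample_surface_points(knots, n_samples, rng)
        inv = np.linalg.inv(noise_cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(noise_cov)))
        lik = 1.0
        for z in measurements:
            d = pts - z                                   # residuals to each sampled point
            q = np.einsum('ij,jk,ik->i', d, inv, d)       # Mahalanobis quadratic forms
            lik *= norm * np.mean(np.exp(-0.5 * q))
        return lik

    # Example: a 4 x 2 rectangular object at position (10, 5), heading 30 degrees.
    rect = np.array([[-2.0, -1.0], [2.0, -1.0], [2.0, 1.0], [-2.0, 1.0]])
    contour = transform_contour(rect, np.array([10.0, 5.0]), np.deg2rad(30.0))
    z = contour.mean(axis=0) + np.array([[0.1, -0.2], [0.3, 0.0]])  # two noisy returns
    print(mc_surface_likelihood(z, contour, noise_cov=0.25 * np.eye(2)))

In the estimator outlined in the abstract, a likelihood of this kind would be evaluated for each candidate shape vector and combined with a nonlinear Kalman update of the kinematic state and a Bayesian classification step over the shape hypotheses; the sketch above only illustrates the Monte Carlo surface-likelihood idea.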

URL

https://arxiv.org/abs/2410.24183

PDF

https://arxiv.org/pdf/2410.24183.pdf

