Paper Reading AI Learner

On Input Formats for Radar Micro-Doppler Signature Processing by Convolutional Neural Networks

2024-04-12 07:30:08
Mikolaj Czerkawski, Carmine Clemente, Craig Michie, Christos Tachtatzis

Abstract

Convolutional neural networks have often been proposed for processing radar Micro-Doppler signatures, most commonly with the goal of classifying the signals. The majority of works tend to disregard the phase information of the complex time-frequency representation. Here, the utility of the phase information, as well as the optimal format of the Doppler-time input for a convolutional neural network, is analysed. It is found that the performance achieved by convolutional neural network classifiers is heavily influenced by the type of input representation, even across formats carrying equivalent information. Furthermore, it is demonstrated that the phase component of the Doppler-time representation contains rich information useful for classification, and that unwrapping the phase in the temporal dimension can improve the results compared to a magnitude-only solution, raising accuracy from 0.920 to 0.938 on the tested human activity dataset. A further improvement to 0.947 is achieved by training a linear classifier on embeddings from multiple formats.
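
The input formats compared in the abstract can be sketched in code. The snippet below is a minimal, illustrative reconstruction rather than the authors' implementation: it assumes a synthetic complex slow-time radar return, uses SciPy's STFT to form the Doppler-time map, and the sampling rate, window length and overlap are placeholder values. Its purpose is only to show how magnitude, wrapped phase, temporally unwrapped phase, and real/imaginary channels are all derived from the same complex representation and could be stacked as CNN input tensors.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical complex (I/Q) slow-time radar return for one measurement.
rng = np.random.default_rng(0)
iq = rng.standard_normal(2048) + 1j * rng.standard_normal(2048)

# Complex Doppler-time representation via the short-time Fourier transform.
# fs, nperseg and noverlap are placeholder parameters, not values from the paper.
_, _, Z = stft(iq, fs=1000, nperseg=128, noverlap=96, return_onesided=False)
Z = np.fft.fftshift(Z, axes=0)  # centre zero Doppler on the frequency axis

# Candidate input formats carrying (near-)equivalent information:
magnitude = 20 * np.log10(np.abs(Z) + 1e-12)   # log-magnitude spectrogram
phase = np.angle(Z)                            # wrapped phase in (-pi, pi]
phase_unwrapped = np.unwrap(phase, axis=-1)    # phase unwrapped along time
real_imag = np.stack([Z.real, Z.imag])         # two-channel real/imaginary

# Example: a two-channel magnitude + unwrapped-phase tensor for a CNN.
x = np.stack([magnitude, phase_unwrapped]).astype(np.float32)
```

Which of these tensors, or which stack of them, is fed to the classifier is exactly the design choice the paper evaluates; the reported gain from 0.920 to 0.938 comes from exploiting the temporally unwrapped phase rather than the magnitude alone.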

URL

https://arxiv.org/abs/2404.08291

PDF

https://arxiv.org/pdf/2404.08291.pdf

