Paper Reading AI Learner

Physics-Guided Dual Implicit Neural Representations for Source Separation

2025-07-07 17:56:31
Yuan Ni, Zhantao Chen, Alexander N. Petsch, Edmund Xu, Cheng Peng, Alexander I. Kolesnikov, Sugata Chowdhury, Arun Bansil, Jana B. Thayer, Joshua J. Turner

Abstract

Efficient data analysis poses significant challenges for most advanced experimental and observational techniques because the collected signals often include unwanted contributions, such as background and signal distortions, that can obscure the physically relevant information of interest. To address this, we have developed a self-supervised machine-learning approach for source separation using a dual implicit neural representation framework that jointly trains two neural networks: one for approximating distortions of the physical signal of interest and the other for learning the effective background contribution. Our method learns directly from the raw data by minimizing a reconstruction-based loss function without requiring labeled data or pre-defined dictionaries. We demonstrate the effectiveness of our framework on a challenging case study involving large-scale simulated and experimental momentum-energy-dependent inelastic neutron scattering data in a four-dimensional parameter space, characterized by heterogeneous background contributions and unknown distortions to the target signal. The method successfully separates physically meaningful signals from a complex or structured background even when the signal characteristics vary across all four dimensions of the parameter space. An analytical approach that informs the choice of the regularization parameter is presented. Our method offers a versatile framework for addressing source separation problems across diverse domains, ranging from superimposed signals in astronomical measurements to structural features in biomedical image reconstructions.
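The dual-representation idea described in the abstract, jointly fitting one model to the physical signal and one to the background by minimizing a reconstruction-based loss, can be illustrated with a deliberately simplified sketch. This is not the paper's implementation: the two implicit neural networks are replaced by small random-Fourier-feature linear models, the data are a synthetic 1D toy measurement, and the L2 regularizer with weight `lam`, the frequency bands, and all other numerical choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "measurement": a sharp physical peak on a smooth background.
x = np.linspace(0.0, 1.0, 200)
true_signal = np.exp(-((x - 0.5) ** 2) / (2 * 0.05**2))  # narrow Gaussian peak
true_background = 0.5 + 0.3 * x                          # slowly varying trend
data = true_signal + true_background + 0.01 * rng.standard_normal(x.size)

def fourier_features(coords, freqs):
    """Map a 1D coordinate grid to sin/cos features (a stand-in for an INR)."""
    ang = np.outer(coords, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

# Two "networks": a flexible branch for the signal, and a branch restricted
# (by its low frequencies plus a constant) to smooth background shapes.
Fs = fourier_features(x, rng.uniform(2.0, 100.0, size=96))
Fb = np.concatenate(
    [fourier_features(x, rng.uniform(0.1, 2.0, size=4)), np.ones((x.size, 1))],
    axis=1,
)

ws = np.zeros(Fs.shape[1])  # signal-branch weights
wb = np.zeros(Fb.shape[1])  # background-branch weights
lam = 1e-3                  # regularization weight (illustrative; the paper
                            # presents an analytical way to inform this choice)
lr = 0.05

# Joint gradient descent on the reconstruction-based loss
#   L = mean((data - Fs @ ws - Fb @ wb)**2) + lam * ||ws||**2
for _ in range(10_000):
    resid = Fs @ ws + Fb @ wb - data
    ws -= lr * (2.0 * Fs.T @ resid / x.size + 2.0 * lam * ws)
    wb -= lr * (2.0 * Fb.T @ resid / x.size)

signal_hat = Fs @ ws          # separated "physical" component
background_hat = Fb @ wb      # separated background component
mse = float(np.mean((signal_hat + background_hat - data) ** 2))
```

Because only the signal branch is penalized while the background branch is architecturally restricted to smooth shapes, the smooth trend tends to be absorbed by `background_hat` and the peak by `signal_hat`. In the paper this separation comes from the physics-guided dual networks and regularization rather than a hand-picked frequency split.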

URL

https://arxiv.org/abs/2507.05249

PDF

https://arxiv.org/pdf/2507.05249.pdf
