Abstract
Recent advancements have showcased the potential of handheld millimeter-wave (mmWave) imaging, which applies synthetic aperture radar (SAR) principles in portable settings. However, existing studies addressing handheld motion errors either rely on costly tracking devices or employ simplified imaging models, leading to impractical deployment or limited performance. In this paper, we present IFNet, a novel deep unfolding network that combines the strengths of signal processing models and deep neural networks to achieve robust imaging and focusing for handheld mmWave systems. We first formulate the handheld imaging model by integrating multiple priors about mmWave images and handheld phase errors. We then transform the optimization process into an iterative network structure for efficient, high-quality imaging. Extensive experiments demonstrate that IFNet effectively compensates for handheld phase errors and recovers high-fidelity images from severely distorted signals. Compared with existing methods, IFNet achieves at least an 11.89 dB improvement in average peak signal-to-noise ratio (PSNR) and a 64.91% improvement in average structural similarity index measure (SSIM) on a real-world dataset.
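The core idea of deep unfolding mentioned above is to unroll the iterations of a model-based optimization algorithm into network layers whose parameters can be learned. The sketch below illustrates this with a generic unrolled ISTA (iterative shrinkage-thresholding) loop for a sparsity prior; it is a minimal illustration of the unfolding principle, not IFNet's actual imaging model, and all function names and parameters here are hypothetical.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator for an L1 (sparsity) prior: shrinks values toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, A, num_layers=10, step=None, theta=0.05):
    """Unrolled ISTA: each 'layer' is one gradient step on ||y - Ax||^2
    followed by shrinkage. In a deep unfolding network, `step` and `theta`
    would be learned per layer from training data instead of fixed."""
    if step is None:
        # 1/L, where L is the Lipschitz constant of the data-fit gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * theta)
    return x
```

Fixing the layer count and learning the per-layer parameters is what lets unfolded networks reach good reconstructions in far fewer iterations than the hand-tuned optimizer they are derived from.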
URL
https://arxiv.org/abs/2405.02023