Abstract
We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps. Our method leverages ray-tracing-based neural rendering for novel-view US synthesis. Recent publications have demonstrated that INR models can encode a representation of a three-dimensional scene from a set of two-dimensional US frames. However, these models fail to consider the view-dependent changes in appearance and geometry intrinsic to US imaging. In our work, we discuss direction-dependent changes in the scene and show that physics-inspired rendering improves the fidelity of US image synthesis. In particular, we demonstrate experimentally that our method generates geometrically accurate B-mode images for regions whose representation is ambiguous owing to view-dependent differences between US images. We conduct our experiments using simulated B-mode US sweeps of the liver and real US sweeps of a spine phantom tracked with a robotic arm. The experiments corroborate that our method generates US frames that enable consistent volume compounding from previously unseen views. To the best of our knowledge, the presented work is the first to address view-dependent US image synthesis using INR.
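To make the physics-inspired rendering idea concrete, below is a minimal sketch of how an implicit tissue field can be combined with ray marching to produce view-dependent US scanlines. Everything here is an illustrative assumption, not the paper's actual model: the `TissueINR` and `render_scanline` names, the two-parameter (attenuation, reflectivity) output, the layer sizes, and the Beer-Lambert-style round-trip attenuation are all placeholders chosen to show the general mechanism.

```python
# Minimal sketch (PyTorch) of a view-dependent INR renderer for ultrasound.
# An MLP maps a 3D point to hypothetical tissue parameters (attenuation,
# reflectivity); a ray marcher composites echo amplitudes along a scanline.
# All names and the rendering equation are illustrative assumptions.
import torch
import torch.nn as nn


class TissueINR(nn.Module):
    """Implicit field: 3D position -> (attenuation, reflectivity)."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [attenuation, reflectivity]
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps both tissue parameters in (0, 1).
        return torch.sigmoid(self.mlp(xyz))


def render_scanline(model: TissueINR,
                    origin: torch.Tensor,     # (3,) transducer element position
                    direction: torch.Tensor,  # (3,) unit beam direction
                    n_samples: int = 256,
                    depth: float = 0.1) -> torch.Tensor:
    """Ray-march one US scanline: the echo at each depth is the local
    reflectivity scaled by the transmission through all shallower samples."""
    t = torch.linspace(0.0, depth, n_samples)
    pts = origin + t[:, None] * direction  # (n_samples, 3) sample points
    params = model(pts)
    atten, refl = params[:, 0], params[:, 1]
    dt = depth / n_samples
    # Accumulated one-way transmission up to each sample (Beer-Lambert style).
    transmission = torch.exp(-torch.cumsum(atten * dt, dim=0))
    # The pulse traverses the attenuating tissue twice (out and back).
    return refl * transmission ** 2  # (n_samples,) echo amplitudes


# Usage: render one scanline pointing straight down from the probe origin.
model = TissueINR()
echo = render_scanline(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(echo.shape)  # torch.Size([256])
```

Note that even though this toy field is itself direction-agnostic, the rendered echo depends on the beam direction because attenuation accumulates along the ray; this is one simple way the view dependence motivated in the abstract can emerge from a physics-based renderer.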
URL
https://arxiv.org/abs/2301.10520