Abstract
Neuron reconstruction, one of the fundamental tasks in neuroscience, rebuilds neuronal morphology from 3D light-microscopy imaging data. It plays a critical role in analyzing the structure-function relationship of neurons in the nervous system. However, due to the scarcity of neuron datasets and high-quality SWC annotations, developing robust segmentation methods for single-neuron reconstruction remains challenging. To address this limitation, we aim to distill consensus knowledge from massive natural-image data to help the segmentation model learn complex neuron structures. Specifically, in this work, we propose a novel training paradigm that leverages a 2D Vision Transformer pre-trained on large-scale natural images to initialize our Transformer-based 3D neuron segmentation model via a tailored 2D-to-3D weight-transfer strategy. Our method builds a knowledge-sharing connection between the abundant natural-image domain and the scarce neuron-image domain to improve 3D neuron segmentation in a data-efficient manner. Evaluated on the popular BigNeuron benchmark, our method improves neuron segmentation performance by 8.71% over a model trained from scratch on the same amount of training data.
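The abstract does not spell out the tailored 2D-to-3D weight-transfer strategy. As a rough illustration only (not the paper's method), a common baseline for this kind of transfer is kernel inflation in the style of I3D: replicate a pre-trained 2D kernel along the new depth axis and rescale it so that a depth-constant 3D input produces the same activations as the 2D model. The function name and shapes below are illustrative assumptions.

```python
import numpy as np

def inflate_2d_to_3d(w2d: np.ndarray, depth: int) -> np.ndarray:
    """Inflate a 2D conv/patch-embedding kernel of shape
    (C_out, C_in, kH, kW) into a 3D kernel of shape
    (C_out, C_in, depth, kH, kW) by replicating along the new depth
    axis and dividing by depth, preserving activation scale on
    depth-constant inputs (I3D-style inflation; an assumed baseline,
    not the paper's tailored strategy)."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

# Example: inflate a ViT 16x16 patch-embedding kernel into a 16x16x16 3D patch.
w2d = np.random.randn(768, 1, 16, 16).astype(np.float32)
w3d = inflate_2d_to_3d(w2d, depth=16)
assert w3d.shape == (768, 1, 16, 16, 16)
# Sanity check: summing the inflated kernel over depth recovers the 2D kernel.
assert np.allclose(w3d.sum(axis=2), w2d, atol=1e-5)
```

Under this scheme, only spatially-shaped weights (patch embeddings, positional embeddings) need reshaping; attention and MLP weights are shape-compatible and can be copied directly.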
URL
https://arxiv.org/abs/2405.02686