Abstract
Decoding visual information from human brain activity has seen remarkable advances in recent research. However, because cortical parcellation and cognition patterns vary significantly across subjects, current approaches personalize a deep model for each subject, which constrains the practicality of this technology in real-world contexts. To tackle these challenges, we introduce Wills Aligner, a robust multi-subject brain representation learner. Wills Aligner first aligns different subjects' brains at the anatomical level. It then incorporates a mixture of brain experts to learn individual cognition patterns. Additionally, it decouples the multi-subject learning task into a two-stage training process, driving the deep model and its plugin network to learn inter-subject commonality knowledge and subject-specific cognition patterns, respectively. Wills Aligner enables us to overcome anatomical differences and to efficiently leverage a single model for multi-subject brain representation learning. We evaluate our approach on both coarse-grained and fine-grained visual decoding tasks, and the experimental results demonstrate that Wills Aligner achieves state-of-the-art performance.
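The abstract describes a shared backbone that captures inter-subject commonality plus a per-subject plugin (the "mixture of brain experts") trained in a separate second stage. Below is a minimal PyTorch sketch of that idea for orientation only; the class names (`WillsAlignerSketch`, `MixtureOfBrainExperts`), routing experts by subject ID, the layer sizes, and the freeze-the-backbone schedule are all assumptions for illustration, not the paper's actual implementation, and the input voxels are assumed to be already anatomically aligned to a common cortical space.

```python
import torch
import torch.nn as nn

class MixtureOfBrainExperts(nn.Module):
    """Plugin network: one lightweight expert per subject maps shared
    features to subject-specific cognition patterns (hypothetical sketch;
    the paper's gating/routing scheme may differ)."""
    def __init__(self, num_subjects: int, dim: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_subjects)]
        )

    def forward(self, x: torch.Tensor, subject_id: int) -> torch.Tensor:
        # Route by subject ID: each subject gets its own expert.
        return self.experts[subject_id](x)

class WillsAlignerSketch(nn.Module):
    """Shared backbone learns inter-subject commonality knowledge; the
    expert plugin captures individual cognition patterns."""
    def __init__(self, num_subjects: int, in_dim: int, dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, dim), nn.GELU())
        self.experts = MixtureOfBrainExperts(num_subjects, dim)

    def forward(self, voxels: torch.Tensor, subject_id: int) -> torch.Tensor:
        shared = self.backbone(voxels)           # commonality knowledge
        return self.experts(shared, subject_id)  # subject-specific patterns

# Two-stage training as outlined in the abstract (schedule details assumed):
#   Stage 1: train the shared backbone across all subjects' data.
#   Stage 2: freeze the backbone and fit only the per-subject experts.
model = WillsAlignerSketch(num_subjects=4, in_dim=4096, dim=512)
for p in model.backbone.parameters():
    p.requires_grad = False  # stage 2: adapt the expert plugin only
```

Decoupling the two stages this way lets a single backbone serve all subjects while each expert stays small, which is the efficiency argument the abstract makes for using one model across subjects.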
URL
https://arxiv.org/abs/2404.13282