Abstract
Reconstructions of visual perception from brain activity have improved tremendously, but the practical utility of such methods has been limited. This is because such models are trained independently per subject, with each subject requiring dozens of hours of expensive fMRI training data to attain high-quality results. The present work showcases high-quality reconstructions using only 1 hour of fMRI training data. We pretrain our model across 7 subjects and then fine-tune on minimal data from a new subject. Our novel functional alignment procedure linearly maps all brain data to a shared-subject latent space, followed by a shared non-linear mapping to CLIP image space. We then map from CLIP space to pixel space by fine-tuning Stable Diffusion XL to accept CLIP latents as inputs instead of text. This approach improves out-of-subject generalization with limited training data and attains state-of-the-art image retrieval and reconstruction metrics compared to single-subject approaches. MindEye2 demonstrates that accurate reconstructions of perception are possible from a single visit to the MRI facility. All code is available on GitHub.
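
As a rough illustration of the functional alignment described above, the sketch below shows one subject-specific linear layer feeding a non-linear head that is shared across subjects, so that adapting to a new subject only requires fitting a new linear map. All names and dimensions here (SharedSubjectAligner, shared_dim, clip_dim, the voxel counts) are hypothetical placeholders for illustration, not the paper's actual values; the authors' implementation is the code linked on GitHub.

import torch
import torch.nn as nn

# Minimal sketch of shared-subject functional alignment (assumed, not the
# authors' exact architecture): per-subject linear maps into a shared latent
# space, followed by a non-linear mapping shared by all subjects into CLIP
# image space. Dimensions and voxel counts below are placeholders.

class SharedSubjectAligner(nn.Module):
    def __init__(self, voxel_counts: dict, shared_dim: int = 4096,
                 clip_dim: int = 1664):
        super().__init__()
        # One linear layer per subject: flattened voxels -> shared latent space.
        self.subject_linears = nn.ModuleDict({
            sid: nn.Linear(n_voxels, shared_dim)
            for sid, n_voxels in voxel_counts.items()
        })
        # Non-linear head shared across subjects: shared latent -> CLIP space.
        self.shared_mlp = nn.Sequential(
            nn.Linear(shared_dim, shared_dim),
            nn.GELU(),
            nn.Linear(shared_dim, clip_dim),
        )

    def forward(self, voxels: torch.Tensor, subject_id: str) -> torch.Tensor:
        shared = self.subject_linears[subject_id](voxels)  # functional alignment
        return self.shared_mlp(shared)                     # predicted CLIP embedding

# Fine-tuning on a new subject fits only that subject's linear layer; the
# shared head (and the downstream diffusion decoder) stays pretrained.
model = SharedSubjectAligner({"subj01": 15724, "subj_new": 14278})
pred = model(torch.randn(2, 14278), "subj_new")  # -> shape (2, clip_dim)
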
URL
https://arxiv.org/abs/2403.11207