Abstract
Insect vision supports complex behaviors including associative learning, navigation, and object detection, and has long motivated computational models for understanding biological visual processing. However, many contemporary models prioritize task performance while neglecting biologically grounded processing pathways. Here, we introduce a bio-inspired vision model that captures principles of the insect visual system to transform dense visual input into sparse, discriminative codes. The model is trained with a fully self-supervised contrastive objective, enabling representation learning without labeled data and supporting reuse across tasks without reliance on domain-specific classifiers. We evaluated the resulting representations on flower recognition tasks and natural image benchmarks, where the model consistently produced reliable sparse codes that distinguish visually similar inputs. To support different modelling and deployment uses, we implemented the model as both an artificial neural network and a spiking neural network. In a simulated localization setting, our approach outperformed a simple image-downsampling baseline, highlighting the functional benefit of incorporating neuromorphic visual processing pathways. Collectively, these results advance insect computational modelling by providing a generalizable bio-inspired vision model capable of sparse computation across diverse tasks.
URL
https://arxiv.org/abs/2602.06405