Abstract
At the risk of overstating the case, connectionist approaches to machine learning, i.e., neural networks, are enjoying a small vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing with the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category-theoretic analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we conduct a brief literature survey demonstrating the scant intersection between these two topics, provide a list of desiderata for VSAs, and propose that VSAs may be understood as a (division) rig in a category enriched over a monoid in Met (the category of Lawvere metric spaces). This final contribution suggests that VSAs may be generalised beyond current implementations. It is our hope that grounding VSAs in category theory will lead to more rigorous connections with other research, both within and beyond learning and cognition.
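To make the abstract's phrase "a family of algebras on high-dimensional vector representations" concrete, here is a minimal sketch of one member of that family, a Multiply-Add-Permute (MAP)-style VSA, in Python with NumPy. The illustration is ours, not the paper's: the dimensionality, the bipolar encoding, and the names `bind`, `bundle`, and `sim` are all illustrative assumptions, and the paper itself treats VSAs abstractly rather than through any single implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high-dimensional representation space

def vec():
    """Random bipolar hypervector (MAP-style encoding; an assumption here)."""
    return rng.choice([-1.0, 1.0], size=D)

def bind(a, b):
    """Binding: elementwise multiplication. In MAP, binding is self-inverse."""
    return a * b

def bundle(*vs):
    """Bundling (superposition): elementwise sum followed by sign."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Cosine similarity; near 0 for unrelated random hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode the record {colour: red, shape: square} as a single hypervector.
colour, shape, red, square = vec(), vec(), vec(), vec()
record = bundle(bind(colour, red), bind(shape, square))

# Unbinding with a role vector recovers a noisy copy of its filler.
print(sim(bind(record, colour), red))     # high (~0.7)
print(sim(bind(record, colour), square))  # near 0
```

Because MAP binding is its own approximate inverse, querying the bundled record with a role vector recovers its filler up to noise; other VSAs in the family (e.g. Holographic Reduced Representations) obtain the same algebraic behaviour with circular convolution instead of elementwise multiplication.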
URL
https://arxiv.org/abs/2501.05368