Abstract
In neural agent-based simulations of language emergence and change, artificial learners often behave differently from human learners. One prevailing explanation is that these learners lack appropriate cognitive biases. However, it has also been proposed that more naturalistic settings of language learning and use could lead to more human-like results. In this work, we investigate the latter account, focusing on the word-order/case-marking trade-off, a widely attested language universal that has proven particularly difficult to simulate. We propose a new Neural-agent Language Learning and Communication framework (NeLLCom) in which pairs of speaking and listening agents first learn a given miniature language through supervised learning, and then optimize it for communication via reinforcement learning. Closely following the setup of earlier human experiments, we succeed in replicating the trade-off with the new framework without hard-coding any learning bias into the agents. We see this as an essential step towards the investigation of language universals with neural learners.
URL
https://arxiv.org/abs/2301.13083