Abstract
This paper investigates what insights about linguistic features and what knowledge about the structure of natural language can be obtained from the encodings in transformer language models. In particular, we explore how BERT encodes the government relation between constituents in a sentence. We use several probing classifiers and data from two morphologically rich languages. Our experiments show that information about government is encoded across all transformer layers, but predominantly in the early layers of the model. We find that, for both languages, a small number of attention heads encode enough information about the government relations to enable us to train a classifier capable of discovering new types of government never seen in the training data. Data is currently lacking for the research community working on grammatical constructions, and on government in particular. We release the Government Bank -- a dataset defining the government relations for thousands of lemmas in the languages used in our experiments.
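The abstract does not spell out the probing setup, but a common approach is to use per-head attention weights between two token positions as features for a lightweight classifier. The sketch below illustrates this under stated assumptions: the multilingual BERT checkpoint, the governor-to-dependent attention feature, and the toy labeled examples are all illustrative placeholders, not the authors' actual pipeline.

```python
# Minimal sketch of an attention-based probing classifier, assuming a
# standard probing setup; the checkpoint, feature choice, and the labeled
# (sentence, governor_idx, dependent_idx, label) data are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained(
    "bert-base-multilingual-cased", output_attentions=True
)
model.eval()

def attention_features(sentence: str, gov_idx: int, dep_idx: int):
    """One feature per (layer, head): attention from governor to dependent.

    gov_idx and dep_idx are positions in the wordpiece sequence (including
    [CLS]); a real pipeline would map word indices to wordpieces first.
    """
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one (1, num_heads, seq_len, seq_len) tensor per layer
    feats = [layer[0, :, gov_idx, dep_idx] for layer in out.attentions]
    return torch.cat(feats).numpy()  # shape: (num_layers * num_heads,)

# Hypothetical labeled pairs: label 1 if the verb governs the marked
# constituent, 0 otherwise; the token indices are illustrative.
train = [
    ("She relies on him .", 2, 4, 1),
    ("She relies on him .", 1, 5, 0),
]
X = [attention_features(s, g, d) for s, g, d, _ in train]
y = [label for *_, label in train]
probe = LogisticRegression(max_iter=1000).fit(X, y)
```

Because the probe is linear, its per-feature weights indicate which (layer, head) pairs carry the government signal, which is how a small set of informative heads could be identified.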
URL
https://arxiv.org/abs/2404.14270