Abstract
We present FewShotTextGCN, a novel method designed to effectively exploit the properties of word-document graphs for improved learning in low-resource settings. We introduce K-hop Neighbourhood Regularization, a regularizer for heterogeneous graphs, and show that it stabilizes and improves learning when only a few training samples are available. We furthermore propose a simplification of the graph-construction method that yields a graph $\sim$7 times less dense and improves performance in low-resource settings while remaining on par with the state of the art in high-resource settings. Finally, we introduce a new variant of Adaptive Pseudo-Labeling tailored to word-document graphs. Using as few as 20 training samples, we outperform a strong TextGCN baseline by 17% absolute accuracy on average over eight languages. We demonstrate that our method can be applied to document classification on a wide range of typologically diverse languages without any language-model pretraining, while performing on par with large pretrained language models.
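The word-document graphs the abstract refers to are the standard TextGCN construction: documents and words are both nodes, and a word is connected to a document with a TF-IDF edge weight (the original TextGCN additionally adds word-word PMI edges; the abstract does not specify which part FewShotTextGCN simplifies). A minimal sketch of the word-document edge construction, with a toy corpus for illustration:

```python
import math
from collections import Counter

# Toy corpus standing in for the real documents (illustrative only).
docs = ["graph learning works", "few shot learning", "graph based text"]
n = len(docs)

# Document frequency of each word (number of documents containing it).
df = Counter(w for d in docs for w in set(d.split()))

# Word-document edges weighted by TF-IDF; keys are (doc_index, word).
edges = {}
for i, d in enumerate(docs):
    tokens = d.split()
    tf = Counter(tokens)
    for w, c in tf.items():
        idf = math.log(n / df[w])
        if idf > 0:  # words appearing in every document carry no signal
            edges[(i, w)] = (c / len(tokens)) * idf
```

In a full TextGCN-style pipeline, these edges (plus self-loops and, in the original formulation, word-word PMI edges) form the adjacency matrix of a single heterogeneous graph over which a GCN classifies the document nodes.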
URL
https://arxiv.org/abs/2301.10481