Abstract
Knowledge Graphs (KGs) have proven essential in information processing and reasoning applications because they link related entities and provide context-rich information, supporting efficient information retrieval and knowledge discovery and presenting information flow effectively. Despite being widely spoken, Bangla is relatively underrepresented in KGs due to a lack of comprehensive datasets, encoders, named entity recognition (NER) models, part-of-speech (POS) taggers, and lemmatizers, which hinders efficient information processing and reasoning in the language. To address this KG scarcity in Bengali, we propose BanglaAutoKG, a pioneering framework that can automatically construct Bengali KGs from any Bangla text. We utilize multilingual LLMs to understand various languages and correlate entities and relations universally. By employing a translation dictionary to identify English equivalents and extracting word features from pre-trained BERT models, we construct the foundational KG. To reduce noise and align word embeddings with our goal, we employ graph-based polynomial filters. Finally, we implement a GNN-based semantic filter, which elevates contextual understanding and trims unnecessary edges, culminating in the formation of the definitive KG. Empirical findings and case studies demonstrate the universal effectiveness of our model, which can autonomously construct semantically enriched KGs from any text.
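The graph-based polynomial filtering step mentioned above can be sketched roughly as follows. This is a minimal illustration, not the authors' exact formulation: the function name, the coefficient parameterization, and the GCN-style symmetric normalization with self-loops are all assumptions. The idea is to denoise node (word) embeddings by applying a polynomial of the normalized adjacency matrix of the draft KG.

```python
import numpy as np

def polynomial_graph_filter(A, X, coeffs):
    """Smooth node features X with a polynomial of the normalized adjacency.

    A      -- (n, n) adjacency matrix of the draft KG
    X      -- (n, d) node embeddings, e.g. from a pre-trained BERT model
    coeffs -- polynomial coefficients [c0, c1, ..., cK]; the filter computes
              sum_k coeffs[k] * A_norm^k @ X, where A_norm is the
              symmetrically normalized adjacency with self-loops (assumed,
              GCN-style normalization).
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # D^-1/2 (A + I) D^-1/2

    out = np.zeros_like(X, dtype=float)
    P = X.astype(float)                         # A_norm^0 @ X
    for c in coeffs:
        out += c * P                            # accumulate c_k * A_norm^k X
        P = A_norm @ P                          # next power of the filter
    return out
```

With `coeffs=[1.0]` the filter is the identity; adding higher-order terms mixes in information from 1-hop, 2-hop, etc. neighborhoods, which is what suppresses noisy, graph-inconsistent components of the embeddings.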
URL
https://arxiv.org/abs/2404.03528