Abstract
Ontologies are useful for automatic machine processing of domain knowledge as they represent it in a structured format. Yet, constructing ontologies requires substantial manual effort. To automate part of this process, large language models (LLMs) have been applied to solve various subtasks of ontology learning. However, this partial ontology learning does not capture the interactions between subtasks. We address this gap by introducing OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch. Rather than focusing on subtasks, like individual relations between entities, we model entire subcomponents of the target ontology by finetuning an LLM with a custom regulariser that reduces overfitting on high-frequency concepts. We introduce a novel suite of metrics for evaluating the quality of the generated ontology by measuring its semantic and structural similarity to the ground truth. In contrast to standard metrics, our metrics use deep learning techniques to define more robust distance measures between graphs. Both our quantitative and qualitative results on Wikipedia show that OLLM outperforms subtask composition methods, producing more semantically accurate ontologies while maintaining structural integrity. We further demonstrate that our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples. Our source code and datasets are available at this https URL.
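The abstract mentions finetuning with a custom regulariser that reduces overfitting on high-frequency concepts. The paper's exact formulation is not given here; the following is a minimal sketch of one way such a regulariser could look, assuming it amounts to down-weighting the per-token loss in proportion to concept frequency. The function name, the `alpha` exponent, and the `concept_freq` lookup are all illustrative assumptions, not OLLM's actual implementation.

```python
# Hypothetical frequency-weighted loss (illustrative, not OLLM's exact
# regulariser): tokens belonging to high-frequency concepts are
# down-weighted so the model does not overfit to them.
import torch
import torch.nn.functional as F

def frequency_weighted_loss(logits, targets, concept_freq, alpha=0.5):
    """Cross-entropy where each target token's loss is scaled by
    freq**(-alpha), so rare concepts contribute relatively more.

    logits:       (batch, seq, vocab) model outputs
    targets:      (batch, seq) target token ids
    concept_freq: (vocab,) empirical frequency of each token's concept
    """
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        reduction="none",
    ).view_as(targets).float()
    # clamp avoids division-by-zero for unseen concepts
    weights = concept_freq[targets].clamp(min=1.0) ** (-alpha)
    return (weights * per_token).sum() / weights.sum()
```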
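The abstract also describes metrics that use deep learning to define distances between the generated and ground-truth graphs. A rough sketch in that spirit is below, assuming an embedding-based soft matching of taxonomy edges; the model choice, the edge representation, and the soft-F1 aggregation are assumptions for illustration, and the paper's actual metrics may be defined differently.

```python
# Sketch of an embedding-based graph similarity (assumed formulation):
# match edges of a predicted taxonomy to ground-truth edges by cosine
# similarity of their (parent, child) embeddings and report a soft F1.
import numpy as np
from sentence_transformers import SentenceTransformer

_model = SentenceTransformer("all-MiniLM-L6-v2")

def edge_embeddings(edges):
    """Embed each (parent, child) edge as the concatenation of the two
    concept-name embeddings, L2-normalised so dot products are cosines."""
    parents = _model.encode([p for p, _ in edges])
    children = _model.encode([c for _, c in edges])
    vecs = np.concatenate([parents, children], axis=1)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def soft_f1(pred_edges, true_edges):
    """Soft precision: mean best-match similarity of each predicted edge
    against the ground truth; soft recall: the symmetric quantity."""
    p, t = edge_embeddings(pred_edges), edge_embeddings(true_edges)
    sims = p @ t.T                       # pairwise cosine similarities
    precision = sims.max(axis=1).mean()  # each predicted edge vs. best truth
    recall = sims.max(axis=0).mean()     # each truth edge vs. best prediction
    return 2 * precision * recall / (precision + recall)

# Example: compare two tiny taxonomies given as (parent, child) pairs.
pred = [("Science", "Physics"), ("Science", "Biology")]
true = [("Science", "Physics"), ("Science", "Life Sciences")]
print(soft_f1(pred, true))
```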
URL
https://arxiv.org/abs/2410.23584