
End-to-End Ontology Learning with Large Language Models

2024-10-31 02:52:39
Andy Lo, Albert Q. Jiang, Wenda Li, Mateja Jamnik

Abstract

Ontologies are useful for automatic machine processing of domain knowledge as they represent it in a structured format. Yet, constructing ontologies requires substantial manual effort. To automate part of this process, large language models (LLMs) have been applied to solve various subtasks of ontology learning. However, this partial ontology learning does not capture the interactions between subtasks. We address this gap by introducing OLLM, a general and scalable method for building the taxonomic backbone of an ontology from scratch. Rather than focusing on subtasks, like individual relations between entities, we model entire subcomponents of the target ontology by finetuning an LLM with a custom regulariser that reduces overfitting on high-frequency concepts. We introduce a novel suite of metrics for evaluating the quality of the generated ontology by measuring its semantic and structural similarity to the ground truth. In contrast to standard metrics, our metrics use deep learning techniques to define more robust distance measures between graphs. Both our quantitative and qualitative results on Wikipedia show that OLLM outperforms subtask composition methods, producing more semantically accurate ontologies while maintaining structural integrity. We further demonstrate that our model can be effectively adapted to new domains, like arXiv, needing only a small number of training examples. Our source code and datasets are available at this https URL.
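The abstract leaves the custom regulariser unspecified. As a rough illustration of the idea of reducing overfitting on high-frequency concepts, the sketch below down-weights the token-level cross-entropy loss by concept frequency during finetuning. The `concept_freq` table, the `alpha` exponent, and the weighting scheme itself are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def frequency_weighted_loss(logits, targets, concept_ids, concept_freq, alpha=0.5):
    """Cross-entropy where each target token is down-weighted by the
    training-set frequency of the concept it belongs to (illustrative only).

    logits:       (batch, seq_len, vocab)  model outputs
    targets:      (batch, seq_len)         gold token ids (-100 = ignore)
    concept_ids:  (batch, seq_len)         id of the concept each token spans
    concept_freq: dict concept id -> occurrence count (assumed precomputed)
    alpha:        strength of the down-weighting (hypothetical hyperparameter)
    """
    # Per-token cross-entropy, keeping the (batch, seq_len) shape.
    ce = F.cross_entropy(
        logits.transpose(1, 2), targets, ignore_index=-100, reduction="none"
    )
    # Weight ~ 1/freq^alpha: rare concepts keep full weight, frequent ones
    # contribute less, so the model is pushed to learn the long tail.
    freqs = torch.tensor(
        [[max(concept_freq.get(int(c), 1), 1) for c in row] for row in concept_ids],
        dtype=ce.dtype, device=ce.device,
    )
    weights = freqs.pow(-alpha)
    mask = (targets != -100).to(ce.dtype)
    return (ce * weights * mask).sum() / mask.sum().clamp(min=1)
```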
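Likewise, the deep-learning-based graph distance is only alluded to above. One plausible reading is to match taxonomy edges softly, via sentence embeddings of their endpoint concepts, instead of by exact string equality. The sketch below assumes the `sentence-transformers` library; the encoder choice, the similarity threshold, and the matching rule are all hypothetical stand-ins, not the paper's actual metric.

```python
from itertools import chain
from sentence_transformers import SentenceTransformer

def soft_edge_f1(pred_edges, true_edges, model_name="all-MiniLM-L6-v2", thresh=0.8):
    """Soft precision/recall/F1 over (parent, child) taxonomy edges.

    An edge counts as matched if some edge on the other side has a
    cosine-similar parent AND child embedding (>= thresh). Encoder,
    threshold, and matching rule are illustrative assumptions.
    """
    model = SentenceTransformer(model_name)
    concepts = sorted(set(chain.from_iterable(pred_edges + true_edges)))
    # Normalised embeddings make the dot product a cosine similarity.
    emb = model.encode(concepts, normalize_embeddings=True)
    vec = dict(zip(concepts, emb))

    def matched(edge, pool):
        p, c = edge
        return any(
            float(vec[p] @ vec[q]) >= thresh and float(vec[c] @ vec[d]) >= thresh
            for q, d in pool
        )

    precision = sum(matched(e, true_edges) for e in pred_edges) / max(len(pred_edges), 1)
    recall = sum(matched(e, pred_edges) for e in true_edges) / max(len(true_edges), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```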

URL

https://arxiv.org/abs/2410.23584

PDF

https://arxiv.org/pdf/2410.23584.pdf

