Paper Reading AI Learner

CatE: Embedding $\mathcal{ALC}$ ontologies using category-theoretical semantics

2023-05-11 22:27:51
Fernando Zhapa-Camacho, Robert Hoehndorf

Abstract

Machine learning with Semantic Web ontologies follows several strategies, one of which involves projecting ontologies into graph structures and applying graph embeddings or graph-based machine learning methods to the resulting graphs. Several methods have been developed that project ontology axioms into graphs. However, these methods are limited in the types of axioms they can project (totality), in whether they are invertible (injectivity), and in how they exploit semantic information. These limitations restrict the kinds of tasks to which they can be applied. Category-theoretical semantics of logic languages formalizes interpretations using categories instead of sets, and categories have a graph-like structure. We developed CatE, which uses the category-theoretical formulation of the semantics of the Description Logic $\mathcal{ALC}$ to generate a graph representation for ontology axioms. The CatE projection is total and injective, and therefore overcomes limitations of other graph-based ontology embedding methods, which are generally not invertible. We apply CatE to a number of different tasks, including deductive and inductive reasoning, and we demonstrate that CatE improves over state-of-the-art ontology embedding methods. Furthermore, we show that CatE can also outperform model-theoretic ontology embedding methods in machine learning tasks in the biomedical domain.
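
To make the totality and injectivity claims concrete, the following is a minimal, hypothetical sketch of what a total and injective axiom-to-graph projection can look like for two simple ALC constructs (atomic concepts and existential restrictions). The datatypes, node names, and edge labels are illustrative assumptions, not the projection rules CatE derives from the category-theoretical semantics; the sketch only shows why it matters that every axiom yields edges (totality) and that the original axiom can be read back from those edges (injectivity).

# Illustrative sketch only (not the paper's actual projection rules): a toy, total and
# injective projection of two ALC axiom patterns into labeled graph edges. The
# datatypes, edge labels, and rules below are hypothetical stand-ins.

from dataclasses import dataclass
from typing import List, Tuple, Union


@dataclass(frozen=True)
class Concept:            # atomic concept name, e.g. "Protein"
    name: str


@dataclass(frozen=True)
class Exists:             # existential restriction:  EXISTS role . filler
    role: str
    filler: "Expr"


Expr = Union[Concept, Exists]


@dataclass(frozen=True)
class SubClassOf:         # ALC axiom:  sub is subsumed by sup
    sub: Expr
    sup: Expr


Edge = Tuple[str, str, str]   # (head node, edge label, tail node)


def node_of(expr: Expr, edges: List[Edge]) -> str:
    """Return a node for a concept expression; complex expressions get a fresh,
    uniquely named node plus structural edges, which keeps the mapping invertible."""
    if isinstance(expr, Concept):
        return expr.name
    filler = node_of(expr.filler, edges)
    node = f"(EXISTS {expr.role}.{filler})"
    edges.append((node, f"exists_{expr.role}", filler))
    return node


def project(axiom: SubClassOf) -> List[Edge]:
    """Total on this toy axiom language: every SubClassOf axiom yields edges."""
    edges: List[Edge] = []
    edges.append((node_of(axiom.sub, edges), "subclassof", node_of(axiom.sup, edges)))
    return edges


if __name__ == "__main__":
    # "Protein subsumed-by EXISTS hasFunction.Catalysis" becomes edges from which
    # the original axiom can still be reconstructed.
    axiom = SubClassOf(Concept("Protein"), Exists("hasFunction", Concept("Catalysis")))
    for edge in project(axiom):
        print(edge)

In a pipeline like the one the abstract describes, the edges produced by such a projection would then be fed to a graph embedding or graph-based machine learning method; because the projection loses no axioms and is invertible, predictions over the graph can be mapped back to statements over the ontology.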

Abstract (translated)

Machine learning with Semantic Web ontologies follows several strategies, one of which projects ontology axioms into graph structures and applies graph embeddings or graph-based machine learning methods to the resulting graphs. Several methods have been developed to project ontology axioms into graphs, but they are limited in the types of axioms they can project (totality), in whether they are invertible (injectivity), and in how they exploit semantic information; these limitations restrict the tasks to which they can be applied. Category-theoretical semantics of logic languages formalizes interpretations using categories instead of sets, and categories have a graph-like structure. We developed CatE, which uses the category-theoretical formulation of the semantics of the Description Logic $\mathcal{ALC}$ to generate graph representations of ontology axioms. The CatE projection is total and injective and therefore overcomes the limitations of other graph-based ontology embedding methods, which are generally not invertible. We apply CatE to a number of tasks, including deductive and inductive reasoning, and show that it improves over state-of-the-art ontology embedding methods. Furthermore, CatE can also outperform model-theoretic ontology embedding methods on machine learning tasks in the biomedical domain.

URL

https://arxiv.org/abs/2305.07163

PDF

https://arxiv.org/pdf/2305.07163.pdf

