Paper Reading AI Learner

Applying the Closed World Assumption to SUMO-based Ontologies

2018-08-14 10:41:14
Javier Álvez, Itziar Gonzalez-Dios, German Rigau

Abstract

In commonsense knowledge representation, the Open World Assumption is the standard strategy adopted for the design, construction and use of ontologies, e.g. in OWL. This strategy limits the inference capabilities of any system using these ontologies, because non-asserted statements may be either true or false in different interpretations. In this paper, we investigate the application of the Closed World Assumption to enable a better exploitation of the structural knowledge encoded in a SUMO-based ontology. To that end, we explore three different Closed World Assumption formulations for the subclass and disjoint relations in order to reduce the ambiguity of the knowledge encoded in first-order logic ontologies. We evaluate these formulations in a practical experiment using a very large commonsense benchmark automatically obtained from the knowledge encoded in WordNet through its mapping to SUMO. The results show that the competency of the ontology improves by more than 47% when reasoning under the Closed World Assumption. In conclusion, automatically applying the Closed World Assumption to first-order logic ontologies reduces their ambiguity and allows more commonsense questions to be answered.
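For illustration, the following is a minimal sketch of one possible closed-world completion over the subclass and disjoint relations (our own simplified reading; the paper evaluates three concrete formulations of this kind, which may differ in detail). Writing O for the original ontology and CWA(O) for the set of axioms added to it, subclass statements that are not entailed by O are assumed false, and classes that are not entailed to share a common subclass are assumed disjoint:

O \not\models \mathit{subclass}(c_1, c_2) \;\Longrightarrow\; \neg\,\mathit{subclass}(c_1, c_2) \in \mathit{CWA}(O)

O \not\models \exists c\,\big(\mathit{subclass}(c, c_1) \wedge \mathit{subclass}(c, c_2)\big) \;\Longrightarrow\; \mathit{disjoint}(c_1, c_2) \in \mathit{CWA}(O)

Under such a completion, a query asking whether two classes with no entailed common subclass can share an instance becomes answerable in the negative, whereas under the Open World Assumption it remains undecided.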

URL

https://arxiv.org/abs/1808.04620

PDF

https://arxiv.org/pdf/1808.04620.pdf
