Abstract
The widespread adoption of knowledge graphs across many fields has made it challenging to integrate and update the information they contain effectively. Conventional approaches to incorporating context typically rely on hand-crafted rules or simple machine learning models, which may fail to capture the complexity and dynamism of contextual information. This work proposes a reinforcement learning (RL) approach, specifically Deep Q-Networks (DQN), to improve the process of integrating context into knowledge graphs. By treating the state of the knowledge graph as the environment state, defining actions as context-integration operations, and using a reward function that measures the improvement in knowledge graph quality after integration, the method learns optimal context-integration strategies automatically. The DQN model uses neural networks as function approximators and continually updates Q-values to estimate the action-value function, enabling effective integration of complex and dynamic contextual information. Preliminary experiments show that this RL approach outperforms conventional techniques in context-integration accuracy across several standard knowledge graph datasets, highlighting the potential of reinforcement learning for enhancing and maintaining knowledge graphs.
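The abstract's RL formulation (knowledge-graph state as environment state, integration operations as actions, quality improvement as reward, Q-values estimated by a function approximator) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the state dimension, number of actions, reward signal, and the linear Q-function standing in for the paper's deep network are all assumptions for demonstration.

```python
import numpy as np

# Toy sketch of the DQN-style setup described in the abstract.
# State: a feature vector summarizing the knowledge graph (assumed dim 8).
# Actions: candidate context-integration operations (assumed 3 of them).
# Reward: stand-in for the post-integration improvement in KG quality.
# A linear Q(s, a) approximator replaces the paper's deep network.

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 8, 3
W = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))  # Q-function weights

def q_values(state):
    """Q(s, a) for every action, under the linear approximator."""
    return W @ state

def td_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD step: pull Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * q_values(next_state).max()
    td_error = target - q_values(state)[action]
    W[action] += alpha * td_error * state  # gradient step on the linear Q
    return td_error

# Toy training loop with a fixed state; the (hypothetical) reward says
# integration operation 0 always improves KG quality, the others do not.
s = np.ones(STATE_DIM)
for _ in range(300):
    if rng.random() < 0.2:                      # epsilon-greedy exploration
        a = int(rng.integers(N_ACTIONS))
    else:
        a = int(q_values(s).argmax())
    r = 1.0 if a == 0 else 0.0
    td_update(s, a, r, s)
```

After training, the greedy policy prefers the operation that earned reward, i.e. `q_values(s).argmax()` is 0, mirroring how the learned Q-values are meant to rank context-integration operations by their expected effect on graph quality.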
URL
https://arxiv.org/abs/2404.12587