Paper Reading AI Learner

A Mini Review on the utilization of Reinforcement Learning with OPC UA

2023-05-24 13:03:48
Simon Schindler, Martin Uray, Stefan Huber

Abstract

Reinforcement Learning (RL) is a powerful machine learning paradigm that has been applied in various fields such as robotics, natural language processing, and game playing, achieving state-of-the-art results. Designed to solve sequential decision-making problems, it learns from experience by design and can therefore adapt to changing, dynamic environments. These capabilities make it a prime candidate for controlling and optimizing complex processes in industry. The key to fully exploiting this potential is the seamless integration of RL into existing industrial systems. The industrial communication standard Open Platform Communications Unified Architecture (OPC UA) could provide this integration. However, since RL and OPC UA originate from different fields, researchers need to bridge the gap between the two technologies. This work does so by providing a brief technical overview of both technologies and carrying out a semi-exhaustive literature review to gain insights on how RL and OPC UA are applied in combination. The survey identifies three main research topics at the intersection of RL and OPC UA. The results of the literature review show that RL is a promising technology for the control and optimization of industrial processes, but it does not yet have the standardized interfaces needed to be deployed in real-world scenarios with reasonably low effort.
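To make the integration idea concrete, the sketch below (not from the paper) shows how an RL agent could interact with an industrial process through OPC UA: observations map to node reads, actions to node writes. The server URL, node IDs, plant dynamics, and reward logic are all hypothetical, and a stub stands in for a real OPC UA client (e.g. python-opcua/asyncua) so the example runs without a live server.

```python
import random


class StubOpcUaClient:
    """Stands in for an OPC UA client so the sketch runs without a live server.

    With python-opcua this would roughly be:
        client = Client("opc.tcp://localhost:4840"); client.connect()
        value = client.get_node("ns=2;s=Temperature").get_value()
    (URL and node IDs are illustrative assumptions.)
    """

    def __init__(self):
        self.nodes = {"ns=2;s=Temperature": 20.0, "ns=2;s=HeaterPower": 0.0}

    def read(self, node_id):
        return self.nodes[node_id]

    def write(self, node_id, value):
        self.nodes[node_id] = value
        # crude stand-in plant dynamics: heater power raises temperature,
        # ambient losses lower it
        self.nodes["ns=2;s=Temperature"] += 0.5 * value - 1.0


class ProcessEnv:
    """Gym-style environment wrapper: state and action are OPC UA node values."""

    TARGET = 25.0  # hypothetical temperature setpoint

    def __init__(self, client):
        self.client = client

    def reset(self):
        return self.client.read("ns=2;s=Temperature")

    def step(self, action):
        # action: heater power in [0, 5]; written to the PLC via OPC UA
        self.client.write("ns=2;s=HeaterPower", action)
        temp = self.client.read("ns=2;s=Temperature")
        reward = -abs(temp - self.TARGET)  # closer to the setpoint is better
        return temp, reward


env = ProcessEnv(StubOpcUaClient())
obs = env.reset()
for _ in range(10):
    action = random.uniform(0.0, 5.0)  # placeholder policy; an RL agent goes here
    obs, reward = env.step(action)
print(f"final temperature: {obs:.1f}")
```

The point of the wrapper is the one the abstract makes: if the OPC UA address space exposes a clean observation/action interface, the RL side reduces to a standard environment loop, but today that mapping must be hand-built per deployment.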

URL

https://arxiv.org/abs/2305.15113

PDF

https://arxiv.org/pdf/2305.15113.pdf

