Abstract
Large Language Models (LLMs) encapsulate extensive world knowledge, and this has enabled their application across domains to improve the performance of a wide range of Natural Language Processing (NLP) tasks. This has also facilitated a more accessible paradigm of conversation-based interaction between humans and AI systems for solving the problems at hand. However, one avenue with untapped potential is the use of LLMs as Reinforcement Learning (RL) agents to enable conversational RL problem solving. In this study, we therefore explore the concept of formulating Markov Decision Process-based RL problems as LLM prompting tasks. We demonstrate how LLMs can be iteratively prompted to learn and optimize policies for specific RL tasks. In addition, we leverage the introduced prompting technique for episode simulation and Q-Learning, facilitated by LLMs. We then show the practicality of our approach through two detailed case studies for "Research Scientist" and "Legal Matter Intake" workflows.
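The following is a minimal illustrative sketch, not the paper's implementation, of the general idea the abstract describes: tabular Q-Learning in which an LLM, reached through a stand-in `llm_complete` helper, is prompted to simulate the MDP's transitions and rewards. The toy state/action names, the prompt wording, and the `next_state=...; reward=...; done=...` response format are assumptions made for this example.

```python
# Sketch: LLM-as-environment-simulator for tabular Q-Learning (assumed setup,
# not the authors' code). The LLM is prompted with (state, action) and asked
# to return the next state, reward, and termination flag.
import random
from collections import defaultdict

ACTIONS = ["draft_proposal", "run_experiment", "write_paper"]   # assumed toy action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2                           # learning rate, discount, exploration
EPISODES, MAX_STEPS = 50, 10

def llm_complete(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned reply so the sketch
    # runs end-to-end without external services.
    return "next_state=experiment_stage; reward=1.0; done=False"

def simulate_step(state: str, action: str) -> tuple[str, float, bool]:
    # Prompt the LLM to act as the MDP's transition and reward model.
    prompt = (
        f"You simulate a research-workflow MDP. Current state: {state}. "
        f"Action taken: {action}. "
        "Reply exactly as 'next_state=<state>; reward=<float>; done=<True|False>'."
    )
    reply = llm_complete(prompt)
    fields = dict(kv.strip().split("=", 1) for kv in reply.split(";"))
    return fields["next_state"], float(fields["reward"]), fields["done"] == "True"

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

for _ in range(EPISODES):
    state, done = "idea_stage", False
    for _ in range(MAX_STEPS):
        if done:
            break
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = simulate_step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-Learning update using the LLM-simulated transition.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

In practice `llm_complete` would wrap an actual model call, and the prompt/response schema would need to be made robust (e.g., validated or retried on malformed replies); the loop above only illustrates how episode simulation and Q-value updates can be driven by iterative prompting.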
URL
https://arxiv.org/abs/2404.18638