Abstract
Order execution is a fundamental task in quantitative finance, aiming to complete the acquisition or liquidation of a number of trading orders for specific assets. Recent advances in model-free reinforcement learning (RL) provide a data-driven solution to the order execution problem. However, existing works optimize execution for each order individually, overlooking the practical setting in which multiple orders are specified to be executed simultaneously, which leads to suboptimality and bias. In this paper, we first present a multi-agent RL (MARL) method for multi-order execution that accounts for practical constraints. Specifically, we treat each agent as an individual operator trading one specific order, while communicating and collaborating with the other agents to maximize overall profit. Nevertheless, existing MARL algorithms often implement communication by exchanging only information about the agents' partial observations, which is inefficient in a complex financial market. To improve collaboration, we then propose a learnable multi-round communication protocol in which the agents share their intended actions with each other and refine them accordingly. It is optimized through a novel action value attribution method that is provably consistent with the original learning objective yet more efficient. Experiments on data from two real-world markets demonstrate superior performance, with significantly better collaboration achieved by our method.
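The abstract only outlines the communication protocol, so the following is a minimal sketch of the general idea of multi-round intended-action exchange, not the paper's actual architecture: every agent first proposes an action from its own observation, then over several rounds observes the peers' current intentions and refines its own. The agent count, the `propose` and `refine` functions, and the averaging-based refinement rule below are all illustrative assumptions (the paper learns this exchange end to end).

```python
# Illustrative sketch of multi-round intended-action communication.
# NOT the paper's implementation; propose/refine are stand-ins for
# learned networks, and the refinement rule is an assumption.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # one agent per order to execute
OBS_DIM = 4    # per-agent market/private observation size
ROUNDS = 2     # number of communication rounds

def propose(obs):
    """Hypothetical policy head: map an observation to an intended
    action, here the fraction of the remaining order to trade now."""
    # Stand-in for a learned network; squash a score into (0, 1).
    return 1.0 / (1.0 + np.exp(-obs.mean()))

def refine(own_action, peer_actions):
    """Hypothetical refinement: pull the intended action toward the
    peers' average so the agents do not all trade aggressively at once."""
    return 0.5 * own_action + 0.5 * np.mean(peer_actions)

observations = rng.normal(size=(N_AGENTS, OBS_DIM))
actions = np.array([propose(obs) for obs in observations])

for r in range(ROUNDS):
    # Each round, every agent sees the others' current intentions
    # and refines its own accordingly.
    actions = np.array([
        refine(actions[i], np.delete(actions, i))
        for i in range(N_AGENTS)
    ])
    print(f"round {r + 1}: intended trade fractions = {actions.round(3)}")
```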
URL
https://arxiv.org/abs/2307.03119