Abstract
Active inference is a normative framework for generating behaviour based upon the free energy principle, a global theory of self-organisation. This framework has been successfully used to solve reinforcement learning and stochastic control problems, yet the formal relation between active inference and reward maximisation has not been fully explicated. In this paper, we consider the relation between active inference and dynamic programming under the Bellman equation, which underlies many approaches to reinforcement learning and control. Our contribution shows that, on finite-horizon partially observed Markov decision processes, dynamic programming is a limiting case of active inference. In active inference, agents select actions in order to minimise expected free energy. In the absence of ambiguity about the latent causes of outcomes, this reduces to matching a target distribution encoding the agent's preferences. When these target states correspond to rewarding states, minimising expected free energy minimises risk or, equivalently, maximises expected reward, as in reinforcement learning. When states are partially observed or ambiguous, an active inference agent chooses the action that minimises both risk and ambiguity. This allows active inference agents to supplement their reward-maximising (or exploitative) behaviour with novelty-seeking (or exploratory) behaviour. This speaks to the unifying potential of active inference, as the functional optimised during action selection subsumes many important quantities used in decision-making in the physical, engineering, and life sciences.
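As a concrete illustration of the risk-plus-ambiguity decomposition described in the abstract, the sketch below computes the expected free energy of a policy in a one-step discrete POMDP, following the standard discrete-state formulation of active inference. This is a minimal sketch, not the paper's implementation; the names A, Qs, and log_C are illustrative assumptions for the likelihood matrix, predicted state distribution, and log-preferences.

```python
import numpy as np

def expected_free_energy(A, Qs, log_C):
    """Expected free energy G of a policy in a one-step discrete POMDP.

    A     : (n_obs, n_states) likelihood matrix, P(o|s), columns sum to 1
    Qs    : (n_states,) predicted state distribution under the policy, Q(s|pi)
    log_C : (n_obs,) log-preferences over outcomes, log P(o)
    """
    Qo = A @ Qs  # predicted outcome distribution Q(o|pi)
    # Risk: KL divergence between predicted and preferred outcomes
    risk = np.sum(Qo * (np.log(Qo + 1e-16) - log_C))
    # Ambiguity: expected conditional entropy of the likelihood, E_Q(s)[H[P(o|s)]]
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)
    ambiguity = H_A @ Qs
    return risk + ambiguity

# Illustrative usage: an unambiguous likelihood (identity A) zeroes the
# ambiguity term, so G reduces to the risk term alone.
A = np.eye(3)                              # each state yields a unique outcome
Qs = np.array([0.7, 0.2, 0.1])             # predicted states under some policy
log_C = np.log(np.array([0.8, 0.1, 0.1]))  # preferences favouring outcome 0
print(expected_free_energy(A, Qs, log_C))  # = KL[Q(s|pi) || P(s)]
```

When the likelihood is unambiguous, as in the usage above, the ambiguity term vanishes and minimising G amounts to matching the target (preference) distribution, which is the pure reward-maximising regime the abstract describes; with an ambiguous A, the second term additionally rewards disambiguating, exploratory actions.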
URL
https://arxiv.org/abs/2009.08111