Modified: August 04, 2022
distributional RL
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

There are two forms of uncertainty in value-based reinforcement learning.
Let $Z(\tau) = \sum_{t=0}^\infty \gamma^t r_t$ be the return from trajectory $\tau$, and
$$Q^\pi(s, a) = \mathbb{E}\left[Z \mid s_0 = s, a_0 = a\right]$$
be the expected return of a trajectory that begins with $(s, a)$ and follows $\pi$ thereafter. But this is just an expectation. For any given MDP and policy $\pi$ (together defining a 'Markov reward process', i.e., a distribution over trajectories), the return will follow some actual distribution $Z^\pi(s, a)$, which will be different for different initial states and actions, and of which $Q^\pi(s, a)$ is only the expectation. That is, we have aleatoric uncertainty over the returns of such trajectories arising from intrinsic randomness.
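As a concrete illustration (my own toy example, not from any of the papers below), here is a minimal Monte Carlo sketch in numpy: for a fixed policy in a small hypothetical two-state MDP, the sampled returns form a whole distribution, and the usual $Q$ estimate is just its mean.

```python
# Toy illustration (hypothetical MDP, not from the referenced papers): sampled returns Z
# form a distribution; the ordinary value estimate Q is only its mean.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9

def sample_return(horizon=50):
    """One rollout of a 2-state chain: in 'safe' the reward is 0 and we may jump to
    'risky'; in 'risky' the reward is +1 or -1 with equal probability."""
    state, ret, discount = "safe", 0.0, 1.0
    for _ in range(horizon):
        if state == "safe":
            reward = 0.0
            state = "risky" if rng.random() < 0.5 else "safe"
        else:
            reward = 1.0 if rng.random() < 0.5 else -1.0
            state = "safe"
        ret += discount * reward
        discount *= gamma
    return ret

returns = np.array([sample_return() for _ in range(10_000)])
print("Q (mean return):", returns.mean())   # the usual value estimate
print("spread of Z (std):", returns.std())  # aleatoric spread that Q ignores
```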
In the general reinforcement learning setting, we also have epistemic uncertainty that arises from having only limited knowledge of the MDP we are in. We may fully know the policy $\pi$, but we will be uncertain about its value $Q^\pi$ because we have only limited experience with which to evaluate it. It also seems reasonable to speak of epistemic uncertainty around the optimal policy $\pi^*$ and the corresponding values $Q^*$ (this is in some sense mathematically equivalent to uncertainty about the MDP, since if we knew the MDP then we would also 'know' its optimal policy, up to computational concerns).
Epistemic uncertainty is closely related to exploration, in that representations of uncertainty can tell us where we should gather new experience.
The line of work known as 'distributional RL' is explicitly not concerned with epistemic uncertainty. It is only interested in representing and learning the distribution of returns arising from intrinsic randomness in the MDP.
Theory
The most important theoretical result is that the distributional Bellman backup,
$$(\mathcal{T}^\pi Z)(s, a) \overset{D}{=} R(s, a) + \gamma Z(S', A'), \qquad S' \sim P(\cdot \mid s, a),\ A' \sim \pi(\cdot \mid S'),$$
which we abbreviate as $\mathcal{T}^\pi Z$ for a set of Q-value distributions $Z$, is a contraction in the maximum Wasserstein metric
$$\bar{d}_p(Z_1, Z_2) = \sup_{s, a} W_p\big(Z_1(s, a), Z_2(s, a)\big),$$
i.e., $\bar{d}_p(\mathcal{T}^\pi Z_1, \mathcal{T}^\pi Z_2) \le \gamma\, \bar{d}_p(Z_1, Z_2)$.
This implies by the Banach fixed-point theorem that iterating the update $Z \leftarrow \mathcal{T}^\pi Z$ converges to a unique fixed point, which must be the return distribution $Z^\pi$. So distributional policy evaluation 'just works'.
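To make the 'just works' claim concrete, here is a minimal sketch (my own illustration, assuming a tiny tabular Markov reward process with known transition matrix `P` and rewards `R`, and a particle-based approximation of each return distribution) of iterating the backup $Z(s) \leftarrow R(s) + \gamma Z(S')$ until it settles; taking expectations recovers the ordinary value function.

```python
# Sketch: sample-based distributional policy evaluation on a 2-state Markov reward process.
# Each Z[s] is a cloud of particles approximating the return distribution from state s.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
P = np.array([[0.9, 0.1],      # P[s, s'] under the fixed policy
              [0.5, 0.5]])
R = np.array([0.0, 1.0])       # deterministic reward on leaving state s
n_states, n_particles = 2, 2000

Z = np.zeros((n_states, n_particles))

for _ in range(200):           # iterate the backup; the contraction gives convergence
    Z_new = np.empty_like(Z)
    for s in range(n_states):
        s_next = rng.choice(n_states, size=n_particles, p=P[s])
        # bootstrap each new particle from a random particle of the successor state
        bootstrap = Z[s_next, rng.integers(n_particles, size=n_particles)]
        Z_new[s] = R[s] + gamma * bootstrap
    Z = Z_new

print("E[Z]:", Z.mean(axis=1))                                  # approx. the value function
print("exact V:", np.linalg.solve(np.eye(2) - gamma * P, R))    # V = (I - gamma P)^{-1} R
```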
Unfortunately the situation with control (Q-learning) is not as nice: in general there is no unique optimal return distribution (because the optimal policy is not necessarily unique), the distributional greedy optimality operator, given by
$$(\mathcal{T} Z)(s, a) \overset{D}{=} R(s, a) + \gamma Z(S', a^*), \qquad S' \sim P(\cdot \mid s, a),\ a^* = \arg\max_{a'} \mathbb{E}\left[Z(S', a')\right]$$
(technically there are multiple such operators, corresponding to different ways of breaking ties in the argmax; one can generalize this to allow updates in which the action $a^*$ is chosen stochastically from the set of greedy-optimal actions),
is not a contraction in any metric, not all versions of this operator have a fixed point, and even the existence of a fixed point is not sufficient to guarantee convergence to it. There are some weaker convergence results: in particular we do still guarantee convergence of the expected Q-values, so we are not worse off than with regular Q-learning.
Representing value distributions
How should we represent the distribution over Q-values?
Quantized categorical distribution (C51; Bellemare et al., 2017): The original distributional RL paper proposed to clip returns to a fixed range $[V_{\text{min}}, V_{\text{max}}]$ and use a categorical distribution on 51 quantized values (atoms) within that range. Unfortunately the Bellman backup doesn't preserve this distribution family, so it's necessary to project the backed-up distribution back into the family. The C51 algorithm then trains a network to minimize the KL divergence between a categorical output and this target. Unfortunately, the combined backup-project-minimize-KL update is not a contraction mapping, so we lose all of the theory.
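For concreteness, here is a rough numpy sketch of the kind of categorical projection step this requires (my own paraphrase of the published procedure; the function name and the default support values are placeholders): each backed-up atom $r + \gamma z_j$ is clipped to the support and its probability mass split between the two neighbouring atoms.

```python
# Sketch of a categorical projection onto a fixed support, in the spirit of C51's
# projected Bellman update. Hyperparameters (v_min, v_max, n_atoms) are placeholders.
import numpy as np

def project_categorical(probs_next, rewards, gamma, v_min=-10.0, v_max=10.0, n_atoms=51):
    """probs_next: (batch, n_atoms) probabilities of Z(s', a') on the fixed support.
    rewards: (batch,). Returns projected target probabilities on the same support."""
    delta = (v_max - v_min) / (n_atoms - 1)
    support = np.linspace(v_min, v_max, n_atoms)                  # fixed atom locations
    tz = np.clip(rewards[:, None] + gamma * support[None, :],     # backed-up atoms
                 v_min, v_max)
    b = (tz - v_min) / delta                                      # fractional atom index
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)

    target = np.zeros_like(probs_next)
    for i in range(probs_next.shape[0]):
        for j in range(n_atoms):
            # split each backed-up atom's mass between its two neighbouring atoms
            if lower[i, j] == upper[i, j]:
                target[i, lower[i, j]] += probs_next[i, j]
            else:
                target[i, lower[i, j]] += probs_next[i, j] * (upper[i, j] - b[i, j])
                target[i, upper[i, j]] += probs_next[i, j] * (b[i, j] - lower[i, j])
    return target
```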
Explicit quantiles (Dabney et al., 2018a): we can 'transpose' C51, so that instead of having the network produce probabilities of evenly-spaced point masses, we assume uniform probabilities and instead have the network output a set of locations $\{\theta_i(s, a)\}_{i=1}^N$ for the point masses,
$$Z(s, a) = \frac{1}{N} \sum_{i=1}^N \delta_{\theta_i(s, a)}.$$
Given any target distribution with cumulative distribution function $F$, the minimum-Wasserstein ($W_1$) projection into this distribution family is to put the $i$th point mass at the corresponding quantile midpoint, $\theta_i = F^{-1}\!\left(\frac{2i - 1}{2N}\right)$. For example, given $N = 2$ point masses, we would put them at the 25th and 75th percentiles of the target distribution, representing the midpoints of the $[0, \tfrac{1}{2}]$ and $[\tfrac{1}{2}, 1]$ probability intervals respectively. The composition of this projection with the Bellman backup is a contraction in the $\infty$-Wasserstein metric $\bar{d}_\infty$. Interestingly it is not directly a contraction in $\bar{d}_p$ for finite $p$, but the bound $\bar{d}_p \le \bar{d}_\infty$ guarantees that we still get convergence in those metrics. In practice, we can train this network from samples using a quantile regression loss.
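A minimal numpy sketch of that quantile regression loss, in its Huber-smoothed form (my own paraphrase; the function name and the averaging convention are my choices): each predicted location $\theta_i$ is pulled toward the $\hat{\tau}_i = \frac{2i-1}{2N}$ quantile of the sampled targets by an asymmetrically weighted Huber penalty.

```python
# Sketch of a quantile regression (Huber) loss for training point-mass locations theta_i
# toward the quantile midpoints of a sampled target distribution.
import numpy as np

def quantile_huber_loss(theta, target_samples, kappa=1.0):
    """theta: (N,) predicted quantile locations for one (s, a).
    target_samples: (M,) samples from the backed-up target r + gamma * Z(s', a')."""
    N = theta.shape[0]
    tau_hat = (2 * np.arange(N) + 1) / (2 * N)        # quantile midpoints (2i - 1) / (2N)
    u = target_samples[None, :] - theta[:, None]      # TD errors, shape (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # asymmetric weighting: under- and over-estimates are weighted by tau and 1 - tau
    weight = np.abs(tau_hat[:, None] - (u < 0).astype(float))
    # average over target samples, sum over quantiles
    return (weight * huber / kappa).mean(axis=1).sum()
```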
Implicit quantiles (Dabney et al., 2018b): instead of explicitly producing a finite set of locations targeting specific quantiles of the return distribution, we feed the desired quantile $\tau \in [0, 1]$ as an input to the network via a positional embedding, so that the network effectively implements the quantile function $F^{-1}(\tau)$. Now the return distribution is not limited to a collection of point masses, but can be fully universal! This is trained to minimize an expected quantile loss with respect to the uniform distribution on quantiles, which in practice we approximate by sampling $\tau \sim U[0, 1]$. So the update looks like (a rough sketch of the $\tau$-embedding follows the list):
- sample $M$ values from the target return distribution, at quantiles $\tau'_j \sim U[0, 1]$
- sample $N$ quantile locations $\tau_i \sim U[0, 1]$
- for each quantile location $\tau_i$, run a gradient update using the corresponding quantile loss averaged across all $M$ target samples
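Here is a rough numpy sketch of the $\tau$-conditioning idea (layer sizes, weight names, and initialization are illustrative, not the paper's exact architecture): the sampled quantile is expanded into a cosine embedding, passed through a small learned layer, and merged multiplicatively with the state features.

```python
# Sketch: conditioning a value head on a sampled quantile tau via a cosine embedding,
# so the network approximates the quantile function F^{-1}(tau) for each action.
import numpy as np

def tau_embedding(tau, dim=64):
    """Cosine embedding of a scalar quantile tau in [0, 1]."""
    return np.cos(np.pi * np.arange(dim) * tau)       # shape (dim,)

def implicit_quantile_value(state_features, tau, W_embed, b_embed, W_out, b_out):
    """Estimate the tau-quantile of Z(s, a) for every action.
    state_features: (d,) output of a state encoder; the weight matrices are hypothetical."""
    phi = np.maximum(0.0, tau_embedding(tau, W_embed.shape[0]) @ W_embed + b_embed)  # ReLU
    merged = state_features * phi                     # elementwise merge with state features
    return merged @ W_out + b_out                     # one quantile value per action

# Usage: averaging over many sampled taus recovers an ordinary Q-value estimate.
rng = np.random.default_rng(0)
d, n_actions = 128, 4
W_embed, b_embed = rng.normal(size=(64, d)) * 0.1, np.zeros(d)
W_out, b_out = rng.normal(size=(d, n_actions)) * 0.1, np.zeros(n_actions)
state_features = rng.normal(size=d)
q_est = np.mean([implicit_quantile_value(state_features, rng.uniform(),
                                         W_embed, b_embed, W_out, b_out)
                 for _ in range(32)], axis=0)
print(q_est)
```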
Fully parameterized quantile function (Yang et al., 2019): instead of trying to fit the network at all quantiles via sampling, we additionally parameterize the set of quantiles to target, using a 'fraction proposal network' to propose a specific set of quantile fractions. We are back to representing the return distribution as a set of point masses (now with nonuniform weights in general, since the target quantiles are nonuniformly spaced), but this has the advantage that the network can learn to allocate its representational capacity to the most useful parts of the distribution.
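A minimal numpy sketch of the fraction-proposal idea (my own illustration; names are hypothetical): an unconstrained network output is turned into a monotone set of fractions $0 = \tau_0 < \dots < \tau_N = 1$ via a softmax and cumulative sum, and the quantile network is then evaluated at the interval midpoints $\hat{\tau}_i$.

```python
# Sketch: turning unconstrained logits into an increasing set of quantile fractions,
# and the midpoints at which the quantile values would be predicted.
import numpy as np

def propose_fractions(logits):
    """logits: (N,) unconstrained outputs of a fraction proposal network."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                               # softmax: positive, sums to 1
    taus = np.concatenate([[0.0], np.cumsum(probs)])   # monotone fractions in [0, 1]
    tau_hats = (taus[:-1] + taus[1:]) / 2              # midpoints for the quantile network
    return taus, tau_hats

taus, tau_hats = propose_fractions(np.array([0.2, -0.1, 0.4, 0.0]))
print(taus)      # approx. [0., 0.264, 0.460, 0.783, 1.]; last entry is exactly 1
print(tau_hats)  # the nonuniformly spaced quantiles the value network is asked to predict
```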
References
- Bellemare, Marc G., Will Dabney, and Rémi Munos. "A Distributional Perspective on Reinforcement Learning." In Proceedings of the 34th International Conference on Machine Learning, PMLR 70:449-458, 2017.
- Dabney, Will, Mark Rowland, Marc Bellemare, and Rémi Munos. "Distributional Reinforcement Learning with Quantile Regression." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018a.
- Dabney, Will, Georg Ostrovski, David Silver, and Rémi Munos. "Implicit Quantile Networks for Distributional Reinforcement Learning." In International Conference on Machine Learning, pp. 1096-1105. PMLR, 2018b.
- Yang, Derek, Li Zhao, Zichuan Lin, Tao Qin, Jiang Bian, and Tie-Yan Liu. "Fully Parameterized Quantile Function for Distributional Reinforcement Learning." Advances in Neural Information Processing Systems 32, 2019.