Modified: January 06, 2023
policy gradient
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector. (See also my deep RL notes from John Schulman's class several years ago, which cover much of the same material.)
We can approach reinforcement learning as learning a policy $\pi_\theta$ by following the gradient of its value $J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{T} r_{t+1}\right]$. For simplicity we'll assume a fixed initial state and fixed-length finite trajectories, but these results can be generalized to discounted-reward or average-reward notions of value in the continuing setting.
The policy gradient theorem says that
$$\nabla_\theta J(\theta) = \sum_s \mu^{\pi_\theta}(s) \sum_a Q^{\pi_\theta}(s, a)\, \nabla_\theta \pi_\theta(a \mid s),$$
where $\mu^{\pi_\theta}$ is the state visitation (occupancy) measure under $\pi_\theta$.
See the proof of the policy gradient theorem. Generally the score function trick $\nabla_\theta \pi_\theta = \pi_\theta\, \nabla_\theta \log \pi_\theta$ is used to rewrite this as an expectation over the policy's own trajectories:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{T} Q^{\pi_\theta}(s_t, a_t)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right].$$
The simplest possible policy gradient algorithm, called REINFORCE, computes a naive unbiased estimate of this gradient using a sampled trajectory $\tau = (s_0, a_0, r_1, s_1, a_1, \ldots, r_T, s_T)$:
$$\hat{g} = \sum_{t=0}^{T} G_t\, \nabla_\theta \log \pi_\theta(a_t \mid s_t),$$
where $G_t = \sum_{t' \geq t} r_{t'+1}$ is the empirical return at step $t$. Although unbiased, this estimate is exceedingly high-variance and not usually practical on its own. We'll discuss improvements below.
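To make this concrete, here is a minimal REINFORCE sketch (PyTorch, my own illustration rather than anything canonical), assuming a Gymnasium-style environment with discrete actions; the network architecture and hyperparameters are arbitrary placeholders:

```python
import torch
import torch.nn as nn

# Minimal REINFORCE sketch (illustrative only). Assumes a Gymnasium-style env
# with discrete actions; `obs_dim` and `n_actions` are placeholders.
def reinforce(env, obs_dim, n_actions, episodes=1000, lr=1e-2):
    policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)

    for _ in range(episodes):
        obs, _ = env.reset()
        log_probs, rewards = [], []
        done = False
        while not done:
            logits = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, terminated, truncated, _ = env.step(action.item())
            rewards.append(reward)
            done = terminated or truncated

        # Empirical return G_t at each step (undiscounted, matching the setup above).
        returns, G = [], 0.0
        for r in reversed(rewards):
            G += r
            returns.append(G)
        returns.reverse()

        # Gradient ascent on sum_t G_t * log pi(a_t | s_t): minimize the negative.
        loss = -torch.sum(torch.stack(log_probs) * torch.as_tensor(returns, dtype=torch.float32))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In practice one would at least subtract a baseline (discussed below) before this becomes usable.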
Why learn a policy
Why would we learn by following a policy gradient, instead of by estimating state or action values?
- A learned policy can be stochastic.
- A learned policy can act in a continuous space.
- Policy learning may converge more reliably than value learning.
Potential downsides are that policy learning converges to local (not global) maxima, and the updates can have much higher variance.
Variance reduction
Where does the variance come from in a naive policy gradient estimate? Let's consider a simple model whose variance we can analyze in closed form: a stateless (bandit) setting with two possible actions $a \in \{0, 1\}$, a Bernoulli policy parameterized by log-odds $\theta$,
$$\pi_\theta(1) = \sigma(\theta) = \frac{1}{1 + e^{-\theta}}, \qquad \pi_\theta(0) = 1 - \sigma(\theta),$$
and stochastic reward represented by a random variable $R$ whose expected value is $\mu_1$ or $\mu_0$ (for $a = 1$ or $a = 0$, respectively), and whose variance is $\sigma_R^2$ (independent of the action).
Considering the two cases, under the first action $a = 1$ the score function becomes
$$\nabla_\theta \log \pi_\theta(1) = \nabla_\theta \log \sigma(\theta) = 1 - \sigma(\theta),$$
and similarly for $a = 0$ we have
$$\nabla_\theta \log \pi_\theta(0) = \nabla_\theta \log \big(1 - \sigma(\theta)\big) = -\sigma(\theta).$$
If we initialize at the uniform distribution over actions, $\theta = 0$, then these derivatives simplify to $\tfrac{1}{2}$ and $-\tfrac{1}{2}$ respectively, so for the single-sample estimate $\hat{g} = R\, \nabla_\theta \log \pi_\theta(a)$ we have expected gradient
$$\mathbb{E}[\hat{g}] = \frac{1}{2}\cdot\frac{\mu_1}{2} + \frac{1}{2}\cdot\left(-\frac{\mu_0}{2}\right) = \frac{\mu_1 - \mu_0}{4}.$$
This will tend to pull the parameter towards whichever action has larger reward, as we would hope. Moving to the variance, we exploit the identity $\operatorname{Var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2$, first computing
$$\begin{aligned}
\mathbb{E}[\hat{g}^2] &= \frac{1}{2}\cdot\frac{\mathbb{E}[R^2 \mid a = 1]}{4} + \frac{1}{2}\cdot\frac{\mathbb{E}[R^2 \mid a = 0]}{4} \\
&= \frac{\sigma_R^2}{4} + \frac{\mu_1^2 + \mu_0^2}{8}
\end{aligned}$$
(in which the second line uses that same identity again, to express the expected squared reward as the sum of its variance and squared mean), and then plug this in to derive
$$\operatorname{Var}(\hat{g}) = \mathbb{E}[\hat{g}^2] - \mathbb{E}[\hat{g}]^2 = \frac{\sigma_R^2}{4} + \frac{\mu_1^2 + \mu_0^2}{8} - \frac{(\mu_1 - \mu_0)^2}{16} = \frac{\sigma_R^2}{4} + \frac{1}{4}\left(\frac{\mu_1 + \mu_0}{2}\right)^2.$$
We see that the overall gradient variance is the sum of two terms: the intrinsic variance of the reward, and a term that measures the magnitude (in the squared norm) of the reward averaged over actions.
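As a sanity check, here's a quick Monte Carlo simulation of this bandit (under the sigmoid/log-odds parameterization assumed above; the particular values of $\mu_1$, $\mu_0$, $\sigma_R$ are arbitrary) comparing the empirical mean and variance of $\hat{g}$ against the closed forms just derived:

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu0, sigma_R = 2.0, 1.0, 0.5   # arbitrary illustrative values
theta = 0.0                         # uniform policy: sigma(0) = 1/2
p = 1.0 / (1.0 + np.exp(-theta))

n = 1_000_000
a = rng.random(n) < p                         # sample actions from the policy
r = np.where(a, mu1, mu0) + sigma_R * rng.standard_normal(n)
score = np.where(a, 1.0 - p, -p)              # d/dtheta log pi(a)
g = r * score                                 # single-sample gradient estimates

print("empirical mean:", g.mean(), " closed form:", (mu1 - mu0) / 4)
print("empirical var: ", g.var(),  " closed form:",
      sigma_R**2 / 4 + ((mu1 + mu0) / 2)**2 / 4)
```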
Note that this latter term is sensitive to how 'centered' the reward function is: for actions with expected rewards $\mu_1 = r$ and $\mu_0 = -r$ this term would be zero, but if we apply a constant shift $c$ to both rewards then it contributes $c^2/4$ to the variance, which can be arbitrarily large even though the two situations are decision-theoretically equivalent. This observation motivates the first of our variance reduction techniques: we should try to 'center' the reward function (more generally, the action-value function) by estimating and subtracting the average or 'baseline' value at each state.
So far we've considered only the initial gradient under the uniform policy, but ultimately we are hoping to reach the optimal policy, which in general will put all of its mass on whichever of the two actions yields higher expected reward. What happens in this extreme? Let's redo the analysis, replacing the uniform parameter $\theta = 0$ with the general case $p = \sigma(\theta)$, so that the policy puts mass $p$ on action 1 and only $1 - p$ on action 0. Now we have
$$\mathbb{E}[\hat{g}] = p\,(1 - p)\,\mu_1 + (1 - p)\,(-p)\,\mu_0 = p(1 - p)(\mu_1 - \mu_0)$$
and
$$\operatorname{Var}(\hat{g}) = p(1 - p)\Big[(1 - p)\big(\sigma_R^2 + \mu_1^2\big) + p\big(\sigma_R^2 + \mu_0^2\big)\Big] - p^2(1 - p)^2(\mu_1 - \mu_0)^2.$$
We observe that these recover our previous results in the special case $p = \tfrac{1}{2}$. But as $p \to 1$, the expected gradient shrinks like $(1 - p)$ while the standard deviation shrinks only like $\sqrt{1 - p}$, so the variance relative to the squared mean blows up:
$$\frac{\operatorname{Var}(\hat{g})}{\mathbb{E}[\hat{g}]^2} \approx \frac{\sigma_R^2 + \mu_0^2}{(1 - p)(\mu_1 - \mu_0)^2} \to \infty,$$
so the number of samples needed to resolve the direction of the gradient grows without bound. That's not good! Looking back at the two cases, we see that this near-optimal policy will almost always (with probability $p$) sample action 1, and the sampled gradient in that case is essentially zero due to the score function term: a small change in $\theta$ won't appreciably change the log-probability of action 1. The policy will almost never sample action 0, but when it does, the score function (and thus the gradient) is huge relative to the expected gradient. This is essentially due to the log-odds policy parameterization.
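To make this concrete, here's a small numeric illustration of the closed forms above (again under the assumed log-odds parameterization and arbitrary reward values): as $p \to 1$ the single-sample gradient's standard deviation dwarfs its mean.

```python
import numpy as np

mu1, mu0, sigma_R = 2.0, 1.0, 0.5  # same arbitrary values as before

for p in [0.5, 0.9, 0.99, 0.999]:
    mean = p * (1 - p) * (mu1 - mu0)
    second_moment = p * (1 - p) ** 2 * (sigma_R**2 + mu1**2) \
                  + (1 - p) * p ** 2 * (sigma_R**2 + mu0**2)
    var = second_moment - mean**2
    print(f"p={p}:  E[g]={mean:.4f}  std={np.sqrt(var):.4f}  "
          f"relative var={var / mean**2:.1f}")
```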
Baselines
Since the expectation of the score function itself is zero, we can subtract any constant multiple of it without changing the expectation. A particularly natural baseline for the Q-value (motivated by the toy model in the previous section, but which can also be derived by a more general analysis) is the state value $V^\pi(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\big[Q^\pi(s, a)\big]$, obtained by averaging Q-values under the policy's action distribution. The thus-'centered' Q-value is called the advantage:
$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s).$$
The advantage function can be estimated indirectly by maintaining separate models of Q and V, which can even be updated at different time scales (corresponding to different step-size values). Or we can use the temporal difference error $\delta_t = r_{t+1} + \gamma V^\pi(s_{t+1}) - V^\pi(s_t)$ as a direct estimate of the advantage, since:
$$\mathbb{E}\big[\delta_t \mid s_t, a_t\big] = \mathbb{E}\big[r_{t+1} + \gamma V^\pi(s_{t+1}) \mid s_t, a_t\big] - V^\pi(s_t) = Q^\pi(s_t, a_t) - V^\pi(s_t) = A^\pi(s_t, a_t).$$
In general we can of course use longer-timescale temporal differences $\delta_t^{(n)} = r_{t+1} + \gamma r_{t+2} + \cdots + \gamma^{n-1} r_{t+n} + \gamma^n V(s_{t+n}) - V(s_t)$, since these are all estimates of the same underlying quantity.
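A minimal sketch of these estimators (plain NumPy; the reward/value arrays are placeholders, and the discount `gamma` is an assumption, since the setup above works with undiscounted finite trajectories):

```python
import numpy as np

def td_advantages(rewards, values, gamma=0.99):
    """One-step TD errors delta_t = r_{t+1} + gamma*V(s_{t+1}) - V(s_t),
    used as direct advantage estimates.  `values` holds V(s_0..s_T)."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    return rewards + gamma * values[1:] - values[:-1]

def n_step_advantages(rewards, values, n, gamma=0.99):
    """n-step TD errors: (n-step return bootstrapped from V) - V(s_t)."""
    T = len(rewards)
    out = np.empty(T)
    for t in range(T):
        k = min(n, T - t)                      # truncate at the end of the trajectory
        ret = sum(gamma**i * rewards[t + i] for i in range(k))
        ret += gamma**k * values[t + k]        # bootstrap from the value estimate
        out[t] = ret - values[t]
    return out
```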
Using a critic
The expression for the policy gradient includes Q-values, and we can use any estimate of these we like. We saw the naive Monte Carlo estimate above, in the REINFORCE algorithm. More generally, we can use modeled Q-values trained using a temporal difference method such as SARSA($\lambda$). The model of Q-values is known as the 'critic', and the resulting overall method is an actor-critic method.
In general, the critic's value estimates will be biased, leading to biased policy gradients.
Continual learning and online updates
Using a critic we can apply a policy gradient update at each step based on the estimated return instead of waiting until the end of a trajectory to get its empirical return. Unlike trajectories as a whole, state-action pairs at different timesteps are not independent or identically distributed, so we no longer have the convergence guarantee of stochastic gradient ascent, if we ever did (the critic's biased gradients would also break these guarantees even if we only updated at the trajectory level).
Still, online updating may help the system learn faster in practice (indeed, in the continual-learning setting of a single never-ending trajectory this is the only way for any learning to happen).
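A hedged sketch of such an online update (PyTorch; `value_net`, the optimizers, and `gamma` are illustrative placeholders): at each environment step the critic is updated from the TD error, and the same TD error serves as the advantage estimate for an immediate policy-gradient step.

```python
import torch

def online_actor_critic_step(value_net, policy_opt, value_opt,
                             a_log_prob, s, r, s_next, gamma=0.99):
    """One online actor-critic update from a single transition (s, a, r, s').
    `a_log_prob` is log pi_theta(a|s), still attached to the policy's graph."""
    td_error = r + gamma * value_net(s_next).detach() - value_net(s)

    # Critic: move V(s) towards the one-step bootstrapped target (semi-gradient).
    value_loss = td_error.pow(2).mean()
    value_opt.zero_grad()
    value_loss.backward()
    value_opt.step()

    # Actor: score function weighted by the TD error (advantage estimate).
    policy_loss = -(a_log_prob * td_error.detach()).mean()
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()
```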
Naively, the update to $\theta$ at time $t$ will be the score function $\nabla_\theta \log \pi_\theta(a_t \mid s_t)$ times some scalar multiple (the estimated advantage): if the action gave us a positive advantage, we increase its probability, otherwise we decrease it. But we may also want to update the policy for previous actions based on our current experience. This can be done by replacing the score function with an eligibility trace that accumulates an exponentially decaying average of the score functions from recent actions, as sketched below.
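Here is one hedged sketch of such an actor update with per-parameter eligibility traces (PyTorch; `policy`, the trace decay `lam`, and the learning rate are illustrative placeholders):

```python
import torch

def actor_trace_update(policy, log_prob, advantage, traces, lam=0.9, lr=1e-3):
    """One online actor step with an eligibility trace over policy parameters:
    decay the traces, add the current score function, then step along
    advantage * trace instead of advantage * score."""
    params = list(policy.parameters())
    grads = torch.autograd.grad(log_prob, params)   # score function d/dtheta log pi(a_t|s_t)
    with torch.no_grad():
        for param, grad, trace in zip(params, grads, traces):
            trace.mul_(lam).add_(grad)              # z <- lam * z + score
            param.add_(lr * advantage * trace)      # theta <- theta + lr * A_t * z
```

Here `traces` would be initialized once as `[torch.zeros_like(p) for p in policy.parameters()]` and persists across steps; `lam` is the exponential decay rate of the trace.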
As policy iteration
We can view policy-gradient methods as a form of 'softened' policy iteration, or generalized policy iteration. The loop (reference: Sergey Levine's Berkeley Deep RL lecture, https://www.youtube.com/watch?v=ySenCHPsKJU) is:
- Estimate $A^\pi(s, a)$ for some or all state-action pairs (in the simplest case, this estimate is just the Monte Carlo return seen in the collected trajectory).
- Update the policy using these estimated advantages.
A full update would set
$$\pi_{\text{new}}(s) = \arg\max_a \hat{A}^\pi(s, a),$$
i.e., the greedy policy improvement step; a policy-gradient step only nudges the policy in this direction.
Fully updating on experience
Experience is expensive. Given a sampled trajectory, we should aspire to a proper Bayesian update, which would incorporate all information present in the trajectory to narrow down the possible set of optimal policies. Actually implementing a Bayesian policy optimization would require
- representing a distribution over optimal policies, rather than just a single policy, and
- updating that distribution fully in response to new evidence - this would look like fully optimizing a variational objective, not just taking a single gradient step.
In practice, we might at least think of trying to take multiple gradient steps on the sample objective defined by some observed experience. The problem is that after taking the first such step, we're no longer in the on-policy regime and the policy gradient theorem no longer holds. This can be worked around by making sure not to travel too far in policy space; one approach is trust region policy optimization (TRPO).
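As one concrete (if different-in-detail) instance of 'don't move too far', here is a sketch of the PPO-style clipped surrogate loss, a simpler cousin of TRPO's explicit trust-region constraint; the tensors (`log_probs_new`, `log_probs_old`, `advantages`) are placeholders for quantities computed from a batch of collected experience:

```python
import torch

def clipped_surrogate_loss(log_probs_new, log_probs_old, advantages, eps=0.2):
    """PPO-style objective: maximize ratio * A, but clip the probability ratio
    to [1-eps, 1+eps] so repeated gradient steps can't push the new policy
    too far from the policy that collected the data."""
    ratio = torch.exp(log_probs_new - log_probs_old.detach())
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))  # negate: we minimize
```

Taking several epochs of minibatch gradient steps on this clipped objective is the usual way to reuse a batch of experience without straying too far from the data-collecting policy.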
More generally, we can decouple the training into multiple processes, which can operate asynchronously:
- One process collects data and stores it for experience replay.
- One process uses the replay buffer to train a value function, e.g., via Q-learning.
- One process optimizes the policy using the surrogate value function.
This kind of decomposition (which I've discussed in the context of deep deterministic policy gradient methods, though the ideas are more general) seems like a promising strategy for unifying value and policy-based approaches to RL.
Reparameterization and preconditioning
Once we get past all of the RL-specific aspects of gradient estimation, all the usual tricks for speeding up stochastic gradient optimizations still apply. It's important to choose a good parameterization for the actor and critic; ideas like natural gradient and mirror descent are applicable.
Deterministic policy
In continuous action spaces it can be convenient to use a deterministic policy $a = \mu_\theta(s)$, so that $\pi_\theta(a \mid s) = \delta\big(a - \mu_\theta(s)\big)$. Then
$$\nabla_\theta J(\theta) = \mathbb{E}_{s}\!\left[\nabla_\theta \mu_\theta(s)\; \nabla_a Q^{\mu}(s, a)\big|_{a = \mu_\theta(s)}\right],$$
where $\nabla_\theta \mu_\theta(s)$ is in general the Jacobian matrix of the policy, giving the sensitivity of each action dimension to each policy parameter. Then we simply need the gradient of $Q(s, a)$ with respect to the action $a$, which we can query directly given a differentiable model of $Q$.
Note that if we optimized $\mu_\theta$ to completion with a sufficiently flexible policy family, we would obtain the greedy policy $\mu(s) = \arg\max_a Q(s, a)$; in other words, we would simply be doing (deep) Q-learning. Thus we can see this as an approach to Q-learning in continuous action spaces, where interleaving updates of $\mu_\theta$ and $Q$ is a case of generalized policy iteration.
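A hedged sketch of how this gradient is taken in practice with automatic differentiation (PyTorch; `policy` and `q_net` are placeholder networks): backpropagating through $Q(s, \mu_\theta(s))$ with respect to the policy parameters realizes exactly the $\nabla_\theta \mu_\theta(s)\, \nabla_a Q(s, a)$ chain rule above.

```python
import torch

def deterministic_policy_gradient_step(policy, q_net, states, policy_opt):
    """Ascend Q(s, mu_theta(s)) w.r.t. the policy parameters only.
    Gradients flow through the action into the policy; the critic gets
    no parameter update here (it is trained separately)."""
    actions = policy(states)                    # a = mu_theta(s), differentiable
    q_values = q_net(states, actions)           # Q(s, mu_theta(s))
    loss = -q_values.mean()                     # maximize Q => minimize -Q
    policy_opt.zero_grad()
    loss.backward()                             # chain rule: dQ/da * da/dtheta
    policy_opt.step()
```

This negative-mean-Q actor loss is how DDPG-style implementations typically realize the deterministic policy gradient.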
Silver et al. (2014) shows that the above expression is also the limit of the usual stochastic policy gradient as the policy variance approaches zero.
Lillicrap et al. (2015) demonstrated that this can work with deep networks if we apply the DQN tricks of experience replay and target networks to learn the Q function.
Entropy bonus
One approach to encourage exploration (used, e.g., in A3C) is to add the entropy gradient to the policy gradient at each step. This 'entropy bonus' tries to prevent the policy from collapsing into a deterministic policy, unless the reward function provides really strong pressure for it to do so.
What objective does this procedure optimize? Well, everything happens inside an expectation over states reached by the policy, and we've seen that when we push gradients inside of that expectation we generally have to use the score function trick to account for the change in distribution induced by a change in policy. But (unlike the usual policy gradient) the entropy bonus term doesn't include the score function! Thus we can view it as a modified gradient
$$\mathbb{E}_{s_t, a_t \sim \pi_\theta}\!\left[\hat{A}_t\, \nabla_\theta \log \pi_\theta(a_t \mid s_t) + \beta\, \nabla_\theta \mathcal{H}\big(\pi_\theta(\cdot \mid s_t)\big)\right],$$
which ignores the path by which changing $\theta$ will change the visited states $s_t$. So this approach will try to increase the entropy of the policy at the currently-visited states, but it makes no attempt to cause the policy to visit states in which it will have high entropy. This myopic approach to maximizing entropy is sometimes called a 'one-step' bonus.
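A minimal sketch of how the one-step bonus typically enters the loss in practice (PyTorch; `dist` is the action distribution at the visited states and `beta` is an assumed bonus coefficient). Note that the entropy term is differentiated directly, without a score-function factor, exactly as described above.

```python
import torch

def pg_loss_with_entropy_bonus(dist, actions, advantages, beta=0.01):
    """Policy-gradient loss plus a one-step entropy bonus.
    `dist` is a torch.distributions.Distribution over actions at each state."""
    pg_term = dist.log_prob(actions) * advantages.detach()  # score function * advantage
    entropy_term = dist.entropy()                           # differentiated directly (no score function)
    return -(pg_term + beta * entropy_term).mean()          # negate for a minimizer
```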
The natural alternative to a one-step bonus is to fold the policy entropy into the reward function itself, so that future entropies propagate into the value and Q functions. This is called maximum-entropy reinforcement learning and seems to be a nicer and more coherent approach with some theoretical benefits.
Criticism
Ben Recht (and others probably) has argued that policy gradient methods are just glorified random search. Because they don't know anything about the reward being optimized, they can only stumble blindly around, and they scale poorly with dimension, making them generally bad algorithms.
This seems like a reasonable objection to naive policy gradient methods. I think it becomes much less potent against actor-critic methods such as deep deterministic policy gradient, since those do learn a model of the value function, and can use gradients of that surrogate model to guide policy learning.