Modified: June 22, 2022
actor-critic
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector. (References: https://julien-vitay.net/deeprl/ActorCritic.html)
Advantage actor-critic
The advantage function
$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$$
is a 'centered' version of $Q^\pi(s, a)$; in policy gradient methods this corresponds to using the state-value function $V^\pi(s)$ as a control variate to reduce variance. Advantage actor-critic methods estimate the advantage function directly. A standard estimate is the $n$-step advantage, which just plugs in the usual $n$-step temporal difference estimate of action values:
$$A^{(n)}(s_t, a_t) = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V^\pi(s_{t+n}) - V^\pi(s_t).$$
As with other TD estimates, we could average this over multiple values of $n$ for finer-grained control over the bias-variance tradeoff.
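As a minimal sketch of the $n$-step advantage estimate (the function and argument names are mine; it assumes the critic's estimates $V(s_t)$ and $V(s_{t+n})$ are passed in directly):

```python
import numpy as np

def n_step_advantage(rewards, v_start, v_bootstrap, gamma=0.99):
    """A^(n)(s_t, a_t) = sum_{k=0}^{n-1} gamma^k r_{t+k} + gamma^n V(s_{t+n}) - V(s_t).

    rewards:     [r_t, ..., r_{t+n-1}], the n rewards observed after (s_t, a_t)
    v_start:     critic estimate V(s_t)
    v_bootstrap: critic estimate V(s_{t+n}) used to bootstrap the tail of the return
    """
    rewards = np.asarray(rewards, dtype=float)
    n = len(rewards)
    n_step_return = np.dot(gamma ** np.arange(n), rewards) + gamma ** n * v_bootstrap
    return n_step_return - v_start
```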
Parallel / asynchronous actor-critic
(summarizing Mnih et al. 2016, Asynchronous Methods for Deep Reinforcement Learning)
A parallel advantage actor-critic (A2C) algorithm is:
- Initialize a global actor $\pi_\theta$ and critic $V_\phi$, and many parallel copies of the environment.
- In each environment:
- Take $n$ steps, logging the 'minibatch' of $(s_t, a_t, r_t, s_{t+1})$ tuples. If a terminal state is reached after $k < n$ steps, just pretend that we used the shorter horizon $k$, and reset the environment so that the next minibatch starts at the start state $s_0$.
- Compute the TD estimate $R_t = \sum_{k=t}^{n-1} \gamma^{k-t} r_k + \gamma^{n-t} V_\phi(s_n)$ for each $t$. That is: for each state we compute the longest-horizon estimate possible using the minibatch data, so $s_0$ gets an $n$-step estimate, $s_1$ gets an $(n-1)$-step estimate, and so on (see the sketch after this list).
- Accumulate the minibatch actor and critic gradients $\sum_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\big(R_t - V_\phi(s_t)\big)$ and $\sum_t \nabla_\phi \big(R_t - V_\phi(s_t)\big)^2$.
- Apply the summed gradient update and repeat from step 2.
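Here is a sketch of the per-minibatch return computation (step 3 above), assuming a minibatch of $n$ rewards and a critic estimate at the bootstrap state; the backward recursion $R_t = r_t + \gamma R_{t+1}$ reproduces the 'longest-horizon estimate possible' for each state. Function and argument names are mine:

```python
import numpy as np

def minibatch_returns(rewards, v_bootstrap, terminal, gamma=0.99):
    """TD targets R_t for every state in an n-step minibatch.

    Seeds the recursion with V(s_n) (or 0 if the segment hit a terminal state),
    then walks backwards: R_t = r_t + gamma * R_{t+1}. The first state thus gets
    an n-step estimate, the second an (n-1)-step estimate, and so on.
    """
    returns = np.empty(len(rewards))
    running = 0.0 if terminal else v_bootstrap
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# The actor gradient then uses (R_t - V(s_t)) as the advantage estimate, and the
# critic gradient regresses V(s_t) towards R_t, accumulated over the minibatch.
```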
Note a few design choices here:
- By using parallel environments we reduce correlation in the state/action pairs. This serves a similar purpose to experience replay, but allows for on-policy learning.
- Computing TD estimates of different horizons from an $n$-step minibatch effectively averages over $k$-step TD methods for $k = 1, \ldots, n$.
Mnih et al. find that it helps a lot to augment the policy gradient with an entropy regularization term to improve exploration.
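As a rough illustration (plain numpy, no autodiff, names are mine), the regularized objective just adds a weighted entropy term to the usual policy-gradient loss:

```python
import numpy as np

def policy_loss_with_entropy(log_probs, advantages, action_probs, beta=0.01):
    """Policy-gradient loss minus an entropy bonus, so minimizing it encourages exploration.

    log_probs:    log pi(a_t | s_t) for the actions actually taken, shape (T,)
    advantages:   advantage estimates, treated as constants, shape (T,)
    action_probs: full action distributions pi(. | s_t), shape (T, num_actions)
    beta:         entropy coefficient (a small constant)
    """
    pg_loss = -np.mean(log_probs * advantages)
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8), axis=1).mean()
    return pg_loss - beta * entropy
```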
The Asynchronous Advantage Actor-Critic (A3C) method generalizes this to allow the parallel actors to run simulations and apply gradient updates asynchronously (an instance of 'Hogwild!' optimization). They find that it works well, and that it helps to share a global RMSProp optimizer state across threads.
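A toy sketch of the Hogwild!-style update pattern: several threads read and write shared parameters and a shared RMSProp second-moment buffer without any locking. The gradient here is a stand-in quadratic rather than an actual actor-critic gradient, and Python's GIL means this only illustrates the pattern, not the paper's CPU-thread setup:

```python
import threading
import numpy as np

params = np.ones(8)        # shared parameters, updated lock-free by all workers
rms_sq = np.zeros(8)       # shared RMSProp second-moment estimate, as in A3C

def toy_gradient(p):
    # stand-in for an actor-critic gradient computed from a local rollout
    return 2.0 * (p - 0.5)

def worker(n_updates, lr=1e-2, decay=0.99, eps=1e-8):
    for _ in range(n_updates):
        grad = toy_gradient(params)
        # apply the RMSProp update directly to the shared state, no locks
        rms_sq[:] = decay * rms_sq + (1.0 - decay) * grad ** 2
        params[:] -= lr * grad / (np.sqrt(rms_sq) + eps)

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(params)  # all workers' updates land in the same shared parameter vector
```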