Modified: March 29, 2022
state values, then action values
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

A common pattern in reinforcement learning pedagogy is to develop some idea first in the context of estimating state values, and then extend it to estimate action values. For example: moving from temporal difference learning on states, to SARSA and Q-learning on actions.
Such an extension is always possible, since we can view the state-action pair as an element of an augmented state space. For example, by separating action selection and execution into distinct steps, so that our trajectories look like $s_0 \to (s_0, a_0) \to s_1 \to (s_1, a_1) \to \cdots$. But why do things this way?
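To make the pattern concrete, here is a minimal sketch of the TD-to-SARSA example in a tabular setting (the function and variable names here are hypothetical, not from any particular library): the SARSA update is literally the TD(0) update applied to the augmented "state" $(s, a)$.

```python
# Minimal sketch, tabular case. `alpha` is the step size, `gamma` the discount.

# TD(0): estimate V(s) under a fixed policy.
#   V[s] <- V[s] + alpha * (r + gamma * V[s'] - V[s])
def td0_update(V, s, r, s_next, alpha, gamma):
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

# SARSA: the same update, but on the augmented "state" (s, a).
#   Q[s, a] <- Q[s, a] + alpha * (r + gamma * Q[s', a'] - Q[s, a])
def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
```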
State-value estimates are a little bit easier to think about, just because there are fewer moving parts. But they're not directly useful for control. Control requires us to choose actions, so we need to know how good the actions are. This is generally why we end up formulating control algorithms in terms of Q-values.
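A sketch of why this matters for control (hypothetical names, assuming tabular arrays): greedy action selection from Q-values is just an argmax, whereas acting greedily from state values requires a model of the transition dynamics to back up one step first.

```python
import numpy as np

def greedy_from_q(Q, s):
    # With action values, control is a direct argmax over actions.
    # Q has shape [n_states, n_actions].
    return int(np.argmax(Q[s]))

def greedy_from_v(V, s, P, R, gamma):
    # With state values alone, we need transition probabilities P[s, a, s']
    # and expected rewards R[s, a] to compute one-step backups before we
    # can compare actions.
    backups = R[s] + gamma * (P[s] @ V)  # shape [n_actions]
    return int(np.argmax(backups))
```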
When would state values be useful?
- In planning or model-based RL settings where transition dynamics are available.
- Sharing statistical strength across the actions available from a single state may aid generalization, even if we ultimately care about the action values.
- As a baseline for policy gradient methods, and to estimate the advantage function via the temporal-difference error (see the sketch below).
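For that last point, a minimal sketch (hypothetical names, one-step actor-critic style): the TD error computed from a learned state-value baseline serves as an estimate of the advantage $A(s, a)$.

```python
# One-step advantage estimate from a state-value baseline:
#   delta = r + gamma * V(s') - V(s)  ~=  A(s, a)
def td_advantage(V, s, r, s_next, done, gamma):
    bootstrap = 0.0 if done else gamma * V[s_next]
    return r + bootstrap - V[s]

# In a policy-gradient update, this advantage scales the score function:
#   theta <- theta + lr * td_advantage(...) * grad_log_pi(a | s)
```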