Created: October 30, 2020
Modified: October 30, 2020
noisy natural gradient as VI
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

- https://arxiv.org/abs/1712.02390 ("Noisy Natural Gradient as Variational Inference")
- Basic idea: optimizers like Adam and RMSProp already keep track of posterior curvature estimates, which (together with the current iterate) implicitly define Gaussian posterior approximations. Can we make the connection to variational inference precise?
- Handwavy takeaway: in the case of a Gaussian surrogate posterior, consider the log likelihood with its Fisher $F$.
- That Fisher measures the curvature of the true posterior (ignoring any prior??).
- Let $q(\theta) = \mathcal{N}(\mu, \Lambda^{-1})$ be a Gaussian surrogate. If $q$ is a good approximation to the true posterior, then the likelihood Fisher $F$ is also a good approximation to the precision parameter $\Lambda$. And natural gradient on $q$'s parameters (i.e., w.r.t. $q$'s own Fisher) means preconditioning the mean update using the precision, which is what we would have also done if we'd used the likelihood Fisher.
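Spelling that out as a very schematic chain (my own notation, not the paper's: $F$ is the likelihood Fisher, $\Lambda$ the surrogate precision, $\mathcal{L}$ whatever objective we're optimizing, and the prior is assumed flat):

$$
\begin{aligned}
\text{true posterior (flat prior):}\quad & p(\theta \mid z) \approx \mathcal{N}\big(\theta;\, \hat\theta,\, F^{-1}\big) \\
\text{surrogate:}\quad & q(\theta) = \mathcal{N}\big(\theta;\, \mu,\, \Lambda^{-1}\big), \qquad q \approx p(\theta \mid z) \;\Rightarrow\; \Lambda \approx F \\
\text{natural gradient on } \mu:\quad & \Delta\mu \;\propto\; \Lambda^{-1} \nabla_{\mu} \mathcal{L} \;\approx\; F^{-1} \nabla_{\mu} \mathcal{L}
\end{aligned}
$$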
- Does this imply that compositional natural gradient is bad for posteriors with curvature? Preconditioning twice by the Fisher wouldn't be good.
- Suppose the true posterior is $\mathcal{N}(\theta; 0, \sigma^2)$. We'd get that if the likelihood were just $\mathcal{N}(z; \theta, \sigma^2)$, with a single observation $z = 0$ and a flat prior. Now we want to learn a posterior $q(\theta) = \mathcal{N}(\mu, s^2)$.
- The Fisher of our 'loss' is $1/\sigma^2$. So it's reasonable to consider the immediate natural gradient: $\sigma^2 \nabla_\theta$. If we then suppose that $\theta$ is a posterior sample given by $\theta = \mu + s\epsilon$, and try to do natural gradient updates on $\mu$ and $s$, what happens? We have $\nabla_\mu = \nabla_\theta$ (really, the Jacobian is $\partial\theta / \partial\mu = 1$), so the preconditioned gradient to $\mu$ is $s^2 \cdot \sigma^2 \nabla_\theta \approx \sigma^4 \nabla_\theta$ once $s \approx \sigma$. That's not great (numerical sketch below).
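A quick numerical check of that scaling. This is just my own toy sketch of the bullet above, with `sigma`, `s`, and the single observation `z = 0` chosen arbitrarily:

```python
# Toy setup from the bullets above: likelihood N(z; theta, sigma^2), one observation
# z = 0, flat prior, so the true posterior is N(theta; 0, sigma^2).
sigma = 10.0   # likelihood / true-posterior scale
z = 0.0        # the single observation
theta = 3.0    # current parameter value (or posterior sample)

# Gradient of the log-likelihood w.r.t. theta.
grad_theta = (z - theta) / sigma**2

# The likelihood Fisher w.r.t. theta is 1/sigma^2, so the "immediate" natural gradient
# preconditions by sigma^2: a Newton-like step that jumps straight to z.
immediate_ng = sigma**2 * grad_theta
print("immediate natural gradient:", immediate_ng)          # z - theta = -3.0

# Reparameterize theta = mu + s * eps and take a natural gradient step on mu using
# q = N(mu, s^2)'s own Fisher (1/s^2). Feeding it the already-preconditioned gradient
# preconditions twice: the step scales like s^2 * sigma^2 * grad_theta.
s = sigma  # assume q has roughly matched the true posterior
double_precond_step = s**2 * immediate_ng
print("doubly preconditioned step:", double_precond_step)    # ~ sigma^4 * grad_theta = -300.0

# The doubly preconditioned step overshoots by a factor of s^2 = sigma^2.
print("overshoot factor:", double_precond_step / immediate_ng)  # 100.0
```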
- How does this relate to the Pushforward natural gradient? There they consider the 'variational predictive distribution' $\int p(z \mid \theta)\, q(\theta)\, d\theta$, which is $\mathcal{N}(z; \mu, \sigma^2 + s^2)$. In the case where we are close to the true posterior, $s^2 \approx \sigma^2$, this predictive covariance is just $2\sigma^2$, so using it as a preconditioning Fisher avoids the issue above (sketched below).
- The crucial thing is that the pushforward relationship of Gaussian covariances is addition, not any kind of multiplication. :-(
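Continuing the same toy numbers, a sketch (mine, not the paper's) of why the predictive covariance gives a sensibly scaled preconditioner:

```python
# Same toy setup: likelihood N(z; theta, sigma^2), surrogate q(theta) = N(mu, s^2).
sigma = 10.0
s = sigma            # q roughly matched to the true posterior
mu, z = 3.0, 0.0

grad = (z - mu) / sigma**2   # log-likelihood gradient, evaluated at theta ~= mu

# Variational predictive distribution: integrate theta out of the likelihood,
#   int N(z; theta, sigma^2) N(theta; mu, s^2) dtheta = N(z; mu, sigma^2 + s^2).
# Covariances *add* under this pushforward; they don't multiply.
predictive_var = sigma**2 + s**2   # ~= 2 * sigma^2 when s ~= sigma

# Preconditioning with the predictive Fisher (1 / predictive_var) gives a step of the
# right order (~ sigma^2 * grad), unlike the ~ sigma^4 * grad step from double
# preconditioning in the earlier sketch.
pushforward_step = predictive_var * grad
double_precond_step = s**2 * sigma**2 * grad

print("pushforward-preconditioned step:", pushforward_step)    # -6.0
print("doubly preconditioned step:", double_precond_step)      # -300.0
```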