Created: May 08, 2020
Modified: July 14, 2022
Hamiltonian
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.
- What are Hamiltonian dynamics?
- A system has phase space coordinates (p, q), representing momenta and positions respectively. How do these evolve over time?
- The Hamiltonian is any function $H(p, q)$ such that
  $$\frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q}$$
- Often this is the total energy of the system, i.e., $H(p, q) = K(p) + U(q)$: the sum of kinetic energy $K$ (a function of the momenta $p$) and potential energy $U$ (a function of the positions $q$).
The velocity $dq/dt$ is exactly the derivative of the kinetic energy ($K$) with respect to momentum ($p$): $dq/dt = \partial K / \partial p$.
The change in momentum $dp/dt$ is the force on our system: $dp/dt = -\partial U / \partial q$, the negative gradient of the potential energy. When potential energy is increasing (has positive gradient), it means we're going uphill and our momentum will decrease. And vice versa.
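The dynamics above can be sketched numerically. Below is a minimal, assumed example (not from these notes): a 1-D harmonic oscillator with $H(p, q) = p^2/2 + q^2/2$, simulated with the standard leapfrog integrator. The point to observe is that $H$ is (approximately) conserved along the trajectory.

```python
import numpy as np

# Harmonic oscillator (assumed example): H(p, q) = K(p) + U(q) = p**2/2 + q**2/2.
def grad_U(q):
    return q          # dU/dq; the force is -grad_U(q)

def grad_K(p):
    return p          # dK/dp; this is the velocity dq/dt

def leapfrog(p, q, step_size, n_steps):
    """Simulate dq/dt = dK/dp, dp/dt = -dU/dq with the leapfrog scheme."""
    p -= 0.5 * step_size * grad_U(q)       # half step for momentum
    for _ in range(n_steps - 1):
        q += step_size * grad_K(p)         # full step for position
        p -= step_size * grad_U(q)         # full step for momentum
    q += step_size * grad_K(p)
    p -= 0.5 * step_size * grad_U(q)       # final half step for momentum
    return p, q

p0, q0 = 1.0, 0.0
p1, q1 = leapfrog(p0, q0, step_size=0.01, n_steps=1000)
H0 = 0.5 * p0**2 + 0.5 * q0**2
H1 = 0.5 * p1**2 + 0.5 * q1**2
print(H0, H1)  # nearly equal: the discretized dynamics approximately conserve H
```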
- The log-density function that we write in ML is just the potential energy portion of this. That's because there is some theorem ?? that the probability that Hamiltonian dynamics ends up in a state with potential energy $U(q)$ is proportional to $e^{-U(q)}$, so we can simulate from any distribution $p(q)$ by simulating from the Hamiltonian $H(p, q) = -\log p(q) + K(p)$.
- In QM, the Hamiltonian is an 'operator' on wave functions. What does that mean?
- Qs:
- when in physics history were these developed, and why? what are the advantages of this approach?
- what are the alternatives and when does it make sense to use them?
- Why does Hamiltonian mechanics sample from $e^{-U(q)}$?
- What's the analogy to machine learning?
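The ML connection above (simulating from $H = U + K$ to sample from $e^{-U}$) can be sketched as a minimal Hamiltonian Monte Carlo loop. This is an assumed illustration, not taken from the notes: the target is a standard normal, so $U(q) = q^2/2$, and a Metropolis accept/reject step corrects for discretization error in the leapfrog integrator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example target: p(q) ∝ e^{-U(q)} with U(q) = q**2/2 (standard normal).
def U(q):
    return 0.5 * q**2

def grad_U(q):
    return q

def hmc_step(q, step_size=0.2, n_steps=10):
    """One HMC step: resample momentum, run leapfrog, then accept/reject."""
    p = rng.standard_normal()                  # p ~ e^{-K(p)}, K(p) = p**2/2
    H_old = U(q) + 0.5 * p**2
    q_new, p_new = q, p
    p_new -= 0.5 * step_size * grad_U(q_new)   # leapfrog: half momentum step
    for _ in range(n_steps - 1):
        q_new += step_size * p_new
        p_new -= step_size * grad_U(q_new)
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_U(q_new)   # final half momentum step
    H_new = U(q_new) + 0.5 * p_new**2
    if rng.random() < np.exp(H_old - H_new):   # Metropolis correction
        return q_new
    return q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.var(samples))  # should be roughly 0 and 1
```

Note the role of $K$: resampling $p$ each step lets the trajectory reach new energy levels, while the dynamics themselves conserve $H$ and so move far across $q$ without changing the acceptance probability much.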
From Lagrangian
The Hamiltonian is the Legendre transform of the Lagrangian $L(q, \dot{q})$. If we think about Lagrangian mechanics as a constrained optimization with objective $\int L(q, v)\,dt$ s.t. $v = \dot{q}$, then construct that objective with a Lagrange multiplier $p$ for the constraint, now the dual objective (in the optimization sense) is the one in which we optimize out $v$, so we have
$$H(q, p) = \sup_v \left[\, p v - L(q, v) \,\right].$$
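As a sanity check on the Legendre transform, it can be computed numerically by maximizing $pv - L(q, v)$ over a grid of velocities. This is an assumed example: a free particle with $L = \frac{1}{2} m v^2$, whose transform has the closed form $H = p^2 / (2m)$.

```python
import numpy as np

# Assumed example: free particle of mass m, L(v) = (1/2) m v**2.
m = 2.0

def L(v):
    return 0.5 * m * v**2

def H(p, v_grid=np.linspace(-10, 10, 20001)):
    """Numerical Legendre transform: H(p) = max over v of [p*v - L(v)]."""
    return np.max(p * v_grid - L(v_grid))

p = 3.0
print(H(p), p**2 / (2 * m))  # both should be 2.25
```

The maximizing $v$ satisfies $p = \partial L / \partial v = m v$, which is exactly the usual definition of conjugate momentum.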