Modified: July 07, 2022
constrained optimization
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Suppose we want to optimize an objective under some equality and/or inequality constraints,

$$\min_x f(x) \quad \text{subject to} \quad h_i(x) = 0, \quad g_j(x) \le 0.$$
Some general classes of approach we can use are:
- Reparameterizing the problem to eliminate the constraints.
- Projected gradient descent (a sketch follows this list), or its generalization mirror descent.
- Lagrange multiplier methods, including dual gradient ascent.
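To make the projected gradient descent item concrete, here is a minimal sketch in Python on a toy problem of my own invention (not from these notes): minimizing a quadratic over the unit ball, where the Euclidean projection has a closed form.

```python
import numpy as np

def project_unit_ball(x):
    # Euclidean projection onto {x : ||x|| <= 1}: rescale if outside.
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def projected_gradient_descent(grad, project, x0, step=0.1, n_iters=200):
    # Alternate a plain gradient step with a projection back onto the
    # feasible set; for a convex objective and convex set this converges
    # with a suitably small step size.
    x = x0
    for _ in range(n_iters):
        x = project(x - step * grad(x))
    return x

# Toy problem: minimize ||x - c||^2 subject to ||x|| <= 1, with c outside
# the ball, so the constrained optimum sits on the boundary at c / ||c||.
c = np.array([2.0, 0.0])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient_descent(grad, project_unit_ball, np.zeros(2))
print(x_star)  # ~ [1. 0.]
```

Mirror descent generalizes this by replacing the Euclidean projection with a Bregman projection better suited to the geometry of the constraint set.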
Constraint reductions
Sometimes it is convenient to work with only inequality or only equality constraints.
An equality constraint $h(x) = 0$ can always be reduced to two inequality constraints $h(x) \le 0$ and $-h(x) \le 0$, so we can always assume that all constraints are inequalities.
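As a trivial sketch of this reduction (the names `h`, `g_plus`, `g_minus` are mine, chosen for illustration):

```python
# Turn an equality constraint h(x) = 0 into the equivalent pair of
# inequality constraints h(x) <= 0 and -h(x) <= 0.
def equality_to_inequalities(h):
    g_plus = lambda x: h(x)    # enforces h(x) <= 0
    g_minus = lambda x: -h(x)  # enforces -h(x) <= 0, i.e. h(x) >= 0
    return g_plus, g_minus

# Example: x0 + x1 - 1 = 0 becomes x0 + x1 - 1 <= 0 and 1 - x0 - x1 <= 0,
# which jointly force equality.
h = lambda x: x[0] + x[1] - 1.0
g_plus, g_minus = equality_to_inequalities(h)
```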
Conversely, an inequality constraint $g(x) \le 0$ can be reduced to an equality constraint by adding a slack variable: $g(x) \le 0$ becomes $g(x) + s^2 = 0$, where $s^2$ represents the 'slack' in the inequality, squared to guarantee that it is nonnegative. So we can likewise choose to assume that all constraints are equalities.
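A hedged sketch of the slack-variable trick, using `scipy.optimize.minimize` with an equality constraint; the toy objective and constraint are my own illustration, not from these notes.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize (x - 2)^2 subject to g(x) = x - 1 <= 0, whose
# solution is x = 1. Reduce to an equality constraint via z = (x, s):
#   g(x) + s^2 = 0, i.e. x - 1 + s^2 = 0.
f_aug = lambda z: (z[0] - 2.0) ** 2  # objective ignores the slack s
eq_con = {"type": "eq", "fun": lambda z: z[0] - 1.0 + z[1] ** 2}

# Start from a feasible point: x = 0 gives g = -1, so s = 1 works.
res = minimize(f_aug, np.array([0.0, 1.0]), method="SLSQP", constraints=[eq_con])
print(res.x[0])  # ~ 1.0, the constrained optimum
```

Note how the equality constraint is quadratic in $s$ even though the original inequality was linear in $x$, which is exactly the geometry change cautioned about below.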
While valid in principle, these reductions will affect the geometry of the problem and may change its properties (e.g., a linear inequality will become quadratic in the slack variable).