constrained optimization: Nonlinear Function
Created: July 06, 2022
Modified: July 07, 2022


This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Suppose we want to optimize an objective under some equality and/or inequality constraints,

$$\begin{align*} \min_\mathbf{x}\ & f(\mathbf{x})\\ \text{s.t.}\ \ & g(\mathbf{x}) \le \mathbf{0}\\ & h(\mathbf{x}) = \mathbf{0} \end{align*}$$
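As a concrete sketch of this setup (the specific objective and constraints are made up for illustration), scipy's SLSQP solver handles exactly this mix of equality and inequality constraints; note that scipy's convention for inequalities is $g(x) \ge 0$, the opposite sign of the formulation above.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (made up for illustration):
#   min  x1^2 + x2^2
#   s.t. 0.25 - x1 <= 0   (inequality g(x) <= 0)
#        x1 + x2 - 1 = 0  (equality   h(x) =  0)
f = lambda x: x[0]**2 + x[1]**2

# scipy's "ineq" constraints mean fun(x) >= 0, so we negate the g(x) <= 0 form.
constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 0.25},      # x1 - 0.25 >= 0
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 1},  # x1 + x2 = 1
]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(res.x)  # ≈ [0.5, 0.5]; the inequality is inactive at the optimum
```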

Some general classes of approach we can use are described below.

Constraint reductions

Sometimes it is convenient to work with only inequality or only equality constraints.

An equality constraint $h_i(x) = 0$ can always be reduced to two inequality constraints $h_i(x) \le 0$ and $h_i(x) \ge 0$, so we can always assume that all constraints are inequalities.
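As a quick numeric check of this reduction (on a made-up instance), replacing the equality $h(x) = x_1 + x_2 - 1 = 0$ with the two opposing inequalities recovers the same optimum:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1  # equality constraint h(x) = 0 (made-up instance)

# Replace h(x) = 0 with the pair h(x) <= 0 and h(x) >= 0.
# scipy's "ineq" convention is fun(x) >= 0, so h(x) <= 0 becomes -h(x) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda x: -h(x)},  # h(x) <= 0
    {"type": "ineq", "fun": lambda x: h(x)},   # h(x) >= 0
]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(res.x)  # ≈ [0.5, 0.5], same as solving with the equality directly
```

One caveat: the two inequalities are active simultaneously everywhere on the feasible set, which violates the strict-feasibility (Slater) condition some methods and theory rely on, so this reduction is mostly of formal interest.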

Conversely, an inequality constraint can be reduced to an equality constraint by adding a slack variable: $g_i(x) \le 0$ becomes $g_i(x) + \xi_i^2 = 0$, where $\xi_i^2$ represents the 'slack' in the inequality, squared to guarantee that it is nonnegative. So we can likewise choose to assume that all constraints are equalities.

While valid in principle, these reductions will affect the geometry of the problem and may change its properties (e.g., a linear inequality will become quadratic in the slack variable).
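To illustrate the slack reformulation on a made-up instance: the inequality $0.25 - x_1 \le 0$ becomes the equality $0.25 - x_1 + \xi^2 = 0$, with $\xi$ appended to the optimization variables. Note how a constraint that was linear in $x$ is now quadratic in $\xi$, as the caveat above warns.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up instance: min x1^2 + x2^2  s.t.  0.25 - x1 <= 0  and  x2 = 0.
# Slack reformulation: augment the variables with xi = z[2] and require
#   0.25 - x1 + xi^2 = 0   (an equality, quadratic in xi).
f = lambda z: z[0]**2 + z[1]**2  # objective ignores the slack variable

constraints = [
    {"type": "eq", "fun": lambda z: 0.25 - z[0] + z[2]**2},
    {"type": "eq", "fun": lambda z: z[1]},
]

res = minimize(f, x0=np.array([1.0, 1.0, 1.0]), method="SLSQP", constraints=constraints)
print(res.x[:2])  # ≈ [0.25, 0.0]: the original inequality is active, so xi ≈ 0
```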