Constrained optimization and Lagrange multipliers
This tutorial presents an introduction to optimization problems that involve finding a maximum or a minimum value of an objective function $f(x_1, x_2, \ldots, x_n)$ subject to a constraint of the form $g(x_1, x_2, \ldots, x_n) = c$.
Maximum and minimum
Finding optimum values of the function $f$ without a constraint is a well-known problem dealt with in calculus courses. One would normally use the gradient to find stationary points, then check all stationary and boundary points to find optimum values.
Example
* $f(x_1, x_2) = x_1^2 + x_2^2$
* $\nabla f = \left( \dfrac{\partial f}{\partial x_1},\, \dfrac{\partial f}{\partial x_2} \right) = (2x_1,\, 2x_2)$
* $\nabla f = 0$ only at the origin, so $f$ has one stationary point at $(0,0)$.
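This step can be sketched symbolically in Python with SymPy; the quadratic below is the illustrative function assumed in the example above, not a prescribed choice.

```python
# Minimal sketch: locate the stationary points of the assumed example
# f(x1, x2) = x1^2 + x2^2 by solving grad f = 0 symbolically.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + x2**2

grad_f = [sp.diff(f, v) for v in (x1, x2)]         # gradient of f
stationary = sp.solve(grad_f, [x1, x2], dict=True)
print(stationary)                                  # [{x1: 0, x2: 0}] -> one stationary point at (0, 0)
```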
The Hessian
A common method of determining whether or not a function has an extreme value at a stationary point is to evaluate the Hessian [3] of the function at that point, where the Hessian is defined as
: $H(f) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$
Second derivative test
The second derivative test determines the optimality of a stationary point according to the following rules [2]:
* If $H(f)$ is positive definite at point $x$, then $f$ has a local minimum at $x$.
* If $H(f)$ is negative definite at point $x$, then $f$ has a local maximum at $x$.
* If $H(f)$ has both negative and positive eigenvalues, then $x$ is a saddle point.
* Otherwise the test is inconclusive.
In the above example:
: $H(f) = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix},$
which is positive definite.
Therefore $f$ has a minimum at $(0,0)$.
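The second derivative test itself can be sketched in the same way; the snippet below (again assuming the illustrative quadratic from the example) forms the Hessian and inspects its eigenvalues.

```python
# Minimal sketch of the second derivative test at the stationary point (0, 0)
# for the assumed example f(x1, x2) = x1^2 + x2^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1**2 + x2**2

H = sp.hessian(f, (x1, x2))                        # [[2, 0], [0, 2]]
H0 = H.subs({x1: 0, x2: 0})                        # Hessian evaluated at (0, 0)
eigenvalues = list(H0.eigenvals().keys())          # eigenvalue 2 with multiplicity 2

if all(ev > 0 for ev in eigenvalues):
    print("Hessian is positive definite: (0, 0) is a local minimum")
```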
Constrained maximum and minimum
When finding the extreme values of $f(x_1, x_2)$ subject to a constraint $g(x_1, x_2) = c$, the stationary points found above will not work. This new problem can be thought of as finding extreme values of $f(x_1, x_2)$ when the point $(x_1, x_2)$ is restricted to lie on the surface $g(x_1, x_2) = c$. The value of $f(x_1, x_2)$ is maximized (minimized) when the surfaces touch each other, i.e., they have a common tangent line. This means that the surfaces' gradient vectors at that point are parallel, hence
: $\nabla f(x_1, x_2) = \lambda\, \nabla g(x_1, x_2)$
The number $\lambda$ in the equation is known as the Lagrange multiplier.
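For concreteness, with the illustrative choices $f(x_1, x_2) = x_1^2 + x_2^2$ and the constraint $x_1 + x_2 = 1$ (an assumed example, not one taken from the references), the parallel-gradient condition reads
: $\nabla f = (2x_1,\, 2x_2), \qquad \nabla g = (1,\, 1), \qquad (2x_1,\, 2x_2) = \lambda\,(1,\, 1),$
which, together with the constraint $x_1 + x_2 = 1$, gives three equations in the three unknowns $x_1$, $x_2$ and $\lambda$.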
Lagrange multiplier method
The Lagrange multiplier method solves the constrained optimization problem by transforming it into a non-constrained optimization problem of the form:
: $L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda\,\big(g(x_1, x_2) - c\big)$
Then finding the gradient and Hessian of $L$ as was done above will determine any optimum values of $f(x_1, x_2)$.
Suppose we now want to find optimum values for $f(x_1, x_2)$ subject to a constraint $g(x_1, x_2) = c$ (see [2]). The Lagrangian method results in the non-constrained function $L(x_1, x_2, \lambda)$ above. The gradient of this new function is
* $\dfrac{\partial L}{\partial x_1} = \dfrac{\partial f}{\partial x_1} + \lambda\,\dfrac{\partial g}{\partial x_1}$
* $\dfrac{\partial L}{\partial x_2} = \dfrac{\partial f}{\partial x_2} + \lambda\,\dfrac{\partial g}{\partial x_2}$
* $\dfrac{\partial L}{\partial \lambda} = g(x_1, x_2) - c$
The stationary points of $L$ are obtained by setting these partial derivatives to zero; when $f$ is quadratic and $g$ is linear, the resulting equations can be written in matrix form and solved as a linear system in $(x_1, x_2, \lambda)$. This results in a stationary point of $L$ together with the corresponding multiplier $\lambda$. Next we can use the Hessian as before to determine the type of this stationary point. Since the Hessian of $L$ with respect to $(x_1, x_2)$ is positive definite at that point, the solution minimizes $f(x_1, x_2)$ subject to $g(x_1, x_2) = c$.
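A compact end-to-end sketch of the method in Python/SymPy follows; the objective $f(x_1, x_2) = x_1^2 + x_2^2$ and constraint $x_1 + x_2 = 1$ are assumed purely for illustration and are not necessarily the example worked in [2].

```python
# Minimal sketch of the Lagrange multiplier method for an assumed problem:
# minimize f(x1, x2) = x1^2 + x2^2 subject to g(x1, x2) = x1 + x2 = 1.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)
f = x1**2 + x2**2
g = x1 + x2 - 1                                    # constraint rewritten as g = 0

L = f + lam * g                                    # non-constrained Lagrangian
grad_L = [sp.diff(L, v) for v in (x1, x2, lam)]    # gradient of L

# Setting grad L = 0 gives a linear system here (f quadratic, g linear).
solution = sp.solve(grad_L, [x1, x2, lam], dict=True)[0]
print(solution)                                    # {x1: 1/2, x2: 1/2, lambda: -1}

# Second derivative test on L with respect to (x1, x2).
H = sp.hessian(L, (x1, x2))                        # [[2, 0], [0, 2]], constant because g is linear
print(H.is_positive_definite)                      # True -> the stationary point is a constrained minimum
print(f.subs(solution))                            # minimum value 1/2 at (1/2, 1/2)
```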
References
: [1] T. K. Moon and W. C. Stirling. "Mathematical Methods and Algorithms for Signal Processing". Prentice Hall, 2000.
: [2] http://mat.gsia.cmu.edu/QUANT/NOTES/chap4/node1.html
: [3] http://www.ece.tamu.edu/~chmbrlnd/Courses/ECEN601/ECEN601-Chap3.pdf