=Lagrange Multipliers=

From Department of Mathematics at UTSA
Revision as of 12:22, 6 October 2021

The method of Lagrange multipliers solves the constrained optimization problem by transforming it into an unconstrained optimization problem of the form:

:<math>\mathcal{L}(x_1,x_2,\ldots,x_n,\lambda)= f(x_1,x_2,\ldots,x_n)+\lambda\left(k-g(x_1,x_2,\ldots,x_n)\right)</math>

Then finding the gradient and Hessian, as was done above, will determine any optimum values of <math>\mathcal{L}(x_1,x_2,\ldots,x_n,\lambda)</math>.
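The steps above can be sketched symbolically. This is a minimal illustration assuming SymPy is available; the objective <math>f</math>, constraint <math>g</math>, and level <math>k</math> below are hypothetical stand-ins, not the functions from the example in [2].

```python
# A minimal sketch with SymPy; f, g, and k are hypothetical choices.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x**2 + y**2   # hypothetical objective
g = x + y         # hypothetical constraint function
k = 1             # hypothetical constraint level: g(x, y) = 1

# Lagrangian L = f + lambda * (k - g)
L = f + lam * (k - g)

# Stationary points: set every partial derivative of L to zero
grad = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(grad, (x, y, lam), dict=True)
# solutions == [{x: 1/2, y: 1/2, lambda: 1}]
```

Solving the three stationarity equations simultaneously recovers both the candidate point and the multiplier in one step.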

Suppose we now want to find optimum values for a function <math>f(x,y)</math> subject to a constraint <math>g(x,y)=k</math>, following the example from [2].

Then the Lagrangian method results in the unconstrained function <math>\mathcal{L}(x,y,\lambda)=f(x,y)+\lambda\left(k-g(x,y)\right)</math>.

The gradient of this new function is <math>\nabla\mathcal{L}=\left(\frac{\partial f}{\partial x}-\lambda\frac{\partial g}{\partial x},\;\frac{\partial f}{\partial y}-\lambda\frac{\partial g}{\partial y},\;k-g(x,y)\right)</math>.

The stationary points of the above equations can be obtained from their matrix form.

Setting the gradient to zero and solving the resulting system yields the stationary point.
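When <math>f</math> is quadratic and <math>g</math> is linear, the stationarity conditions form a linear system that the matrix form makes easy to solve. A minimal sketch assuming NumPy; the coefficients correspond to the hypothetical example <math>f=x^2+y^2</math> with constraint <math>x+y=1</math>, not to the system in [2].

```python
# A minimal sketch of the matrix-form step with NumPy;
# the system below comes from a hypothetical quadratic example.
import numpy as np

# Stationarity conditions written as A @ [x, y, lambda] = b:
#   2x      - lambda = 0
#        2y - lambda = 0
#   x  + y           = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])

x_star, y_star, lam_star = np.linalg.solve(A, b)
# x_star = 0.5, y_star = 0.5, lam_star = 1.0
```

For nonlinear constraints the system is no longer linear and a root-finder would be needed instead of a single matrix solve.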

Next we can use the Hessian as before to determine the type of this stationary point.

Since the Hessian is positive definite at this stationary point, the solution minimizes <math>f</math> subject to the constraint, with <math>f(x,y)=2/3</math>.
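The definiteness test can be carried out numerically by checking the eigenvalues of the Hessian. A minimal sketch assuming NumPy; the Hessian below belongs to the hypothetical objective <math>f=x^2+y^2</math>, not to the example from [2].

```python
# A minimal sketch of the Hessian definiteness check with NumPy;
# H is the (hypothetical) matrix of second partials w.r.t. (x, y).
import numpy as np

H = np.array([[2.0, 0.0],
              [0.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(H)          # eigenvalues of a symmetric matrix
is_minimum = bool(np.all(eigenvalues > 0))   # all positive => positive definite => minimum
# is_minimum = True
```

All eigenvalues positive means the Hessian is positive definite, so the stationary point is a minimum; all negative would indicate a maximum, and mixed signs a saddle point.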


==Resources==

* [https://en.wikibooks.org/wiki/Calculus_Optimization_Methods/Lagrange_Multipliers Lagrange Multipliers], WikiBooks: Calculus Optimization Methods

===Videos===