Lagrange Multipliers
The method of Lagrange multipliers solves a constrained optimization problem, optimizing <math>f(x_1,x_2,\ldots, x_n)</math> subject to <math>g(x_1,x_2,\ldots, x_n)=k</math>, by transforming it into an unconstrained optimization problem of the form:
* <math>\mathcal{L}(x_1,x_2,\ldots, x_n,\lambda)= f(x_1,x_2,\ldots, x_n)+\lambda\,(k-g(x_1,x_2,\ldots, x_n))</math>
Finding the gradient and Hessian of <math>\mathcal{L}</math>, as was done above, will then determine any optimum values of <math>\mathcal{L}(x_1,x_2,\ldots, x_n,\lambda)</math>.
Suppose we now want to find the optimum values of <math>f(x,y)=2x^2+y^2</math> subject to <math>x+y=1</math>, from [2].
The Lagrangian method then produces the unconstrained function
* <math>\mathcal{L}(x,y,\lambda)= 2x^2+y^2+\lambda (1-x-y)</math>
The gradient of this new function is
* <math>\frac{\partial \mathcal{L}}{\partial x}(x,y,\lambda)= 4x-\lambda=0</math>
* <math>\frac{\partial \mathcal{L}}{\partial y}(x,y,\lambda)= 2y-\lambda=0</math>
* <math>\frac{\partial \mathcal{L}}{\partial \lambda}(x,y,\lambda)= 1-x-y=0</math>
Alternatively, the stationary points of the above system can be found from its matrix form.
: <math> \begin{bmatrix}
4 & 0 & -1 \\
0 & 2 & -1 \\
-1 & -1 & 0
\end{bmatrix} \begin{bmatrix}
x\\
y \\
\lambda \end{bmatrix}= \begin{bmatrix}
0\\
0\\
-1
\end{bmatrix}
</math>
This results in <math>x=1/3,\ y=2/3,\ \lambda=4/3</math>, in agreement with the elimination above.
Next, we can use the Hessian as before to determine the type of this stationary point.
: <math> H(\mathcal{L})=
\begin{bmatrix}
4 & 0 & -1 \\
0 & 2 & -1 \\
-1 & -1 & 0
\end{bmatrix}
</math>
This is a bordered Hessian: the final row and column, which border the ordinary Hessian of <math>f</math>, come from the constraint. Its determinant is <math>\det H(\mathcal{L}) = -6</math>, and since <math>(-1)^m \det H(\mathcal{L}) = 6 > 0</math> for <math>m=1</math> constraint, the solution <math>(1/3,\,2/3,\,4/3)</math> minimizes <math>f(x,y)=2x^2+y^2</math> subject to <math>x+y=1</math>, with <math>f(1/3,2/3)=2/3</math>.
==Resources==

===Videos===
*[https://www.youtube.com/watch?v=ry9cgNx1QV8&list=RDCMUCFe6jenM1Bc54qtBsIJGRZQ&index=5 LaGrange Multipliers - Finding Maximum or Minimum Values] Video by patrickJMT
*Lagrange Multipliers Practice Problems, video by James Hamblin, 2017
*Lagrange multipliers | MIT 18.02SC Multivariable Calculus, Fall 2010, video by MIT OpenCourseWare
*Lagrange Multipliers - Two Constraints, video by patrickJMT, 2009
*Lagrange multipliers (3 variables) | MIT 18.02SC Multivariable Calculus, Fall 2010, video by MIT OpenCourseWare