The method of Lagrange multipliers solves the problem of optimizing <math>f(x_1,x_2,\ldots, x_n)</math> subject to the constraint <math>g(x_1,x_2,\ldots, x_n)=k</math> by transforming it into an unconstrained optimization problem of the form:
* <math>\mathcal{L}(x_1,x_2,\ldots, x_n,\lambda)= f(x_1,x_2,\ldots, x_n)+\lambda(k-g(x_1,x_2,\ldots, x_n))</math>
Then finding the gradient and Hessian as was done above will determine any optimum values of <math>\mathcal{L}(x_1,x_2,\ldots, x_n,\lambda)</math>.

Suppose we now want to find optimum values for <math>f(x,y)=2x^2+y^2</math> subject to <math>x+y=1</math> from [2].
Then the Lagrangian method will result in an unconstrained function.
* <math>\mathcal{L}(x,y,\lambda)= 2x^2+y^2+\lambda (1-x-y)</math>

The gradient for this new function is given below; a short symbolic check follows the list.
* <math>\frac{\partial \mathcal{L}}{\partial x}(x,y,\lambda)= 4x+\lambda (-1)=0</math>
* <math>\frac{\partial \mathcal{L}}{\partial y}(x,y,\lambda)= 2y+\lambda (-1)=0</math>
* <math>\frac{\partial \mathcal{L}}{\partial \lambda}(x,y,\lambda)=1-x-y=0</math>
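
The three stationarity equations above can be reproduced symbolically. The following is a minimal sketch (not part of the original text) using Python with SymPy; the symbol names are illustrative:

<syntaxhighlight lang="python">
# Minimal symbolic check of the stationarity equations (illustrative sketch).
import sympy as sp

x, y, lam = sp.symbols('x y lambda')

# Lagrangian for f(x, y) = 2x^2 + y^2 subject to x + y = 1
L = 2*x**2 + y**2 + lam*(1 - x - y)

# The three partial derivatives match the gradient equations above
print(sp.diff(L, x))    # 4*x - lambda
print(sp.diff(L, y))    # 2*y - lambda
print(sp.diff(L, lam))  # -x - y + 1

# Solving the system directly recovers the stationary point
print(sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam]))
# {x: 1/3, y: 2/3, lambda: 4/3}
</syntaxhighlight>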

The stationary points of the above equations can be obtained from their matrix form.

: <math> \begin{bmatrix}
4 & 0 & -1 \\
0 & 2 & -1 \\
-1 & -1 & 0
\end{bmatrix} \begin{bmatrix}
x\\
y \\
\lambda \end{bmatrix}= \begin{bmatrix}
0\\
0\\
-1
\end{bmatrix}
</math>

This results in <math>x=1/3, y=2/3, \lambda=4/3</math>.
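
As an illustrative check (assuming NumPy is available), the matrix system above can be solved numerically, recovering the same values:

<syntaxhighlight lang="python">
# Minimal numeric check of the matrix system (illustrative sketch).
import numpy as np

A = np.array([[ 4.0,  0.0, -1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  0.0]])
b = np.array([0.0, 0.0, -1.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # 0.333..., 0.666..., 1.333...  i.e. (1/3, 2/3, 4/3)
</syntaxhighlight>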

Next we can use the Hessian, as before, to determine the type of this stationary point. Because the Lagrangian incorporates the constraint, <math>H(\mathcal{L})</math> is a bordered Hessian.

: <math> H(\mathcal{L})=
\begin{bmatrix}
4 & 0 & -1 \\
0 & 2 & -1 \\
-1 & -1 & 0
\end{bmatrix}
</math>

For two variables and one constraint, a negative determinant of the bordered Hessian indicates a constrained minimum. Since <math>\det H(\mathcal{L}) = -6 < 0</math>, the solution <math>(1/3, 2/3, 4/3)</math> minimizes <math>f(x,y)=2x^2+y^2</math> subject to <math>x+y=1</math>, with <math>f(1/3, 2/3)=2/3</math>.
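
As a final illustrative check, the determinant test can be confirmed numerically; this minimal sketch again assumes NumPy:

<syntaxhighlight lang="python">
# Minimal numeric check of the bordered-Hessian test (illustrative sketch).
import numpy as np

H = np.array([[ 4.0,  0.0, -1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  0.0]])

# With two variables and one constraint, det < 0 indicates a minimum.
print(np.linalg.det(H))       # approximately -6.0

# Value of the objective at the constrained minimum
print(2*(1/3)**2 + (2/3)**2)  # 0.666... = 2/3
</syntaxhighlight>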

==Resources==
* [https://en.wikibooks.org/wiki/Calculus_Optimization_Methods/Lagrange_Multipliers Lagrange Multipliers], WikiBooks: Calculus Optimization Methods

===Videos===
* [https://www.youtube.com/watch?v=ry9cgNx1QV8&list=RDCMUCFe6jenM1Bc54qtBsIJGRZQ&index=5 Lagrange Multipliers - Finding Maximum or Minimum Values] Video by patrickJMT

* [https://www.youtube.com/watch?v=x6j6yFzTUgU Lagrange Multipliers Practice Problems] Video by James Hamblin, 2017

* [https://www.youtube.com/watch?v=HyqBcD_e_Uw Lagrange multipliers | MIT 18.02SC Multivariable Calculus, Fall 2010] Video by MIT OpenCourseWare

* [https://www.youtube.com/watch?v=qXhcpqslNUU Lagrange Multipliers - Two Constraints] Video by patrickJMT, 2009

* [https://www.youtube.com/watch?v=nDuS5uQ7-lo Lagrange multipliers (3 variables) | MIT 18.02SC Multivariable Calculus, Fall 2010] Video by MIT OpenCourseWare

==Licensing==
Content obtained and/or adapted from:
* [https://en.wikibooks.org/wiki/Calculus_Optimization_Methods/Lagrange_Multipliers Lagrange Multipliers, WikiBooks: Calculus Optimization Methods] under a CC BY-SA license