Newton's Method

From Department of Mathematics at UTSA
In numerical analysis, '''Newton's method''', also known as the '''Newton–Raphson method''', named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a single-variable function {{math|''f''}} defined for a real variable {{math|''x''}}, the function's derivative {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>′''}}, and an initial guess {{math|''x''<sub>0</sub>}} for a root of {{math|''f''}}. If the function satisfies sufficient assumptions and the initial guess is close, then
  
 
:<math>x_{1} = x_0 - \frac{f(x_0)}{f'(x_0)}</math>
is a better approximation of the root than {{math|''x''<sub>0</sub>}}. Geometrically, {{math|(''x''<sub>1</sub>, 0)}} is the intersection of the {{math|''x''}}-axis and the tangent of the graph of {{math|''f''}} at {{math|(''x''<sub>0</sub>, ''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x''<sub>0</sub>))}}: that is, the improved guess is the unique root of the linear approximation at the initial point. The process is repeated as
  
 
:<math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}</math>
  
until a sufficiently precise value is reached. This algorithm is first in the class of Householder's methods, succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.
  
 
==Description==
 
 
[[Image:NewtonIteration Ani.gif|alt=Illustration of Newton's method|thumb|right|300px|The function {{mvar|f}} is shown in blue and the tangent line is in red. We see that {{math|''x''<sub>''n'' + 1</sub>}} is a better approximation than {{math|''x''<sub>''n''</sub>}} for the root {{mvar|x}} of the function {{mvar|f}}.]]
  
The idea is to start with an initial guess which is reasonably close to the true root, then to approximate the function by its tangent line using calculus, and finally to compute the {{mvar|x}}-intercept of this tangent line by elementary algebra. This {{mvar|x}}-intercept will typically be a better approximation to the original function's root than the first guess, and the method can be iterated.
  
More formally, suppose {{math|''f'' : (''a'', ''b'') → ℝ}} is a differentiable function defined on the interval {{math|(''a'', ''b'')}} with values in the real numbers&nbsp;{{math|ℝ}}, and we have some current approximation {{math|''x''<sub>''n''</sub>}}. Then we can derive the formula for a better approximation, {{math|''x''<sub>''n'' + 1</sub>}} by referring to the diagram on the right. The equation of the tangent line to the curve <math>y = f(x)</math> at <math>x = x_n</math> is
  
 
:<math>y = f'(x_n) \, (x-x_n) + f(x_n),</math>
  
where {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>′''}} denotes the derivative. The {{mvar|x}}-intercept of this line (the value of {{mvar|x}} which makes ''y'' = 0) is taken as the next approximation, {{math|''x''<sub>''n'' + 1</sub>}}, to the root, so that the equation of the tangent line is satisfied when (''x'', ''y'') = (''x''<sub>''n'' + 1</sub>, 0):
  
 
:<math>0 = f'(x_n) \, (x_{n+1}-x_n) + f(x_n).</math>

Solving for {{math|''x''<sub>''n'' + 1</sub>}} gives
 
:<math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. </math>
  
We start the process with some arbitrary initial value {{math|''x''<sub>0</sub>}}. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero, and that {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>′''(''x''<sub>0</sub>) ≠ 0}}. Furthermore, for a zero of multiplicity&nbsp;1, the convergence is at least quadratic (see rate of convergence) in a neighborhood of the zero, which intuitively means that the number of correct digits roughly doubles in every step. More details can be found in the analysis section below.
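The iteration above translates directly into code. A minimal sketch (the stopping tolerance, the iteration cap, and the example function {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x'') {{=}} ''x''<sup>2</sup> − 2}} are illustrative choices, not part of the method itself):

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f by Newton's method, starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:            # sufficiently precise value reached
            return x
        x = x - fx / f_prime(x)      # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
print(root)  # approximately 1.4142135623730951
```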
  
Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if {{mvar|f}} or its derivatives are computationally expensive to evaluate.
  
 
==Analysis==
Suppose that the function {{mvar|f}} has a zero at {{mvar|α}}, i.e., <math>f(\alpha) = 0</math>, and {{mvar|f}} is differentiable in a neighborhood of {{mvar|α}}.
  
If {{mvar|f}} is continuously differentiable and its derivative is nonzero at&nbsp;{{mvar|α}}, then there exists a neighborhood of {{mvar|α}} such that for all starting values {{math|''x''<sub>0</sub>}} in that neighborhood, the sequence {{math|(''x''<sub>''n''</sub>)}} will converge to {{mvar|α}}.
  
If the function is continuously differentiable and its derivative is not 0 at {{mvar|α}} and it has a second derivative at {{mvar|α}} then the convergence is quadratic or faster. If the second derivative is not 0 at {{mvar|α}} then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of {{mvar|α}}, then:
 
:<math>\Delta x_{i+1} = \frac{f'' (\alpha)}{2 f' (\alpha)} \left(\Delta x_{i}\right)^2 + O\left(\Delta x_{i}\right)^3 \,,</math>
  
where
 
:<math>\Delta x_i \triangleq x_i - \alpha \,.</math>
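This error recurrence can be observed numerically. A sketch (the function {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x'') {{=}} ''x''<sup>2</sup> − 2}} with {{math|''α'' {{=}} √2}} is an illustrative choice): the ratio of each error to the square of the previous one should settle near <math>f''(\alpha)/(2 f'(\alpha)) = 1/(2\sqrt{2}) \approx 0.354</math>.

```python
import math

f = lambda x: x**2 - 2
fp = lambda x: 2 * x
alpha = math.sqrt(2)

x = 1.0
errors = []
for _ in range(5):
    x = x - f(x) / fp(x)
    errors.append(abs(x - alpha))    # |Delta x_i| = |x_i - alpha|

# Ratio of each error to the square of the previous one: it settles near
# f''(alpha) / (2 f'(alpha)) = 1/(2*sqrt(2)) ~ 0.354, and the number of
# correct digits roughly doubles per step.
ratios = [e1 / e0**2 for e0, e1 in zip(errors, errors[1:]) if e0 > 1e-8]
print(ratios)
```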
  
If the derivative is 0 at {{mvar|α}}, then the convergence is usually only linear. Specifically, if {{mvar|f}} is twice continuously differentiable, <math>f'(\alpha) = 0</math> and {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>″''(''α'') ≠ 0}}, then there exists a neighborhood of {{mvar|α}} such that, for all starting values {{math|''x''<sub>0</sub>}} in that neighborhood, the sequence of iterates converges linearly, with rate <math>\frac{1}{2}</math>. Alternatively, if <math>f'(\alpha) = 0</math> and {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>′''(''x'') ≠ 0}} for {{math|''x'' ≠ ''α''}}, {{mvar|x}}&nbsp;in a neighborhood {{mvar|U}} of {{mvar|α}}, {{mvar|α}} being a zero of multiplicity {{mvar|r}}, and if ''f'' ∈ ''C''<sup>''r''</sup>(''U''), then there exists a neighborhood of {{mvar|α}} such that, for all starting values {{math|''x''<sub>0</sub>}} in that neighborhood, the sequence of iterates converges linearly.
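The linear rate at a multiple root is easy to see concretely. A sketch using the illustrative function {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x'') {{=}} (''x'' − 1)<sup>3</sup>}}, whose root {{math|''x'' {{=}} 1}} has multiplicity 3; one Newton step reduces the error by exactly the factor 2/3:

```python
# For f(x) = (x - 1)^3 the update simplifies algebraically:
#   x_{n+1} = x_n - (x_n - 1)/3 = 1 + (2/3)(x_n - 1),
# so the error x_n - 1 shrinks linearly with rate 2/3 (for a root of
# multiplicity r the rate is (r - 1)/r, matching rate 1/2 for a double root).
f = lambda x: (x - 1)**3
fp = lambda x: 3 * (x - 1)**2

x = 2.0
errs = []
for _ in range(8):
    x = x - f(x) / fp(x)
    errs.append(x - 1)

print(errs[0], errs[1] / errs[0])  # both approximately 0.6667
```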
  
 
However, even linear convergence is not guaranteed in pathological situations.
  
 
===Proof of quadratic convergence for Newton's iterative method===
According to Taylor's theorem, any function {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x'')}} which has a continuous second derivative can be represented by an expansion about a point that is close to a root of {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x'')}}. Suppose this root is {{mvar|α}}. Then the expansion of {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''α'')}} about {{math|''x''<sub>''n''</sub>}} is:
:<math>f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + R_1 \,</math> ('''1''')
  
where the Lagrange form of the Taylor series expansion remainder is
 
:<math>R_1 = \frac{1}{2!}f''(\xi_n)\left(\alpha - x_n\right)^{2} \,,</math>
  
 
where {{math|''ξ''<sub>''n''</sub>}} is in between {{math|''x''<sub>''n''</sub>}} and {{mvar|α}}.
  
Since {{mvar|α}} is the root, ('''1''') becomes:
:<math>0 = f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + \tfrac12f''(\xi_n)\left(\alpha - x_n\right)^2 \,</math> ('''2''')
  
Dividing equation ('''2''') by {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>′''(''x<sub>n</sub>'')}} and rearranging gives
:<math> \frac {f(x_n)}{f'(x_n)}+\left(\alpha-x_n\right) = \frac {- f'' (\xi_n)}{2 f'(x_n)}\left(\alpha-x_n\right)^2 </math> ('''3''')
  
 
Remembering that {{math|''x''<sub>''n'' + 1</sub>}} is defined by
:<math> x_{n+1} = x_{n} - \frac {f(x_n)}{f'(x_n)} \,,</math> ('''4''')
  
 
one finds that
:<math> \underbrace{\alpha - x_{n+1}}_{\varepsilon_{n+1}} = \frac {- f'' (\xi_n)}{2 f'(x_n)} \cdot {(\underbrace{\alpha - x_n}_{\varepsilon_n})}^2 \,.</math>
  
 
That is,
:<math> \varepsilon_{n+1} = \frac {- f'' (\xi_n)}{2 f'(x_n)} \cdot {\varepsilon_n}^2 \,.</math> ('''5''')
  
 
Taking the absolute value of both sides gives
:<math> \left| {\varepsilon_{n+1}}\right| = \frac {\left| f'' (\xi_n) \right| }{2 \left| f'(x_n) \right|} \cdot {\varepsilon_n}^2 \,. </math> ('''6''')
  
Equation ('''6''') shows that the rate of convergence is at least quadratic if the following conditions are satisfied:
  
# {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>′''(''x'') ≠ 0}}; for all {{math|''x'' ∈ ''I''}}, where {{mvar|I}} is the interval {{math|[''α'' − ''r'', ''α'' + ''r'']}} for some ''r'' ≥ |''α'' − ''x''<sub>0</sub>|;
 
# {{math|''<span style{{=}}"letter-spacing:0.15em">f</span>″''(''x'')}} is continuous, for all {{math|''x'' ∈ ''I''}};
 
# {{math|''x''<sub>0</sub>}} is ''sufficiently'' close to the root {{mvar|α}}.
  
 
The term ''sufficiently'' close in this context means the following:
:a. Taylor approximation is accurate enough such that we can ignore higher order terms;
:b. <math>\frac12 \left |{\frac {f'' (x_n)}{f'(x_n)}}\right |<C\left |{\frac {f''(\alpha)}{f'(\alpha)}}\right |,</math>
::for some {{math|''C'' < ∞}};
:c. <math>C \left |{\frac {f''(\alpha)}{f'(\alpha)}}\right |\varepsilon_n<1,</math>
::for {{math|''n'' ∈ '''ℤ''', ''n'' ≥ 0}} and {{mvar|C}} satisfying condition b.

Finally, ('''6''') can be expressed in the following way:
 
:<math> \left | {\varepsilon_{n+1}}\right | \le M{\varepsilon_n}^2 \, </math>
where {{mvar|M}} is the supremum of the variable coefficient of {{math|''ε<sub>n</sub>''<sup>2</sup>}} on the interval {{mvar|I}} defined in condition 1, that is:
  
 
: <math>M = \sup_{x \in I} \frac12 \left |\frac {f'' (x)}{f'(x)}\right |. \,</math>
  
The initial point {{math|''x''<sub>0</sub>}} has to be chosen such that conditions 1 to 3 are satisfied, where the third condition requires that ''M'' |''ε''<sub>0</sub>| < 1.
  
 
===Basins of attraction===
The disjoint subsets of the basins of attraction—the regions of the real number line such that within each region iteration from any point leads to one particular root—can be infinite in number and arbitrarily small. For example, for the function <math> f(x) = x^3 - 2x^2 - 11x + 12 = (x - 4)(x - 1)(x + 3)</math>, the following initial conditions are in successive basins of attraction:
  
 
:{|
|2.352 875 27||converges to||align=right|4;
|-
|2.352 841 72||converges to||align=right|−3;
|-
|2.352 837 35||converges to||align=right|4;
|-
|2.352 836 327||converges to||align=right|−3;
|-
|2.352 836 323||converges to||align=right|1.
|}
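These basins can be reproduced by iterating from each listed starting point; a sketch (the cap of 100 iterations is an arbitrary illustrative choice) for <math>f(x) = x^3 - 2x^2 - 11x + 12</math>, whose roots are 4, 1 and −3:

```python
def newton_root(x, n_iter=100):
    """Iterate Newton's method for f(x) = x^3 - 2x^2 - 11x + 12."""
    for _ in range(n_iter):
        fx = x**3 - 2*x**2 - 11*x + 12
        fpx = 3*x**2 - 4*x - 11
        if fpx == 0:                 # horizontal tangent: step undefined
            break
        x = x - fx / fpx
    return x

# Nearby starting points land in different basins of attraction.
for x0 in (2.35287527, 2.35284172, 2.35283735, 2.352836327, 2.352836323):
    print(x0, "->", round(newton_root(x0)))
```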
  
  
 
===Bad starting points===
In some cases the conditions on the function that are necessary for convergence are satisfied, but the point chosen as the initial point is not in the interval where the method converges. This can happen, for example, if the function whose root is sought approaches zero asymptotically as {{mvar|x}} goes to {{math|∞}} or {{math|−∞}}. In such cases a different method, such as bisection, should be used to obtain a better estimate for the zero to use as an initial point.
  
 
====Iteration point is stationary====
Consider the function:
 
:<math>f(x) = 1-x^2.</math>
  
It has a maximum at ''x'' = 0 and solutions of ''f(x)'' = 0 at ''x'' = ±1. If we start iterating from the stationary point ''x''<sub>0</sub> = 0 (where the derivative is zero), {{math|''x''<sub>1</sub>}} will be undefined, since the tangent at {{math|(0, 1)}} is parallel to the {{mvar|x}}-axis:
  
 
:<math>x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 0 - \frac{1}{0}.</math>
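In code, this failure shows up as a division by zero. A minimal sketch with an explicit guard; the threshold 1e-12 is an arbitrary illustrative choice:

```python
def newton_step(f, fp, x, eps=1e-12):
    """One Newton step, refusing to divide by a (numerically) zero derivative."""
    d = fp(x)
    if abs(d) < eps:
        raise ZeroDivisionError(f"f'({x}) is zero: the tangent is horizontal")
    return x - f(x) / d

f = lambda x: 1 - x**2      # maximum at x = 0, roots at x = +/-1
fp = lambda x: -2 * x

try:
    newton_step(f, fp, 0.0)  # starting at the stationary point x0 = 0
except ZeroDivisionError as err:
    print("step undefined:", err)

print(newton_step(f, fp, 0.5))  # a non-stationary start works: 1.25
```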
====Starting point enters a cycle====

For some functions, some starting points may enter an infinite cycle, preventing convergence. Let
 
:<math>f(x) = x^3 - 2x + 2 \!</math>
  
and take 0 as the starting point. The first iteration produces 1 and the second iteration returns to 0 so the sequence will alternate between the two without converging to a root. In fact, this 2-cycle is stable: there are neighborhoods around 0 and around 1 from which all points iterate asymptotically to the 2-cycle (and hence not to the root of the function). In general, the behavior of the sequence can be very complex (see Newton fractal). The real solution of this equation is −1.769 292 35….
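The 2-cycle can be verified in a few lines; a sketch for <math>f(x) = x^3 - 2x + 2</math> starting at {{math|''x''<sub>0</sub> {{=}} 0}} (both steps happen to be exact in floating point):

```python
f = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2

x = 0.0
orbit = [x]
for _ in range(6):
    # 0 -> 1 since f(0)/f'(0) = 2/(-2); 1 -> 0 since f(1)/f'(1) = 1/1
    x = x - f(x) / fp(x)
    orbit.append(x)

print(orbit)  # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```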
  
 
===Derivative issues===
If the function is not continuously differentiable in a neighborhood of the root then it is possible that Newton's method will always diverge and fail, unless the solution is guessed on the first try.
  
 
====Derivative does not exist at root====
A simple example of a function where Newton's method diverges is trying to find the cube root of zero. The cube root is continuous and infinitely differentiable, except for <math>x = 0</math>, where its derivative is undefined:
  
 
:<math>f(x) = \sqrt[3]{x}.</math>
For any iteration point {{math|''x''<sub>''n''</sub>}}, the next iteration point will be:

:<math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{{x_n}^\frac13}{\tfrac13 {x_n}^{-\frac23}} = x_n - 3x_n = -2x_n.</math>
 
The algorithm overshoots the solution and lands on the other side of the {{mvar|y}}-axis, farther away than it initially was; applying Newton's method actually doubles the distances from the solution at each iteration.
  
In fact, the iterations diverge to infinity for every <math> f(x) = |x|^\alpha</math>, where {{math|0 < ''α'' <}} <math>\frac{1}{2}</math>. In the limiting case of <math>\alpha = \frac{1}{2}</math> (square root), the iterations will alternate indefinitely between points {{math|''x''<sub>0</sub>}} and {{math|−''x''<sub>0</sub>}}, so they do not converge in this case either.
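The doubling behaviour for the cube root can be checked numerically; a sketch using a real-valued cube root built from math.copysign so that negative arguments stay real:

```python
import math

def cbrt(x):
    """Real cube root, valid for negative x too."""
    return math.copysign(abs(x) ** (1/3), x)

def cbrt_prime(x):
    # derivative of x^(1/3) is (1/3) x^(-2/3); undefined at x = 0
    return (1/3) * abs(x) ** (-2/3)

x = 0.1
points = [x]
for _ in range(4):
    x = x - cbrt(x) / cbrt_prime(x)   # algebraically, x_{n+1} = -2 x_n
    points.append(x)

print(points)  # approximately [0.1, -0.2, 0.4, -0.8, 1.6]: distance doubles each step
```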
  
 
====Discontinuous derivative====
If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function

:<math>f(x) = \begin{cases} x + x^2\sin\left(\frac{2}{x}\right) & \text{if } x \neq 0 \\ 0 & \text{if } x = 0. \end{cases}</math>

Its derivative is:

:<math>f'(x) = \begin{cases} 1 + 2x\sin\left(\frac{2}{x}\right) - 2\cos\left(\frac{2}{x}\right) & \text{if } x \neq 0 \\ 1 & \text{if } x = 0. \end{cases}</math>
 
Within any neighborhood of the root, this derivative keeps changing sign as {{math|''x''}} approaches 0 from the right (or from the left) while {{math|''<span style{{=}}"letter-spacing:0.2em">f</span>''(''x'') ≥ ''x'' − ''x''<sup>2</sup> > 0}} for {{math|0 < ''x'' < 1}}.
  
So <math> \frac{f(x)}{f'(x)}</math> is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though:
 
*the function is differentiable (and thus continuous) everywhere;
 
*the derivative at the root is nonzero;
 
*{{mvar|f}} is infinitely differentiable except at the root; and
*the derivative is bounded in a neighborhood of the root (unlike <math> \frac{f(x)}{f'(x)}</math>).
  
 
===Non-quadratic convergence===
In some cases the iterates converge but do not converge as quickly as promised. In these cases simpler methods converge just as quickly as Newton's method.

====Zero derivative====

If the first derivative is zero at the root, then convergence will not be quadratic. Let
 
:<math>f(x) = x^2 \!</math>
  
then <math> f'(x) = 2x</math> and consequently
  
 
:<math>x - \frac{f(x)}{f'(x)} = \frac{x}{2} .</math>
So convergence is not quadratic, even though the function is infinitely differentiable everywhere.

Similar problems occur even when the root is only "nearly" double. For example, let
 
:<math>f(x) = x^2(x-1000)+1.</math>
  
Then the first few iterations starting at {{math|''x''<sub>0</sub>}} = 1 are
 
:{{math|''x''<sub>0</sub>}} = 1
:{{math|''x''<sub>1</sub>}} = 0.500 250 376…
:{{math|''x''<sub>2</sub>}} = 0.251 062 828…
:{{math|''x''<sub>3</sub>}} = 0.127 507 934…
:{{math|''x''<sub>4</sub>}} = 0.067 671 976…
:{{math|''x''<sub>5</sub>}} = 0.041 224 176…
:{{math|''x''<sub>6</sub>}} = 0.032 741 218…
:{{math|''x''<sub>7</sub>}} = 0.031 642 362…
 
it takes six iterations to reach a point where the convergence appears to be quadratic.
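A sketch reproducing these iterates (the ten-step count is an arbitrary choice): the root near 0.0316 is "nearly" double because <math>x^2(x - 1000) + 1</math> behaves like <math>-1000x^2 + 1</math> there, so the error roughly halves for the first several steps before quadratic convergence takes over.

```python
f = lambda x: x**2 * (x - 1000) + 1
fp = lambda x: 3*x**2 - 2000*x

x = 1.0
iterates = [x]
for _ in range(10):
    x = x - f(x) / fp(x)
    iterates.append(x)

for n, xn in enumerate(iterates):
    print(n, xn)
# x_1 = 999/1997 ~ 0.500250376; the later iterates settle on the root
# near 0.03162 once the quadratic regime is reached.
```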
  
Line 211: Line 210:
 
And
 
And
 
:<math>f''(x) = \tfrac49 x^{-\frac23} </math>
 
:<math>f''(x) = \tfrac49 x^{-\frac23} </math>
except when {{math|''x'' {{=}} 0}} where it is undefined. Given {{math|''x<sub>n</sub>''}},
+
except when {{math|''x''}} = 0 where it is undefined. Given {{math|''x<sub>n</sub>''}},
  
 
:<math>x_{n+1} = x_n - \frac{f(x_n)}{f '(x_n)} = \frac{\frac13{x_n}^\frac43}{1 + \tfrac43{x_n}^\frac13} </math>
 
:<math>x_{n+1} = x_n - \frac{f(x_n)}{f '(x_n)} = \frac{\frac13{x_n}^\frac43}{1 + \tfrac43{x_n}^\frac13} </math>
  
which has approximately {{sfrac|4|3}} times as many bits of precision as {{math|''x<sub>n</sub>''}} has. This is less than the 2 times as many which would be required for quadratic convergence. So the convergence of Newton's method (in this case) is not quadratic, even though: the function is continuously differentiable everywhere; the derivative is not zero at the root; and {{mvar|f}} is infinitely differentiable except at the desired root.
+
which has approximately <math>\frac{4}{3}</math> times as many bits of precision as {{math|''x<sub>n</sub>''}} has. This is less than the 2 times as many which would be required for quadratic convergence. So the convergence of Newton's method (in this case) is not quadratic, even though: the function is continuously differentiable everywhere; the derivative is not zero at the root; and {{mvar|f}} is infinitely differentiable except at the desired root.
  
 
== Licensing ==  
 
== Licensing ==  
 
Content obtained and/or adapted from:
 
Content obtained and/or adapted from:
 
* [https://en.wikipedia.org/wiki/Newton%27s_method Newton's Method, Wikipedia] under a CC BY-SA license
 
* [https://en.wikipedia.org/wiki/Newton%27s_method Newton's Method, Wikipedia] under a CC BY-SA license

Latest revision as of 16:13, 22 October 2021

In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a single-variable function f defined for a real variable x, the function's derivative f′, and an initial guess x0 for a root of f. If the function satisfies sufficient assumptions and the initial guess is close, then

    x1 = x0 − f(x0)/f′(x0)

is a better approximation of the root than x0. Geometrically, (x1, 0) is the intersection of the x-axis and the tangent of the graph of f at (x0, f(x0)): that is, the improved guess is the unique root of the linear approximation at the initial point. The process is repeated as

    xn+1 = xn − f(xn)/f′(xn)

until a sufficiently precise value is reached. This algorithm is first in the class of Householder's methods, succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.

Description

Illustration of Newton's method: the function f is shown in blue and the tangent line is in red. We see that xn+1 is a better approximation than xn for the root x of the function f.

The idea is to start with an initial guess which is reasonably close to the true root, then to approximate the function by its tangent line using calculus, and finally to compute the x-intercept of this tangent line by elementary algebra. This x-intercept will typically be a better approximation to the original function's root than the first guess, and the method can be iterated.
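The paragraph above translates directly into a short program. Below is a minimal Python sketch of the iteration (the function names, tolerance, and iteration cap are illustrative choices, not part of the original text):

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f by Newton's method from the guess x0."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / f_prime(x)  # x-intercept of the tangent at x
        if abs(x_new - x) < tol:       # successive guesses agree: stop
            return x_new
        x = x_new
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
```

In practice one would also guard against f_prime(x) being zero; the failure modes discussed later in the article show why.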

More formally, suppose f : (a, b) → ℝ is a differentiable function defined on the interval (a, b) with values in the real numbers ℝ, and we have some current approximation xn. Then we can derive the formula for a better approximation, xn+1, by referring to the diagram on the right. The equation of the tangent line to the curve y = f(x) at x = xn is

    y = f′(xn)(x − xn) + f(xn),

where f′ denotes the derivative. The x-intercept of this line (the value of x which makes y = 0) is taken as the next approximation, xn+1, to the root, so that the equation of the tangent line is satisfied when (x, y) = (xn+1, 0):

    0 = f′(xn)(xn+1 − xn) + f(xn).

Solving for xn+1 gives

    xn+1 = xn − f(xn)/f′(xn).

We start the process with some arbitrary initial value x0. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero, and that f′(x0) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see rate of convergence) in a neighborhood of the zero, which intuitively means that the number of correct digits roughly doubles in every step. More details can be found in the analysis section below.
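The digit-doubling claim is easy to observe numerically. As an illustrative sketch (the function and starting point are chosen here, not taken from the text), iterating on f(x) = x² − 2 roughly squares the error at every step:

```python
import math

def newton_step(x):
    # One Newton update for f(x) = x^2 - 2, whose positive root is sqrt(2).
    return x - (x * x - 2) / (2 * x)

x = 1.5
errors = []
for _ in range(4):
    x = newton_step(x)
    errors.append(abs(x - math.sqrt(2)))
# errors shrink roughly as e, e^2, e^4, ... so the number of
# correct digits roughly doubles each iteration.
```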

Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f or its derivatives are computationally expensive to evaluate.

Analysis

Suppose that the function f has a zero at α, i.e., f(α) = 0, and f is differentiable in a neighborhood of α.

If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence (xn) will converge to α.

If the function is continuously differentiable and its derivative is not 0 at α and it has a second derivative at α then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:

    Δxi+1 = (f″(α) / (2 f′(α))) (Δxi)² + O((Δxi)³),

where

    Δxi = xi − α.

If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f′(α) = 0 and f″(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate 1/2. Alternatively, if f′(α) = 0 and f′(x) ≠ 0 for x ≠ α, x in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ C^r(U), then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate (r − 1)/r.

However, even linear convergence is not guaranteed in pathological situations.

In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood U+ of α, if f is twice differentiable in U+ and if f′ ≠ 0 and f · f″ > 0 in U+, then, for each x0 in U+ the sequence xk is monotonically decreasing to α.
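As a sketch of this global result (the function and starting point are illustrative choices), f(x) = x² − 2 has f′ = 2x ≠ 0 and f · f″ = 2(x² − 2) > 0 to the right of α = √2, and the iterates indeed decrease monotonically to the root:

```python
def step(x):
    # Newton update for f(x) = x^2 - 2.
    return x - (x * x - 2) / (2 * x)

# On a right neighborhood of sqrt(2): f' != 0 and f * f'' > 0,
# so from x0 > sqrt(2) the iterates decrease monotonically to sqrt(2).
iterates = [3.0]
for _ in range(5):
    iterates.append(step(iterates[-1]))
```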

Proof of quadratic convergence for Newton's iterative method

According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about xn is:

    f(α) = f(xn) + f′(xn)(α − xn) + R1,    (1)

where the Lagrange form of the Taylor series expansion remainder is

    R1 = (1/2) f″(ξn)(α − xn)²,

where ξn is in between xn and α.

Since α is the root, (1) becomes:

    0 = f(α) = f(xn) + f′(xn)(α − xn) + (1/2) f″(ξn)(α − xn)².    (2)

Dividing equation (2) by f′(xn) and rearranging gives

    f(xn)/f′(xn) + (α − xn) = (−f″(ξn) / (2 f′(xn))) (α − xn)².    (3)

Remembering that xn+1 is defined by

    xn+1 = xn − f(xn)/f′(xn),    (4)

one finds that

    α − xn+1 = α − xn + f(xn)/f′(xn).

That is,

    α − xn+1 = (−f″(ξn) / (2 f′(xn))) (α − xn)².    (5)

Taking the absolute value of both sides gives

    |α − xn+1| = (|f″(ξn)| / (2 |f′(xn)|)) (α − xn)².    (6)

Equation (6) shows that the rate of convergence is at least quadratic if the following conditions are satisfied:

  1. f′(x) ≠ 0, for all x ∈ I, where I is the interval [α − r, α + r] for some r ≥ |α − x0|;
  2. f″(x) is continuous, for all x ∈ I;
  3. x0 is sufficiently close to the root α.

The term sufficiently close in this context means the following:

a. the Taylor approximation is accurate enough such that we can ignore higher order terms;
b. (1/2)|f″(xn)| / |f′(xn)| < C (1/2)|f″(α)| / |f′(α)|, for some C < ∞;
c. C (1/2)|f″(α)| / |f′(α)| |εn| < 1, for all integers n ≥ 0 and C satisfying condition b.

Finally, (6) can be expressed in the following way:

    |εn+1| ≤ M εn²,

where εn = α − xn, and M is the supremum of the variable coefficient of εn² on the interval I defined in condition 1, that is:

    M = sup { (1/2)|f″(x)| / |f′(x)| : x ∈ I }.

The initial point x0 has to be chosen such that conditions 1 to 3 are satisfied, where the third condition requires that M |ε0| < 1.

Basins of attraction

The disjoint subsets of the basins of attraction (the regions of the real number line such that, within each region, iteration from any point leads to one particular root) can be infinite in number and arbitrarily small. For example, for the function f(x) = x³ − 2x² − 11x + 12 = (x − 4)(x − 1)(x + 3), the following initial conditions are in successive basins of attraction:

2.352 875 27 converges to 4;
2.352 841 72 converges to −3;
2.352 837 35 converges to 4;
2.352 836 327 converges to −3;
2.352 836 323 converges to 1.
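These basin memberships can be checked numerically. The sketch below assumes the cubic f(x) = (x − 4)(x − 1)(x + 3) = x³ − 2x² − 11x + 12, whose roots 4, 1, and −3 match the limits listed above:

```python
def step(x):
    # Newton update for the assumed cubic f(x) = x^3 - 2x^2 - 11x + 12.
    return x - (x**3 - 2 * x**2 - 11 * x + 12) / (3 * x**2 - 4 * x - 11)

def limit(x0, iterations=100):
    # Iterate long enough to settle into whichever root's basin x0 lies in.
    x = x0
    for _ in range(iterations):
        x = step(x)
    return x

# Two nearby starting points from the list above land on different roots.
results = [limit(2.35287527), limit(2.35284172)]
```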

Failure analysis

Newton's method is only guaranteed to converge if certain conditions are satisfied. If the assumptions made in the proof of quadratic convergence are met, the method will converge. For the following subsections, failure of the method to converge indicates that the assumptions made in the proof were not met.

Bad starting points

In some cases the conditions on the function that are necessary for convergence are satisfied, but the point chosen as the initial point is not in the interval where the method converges. This can happen, for example, if the function whose root is sought approaches zero asymptotically as x goes to ∞ or −∞. In such cases a different method, such as bisection, should be used to obtain a better estimate for the zero to use as an initial point.

Iteration point is stationary

Consider the function:

    f(x) = 1 − x².

It has a maximum at x = 0 and solutions of f(x) = 0 at x = ±1. If we start iterating from the stationary point x0 = 0 (where the derivative is zero), x1 will be undefined, since the tangent at (0, 1) is parallel to the x-axis:

    x1 = x0 − f(x0)/f′(x0) = 0 − 1/0.

The same issue occurs if, instead of the starting point, any iteration point is stationary. Even if the derivative is small but not zero, the next iteration will be a far worse approximation.
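A short sketch of this failure (illustrative code): one Newton step for f(x) = 1 − x² from the stationary point raises a division-by-zero error, since f′(0) = 0:

```python
def newton_step(x):
    # One Newton update for f(x) = 1 - x^2, with f'(x) = -2x.
    return x - (1 - x * x) / (-2 * x)

# Starting at the stationary point x0 = 0 divides by f'(0) = 0.
try:
    newton_step(0.0)
    hit_zero_derivative = False
except ZeroDivisionError:
    hit_zero_derivative = True
```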

Starting point enters a cycle

The tangent lines of x³ − 2x + 2 at 0 and 1 intersect the x-axis at 1 and 0 respectively, illustrating why Newton's method oscillates between these values for some starting points.

For some functions, some starting points may enter an infinite cycle, preventing convergence. Let

    f(x) = x³ − 2x + 2

and take 0 as the starting point. The first iteration produces 1 and the second iteration returns to 0 so the sequence will alternate between the two without converging to a root. In fact, this 2-cycle is stable: there are neighborhoods around 0 and around 1 from which all points iterate asymptotically to the 2-cycle (and hence not to the root of the function). In general, the behavior of the sequence can be very complex (see Newton fractal). The real solution of this equation is −1.769 292 35….
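The 2-cycle can be reproduced exactly in a few lines (illustrative sketch):

```python
def step(x):
    # Newton update for f(x) = x^3 - 2x + 2, with f'(x) = 3x^2 - 2.
    return x - (x**3 - 2 * x + 2) / (3 * x**2 - 2)

# From x0 = 0 the iterates alternate between 0 and 1 indefinitely.
orbit = [0.0]
for _ in range(5):
    orbit.append(step(orbit[-1]))
```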

Derivative issues

If the function is not continuously differentiable in a neighborhood of the root then it is possible that Newton's method will always diverge and fail, unless the solution is guessed on the first try.

Derivative does not exist at root

A simple example of a function where Newton's method diverges is trying to find the cube root of zero. The cube root is continuous and infinitely differentiable, except for x = 0, where its derivative is undefined:

    f(x) = x^(1/3).

For any iteration point xn, the next iteration point will be:

    xn+1 = xn − f(xn)/f′(xn) = xn − xn^(1/3) / ((1/3) xn^(−2/3)) = xn − 3xn = −2xn.

The algorithm overshoots the solution and lands on the other side of the y-axis, farther away than it initially was; applying Newton's method actually doubles the distances from the solution at each iteration.
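A small numerical sketch confirms the doubling (the real cube root is computed explicitly, because Python's ** operator does not return a real cube root for negative bases):

```python
import math

def cbrt(x):
    # Real cube root with the sign of x.
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def step(x):
    # Newton update for f(x) = x^(1/3), with f'(x) = (1/3) x^(-2/3).
    return x - cbrt(x) / ((1.0 / 3.0) * (x * x) ** (-1.0 / 3.0))

# Each step maps x to approximately -2x: the other side, twice as far.
points = [step(1.0), step(-2.0)]
```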

In fact, the iterations diverge to infinity for every f(x) = |x|^α, where 0 < α < 1/2. In the limiting case of α = 1/2 (square root), the iterations will alternate indefinitely between points x0 and −x0, so they do not converge in this case either.

Discontinuous derivative

If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function

    f(x) = x + x² sin(2/x)  for x ≠ 0,  with f(0) = 0.

Its derivative is:

    f′(x) = 1 + 2x sin(2/x) − 2 cos(2/x)  for x ≠ 0,  with f′(0) = 1.

Within any neighborhood of the root, this derivative keeps changing sign as x approaches 0 from the right (or from the left) while f(x) ≥ x − x² > 0 for 0 < x < 1.

So f(x)/f′(x) is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though:

  • the function is differentiable (and thus continuous) everywhere;
  • the derivative at the root is nonzero;
  • f is infinitely differentiable except at the root; and
  • the derivative is bounded in a neighborhood of the root (unlike f(x)/f′(x)).
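A numerical sketch of the sign changes, assuming f(x) = x + x² sin(2/x) with f(0) = 0 (the standard example of this failure mode): the derivative takes both signs arbitrarily close to 0, since cos(2/x) sweeps through +1 and −1:

```python
import math

def f_prime(x):
    # Derivative of f(x) = x + x^2 sin(2/x) for x != 0 (f'(0) = 1).
    return 1 + 2 * x * math.sin(2 / x) - 2 * math.cos(2 / x)

# Where cos(2/x) = +1 (x = 1/(k*pi)):      f'(x) ~ -1 < 0.
# Where cos(2/x) = -1 (x = 2/((2k+1)*pi)): f'(x) ~  3 > 0.
neg = [f_prime(1 / (k * math.pi)) for k in range(10, 15)]
pos = [f_prime(2 / ((2 * k + 1) * math.pi)) for k in range(10, 15)]
```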

Non-quadratic convergence

In some cases the iterates converge but do not converge as quickly as promised. In these cases simpler methods converge just as quickly as Newton's method.

Zero derivative

If the first derivative is zero at the root, then convergence will not be quadratic. Let

    f(x) = x²,

then f′(x) = 2x and consequently

    x − f(x)/f′(x) = x/2.

So convergence is not quadratic, even though the function is infinitely differentiable everywhere.
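In floating point this halving is exact, so the linear rate 1/2 is visible directly (illustrative sketch):

```python
def step(x):
    # Newton update for f(x) = x^2: x - x^2/(2x) = x/2.
    return x - (x * x) / (2 * x)

# The error is cut in half each step: linear, not quadratic, convergence.
iterates = [1.0]
for _ in range(5):
    iterates.append(step(iterates[-1]))
```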

Similar problems occur even when the root is only "nearly" double. For example, let

    f(x) = x²(x − 1000) + 1.

Then the first few iterations starting at x0 = 1 are

x0 = 1
x1 = 0.500 250 376…
x2 = 0.251 062 828…
x3 = 0.127 507 934…
x4 = 0.067 671 976…
x5 = 0.041 224 176…
x6 = 0.032 741 218…
x7 = 0.031 642 362…

It takes six iterations to reach a point where the convergence appears to be quadratic.
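The table can be reproduced with a few lines of Python (illustrative sketch):

```python
def step(x):
    # Newton update for f(x) = x^2 (x - 1000) + 1 = x^3 - 1000 x^2 + 1,
    # with f'(x) = 3x^2 - 2000x.
    return x - (x * x * (x - 1000) + 1) / (3 * x * x - 2000 * x)

xs = [1.0]
for _ in range(7):
    xs.append(step(xs[-1]))
# xs[1], ..., xs[7] reproduce the values listed above.
```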

No second derivative

If there is no second derivative at the root, then convergence may fail to be quadratic. Let

    f(x) = x + x^(4/3).

Then

    f′(x) = 1 + (4/3) x^(1/3).

And

    f″(x) = (4/9) x^(−2/3),

except when x = 0, where it is undefined. Given xn,

    xn+1 = xn − f(xn)/f′(xn) = ((1/3) xn^(4/3)) / (1 + (4/3) xn^(1/3)),

which has approximately 4/3 times as many bits of precision as xn has. This is less than the 2 times as many which would be required for quadratic convergence. So the convergence of Newton's method (in this case) is not quadratic, even though: the function is continuously differentiable everywhere; the derivative is not zero at the root; and f is infinitely differentiable except at the desired root.
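The 4/3-order behavior can be observed by tracking bits of precision, −log2 xn, using the update xn+1 = ((1/3) xn^(4/3)) / (1 + (4/3) xn^(1/3)) for f(x) = x + x^(4/3) (illustrative sketch, positive iterates only):

```python
import math

def step(x):
    # Newton update for f(x) = x + x^(4/3), valid for x > 0.
    return (x ** (4.0 / 3.0) / 3.0) / (1.0 + (4.0 / 3.0) * x ** (1.0 / 3.0))

bits = []
x = 1.0
for _ in range(6):
    x = step(x)
    bits.append(-math.log2(x))  # bits of precision of the iterate

# Successive ratios approach 4/3, well short of the factor 2
# that quadratic convergence would deliver.
ratios = [bits[i + 1] / bits[i] for i in range(len(bits) - 1)]
```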

Licensing

Content obtained and/or adapted from:

* Newton's Method, Wikipedia (https://en.wikipedia.org/wiki/Newton%27s_method) under a CC BY-SA license