Stokes' Theorem

From Department of Mathematics at UTSA
'''Stokes' theorem''', also known as '''Kelvin–Stokes theorem''' after [[Lord Kelvin]] and [[Sir George Stokes, 1st Baronet|George Stokes]], the '''fundamental theorem for curls''' or simply the '''curl theorem''', is a [[theorem]] in [[vector calculus]] on <math>\mathbb{R}^3</math>. Given a [[vector field]], the theorem relates the [[Surface integral|integral]] of the [[Curl (mathematics)|curl]] of the vector field over some surface, to the [[line integral]] of the vector field around the boundary of the surface. The classical Stokes' theorem can be stated in one sentence: The [[line integral]] of a vector field over a loop is equal to the ''[[flux]] of its curl'' through the enclosed surface.
  
Stokes' theorem is a special case of the [[generalized Stokes' theorem]]. In particular, a vector field on <math>\mathbb{R}^3</math> can be considered as a [[differential form|1-form]] in which case its curl is its [[exterior derivative]], a 2-form.
  
 
==Theorem==
 
  
Let <math>\Sigma</math> be a smooth oriented surface in {{math|'''R'''<sup>3</sup>}} with boundary <math>\partial \Sigma</math>. If a vector field <math>\mathbf{A} = (P(x, y, z), Q(x, y, z), R(x, y, z))</math> is defined and has continuous first order [[partial derivatives]] in a region containing <math>\Sigma</math>, then
  
<math display=block>
\iint_\Sigma (\nabla \times \mathbf{A}) \cdot \mathrm{d}\mathbf{a} = \oint_{\partial\Sigma} \mathbf{A} \cdot \mathrm{d}\mathbf{l}.
</math>
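As a quick numerical sanity check of this identity (an illustrative example of my own, not part of the article: the field, surface, and function names below are assumptions), take <math>\mathbf{A} = (-y, x, 0)</math>, whose curl is <math>(0, 0, 2)</math>, and let <math>\Sigma</math> be the unit disk in the ''xy''-plane with boundary the unit circle. Both sides then equal <math>2\pi</math>:

```python
import numpy as np

# Sketch: verify Stokes' theorem for A = (-y, x, 0) over the unit disk.
# curl A = (0, 0, 2), so the flux side is 2 * area(disk) = 2*pi.

def line_integral(n=200_000):
    # Boundary: x = cos t, y = sin t; compute the Riemann sum of A . dl.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dt = 2.0 * np.pi / n
    x, y = np.cos(t), np.sin(t)
    dx, dy = -np.sin(t) * dt, np.cos(t) * dt
    return np.sum(-y * dx + x * dy)          # ∮ (-y dx + x dy)

def flux_of_curl(n=400):
    # curl A = (0, 0, 2) and the disk's unit normal is (0, 0, 1),
    # so the flux is 2 * area, approximated by counting grid cells.
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= 1.0
    cell = (2.0 / (n - 1))**2
    return 2.0 * inside.sum() * cell

print(line_integral())   # ≈ 2π
print(flux_of_curl())    # ≈ 2π
```

The grid-counting flux is only first-order accurate near the circle, which is why its tolerance is looser than the line integral's.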
 
More explicitly, the equality says that

<math display=block>\begin{align}
&\iint_\Sigma \left(\left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z} \right)\,\mathrm{d}y\, \mathrm{d}z +\left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\right)\, \mathrm{d}z\, \mathrm{d}x  +\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)\, \mathrm{d}x\, \mathrm{d}y\right) \\
&\qquad = \oint_{\partial\Sigma} \left(P\, \mathrm{d}x + Q\, \mathrm{d}y + R\, \mathrm{d}z\right).
\end{align}</math>
  
The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary.  Surfaces such as the [[Koch snowflake]], for example, are well-known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in [[Lebesgue integration|Lebesgue theory]] cannot be defined for a non-[[Lipschitz function|Lipschitz]] surface.  One (advanced) technique is to pass to a [[weak formulation]] and then apply the machinery of [[geometric measure theory]]; for that approach see the [[coarea formula]].  In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of {{math|'''R'''{{sup|2}}}}.
  
Let {{math|''γ'': [''a'', ''b''] → '''R'''<sup>2</sup>}} be a [[piecewise]] smooth [[Jordan curve|Jordan plane curve]]. The [[Jordan curve theorem]] implies that {{mvar|γ}} divides {{math|'''R'''<sup>2</sup>}} into two components, a [[compact space|compact]] one and another that is non-compact. Let {{mvar|D}} denote the compact part; then {{mvar|D}} is bounded by {{mvar|γ}}.  It now suffices to transfer this notion of boundary along a continuous map to our surface in {{math|'''R'''{{sup|3}}}}.  But we already have such a map: the [[Parametrization (geometry)|parametrization]] of {{math|Σ}}.
  
Suppose {{math|''ψ'': ''D'' → '''R'''<sup>3</sup>}} is smooth, with {{math|1=Σ = ''ψ''(''D'')}}. If {{math|Γ}} is the [[space curve]] defined by {{math|1=Γ(''t'') = ''ψ''(''γ''(''t''))}},<ref group="note" name=cgamma>{{math|Γ}} may not be a [[Jordan curve]], if the loop {{mvar|γ}} interacts poorly with {{mvar|ψ}}.  Nonetheless, {{math|Γ}} is always a [[loop (topology)|loop]], and topologically a [[connected sum]] of [[countable set|countably-many]] Jordan curves, so that the integrals are well-defined.</ref> then we call {{math|Γ}} the boundary of {{math|Σ}}, written {{math|∂Σ}}.
  
With the above notation, if {{math|'''F'''}} is any smooth vector field on {{math|'''R'''<sup>3</sup>}}, then<ref name="Jame">{{cite book|url={{Google books |plainurl=yes |id=btIhvKZCkTsC |page=786 }}|title=Essential Calculus: Early Transcendentals|last=Stewart|first=James|publisher=Cole|year=2010}}</ref><ref name="bath">Robert Scheichl, lecture notes for [[University of Bath]] mathematics course  [http://www.maths.bath.ac.uk/~masrs/ma20010/stokesproofs.pdf]</ref><math display="block">\oint_{\partial\Sigma} \mathbf{F}\, \cdot\, \mathrm{d}{\mathbf{\Gamma}}  = \iint_{\Sigma} \nabla\times\mathbf{F}\, \cdot\, \mathrm{d}\mathbf{S}. </math>
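The parametrized statement can also be checked on a concrete surface; the example below (a paraboloid cap and a field chosen by me for illustration, not taken from the article) uses <math>\psi(u,v) = (u, v, 1-u^2-v^2)</math> over the unit disk <math>D</math>, so that <math>\Gamma</math> is the unit circle at <math>z = 0</math>, together with <math>\mathbf{F} = (z, x, y)</math>, for which <math>\nabla\times\mathbf{F} = (1,1,1)</math>. Both sides come out to <math>\pi</math>:

```python
import numpy as np

# Sketch: Stokes' theorem on the paraboloid cap psi(u,v) = (u, v, 1-u^2-v^2)
# over the unit disk, with F = (z, x, y) and curl F = (1, 1, 1).

def boundary_integral(n=100_000):
    # Gamma = psi(gamma): the unit circle at z = 0.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dt = 2.0 * np.pi / n
    x, y, z = np.cos(t), np.sin(t), 0.0
    dx, dy, dz = -np.sin(t) * dt, np.cos(t) * dt, 0.0
    return np.sum(z * dx + x * dy + y * dz)  # F . dGamma with F = (z, x, y)

def surface_integral(n=600):
    # psi_u x psi_v = (2u, 2v, 1), so (curl F) . (psi_u x psi_v) = 2u + 2v + 1.
    us = np.linspace(-1.0, 1.0, n)
    U, V = np.meshgrid(us, us)
    inside = U**2 + V**2 <= 1.0
    cell = (2.0 / (n - 1))**2
    return np.sum((2.0 * U + 2.0 * V + 1.0) * inside) * cell

print(boundary_integral())  # ≈ π
print(surface_integral())   # ≈ π
```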
  
 
==Proof==
 
The proof of the theorem consists of 4 steps. We assume [[Green's theorem]], so what is of concern is how to boil down the three-dimensional complicated problem (Stokes' theorem) to a two-dimensional rudimentary problem (Green's theorem).<ref>{{cite book|title=Vector Calculus|last=Colley|first=Susan Jane|edition=4th|publisher=Pearson|year=2002|location=Boston|pages=500–3}}</ref>  When proving this theorem, mathematicians normally deduce it as a special case of a [[Generalized Stokes' theorem|more general result]], which is stated in terms of [[differential form]]s, and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them, and does not presuppose any knowledge beyond a familiarity with basic vector calculus.<ref name="bath"/> At the end of this section, a short alternate proof of Stokes' theorem is given, as a corollary of the generalized Stokes' Theorem.
  
 
===Elementary proof===
 
====First step of the proof (parametrization of the integral)====
We reduce the dimension by using the natural parametrization of the surface. Let {{mvar|ψ}} and {{mvar|γ}} be as in the previous section, and note that by change of variables
 
<math display=block>\oint_{\partial\Sigma}{\mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{l}} = \oint_{\gamma}{\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))\cdot\,\mathrm{d}\boldsymbol{\psi}(\mathbf{y})} = \oint_{\gamma}{\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))J_{\mathbf{y}}(\boldsymbol{\psi})\,\mathrm{d}\mathbf{y}}</math>
  
where {{math|''J''<sub>'''y'''</sub>(''ψ'')}} stands for the Jacobian matrix of {{mvar|ψ}} evaluated at {{math|'''y'''}}.
  
 
Now let {{math|{'''e'''<sub>''u''</sub>, '''e'''<sub>''v''</sub>}<nowiki/>}} be an orthonormal basis in the coordinate directions of {{math|'''R'''<sup>2</sup>}}.  Recognizing that the columns of {{math|''J''<sub>'''y'''</sub>'''''ψ'''''}} are precisely the partial derivatives of {{math|'''''ψ'''''}} at {{math|'''y'''}}, we can expand the previous equation in coordinates as
  
<math display=block>\begin{align}
 
\oint_{\partial\Sigma}{\mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{l}}
 
 
&= \oint_{\gamma}{\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))J_{\mathbf{y}}(\boldsymbol{\psi})\mathbf{e}_u(\mathbf{e}_u\cdot\,\mathrm{d}\mathbf{y}) + \mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))J_{\mathbf{y}}(\boldsymbol{\psi})\mathbf{e}_v(\mathbf{e}_v\cdot\,\mathrm{d}\mathbf{y})} \\
 
&= \oint_{\gamma}{\left(\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))\cdot\frac{\partial\boldsymbol{\psi}}{\partial u}(\mathbf{y})\right)(\mathbf{e}_u\cdot\,\mathrm{d}\mathbf{y}) + \left(\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))\cdot\frac{\partial\boldsymbol{\psi}}{\partial v}(\mathbf{y})\right)(\mathbf{e}_v\cdot\,\mathrm{d}\mathbf{y})}
\end{align}</math>

====Second step in the proof (defining the pullback)====
The previous step suggests we define the function
 
<math display=block>\mathbf{P}(u,v) = \left(\mathbf{F}(\boldsymbol{\psi}(u,v))\cdot\frac{\partial\boldsymbol{\psi}}{\partial u}(u,v)\right)\mathbf{e}_u + \left(\mathbf{F}(\boldsymbol{\psi}(u,v))\cdot\frac{\partial\boldsymbol{\psi}}{\partial v}(u,v) \right)\mathbf{e}_v</math>
  
This is the [[Pullback (differential geometry)|pullback]] of {{math|'''F'''}} along {{math|'''''ψ'''''}}, and, by the above, it satisfies
  
<math display=block>\oint_{\partial\Sigma}{\mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{l}}=\oint_{\gamma}{\mathbf{P}(\mathbf{y})\cdot\,\mathrm{d}\mathbf{l}}</math>
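This reduction can be made concrete numerically; the following sketch (using the illustrative paraboloid <math>\psi(u,v) = (u, v, 1-u^2-v^2)</math> and <math>\mathbf{F} = (z, x, y)</math>, choices of mine rather than the article's) evaluates the pullback <math>\mathbf{P}</math> on the unit circle {{mvar|γ}} in the {{math|(''u'', ''v'')}} plane and integrates it, recovering the value <math>\pi</math> of <math>\oint_{\partial\Sigma}\mathbf{F}\cdot\mathrm{d}\mathbf{l}</math>:

```python
import numpy as np

# Sketch: integrate the pullback P of F along psi over gamma (the unit
# circle in the (u, v) plane) and check it matches the boundary integral.

def pullback_loop_integral(n=100_000):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dt = 2.0 * np.pi / n
    u, v = np.cos(t), np.sin(t)
    du, dv = -np.sin(t) * dt, np.cos(t) * dt
    x, y, z = u, v, 1.0 - u**2 - v**2                    # psi(u, v)
    F = np.array([z, x, y])                              # F = (z, x, y)
    psi_u = np.array([np.ones_like(u), np.zeros_like(u), -2.0 * u])
    psi_v = np.array([np.zeros_like(u), np.ones_like(u), -2.0 * v])
    P1 = (F * psi_u).sum(axis=0)                         # F . dpsi/du
    P2 = (F * psi_v).sum(axis=0)                         # F . dpsi/dv
    return np.sum(P1 * du + P2 * dv)                     # ∮ P . dy

print(pullback_loop_integral())   # ≈ π
```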
  
 
We have successfully reduced one side of Stokes' theorem to a 2-dimensional formula; we now turn to the other side.
 
  
 
====Third step of the proof (second equation)====
 
First, calculate the partial derivatives appearing in [[Green's theorem]], via the [[General Leibniz rule|product rule]]:
  
<math display=block>\begin{align}
 
\frac{\partial P_1}{\partial v} &= \frac{\partial (\mathbf{F}\circ \boldsymbol{\psi})}{\partial v}\cdot\frac{\partial \boldsymbol\psi}{\partial u} + (\mathbf{F}\circ \boldsymbol\psi) \cdot\frac{\partial^2 \boldsymbol\psi}{\partial v \, \partial u} \\[5pt]
 
 
\frac{\partial P_2}{\partial u} &= \frac{\partial (\mathbf{F}\circ \boldsymbol{\psi})}{\partial u}\cdot\frac{\partial \boldsymbol\psi}{\partial v} + (\mathbf{F}\circ \boldsymbol\psi) \cdot\frac{\partial^2 \boldsymbol\psi}{\partial u \, \partial v}
 
 
\end{align}</math>
 
  
Conveniently, the second term vanishes in the difference, by [[equality of mixed partials]].  So,
  
 
<math display=block>\begin{align}
 
\frac{\partial P_1}{\partial v} - \frac{\partial P_2}{\partial u}
&= \frac{\partial (\mathbf{F}\circ \boldsymbol\psi)}{\partial v}\cdot\frac{\partial \boldsymbol\psi}{\partial u} - \frac{\partial (\mathbf{F}\circ \boldsymbol\psi)}{\partial u}\cdot\frac{\partial \boldsymbol\psi}{\partial v} \\[5pt]
&= \left(\frac{\partial \boldsymbol\psi}{\partial u}\right)^{\mathsf{T}} \left(J_{\boldsymbol\psi(u,v)}\mathbf{F} - {\left(J_{\boldsymbol\psi(u,v)}\mathbf{F}\right)}^{\mathsf{T}}\right) \frac{\partial \boldsymbol\psi}{\partial v} && \text{(chain rule)}
\end{align}</math>

But now consider the matrix in that quadratic form—that is, <math>J_{\boldsymbol\psi(u,v)}\mathbf{F} - (J_{\boldsymbol\psi(u,v)}\mathbf{F})^{\mathsf{T}}</math>. We claim this matrix in fact describes a cross product.

To be precise, let <math>A = (A_{ij})_{ij}</math> be an arbitrary 3 × 3 matrix and let

<math display=block>\mathbf{a} = \begin{bmatrix} A_{32}-A_{23} \\ A_{13}-A_{31} \\ A_{21}-A_{12} \end{bmatrix}</math>

Note that {{math|'''x''' ↦ '''a''' × '''x'''}} is linear, so it is determined by its action on basis elements. But by direct calculation {{math|1=(''A'' − ''A''<sup>T</sup>)'''e'''<sub>''i''</sub> = '''a''' × '''e'''<sub>''i''</sub>}} for each basis vector {{math|'''e'''<sub>''i''</sub>}}. Thus {{math|1=(''A'' − ''A''<sup>T</sup>)'''x''' = '''a''' × '''x'''}} for any {{math|'''x'''}}. Substituting {{math|''J''<sub>'''''ψ'''''(''u'',''v'')</sub>'''F'''}} for {{mvar|A}}, we obtain
 
<math display=block>\left(J_{\boldsymbol\psi(u,v)}\mathbf{F} - {(J_{\boldsymbol\psi(u,v)}\mathbf{F})}^{\mathsf{T}} \right) \mathbf{x} =(\nabla\times\mathbf{F})\times \mathbf{x}, \quad \text{for all}\, \mathbf{x}\in\R^{3}</math>
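This matrix identity is easy to confirm symbolically; the sketch below (with an arbitrary illustrative field of my own) builds the Jacobian of {{math|'''F'''}}, its antisymmetric part, and the curl, and checks that the two sides agree on an arbitrary vector:

```python
import sympy as sp

# Sketch: the antisymmetric part of the Jacobian of F acts as a cross
# product with curl F. The concrete field is an arbitrary smooth choice.
x, y, z = sp.symbols('x y z')
F = sp.Matrix([x*y*z, sp.sin(x) + z**2, x - y*z])
J = F.jacobian([x, y, z])                       # entries dF_i/dx_j
curl = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)])
v = sp.Matrix(sp.symbols('v1 v2 v3'))           # arbitrary vector x
lhs = (J - J.T) * v                             # (J_F - J_F^T) x
rhs = curl.cross(v)                             # (curl F) x x
print(sp.simplify(lhs - rhs).T)                 # zero vector
```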
  
We can now recognize the difference of partials as a [[Triple product|(scalar) triple product]]:
  
<math display=block>\begin{align}
 
\frac{\partial P_1}{\partial v} - \frac{\partial P_2}{\partial u}
 
 
&= \frac{\partial \boldsymbol\psi}{\partial u}\cdot(\nabla\times\mathbf{F}) \times \frac{\partial \boldsymbol\psi}{\partial v} \\
 
&= -(\nabla\times\mathbf{F}) \cdot \left(\frac{\partial \boldsymbol\psi}{\partial u} \times \frac{\partial \boldsymbol\psi}{\partial v}\right)
 
\end{align}</math>
 
  
On the other hand, the definition of a [[surface integral]] also includes a triple product—the very same one!
  
<math display=block>\begin{align}
 
\iint_S (\nabla\times\mathbf{F})\cdot \, d^2\mathbf{S}
 
 
&=\iint_D {(\nabla\times\mathbf{F})(\boldsymbol\psi(u,v))\cdot\left(\frac{\partial \boldsymbol\psi}{\partial u}(u,v)\times \frac{\partial \boldsymbol\psi}{\partial v}(u,v)\right)\,\mathrm{d}u\,\mathrm{d}v}\\
&=\iint_D \left( \frac{\partial P_2}{\partial u}(u,v) - \frac{\partial P_1}{\partial v}(u,v) \right) \,\mathrm{d}u\,\mathrm{d}v
\end{align}</math>

So, we obtain

<math display=block>\iint_S (\nabla\times\mathbf{F})\cdot \,\mathrm{d}^2\mathbf{S} = \iint_D \left( \frac{\partial P_2}{\partial u} - \frac{\partial P_1}{\partial v} \right) \,\mathrm{d}u\,\mathrm{d}v</math>
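The key identity of this step, that the Green's-theorem integrand of the pullback equals the surface-integral integrand, can be verified symbolically; the sketch below uses an illustrative parametrization and field of my own choosing (not the article's):

```python
import sympy as sp

# Sketch: check dP2/du - dP1/dv = (curl F) . (psi_u x psi_v) for a
# concrete psi and F (illustrative choices).
u, v = sp.symbols('u v')
x, y, z = sp.symbols('x y z')
psi = sp.Matrix([u, v, 1 - u**2 - v**2])     # example parametrization
F = sp.Matrix([x*y, y*z, z*x])               # example field
to_surface = {x: psi[0], y: psi[1], z: psi[2]}
F_on_surface = F.subs(to_surface)
P1 = F_on_surface.dot(psi.diff(u))           # components of the pullback P
P2 = F_on_surface.dot(psi.diff(v))
curl = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)]).subs(to_surface)
lhs = sp.diff(P2, u) - sp.diff(P1, v)        # Green's theorem integrand
rhs = curl.dot(psi.diff(u).cross(psi.diff(v)))   # surface integrand
print(sp.simplify(lhs - rhs))                # 0
```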
  
 
====Fourth step of the proof (reduction to Green's theorem)====
 
Combining the second and third steps, and then applying [[Green's theorem]] completes the proof.
  
 
===Proof via differential forms===
 
The smooth vector fields {{math|'''R'''<sup>3</sup> → '''R'''<sup>3</sup>}} can be identified with the differential 1-forms on {{math|'''R'''<sup>3</sup>}} via the map

<math display=block>F_1\mathbf{e}_1+F_2\mathbf{e}_2+F_3\mathbf{e}_3 \mapsto F_1\,\mathrm{d}x+F_2\,\mathrm{d}y+F_3\,\mathrm{d}z.</math>

Write the differential 1-form associated to a function {{math|'''F'''}} as {{math|''ω''<sub>'''F'''</sub>}}. Then one can calculate that
 
<math display=block>\star\omega_{\nabla\times\mathbf{F}}=\mathrm{d}\omega_{\mathbf{F}}</math>
 
<math display=block>\star\omega_{\nabla\times\mathbf{F}}=\mathrm{d}\omega_{\mathbf{F}}</math>
  
where {{math|★}} is the [[Hodge star]] and <math>\mathrm{d}</math> is the [[exterior derivative]].  Thus, by generalized Stokes' theorem,<ref>{{cite book |last=Edwards |first=Harold M. |title=Advanced Calculus: A Differential Forms Approach |publisher=Birkhäuser |year=1994 |isbn=0-8176-3707-9 }}</ref>
  
 
<math display=block>\oint_{\partial\Sigma}{\mathbf{F}\cdot\,\mathrm{d}\mathbf{l}} = \oint_{\partial\Sigma}{\omega_{\mathbf{F}}} = \int_{\Sigma}{\mathrm{d}\omega_{\mathbf{F}}} = \int_{\Sigma}{\star\omega_{\nabla\times\mathbf{F}}} = \iint_{\Sigma}{(\nabla\times\mathbf{F})\cdot\,\mathrm{d}\mathbf{S}}.</math>
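The component bookkeeping behind <math>\star\omega_{\nabla\times\mathbf{F}}=\mathrm{d}\omega_{\mathbf{F}}</math> can be spelled out for a concrete field; in the sketch below (an illustrative choice of {{math|'''F'''}}, not from the article), the coefficients of <math>\mathrm{d}\omega_{\mathbf{F}}</math> on {{math|d''y''∧d''z''}}, {{math|d''z''∧d''x''}}, {{math|d''x''∧d''y''}} are computed and seen to be exactly the components of <math>\nabla\times\mathbf{F}</math>:

```python
import sympy as sp

# Sketch: d(F1 dx + F2 dy + F3 dz) has, on dy^dz, dz^dx, dx^dy, the
# coefficients of curl F; F below is an arbitrary illustrative field.
x, y, z = sp.symbols('x y z')
F1, F2, F3 = x*y, y*z**2, sp.sin(x)
c_yz = sp.diff(F3, y) - sp.diff(F2, z)   # coefficient on dy^dz
c_zx = sp.diff(F1, z) - sp.diff(F3, x)   # coefficient on dz^dx
c_xy = sp.diff(F2, x) - sp.diff(F1, y)   # coefficient on dx^dy
curl = (sp.diff(F3, y) - sp.diff(F2, z),
        sp.diff(F1, z) - sp.diff(F3, x),
        sp.diff(F2, x) - sp.diff(F1, y))
print((c_yz, c_zx, c_xy) == curl)        # True
```

Since the Hodge star sends <math>a\,\mathrm{d}x+b\,\mathrm{d}y+c\,\mathrm{d}z</math> to <math>a\,\mathrm{d}y\wedge\mathrm{d}z+b\,\mathrm{d}z\wedge\mathrm{d}x+c\,\mathrm{d}x\wedge\mathrm{d}y</math>, this is precisely the stated identity.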

Revision as of 07:50, 3 November 2021

An illustration of Stokes' theorem, with surface Σ, its boundary ∂Σ and the normal vector n.


==Licensing==

Content obtained and/or adapted from:
