# The Cross Product

In mathematics, the cross product or vector product (occasionally directed area product, to emphasize its geometric significance) is a binary operation on two vectors in a three-dimensional oriented Euclidean vector space (named here ${\displaystyle E}$), and is denoted by the symbol ${\displaystyle \times }$. Given two linearly independent vectors a and b, the cross product, a × b (read "a cross b"), is a vector that is perpendicular to both a and b, and thus normal to the plane containing them. It has many applications in mathematics, physics, engineering, and computer programming. It should not be confused with the dot product (projection product).

If two vectors have the same direction or have the exact opposite direction from each other (that is, they are not linearly independent), or if either one has zero length, then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths.

The cross product is anticommutative (that is, a × b = − b × a) and is distributive over addition (that is, a × (b + c) = a × b + a × c). The space ${\displaystyle E}$ together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket.

Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on a choice of orientation (or "handedness") of the space, which is why an oriented space is needed. In connection with the cross product, the exterior product of vectors can be used in arbitrary dimensions (with a bivector or 2-form result) and is independent of the orientation of the space.

The product can be generalized in various ways, using the orientation and metric structure just as for the traditional 3-dimensional cross product: in n dimensions, one can take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.

The cross product with respect to a right-handed coordinate system

## Definition

Finding the direction of the cross product by the right-hand rule.

The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b. In physics and applied mathematics, the wedge notation a ∧ b is often used (in conjunction with the name vector product), although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product to n dimensions.

The cross product a × b is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.

The cross product is defined by the formula

${\displaystyle \mathbf {a} \times \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\sin(\theta )\ \mathbf {n} }$

where:

• θ is the angle between a and b in the plane containing them (hence, it is between 0° and 180°)
• ‖a‖ and ‖b‖ are the magnitudes of vectors a and b
• and n is a unit vector perpendicular to the plane containing a and b, in the direction given by the right-hand rule (illustrated).

If the vectors a and b are parallel (that is, the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.
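The defining properties can be checked numerically. Below is a minimal Python sketch (the helper names `cross`, `dot`, and `norm` are illustrative, not from the article); the component formula it uses is derived later in the Computing section.

```python
import math

def cross(a, b):
    # Component formula for a x b (derived in the Computing section).
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)
c = cross(a, b)

# c is orthogonal to both inputs...
assert abs(dot(c, a)) < 1e-12 and abs(dot(c, b)) < 1e-12

# ...and its magnitude equals ||a|| ||b|| sin(theta).
theta = math.acos(dot(a, b) / (norm(a)*norm(b)))
assert math.isclose(norm(c), norm(a)*norm(b)*math.sin(theta))
```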

### Direction

The cross product a × b (vertical, in purple) changes as the angle between the vectors a (blue) and b (red) changes. The cross product is always orthogonal to both vectors, and has magnitude zero when the vectors are parallel and maximum magnitude ‖a‖‖b‖ when they are orthogonal.

By convention, the direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n comes out of the thumb (see the adjacent picture). Using this rule implies that the cross product is anti-commutative; that is, b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.

As the cross product operator depends on the orientation of the space (as explicit in the definition above), the cross product of two vectors is not a "true" vector, but a pseudovector.

## Names

According to Sarrus's rule, the determinant of a 3×3 matrix involves multiplications between matrix elements identified by crossed diagonals

In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product using a period (a . b) and an "x" (a x b), respectively, to denote them.

In 1877, to emphasize the fact that the result of a dot product is a scalar while the result of a cross product is a vector, William Kingdon Clifford coined the alternative names scalar product and vector product for the two operations. These alternative names are still widely used in the literature.

Both the cross notation (a × b) and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a · b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix. According to Sarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals.

## Computing

### Coordinate notation

Standard basis vectors (i, j, k, also denoted e1, e2, e3) and vector components of a (ax, ay, az, also denoted a1, a2, a3)

If (i, j, k) is a positively oriented orthonormal basis, the basis vectors satisfy the following equalities

${\displaystyle {\begin{alignedat}{2}\mathbf {\color {blue}{i}} &\times \mathbf {\color {red}{j}} &&=\mathbf {\color {green}{k}} \\\mathbf {\color {red}{j}} &\times \mathbf {\color {green}{k}} &&=\mathbf {\color {blue}{i}} \\\mathbf {\color {green}{k}} &\times \mathbf {\color {blue}{i}} &&=\mathbf {\color {red}{j}} \end{alignedat}}}$

which imply, by the anticommutativity of the cross product, that

${\displaystyle {\begin{alignedat}{2}\mathbf {\color {red}{j}} &\times \mathbf {\color {blue}{i}} &&=-\mathbf {\color {green}{k}} \\\mathbf {\color {green}{k}} &\times \mathbf {\color {red}{j}} &&=-\mathbf {\color {blue}{i}} \\\mathbf {\color {blue}{i}} &\times \mathbf {\color {green}{k}} &&=-\mathbf {\color {red}{j}} \end{alignedat}}}$

The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that

${\displaystyle \mathbf {\color {blue}{i}} \times \mathbf {\color {blue}{i}} =\mathbf {\color {red}{j}} \times \mathbf {\color {red}{j}} =\mathbf {\color {green}{k}} \times \mathbf {\color {green}{k}} =\mathbf {0} }$ (the zero vector).

These equalities, together with the distributivity and linearity of the cross product (though neither follows easily from the definition given above), are sufficient to determine the cross product of any two vectors a and b. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors:

${\displaystyle {\begin{alignedat}{3}\mathbf {a} &=a_{1}\mathbf {\color {blue}{i}} &&+a_{2}\mathbf {\color {red}{j}} &&+a_{3}\mathbf {\color {green}{k}} \\\mathbf {b} &=b_{1}\mathbf {\color {blue}{i}} &&+b_{2}\mathbf {\color {red}{j}} &&+b_{3}\mathbf {\color {green}{k}} \end{alignedat}}}$

Their cross product a × b can be expanded using distributivity:

${\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} ={}&(a_{1}\mathbf {\color {blue}{i}} +a_{2}\mathbf {\color {red}{j}} +a_{3}\mathbf {\color {green}{k}} )\times (b_{1}\mathbf {\color {blue}{i}} +b_{2}\mathbf {\color {red}{j}} +b_{3}\mathbf {\color {green}{k}} )\\={}&a_{1}b_{1}(\mathbf {\color {blue}{i}} \times \mathbf {\color {blue}{i}} )+a_{1}b_{2}(\mathbf {\color {blue}{i}} \times \mathbf {\color {red}{j}} )+a_{1}b_{3}(\mathbf {\color {blue}{i}} \times \mathbf {\color {green}{k}} )+{}\\&a_{2}b_{1}(\mathbf {\color {red}{j}} \times \mathbf {\color {blue}{i}} )+a_{2}b_{2}(\mathbf {\color {red}{j}} \times \mathbf {\color {red}{j}} )+a_{2}b_{3}(\mathbf {\color {red}{j}} \times \mathbf {\color {green}{k}} )+{}\\&a_{3}b_{1}(\mathbf {\color {green}{k}} \times \mathbf {\color {blue}{i}} )+a_{3}b_{2}(\mathbf {\color {green}{k}} \times \mathbf {\color {red}{j}} )+a_{3}b_{3}(\mathbf {\color {green}{k}} \times \mathbf {\color {green}{k}} )\\\end{aligned}}}$

This can be interpreted as the decomposition of a × b into the sum of nine simpler cross products involving vectors aligned with i, j, or k. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentioned equalities and collecting similar terms, we obtain:

${\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} ={}&\quad \ a_{1}b_{1}\mathbf {0} +a_{1}b_{2}\mathbf {\color {green}{k}} -a_{1}b_{3}\mathbf {\color {red}{j}} \\&-a_{2}b_{1}\mathbf {\color {green}{k}} +a_{2}b_{2}\mathbf {0} +a_{2}b_{3}\mathbf {\color {blue}{i}} \\&+a_{3}b_{1}\mathbf {\color {red}{j}} \ -a_{3}b_{2}\mathbf {\color {blue}{i}} \ +a_{3}b_{3}\mathbf {0} \\={}&(a_{2}b_{3}-a_{3}b_{2})\mathbf {\color {blue}{i}} +(a_{3}b_{1}-a_{1}b_{3})\mathbf {\color {red}{j}} +(a_{1}b_{2}-a_{2}b_{1})\mathbf {\color {green}{k}} \\\end{aligned}}}$

meaning that the three scalar components of the resulting vector s = s1i + s2j + s3k = a × b are

${\displaystyle {\begin{aligned}s_{1}&=a_{2}b_{3}-a_{3}b_{2}\\s_{2}&=a_{3}b_{1}-a_{1}b_{3}\\s_{3}&=a_{1}b_{2}-a_{2}b_{1}\end{aligned}}}$

Using column vectors, we can represent the same result as follows:

${\displaystyle {\begin{bmatrix}s_{1}\\s_{2}\\s_{3}\end{bmatrix}}={\begin{bmatrix}a_{2}b_{3}-a_{3}b_{2}\\a_{3}b_{1}-a_{1}b_{3}\\a_{1}b_{2}-a_{2}b_{1}\end{bmatrix}}}$
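Translated directly into code, the three component formulas give a complete implementation. A short Python sketch (the function name `cross` is illustrative) that also confirms the basis identities from earlier in this section:

```python
def cross(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    # The three scalar components s1, s2, s3 from the derivation above.
    return (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Cyclic basis identities and one anticommutative counterpart.
assert cross(i, j) == k and cross(j, k) == i and cross(k, i) == j
assert cross(j, i) == (0, 0, -1)

# A worked example.
assert cross((2, 3, 4), (5, 6, 7)) == (-3, 6, -3)
```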

### Matrix notation

Use of Sarrus's rule to find the cross product of a and b

The cross product can also be expressed as the formal determinant:

${\displaystyle \mathbf {a\times b} ={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\a_{1}&a_{2}&a_{3}\\b_{1}&b_{2}&b_{3}\\\end{vmatrix}}}$

This determinant can be computed using Sarrus's rule or cofactor expansion. Using Sarrus's rule, it expands to

${\displaystyle {\begin{aligned}\mathbf {a\times b} &=(a_{2}b_{3}\mathbf {i} +a_{3}b_{1}\mathbf {j} +a_{1}b_{2}\mathbf {k} )-(a_{3}b_{2}\mathbf {i} +a_{1}b_{3}\mathbf {j} +a_{2}b_{1}\mathbf {k} )\\&=(a_{2}b_{3}-a_{3}b_{2})\mathbf {i} +(a_{3}b_{1}-a_{1}b_{3})\mathbf {j} +(a_{1}b_{2}-a_{2}b_{1})\mathbf {k} .\end{aligned}}}$

Using cofactor expansion along the first row instead, it expands to

${\displaystyle {\begin{aligned}\mathbf {a\times b} &={\begin{vmatrix}a_{2}&a_{3}\\b_{2}&b_{3}\end{vmatrix}}\mathbf {i} -{\begin{vmatrix}a_{1}&a_{3}\\b_{1}&b_{3}\end{vmatrix}}\mathbf {j} +{\begin{vmatrix}a_{1}&a_{2}\\b_{1}&b_{2}\end{vmatrix}}\mathbf {k} \\&=(a_{2}b_{3}-a_{3}b_{2})\mathbf {i} -(a_{1}b_{3}-a_{3}b_{1})\mathbf {j} +(a_{1}b_{2}-a_{2}b_{1})\mathbf {k} ,\end{aligned}}}$

which gives the components of the resulting vector directly.

### Using Levi-Civita tensors

• In any basis, the cross product ${\displaystyle a\times b}$ is given by the tensorial formula ${\displaystyle E_{ijk}a^{i}b^{j}}$, where ${\displaystyle E_{ijk}}$ is the covariant Levi-Civita tensor (note the position of the indices). This corresponds to the intrinsic definition given above.
• In an orthonormal basis having the same orientation as the space, ${\displaystyle a\times b}$ is given by the pseudo-tensorial formula ${\displaystyle \varepsilon _{ijk}a^{i}b^{j}}$ where ${\displaystyle \varepsilon _{ijk}}$ is the Levi-Civita symbol (which is a pseudo-tensor). This is the formula used in everyday physics, but it works only for this special choice of basis.
• In any orthonormal basis, ${\displaystyle a\times b}$ is given by the pseudo-tensorial formula ${\displaystyle (-1)^{B}\varepsilon _{ijk}a^{i}b^{j}}$ where ${\displaystyle (-1)^{B}=\pm 1}$ indicates whether the basis has the same orientation as the space or not.

The latter formula avoids having to change the orientation of the space when we invert an orthonormal basis.
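As a sketch of the second bullet (a positively oriented orthonormal basis), the Levi-Civita symbol can be coded directly; the names `epsilon` and `cross_levi_civita` are illustrative, and 0-based indices replace the article's 1-based ones.

```python
def epsilon(i, j, k):
    # Levi-Civita symbol with 0-based indices: +1 for even permutations
    # of (0, 1, 2), -1 for odd permutations, 0 when any index repeats.
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def cross_levi_civita(a, b):
    # (a x b)_k = epsilon_ijk a^i b^j, summed over i and j.
    return tuple(sum(epsilon(i, j, k)*a[i]*b[j]
                     for i in range(3) for j in range(3))
                 for k in range(3))

assert cross_levi_civita((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
assert cross_levi_civita((2, 3, 4), (5, 6, 7)) == (-3, 6, -3)
```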

## Properties

### Geometric meaning

Figure 1. The area of a parallelogram as the magnitude of a cross product
Figure 2. Three vectors defining a parallelepiped

The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1):

${\displaystyle \left\|\mathbf {a} \times \mathbf {b} \right\|=\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\||\sin \theta |.}$

Indeed, one can also compute the volume V of a parallelepiped having a, b and c as edges by using a combination of a cross product and a dot product, called scalar triple product (see Figure 2):

${\displaystyle \mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )=\mathbf {b} \cdot (\mathbf {c} \times \mathbf {a} )=\mathbf {c} \cdot (\mathbf {a} \times \mathbf {b} ).}$

Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value:

${\displaystyle V=|\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )|.}$
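The volume interpretation is easy to verify numerically; here is a short Python sketch (helper names are illustrative) using a parallelepiped whose volume is known by inspection.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b, c = (1, 0, 0), (1, 2, 0), (1, 1, 3)

# The scalar triple product is invariant under cyclic permutation...
assert dot(a, cross(b, c)) == dot(b, cross(c, a)) == dot(c, cross(a, b))

# ...and its absolute value is the volume of the parallelepiped
# (here base 1 x 2 in the xy-plane, height 3).
volume = abs(dot(a, cross(b, c)))
assert volume == 6
```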

Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of perpendicularity in the same way that the dot product is a measure of parallelism. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel.

Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive).

### Algebraic properties

Cross product scalar multiplication. Left: Decomposition of b into components parallel and perpendicular to a. Right: Scaling of the perpendicular components by a positive real number r (if negative, b and the cross product are reversed).
Cross product distributivity over vector addition. Left: The vectors b and c are resolved into parallel and perpendicular components to a. Right: The parallel components vanish in the cross product, only the perpendicular components shown in the plane perpendicular to a remain.
The two nonequivalent triple cross products of three vectors a, b, c. In each case, two vectors define a plane, the other is out of the plane and can be split into parallel and perpendicular components to the cross product of the vectors defining the plane. These components can be found by vector projection and rejection. The triple product is in the plane and is rotated as shown.

If the cross product of two vectors is the zero vector (that is, a × b = 0), then either one or both of the inputs is the zero vector, (a = 0 or b = 0) or else they are parallel or antiparallel (a ∥ b) so that the sine of the angle between them is zero (θ = 0° or θ = 180° and sin θ = 0).

The self cross product of a vector is the zero vector:

${\displaystyle \mathbf {a} \times \mathbf {a} =\mathbf {0} .}$

The cross product is anticommutative,

${\displaystyle \mathbf {a} \times \mathbf {b} =-(\mathbf {b} \times \mathbf {a} ),}$

distributive over addition,

${\displaystyle \mathbf {a} \times (\mathbf {b} +\mathbf {c} )=(\mathbf {a} \times \mathbf {b} )+(\mathbf {a} \times \mathbf {c} ),}$

and compatible with scalar multiplication so that

${\displaystyle (r\,\mathbf {a} )\times \mathbf {b} =\mathbf {a} \times (r\,\mathbf {b} )=r\,(\mathbf {a} \times \mathbf {b} ).}$

It is not associative, but satisfies the Jacobi identity:

${\displaystyle \mathbf {a} \times (\mathbf {b} \times \mathbf {c} )+\mathbf {b} \times (\mathbf {c} \times \mathbf {a} )+\mathbf {c} \times (\mathbf {a} \times \mathbf {b} )=\mathbf {0} .}$

Distributivity, linearity and Jacobi identity show that the R3 vector space together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3). The cross product does not obey the cancellation law; that is, a × b = a × c with a ≠ 0 does not imply b = c, but only that:

${\displaystyle {\begin{aligned}\mathbf {0} &=(\mathbf {a} \times \mathbf {b} )-(\mathbf {a} \times \mathbf {c} )\\&=\mathbf {a} \times (\mathbf {b} -\mathbf {c} ).\\\end{aligned}}}$

This can be the case where b and c cancel, but additionally where a and b − c are parallel; that is, they are related by a scale factor t, leading to:

${\displaystyle \mathbf {c} =\mathbf {b} +t\,\mathbf {a} ,}$

for some scalar t.
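The failure of cancellation is easy to demonstrate concretely; a Python sketch (helper name `cross` is illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a = (1, 2, 3)
b = (4, 5, 6)
t = 7

# c = b + t a differs from b, yet has the same cross product with a,
# because a x (t a) = 0: the cancellation law fails.
c = tuple(bi + t*ai for ai, bi in zip(a, b))
assert c != b
assert cross(a, c) == cross(a, b)
```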

If, in addition to a × b = a × c and a ≠ 0 as above, it is the case that a · b = a · c, then

${\displaystyle {\begin{aligned}\mathbf {a} \times (\mathbf {b} -\mathbf {c} )&=\mathbf {0} \\\mathbf {a} \cdot (\mathbf {b} -\mathbf {c} )&=0,\end{aligned}}}$

As b − c cannot be simultaneously parallel (for the cross product to be 0) and perpendicular (for the dot product to be 0) to a, it must be the case that b and c cancel: b = c.

From the geometrical definition, the cross product is invariant under proper rotations about the axis defined by a × b. In formulae:

${\displaystyle (R\mathbf {a} )\times (R\mathbf {b} )=R(\mathbf {a} \times \mathbf {b} )}$, where ${\displaystyle R}$ is a rotation matrix with ${\displaystyle \det(R)=1}$.

More generally, the cross product obeys the following identity under matrix transformations:

${\displaystyle (M\mathbf {a} )\times (M\mathbf {b} )=(\det M)\left(M^{-1}\right)^{\mathrm {T} }(\mathbf {a} \times \mathbf {b} )=\operatorname {cof} M(\mathbf {a} \times \mathbf {b} )}$

where ${\displaystyle M}$ is a 3-by-3 matrix and ${\displaystyle \left(M^{-1}\right)^{\mathrm {T} }}$ is the transpose of the inverse and ${\displaystyle \operatorname {cof} }$ is the cofactor matrix. It can be readily seen how this formula reduces to the former one if ${\displaystyle M}$ is a rotation matrix.

The cross product of two vectors lies in the null space of the 2 × 3 matrix with the vectors as rows:

${\displaystyle \mathbf {a} \times \mathbf {b} \in NS\left({\begin{bmatrix}\mathbf {a} \\\mathbf {b} \end{bmatrix}}\right).}$

For the sum of two cross products, the following identity holds:

${\displaystyle \mathbf {a} \times \mathbf {b} +\mathbf {c} \times \mathbf {d} =(\mathbf {a} -\mathbf {c} )\times (\mathbf {b} -\mathbf {d} )+\mathbf {a} \times \mathbf {d} +\mathbf {c} \times \mathbf {b} .}$
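This identity follows from bilinearity (expanding (a − c) × (b − d) cancels the cross terms), and it can be spot-checked in a few lines of Python (helper names are illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def sub(u, v):
    return tuple(x - y for x, y in zip(u, v))

a, b, c, d = (1, 2, 3), (4, 5, 6), (7, 8, 10), (2, 0, 1)

# a x b + c x d = (a - c) x (b - d) + a x d + c x b
lhs = add(cross(a, b), cross(c, d))
rhs = add(add(cross(sub(a, c), sub(b, d)), cross(a, d)), cross(c, b))
assert lhs == rhs
```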

### Differentiation

The product rule of differential calculus applies to any bilinear operation, and therefore also to the cross product:

${\displaystyle {\frac {d}{dt}}(\mathbf {a} \times \mathbf {b} )={\frac {d\mathbf {a} }{dt}}\times \mathbf {b} +\mathbf {a} \times {\frac {d\mathbf {b} }{dt}},}$

where a and b are vectors that depend on the real variable t.
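The product rule can be confirmed against a numerical derivative; the curves `a(t)` and `b(t)` below are hypothetical examples chosen for this sketch, not taken from the article.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Hypothetical smooth curves and their exact derivatives.
def a(t):  return (math.cos(t), math.sin(t), t)
def da(t): return (-math.sin(t), math.cos(t), 1.0)
def b(t):  return (t, t*t, 1.0)
def db(t): return (1.0, 2*t, 0.0)

t, h = 0.7, 1e-6

# Central difference of a(t) x b(t)...
num = tuple((p - m)/(2*h) for p, m in zip(cross(a(t + h), b(t + h)),
                                          cross(a(t - h), b(t - h))))
# ...versus the product rule a' x b + a x b'.
exact = tuple(p + q for p, q in zip(cross(da(t), b(t)), cross(a(t), db(t))))
assert all(abs(n - e) < 1e-6 for n, e in zip(num, exact))
```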

### Triple product expansion

The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as

${\displaystyle \mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ),}$

It is the signed volume of the parallelepiped with edges a, b, and c, and as such the vectors can be used in any order that is an even permutation of the above ordering. The following therefore are equal:

${\displaystyle \mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )=\mathbf {b} \cdot (\mathbf {c} \times \mathbf {a} )=\mathbf {c} \cdot (\mathbf {a} \times \mathbf {b} ),}$

The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula

${\displaystyle \mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=\mathbf {b} (\mathbf {a} \cdot \mathbf {c} )-\mathbf {c} (\mathbf {a} \cdot \mathbf {b} ).}$

The mnemonic "BAC minus CAB" is used to remember the order of the vectors on the right-hand side. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is

${\displaystyle {\begin{aligned}\nabla \times (\nabla \times \mathbf {f} )&=\nabla (\nabla \cdot \mathbf {f} )-(\nabla \cdot \nabla )\mathbf {f} \\&=\nabla (\nabla \cdot \mathbf {f} )-\nabla ^{2}\mathbf {f} ,\\\end{aligned}}}$

where ∇2 is the vector Laplacian operator.
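The bac−cab expansion itself is easy to spot-check numerically; a Python sketch with illustrative helper names:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 10)

# a x (b x c) = b (a . c) - c (a . b): "BAC minus CAB".
bac = tuple(dot(a, c)*bi for bi in b)
cab = tuple(dot(a, b)*ci for ci in c)
lhs = cross(a, cross(b, c))
rhs = tuple(p - q for p, q in zip(bac, cab))
assert lhs == rhs
```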

Other identities relate the cross product to the scalar triple product:

${\displaystyle {\begin{aligned}(\mathbf {a} \times \mathbf {b} )\times (\mathbf {a} \times \mathbf {c} )&=(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ))\mathbf {a} \\(\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} )&=\mathbf {b} ^{\mathrm {T} }\left(\left(\mathbf {c} ^{\mathrm {T} }\mathbf {a} \right)I-\mathbf {c} \mathbf {a} ^{\mathrm {T} }\right)\mathbf {d} \\&=(\mathbf {a} \cdot \mathbf {c} )(\mathbf {b} \cdot \mathbf {d} )-(\mathbf {a} \cdot \mathbf {d} )(\mathbf {b} \cdot \mathbf {c} )\end{aligned}}}$

where I is the identity matrix.

### Alternative formulation

The cross product and the dot product are related by:

${\displaystyle \left\|\mathbf {a} \times \mathbf {b} \right\|^{2}=\left\|\mathbf {a} \right\|^{2}\left\|\mathbf {b} \right\|^{2}-(\mathbf {a} \cdot \mathbf {b} )^{2}.}$

The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as:

${\displaystyle \mathbf {a\cdot b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta ,}$

the above given relationship can be rewritten as follows:

${\displaystyle \left\|\mathbf {a\times b} \right\|^{2}=\left\|\mathbf {a} \right\|^{2}\left\|\mathbf {b} \right\|^{2}\left(1-\cos ^{2}\theta \right).}$

Invoking the Pythagorean trigonometric identity one obtains:

${\displaystyle \left\|\mathbf {a} \times \mathbf {b} \right\|=\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\left|\sin \theta \right|,}$

which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b.

The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product.
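The norm relation at the heart of this formulation can be checked with integer arithmetic; a Python sketch (helper names are illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b = (1, 2, 3), (4, 5, 6)

# ||a x b||^2 = ||a||^2 ||b||^2 - (a . b)^2  (the Gram determinant).
lhs = dot(cross(a, b), cross(a, b))
rhs = dot(a, a)*dot(b, b) - dot(a, b)**2
assert lhs == rhs == 54
```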

### Lagrange's identity

The relation:

${\displaystyle \left\|\mathbf {a} \times \mathbf {b} \right\|^{2}\equiv \det {\begin{bmatrix}\mathbf {a} \cdot \mathbf {a} &\mathbf {a} \cdot \mathbf {b} \\\mathbf {a} \cdot \mathbf {b} &\mathbf {b} \cdot \mathbf {b} \\\end{bmatrix}}\equiv \left\|\mathbf {a} \right\|^{2}\left\|\mathbf {b} \right\|^{2}-(\mathbf {a} \cdot \mathbf {b} )^{2}.}$

can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as:

${\displaystyle \sum _{1\leq i<j\leq n}\left(a_{i}b_{j}-a_{j}b_{i}\right)^{2}\equiv \left\|\mathbf {a} \right\|^{2}\left\|\mathbf {b} \right\|^{2}-(\mathbf {a} \cdot \mathbf {b} )^{2},}$

where a and b may be n-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components:

${\displaystyle {\begin{aligned}\left\|\mathbf {a} \times \mathbf {b} \right\|^{2}&\equiv \sum _{1\leq i<j\leq 3}\left(a_{i}b_{j}-a_{j}b_{i}\right)^{2}\\&\equiv (a_{1}b_{2}-a_{2}b_{1})^{2}+(a_{2}b_{3}-a_{3}b_{2})^{2}+(a_{3}b_{1}-a_{1}b_{3})^{2}.\end{aligned}}}$

The same result is found directly using the components of the cross product found from:

${\displaystyle \mathbf {a} \times \mathbf {b} \equiv \det {\begin{bmatrix}{\hat {\mathbf {i} }}&{\hat {\mathbf {j} }}&{\hat {\mathbf {k} }}\\a_{1}&a_{2}&a_{3}\\b_{1}&b_{2}&b_{3}\\\end{bmatrix}}.}$

In R3, Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra.

It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case of the Binet–Cauchy identity:

${\displaystyle (\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} )\equiv (\mathbf {a} \cdot \mathbf {c} )(\mathbf {b} \cdot \mathbf {d} )-(\mathbf {a} \cdot \mathbf {d} )(\mathbf {b} \cdot \mathbf {c} ).}$

If a = c and b = d this simplifies to the formula above.

### Infinitesimal generators of rotations

The cross product conveniently describes the infinitesimal generators of rotations in R3. Specifically, if n is a unit vector in R3 and R(φ, n) denotes a rotation about the axis through the origin specified by n, with angle φ (measured in radians, counterclockwise when viewed from the tip of n), then

${\displaystyle \left.{d \over d\phi }\right|_{\phi =0}R(\phi ,{\boldsymbol {n}}){\boldsymbol {x}}={\boldsymbol {n}}\times {\boldsymbol {x}}}$

for every vector x in R3. The cross product with n therefore describes the infinitesimal generator of the rotations about n. These infinitesimal generators form the Lie algebra so(3) of the rotation group SO(3), and we obtain the result that the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3).
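This generator property can be checked numerically by differentiating an explicit rotation; the sketch below uses Rodrigues' rotation formula (an assumption of this sketch, not derived in this section) for a unit axis n.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def rotate(phi, n, x):
    # Rodrigues' rotation formula, valid for a unit axis n:
    # R(phi, n) x = x cos(phi) + (n x x) sin(phi) + n (n . x)(1 - cos(phi)).
    c, s = math.cos(phi), math.sin(phi)
    n_x = cross(n, x)
    return tuple(c*xi + s*ci + (1 - c)*dot(n, x)*ni
                 for xi, ci, ni in zip(x, n_x, n))

n = (0.0, 0.0, 1.0)   # rotation axis
x = (1.0, 2.0, 3.0)
h = 1e-6

# d/dphi R(phi, n) x at phi = 0, by central difference, equals n x x.
deriv = tuple((p - m)/(2*h) for p, m in zip(rotate(h, n, x), rotate(-h, n, x)))
assert all(abs(d - e) < 1e-6 for d, e in zip(deriv, cross(n, x)))
```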

## Alternative ways to compute

### Conversion to matrix multiplication

The vector cross product also can be expressed as the product of a skew-symmetric matrix and a vector:

${\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} =[\mathbf {a} ]_{\times }\mathbf {b} &={\begin{bmatrix}\,0&\!-a_{3}&\,\,a_{2}\\\,\,a_{3}&0&\!-a_{1}\\-a_{2}&\,\,a_{1}&\,0\end{bmatrix}}{\begin{bmatrix}b_{1}\\b_{2}\\b_{3}\end{bmatrix}}\\\mathbf {a} \times \mathbf {b} ={[\mathbf {b} ]_{\times }}^{\mathrm {\!\!T} }\mathbf {a} &={\begin{bmatrix}\,0&\,\,b_{3}&\!-b_{2}\\-b_{3}&0&\,\,b_{1}\\\,\,b_{2}&\!-b_{1}&\,0\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}},\end{aligned}}}$

where superscript T refers to the transpose operation, and [a]× is defined by:

${\displaystyle [\mathbf {a} ]_{\times }{\stackrel {\rm {def}}{=}}{\begin{bmatrix}\,\,0&\!-a_{3}&\,\,\,a_{2}\\\,\,\,a_{3}&0&\!-a_{1}\\\!-a_{2}&\,\,a_{1}&\,\,0\end{bmatrix}}.}$

The columns [a]×,i of the skew-symmetric matrix for a vector a can also be obtained by calculating the cross product with unit vectors. That is,

${\displaystyle [\mathbf {a} ]_{\times ,i}=\mathbf {a} \times \mathbf {{\hat {e}}_{i}} ,\;i\in \{1,2,3\}}$

or

${\displaystyle [\mathbf {a} ]_{\times }=\sum _{i=1}^{3}\left(\mathbf {a} \times \mathbf {{\hat {e}}_{i}} \right)\otimes \mathbf {{\hat {e}}_{i}} ,}$

where ${\displaystyle \otimes }$ is the outer product operator.
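The matrix form is straightforward to verify in code; a Python sketch with illustrative helpers (`skew` builds [a]×):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    # [a]_x, the skew-symmetric matrix defined above.
    a1, a2, a3 = a
    return [[0, -a3, a2],
            [a3, 0, -a1],
            [-a2, a1, 0]]

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(3)) for i in range(3))

a, b = (1, 2, 3), (4, 5, 6)

# [a]_x b reproduces the cross product a x b.
assert matvec(skew(a), b) == cross(a, b)
```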

Also, if a is itself expressed as a cross product:

${\displaystyle \mathbf {a} =\mathbf {c} \times \mathbf {d} }$

then

${\displaystyle [\mathbf {a} ]_{\times }=\mathbf {d} \mathbf {c} ^{\mathrm {T} }-\mathbf {c} \mathbf {d} ^{\mathrm {T} }.}$

**Proof by substitution**

Evaluation of the cross product gives

${\displaystyle \mathbf {a} =\mathbf {c} \times \mathbf {d} ={\begin{pmatrix}c_{2}d_{3}-c_{3}d_{2}\\c_{3}d_{1}-c_{1}d_{3}\\c_{1}d_{2}-c_{2}d_{1}\end{pmatrix}}}$

Hence, the left hand side equals

${\displaystyle [\mathbf {a} ]_{\times }={\begin{bmatrix}0&c_{2}d_{1}-c_{1}d_{2}&c_{3}d_{1}-c_{1}d_{3}\\c_{1}d_{2}-c_{2}d_{1}&0&c_{3}d_{2}-c_{2}d_{3}\\c_{1}d_{3}-c_{3}d_{1}&c_{2}d_{3}-c_{3}d_{2}&0\end{bmatrix}}}$

Now, for the right hand side,

${\displaystyle \mathbf {c} \mathbf {d} ^{\mathrm {T} }={\begin{bmatrix}c_{1}d_{1}&c_{1}d_{2}&c_{1}d_{3}\\c_{2}d_{1}&c_{2}d_{2}&c_{2}d_{3}\\c_{3}d_{1}&c_{3}d_{2}&c_{3}d_{3}\end{bmatrix}}}$

And its transpose is

${\displaystyle \mathbf {d} \mathbf {c} ^{\mathrm {T} }={\begin{bmatrix}c_{1}d_{1}&c_{2}d_{1}&c_{3}d_{1}\\c_{1}d_{2}&c_{2}d_{2}&c_{3}d_{2}\\c_{1}d_{3}&c_{2}d_{3}&c_{3}d_{3}\end{bmatrix}}}$

Evaluation of the right hand side gives

${\displaystyle \mathbf {d} \mathbf {c} ^{\mathrm {T} }-\mathbf {c} \mathbf {d} ^{\mathrm {T} }={\begin{bmatrix}0&c_{2}d_{1}-c_{1}d_{2}&c_{3}d_{1}-c_{1}d_{3}\\c_{1}d_{2}-c_{2}d_{1}&0&c_{3}d_{2}-c_{2}d_{3}\\c_{1}d_{3}-c_{3}d_{1}&c_{2}d_{3}-c_{3}d_{2}&0\end{bmatrix}}}$

Comparison shows that the left hand side equals the right hand side.

This result can be generalized to higher dimensions using geometric algebra. In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors.

This notation is also often much easier to work with, for example, in epipolar geometry.

From the general properties of the cross product it follows immediately that

${\displaystyle [\mathbf {a} ]_{\times }\,\mathbf {a} =\mathbf {0} }$   and   ${\displaystyle \mathbf {a} ^{\mathrm {T} }\,[\mathbf {a} ]_{\times }=\mathbf {0} }$

and from the fact that [a]× is skew-symmetric it follows that

${\displaystyle \mathbf {b} ^{\mathrm {T} }\,[\mathbf {a} ]_{\times }\,\mathbf {b} =0.}$

The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation.

As mentioned above, the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The map a → [a]× provides an isomorphism between R3 and so(3). Under this map, the cross product of 3-vectors corresponds to the commutator of 3×3 skew-symmetric matrices.
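The correspondence between the cross product and the matrix commutator can be verified directly; a Python sketch (helper names are illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    a1, a2, a3 = a
    return [[0, -a3, a2],
            [a3, 0, -a1],
            [-a2, a1, 0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

a, b = (1, 2, 3), (4, 5, 6)

# The commutator [a]_x [b]_x - [b]_x [a]_x equals [a x b]_x.
comm = matsub(matmul(skew(a), skew(b)), matmul(skew(b), skew(a)))
assert comm == skew(cross(a, b))
```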

**Matrix conversion for cross product with canonical base vectors**

Denoting with ${\displaystyle \mathbf {e} _{i}\in \mathbf {R} ^{3\times 1}}$ the ${\displaystyle i}$-th canonical base vector, the cross product of a generic vector ${\displaystyle \mathbf {v} \in \mathbf {R} ^{3\times 1}}$ with ${\displaystyle \mathbf {e} _{i}}$ is given by: ${\displaystyle \mathbf {v} \times \mathbf {e} _{i}=\mathbf {C} _{i}\mathbf {v} }$, where

${\displaystyle \mathbf {C} _{1}={\begin{bmatrix}0&0&0\\0&0&1\\0&-1&0\end{bmatrix}},\quad \mathbf {C} _{2}={\begin{bmatrix}0&0&-1\\0&0&0\\1&0&0\end{bmatrix}},\quad \mathbf {C} _{3}={\begin{bmatrix}0&1&0\\-1&0&0\\0&0&0\end{bmatrix}}}$

These matrices share the following properties:

• ${\displaystyle \mathbf {C} _{i}^{\textrm {T}}=-\mathbf {C} _{i}}$ (skew-symmetric);
• Both trace and determinant are zero;
• ${\displaystyle {\text{rank}}(\mathbf {C} _{i})=2}$;
• ${\displaystyle \mathbf {C} _{i}\mathbf {C} _{i}^{\textrm {T}}=\mathbf {P} _{\mathbf {e} _{i}}^{^{\perp }}}$ (see below);

The orthogonal projection matrix of a vector ${\displaystyle \mathbf {v} \neq \mathbf {0} }$ is given by ${\displaystyle \mathbf {P} _{\mathbf {v} }=\mathbf {v} \left(\mathbf {v} ^{\textrm {T}}\mathbf {v} \right)^{-1}\mathbf {v} ^{T}}$. The projection matrix onto the orthogonal complement is given by ${\displaystyle \mathbf {P} _{\mathbf {v} }^{^{\perp }}=\mathbf {I} -\mathbf {P} _{\mathbf {v} }}$, where ${\displaystyle \mathbf {I} }$ is the identity matrix. For the special case of ${\displaystyle \mathbf {v} =\mathbf {e} _{i}}$, it can be verified that

${\displaystyle \mathbf {P} _{\mathbf {e} _{1}}^{^{\perp }}={\begin{bmatrix}0&0&0\\0&1&0\\0&0&1\end{bmatrix}},\quad \mathbf {P} _{\mathbf {e} _{2}}^{^{\perp }}={\begin{bmatrix}1&0&0\\0&0&0\\0&0&1\end{bmatrix}},\quad \mathbf {P} _{\mathbf {e} _{3}}^{^{\perp }}={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}}}$

For other properties of orthogonal projection matrices, see projection (linear algebra).
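Both the defining relation v × eᵢ = Cᵢv and the projector property Cᵢ Cᵢᵀ = P⊥ can be confirmed for i = 1; a Python sketch with illustrative helpers:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(3)) for i in range(3))

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

C1 = [[0, 0, 0], [0, 0, 1], [0, -1, 0]]

# v x e_1 = C_1 v for an arbitrary v...
v, e1 = (1, 2, 3), (1, 0, 0)
assert matvec(C1, v) == cross(v, e1)

# ...and C_1 C_1^T projects onto the plane orthogonal to e_1.
assert matmul(C1, transpose(C1)) == [[0, 0, 0], [0, 1, 0], [0, 0, 1]]
```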

### Index notation for tensors

The cross product can alternatively be defined in terms of the Levi-Civita tensor Eijk and a dot product ηmi, which are useful in converting vector notation for tensor applications:

${\displaystyle \mathbf {c} =\mathbf {a\times b} \Leftrightarrow \ c^{m}=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}\eta ^{mi}E_{ijk}a^{j}b^{k}}$

where the indices ${\displaystyle i,j,k}$ correspond to vector components. This characterization of the cross product is often expressed more compactly using the Einstein summation convention as

${\displaystyle \mathbf {c} =\mathbf {a\times b} \Leftrightarrow \ c^{m}=\eta ^{mi}E_{ijk}a^{j}b^{k}}$

in which repeated indices are summed over the values 1 to 3.

In a positively oriented orthonormal basis ${\displaystyle \eta ^{mi}=\delta ^{mi}}$ (the Kronecker delta) and ${\displaystyle E_{ijk}=\varepsilon _{ijk}}$ (the Levi-Civita symbol). In that case, this representation is another form of the skew-symmetric representation of the cross product:

${\displaystyle [\varepsilon _{ijk}a^{j}]=[\mathbf {a} ]_{\times }.}$

In classical mechanics, representing the cross product with the Levi-Civita symbol can make mechanical symmetries manifest when physical systems are isotropic. (As an example, consider a particle in a Hooke's-law potential in three-space, free to oscillate in three dimensions; none of these dimensions is "special" in any sense, so the symmetries lie in the cross-product-represented angular momentum, and they are made clear by the above-mentioned Levi-Civita representation.)
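In an orthonormal basis the triple-sum formula reduces to ${\displaystyle c^{i}=\varepsilon _{ijk}a^{j}b^{k}}$, which can be spelled out directly in code. A small illustrative sketch in plain Python; the product formula used for ${\displaystyle \varepsilon _{ijk}}$ is a standard closed form for indices 0, 1, 2.

```python
def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 if any index repeats.
    return (i - j) * (j - k) * (k - i) // 2

def cross_levi_civita(a, b):
    # c^i = sum_{j,k} eps_ijk a^j b^k  (Kronecker-delta metric absorbed)
    return [sum(levi_civita(i, j, k) * a[j] * b[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

print(cross_levi_civita([1, 2, 3], [4, 5, 6]))  # → [-3, 6, -3]
```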

### Mnemonic

Mnemonic to calculate a cross product in vector form

The word "xyzzy" can be used to remember the definition of the cross product.

If

${\displaystyle \mathbf {a} =\mathbf {b} \times \mathbf {c} }$

where:

${\displaystyle \mathbf {a} ={\begin{bmatrix}a_{x}\\a_{y}\\a_{z}\end{bmatrix}},\ \mathbf {b} ={\begin{bmatrix}b_{x}\\b_{y}\\b_{z}\end{bmatrix}},\ \mathbf {c} ={\begin{bmatrix}c_{x}\\c_{y}\\c_{z}\end{bmatrix}}}$

then:

${\displaystyle a_{x}=b_{y}c_{z}-b_{z}c_{y}}$
${\displaystyle a_{y}=b_{z}c_{x}-b_{x}c_{z}}$
${\displaystyle a_{z}=b_{x}c_{y}-b_{y}c_{x}.}$

The second and third equations can be obtained from the first by simply cyclically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either remember the relevant two diagonals of Sarrus's scheme (those containing i), or remember the xyzzy sequence.

Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered.
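The cyclic pattern x → y → z → x behind the mnemonic can be encoded directly: each component ${\displaystyle a_{i}}$ uses the next two indices, ${\displaystyle i+1}$ and ${\displaystyle i+2}$ modulo 3. A purely illustrative pure-Python sketch:

```python
def cross(b, c):
    # a_i = b_{i+1} c_{i+2} - b_{i+2} c_{i+1}, indices taken modulo 3,
    # which reproduces a_x = b_y c_z - b_z c_y and its cyclic rotations.
    return [b[(i + 1) % 3] * c[(i + 2) % 3] - b[(i + 2) % 3] * c[(i + 1) % 3]
            for i in range(3)]

print(cross([1, 0, 0], [0, 1, 0]))  # e_x × e_y → [0, 0, 1]
```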

### Cross visualization

Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula.

If

${\displaystyle \mathbf {a} =\mathbf {b} \times \mathbf {c} }$

then:

${\displaystyle \mathbf {a} ={\begin{bmatrix}b_{x}\\b_{y}\\b_{z}\end{bmatrix}}\times {\begin{bmatrix}c_{x}\\c_{y}\\c_{z}\end{bmatrix}}.}$

If we want to obtain the formula for ${\displaystyle a_{x}}$ we simply drop the ${\displaystyle b_{x}}$ and ${\displaystyle c_{x}}$ from the formula, and take the next two components down:

${\displaystyle a_{x}={\begin{bmatrix}b_{y}\\b_{z}\end{bmatrix}}\times {\begin{bmatrix}c_{y}\\c_{z}\end{bmatrix}}.}$

When doing this for ${\displaystyle a_{y}}$, the next two elements down should "wrap around" the matrix, so that after the z component comes the x component: for ${\displaystyle a_{y}}$ the next two components are z and x (in that order), while for ${\displaystyle a_{z}}$ they are x and y.

${\displaystyle a_{y}={\begin{bmatrix}b_{z}\\b_{x}\end{bmatrix}}\times {\begin{bmatrix}c_{z}\\c_{x}\end{bmatrix}},\ a_{z}={\begin{bmatrix}b_{x}\\b_{y}\end{bmatrix}}\times {\begin{bmatrix}c_{x}\\c_{y}\end{bmatrix}}}$

For ${\displaystyle a_{x}}$, then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and multiply it by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This yields the formula for ${\displaystyle a_{x}}$:

${\displaystyle a_{x}=b_{y}c_{z}-b_{z}c_{y}.}$

We can do this in the same way for ${\displaystyle a_{y}}$ and ${\displaystyle a_{z}}$ to construct their associated formulas.

## Applications

The cross product has applications in various contexts. For example, it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows.

### Computational geometry

The cross product appears in the calculation of the distance between two skew lines (lines not in the same plane) in three-dimensional space.

The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes) using the cross product to keep track of the sign of each angle.

In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points ${\displaystyle p_{1}=(x_{1},y_{1}),p_{2}=(x_{2},y_{2})}$ and ${\displaystyle p_{3}=(x_{3},y_{3})}$. It corresponds to the direction (upward or downward) of the cross product of the two coplanar vectors defined by the two pairs of points ${\displaystyle (p_{1},p_{2})}$ and ${\displaystyle (p_{1},p_{3})}$. The sign of the acute angle is the sign of the expression

${\displaystyle P=(x_{2}-x_{1})(y_{3}-y_{1})-(y_{2}-y_{1})(x_{3}-x_{1}),}$

which is the signed length of the cross product of the two vectors.

In a right-handed coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around ${\displaystyle p_{1}}$ from ${\displaystyle p_{2}}$ to ${\displaystyle p_{3}}$; otherwise a negative angle. From another point of view, the sign of ${\displaystyle P}$ tells whether ${\displaystyle p_{3}}$ lies to the left or to the right of the line through ${\displaystyle p_{1}}$ and ${\displaystyle p_{2}}$.
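The orientation test described above is a standard computational-geometry primitive. A minimal pure-Python sketch (the function name is illustrative, not from any particular library):

```python
def orientation(p1, p2, p3):
    # Sign of P = (x2 - x1)(y3 - y1) - (y2 - y1)(x3 - x1):
    # the z-component of the cross product of (p2 - p1) and (p3 - p1).
    P = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if P > 0:
        return "left"        # counter-clockwise turn: p3 left of p1→p2
    if P < 0:
        return "right"       # clockwise turn: p3 right of p1→p2
    return "collinear"

print(orientation((0, 0), (1, 0), (0, 1)))  # → left
print(orientation((0, 0), (1, 0), (2, 0)))  # → collinear
```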

The cross product is used in calculating the volume of a polyhedron such as a tetrahedron or parallelepiped.
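The parallelepiped spanned by vectors a, b, c has volume ${\displaystyle |\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )|}$, and a tetrahedron with the same three edge vectors has one sixth of that volume. An illustrative pure-Python sketch with made-up vectors:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Edge vectors of an axis-aligned box of size 1 x 2 x 3.
a, b, c = [1, 0, 0], [0, 2, 0], [0, 0, 3]
vol_para = abs(dot(a, cross(b, c)))   # scalar triple product → 6
vol_tetra = vol_para / 6              # tetrahedron volume → 1.0
print(vol_para, vol_tetra)
```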

### Angular momentum and torque

The angular momentum L of a particle about a given origin is defined as:

${\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} ,}$

where r is the position vector of the particle relative to the origin and p is the linear momentum of the particle.

In the same way, the moment M of a force FB applied at point B around point A is given as:

${\displaystyle \mathbf {M} _{\mathrm {A} }=\mathbf {r} _{\mathrm {AB} }\times \mathbf {F} _{\mathrm {B} }\,}$

In mechanics the moment of a force is also called torque and written as ${\displaystyle {\boldsymbol {\tau }}}$.

Since position r, linear momentum p and force F are all true vectors, both the angular momentum L and the moment of a force M are pseudovectors or axial vectors.
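A quick numeric illustration of ${\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} }$; the values are made up for the example.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

r = [1, 0, 0]        # position relative to the origin
p = [0, 2, 0]        # linear momentum
L = cross(r, p)      # angular momentum
print(L)             # → [0, 0, 2]: along +z, perpendicular to both r and p
```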

### Rigid body

The cross product frequently appears in the description of rigid motions. Two points P and Q on a rigid body can be related by:

${\displaystyle \mathbf {v} _{P}-\mathbf {v} _{Q}={\boldsymbol {\omega }}\times \left(\mathbf {r} _{P}-\mathbf {r} _{Q}\right)\,}$

where ${\displaystyle \mathbf {r} }$ is the point's position, ${\displaystyle \mathbf {v} }$ is its velocity and ${\displaystyle {\boldsymbol {\omega }}}$ is the body's angular velocity.

Since position ${\displaystyle \mathbf {r} }$ and velocity ${\displaystyle \mathbf {v} }$ are true vectors, the angular velocity ${\displaystyle {\boldsymbol {\omega }}}$ is a pseudovector or axial vector.
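The relation above can be checked on a simple made-up case: a body spinning about the z-axis, with the reference point Q at rest at the origin, so that P moves tangentially.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

omega = [0, 0, 1]                 # angular velocity: 1 rad/s about z
r_P, r_Q = [1, 0, 0], [0, 0, 0]   # positions of P and Q
v_Q = [0, 0, 0]                   # Q is at rest

rel = [r_P[i] - r_Q[i] for i in range(3)]
w = cross(omega, rel)
v_P = [v_Q[i] + w[i] for i in range(3)]   # v_P = v_Q + ω × (r_P − r_Q)
print(v_P)                                # → [0, 1, 0]: tangential motion
```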

### Lorentz force

The cross product is used to describe the Lorentz force experienced by a moving electric charge qe:

${\displaystyle \mathbf {F} =q_{e}\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)}$

Since velocity v, force F and electric field E are all true vectors, the magnetic field B is a pseudovector.
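A minimal numeric sketch of the Lorentz force law (all field and charge values are made up): a positive charge moving along +x through a magnetic field along +z is deflected toward −y.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

q = 1.0                        # charge
E_field = [0.0, 0.0, 0.0]      # no electric field
v = [1.0, 0.0, 0.0]            # velocity along +x
B = [0.0, 0.0, 1.0]            # magnetic field along +z

vxB = cross(v, B)
F = [q * (E_field[i] + vxB[i]) for i in range(3)]   # F = q(E + v × B)
print(F)                                            # → [0.0, -1.0, 0.0]
```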

### Other

In vector calculus, the cross product is used to define the formula for the vector operator curl.

The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.

## As an external product

The cross product in relation to the exterior product. In red are the orthogonal unit vector and the "parallel" unit bivector.

The cross product can be defined in terms of the exterior product. It can be generalized to an external product in dimensions other than three. This view allows for a natural geometric interpretation of the cross product. In exterior algebra the exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge star of the bivector a ∧ b, mapping 2-vectors to vectors:

${\displaystyle a\times b=\star (a\wedge b)\,.}$

This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three dimensions is the result an oriented one-dimensional element – a vector – whereas, for example, in four dimensions the Hodge dual of a bivector is two-dimensional – a bivector. So, only in three dimensions can a vector cross product of a and b be defined as the vector dual to the bivector a ∧ b: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a ∧ b has relative to the unit bivector; precisely the properties described above.
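This correspondence can be made concrete in coordinates. In a positively oriented orthonormal basis, the Hodge star sends each basis bivector to the remaining basis vector:

${\displaystyle \star (\mathbf {e} _{1}\wedge \mathbf {e} _{2})=\mathbf {e} _{3},\quad \star (\mathbf {e} _{2}\wedge \mathbf {e} _{3})=\mathbf {e} _{1},\quad \star (\mathbf {e} _{3}\wedge \mathbf {e} _{1})=\mathbf {e} _{2}.}$

Expanding the exterior product of a and b in this basis gives

${\displaystyle \mathbf {a} \wedge \mathbf {b} =(a_{2}b_{3}-a_{3}b_{2})\,\mathbf {e} _{2}\wedge \mathbf {e} _{3}+(a_{3}b_{1}-a_{1}b_{3})\,\mathbf {e} _{3}\wedge \mathbf {e} _{1}+(a_{1}b_{2}-a_{2}b_{1})\,\mathbf {e} _{1}\wedge \mathbf {e} _{2},}$

so applying the Hodge star term by term recovers exactly the components of a × b.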

## Handedness

### Consistency

When physics laws are written as equations, it is possible to make an arbitrary choice of the coordinate system, including handedness. One should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two polar vectors, one must take into account that the result is an axial vector. Therefore, for consistency, the other side must also be an axial vector. More generally, the result of a cross product may be either a polar vector or an axial vector, depending on the type of its operands (polar vectors or axial vectors). Namely, polar vectors and axial vectors are interrelated in the following ways under application of the cross product:

• polar vector × polar vector = axial vector
• axial vector × axial vector = axial vector
• polar vector × axial vector = polar vector
• axial vector × polar vector = polar vector

or symbolically

• polar × polar = axial
• axial × axial = axial
• polar × axial = polar
• axial × polar = polar

Because the cross product may also be a polar vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a polar vector and the other one is an axial vector. For instance, a vector triple product involving three polar vectors is a polar vector.

A handedness-free approach is possible using exterior algebra.

### The paradox of the orthonormal basis

Let (i, j, k) be an orthonormal basis. The vectors i, j and k do not depend on the orientation of the space; they can even be defined in the absence of any orientation. They therefore cannot be axial vectors. But if i and j are polar vectors, then k is an axial vector, since i × j = k or j × i = k. This is a paradox.

"Axial" and "polar" are physical qualifiers for physical vectors; that is, vectors which represent physical quantities such as the velocity or the magnetic field. The vectors i, j and k are mathematical vectors, neither axial nor polar. In mathematics, the cross product of two vectors is a vector. There is no contradiction.