= Orthogonal Transformations =
In linear algebra, an '''orthogonal transformation''' is a linear transformation ''T'' : ''V'' → ''V'' on a real inner product space ''V'' that preserves the inner product. That is, for each pair ''u'', ''v'' of elements of ''V'', we have

: <math>\langle u,v \rangle = \langle Tu,Tv \rangle \, .</math>

Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases.

Orthogonal transformations are injective: if <math>Tv = 0</math> then <math>0 = \langle Tv,Tv \rangle = \langle v,v \rangle</math>, hence <math>v = 0</math>, so the kernel of <math>T</math> is trivial.

Orthogonal transformations in two- or three-dimensional Euclidean space are rigid rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that reverse the direction front to back, orthogonal to the mirror plane, like (real-world) mirrors do. The matrices corresponding to proper rotations (without reflection) have a determinant of +1. Transformations with reflection are represented by matrices with a determinant of −1. This allows the concept of rotation and reflection to be generalized to higher dimensions.

In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of&nbsp;''V''. The columns of the matrix form another orthonormal basis of&nbsp;''V''.

If an orthogonal transformation is invertible (which is always the case when ''V'' is finite-dimensional) then its inverse is another orthogonal transformation. Its matrix representation is the transpose of the matrix representation of the original transformation.

==Examples==
Consider the inner-product space <math>(\mathbb{R}^2,\langle\cdot,\cdot\rangle)</math> with the standard Euclidean inner product and standard basis. Then, the matrix transformation
:<math>
T = \begin{bmatrix}
\cos(\theta) & -\sin(\theta) \\
\sin(\theta) & \cos(\theta)
\end{bmatrix} : \mathbb{R}^2 \to \mathbb{R}^2
</math>
is orthogonal. To see this, consider
:<math>
\begin{align}
Te_1 = \begin{bmatrix}\cos(\theta) \\ \sin(\theta)\end{bmatrix} && Te_2 = \begin{bmatrix}-\sin(\theta) \\ \cos(\theta)\end{bmatrix}
\end{align}
</math>
Then,
:<math>
\begin{align}
&\langle Te_1,Te_1\rangle = \begin{bmatrix} \cos(\theta) & \sin(\theta) \end{bmatrix} \cdot \begin{bmatrix} \cos(\theta) \\ \sin(\theta) \end{bmatrix} = \cos^2(\theta) + \sin^2(\theta) = 1\\
&\langle Te_1,Te_2\rangle = \begin{bmatrix} \cos(\theta) & \sin(\theta) \end{bmatrix} \cdot \begin{bmatrix} -\sin(\theta) \\ \cos(\theta) \end{bmatrix} = \sin(\theta)\cos(\theta) - \sin(\theta)\cos(\theta) = 0\\
&\langle Te_2,Te_2\rangle = \begin{bmatrix} -\sin(\theta) & \cos(\theta)  \end{bmatrix} \cdot \begin{bmatrix} -\sin(\theta) \\ \cos(\theta) \end{bmatrix} = \sin^2(\theta) + \cos^2(\theta) = 1\\
\end{align}
</math>
Since ''T'' preserves the inner products of the basis vectors, bilinearity of the inner product implies that ''T'' preserves the inner product of every pair of vectors, so ''T'' is orthogonal.
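
This preservation of inner products is also easy to check numerically. The following is a minimal sketch (assuming NumPy is available; the angle and the random vectors are arbitrary choices) that reproduces the computation above and additionally tests a pair of non-basis vectors:
<syntaxhighlight lang="python">
import numpy as np

theta = np.pi / 6   # arbitrary angle for the check
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Inner products of the images of the standard basis, as computed above
e1, e2 = np.eye(2)
print(np.dot(T @ e1, T @ e1))   # ≈ 1
print(np.dot(T @ e1, T @ e2))   # ≈ 0
print(np.dot(T @ e2, T @ e2))   # ≈ 1

# By bilinearity the same holds for arbitrary vectors u, v
rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(np.dot(T @ u, T @ v), np.dot(u, v)))   # True
</syntaxhighlight>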
The previous example can be extended to construct orthogonal transformations in higher dimensions. For example, the following matrices define orthogonal transformations (rotations about the coordinate axes) on <math>(\mathbb{R}^3,\langle\cdot,\cdot\rangle)</math>:
:<math>
\begin{bmatrix}
\cos(\theta) & -\sin(\theta) & 0 \\
\sin(\theta) & \cos(\theta) & 0 \\
0 & 0 & 1
\end{bmatrix},
\begin{bmatrix}
\cos(\theta) & 0 & -\sin(\theta) \\
0 & 1 & 0 \\
\sin(\theta) & 0 & \cos(\theta)
\end{bmatrix},
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(\theta) & -\sin(\theta) \\
0 & \sin(\theta) & \cos(\theta)
\end{bmatrix}
</math>

= Orthogonal Matrix =
In linear algebra, an '''orthogonal matrix''', or '''orthonormal matrix''', is a real square matrix whose columns and rows are orthonormal vectors.

One way to express this is
<math display="block">Q^\mathrm{T} Q = Q Q^\mathrm{T} = I,</math>
where {{math|''Q''<sup>T</sup>}} is the transpose of {{mvar|Q}} and {{mvar|I}} is the identity matrix.

This leads to the equivalent characterization: a matrix {{mvar|Q}} is orthogonal if its transpose is equal to its inverse:
<math display="block">Q^\mathrm{T}=Q^{-1},</math>
where {{math|''Q''<sup>−1</sup>}} is the inverse of {{mvar|Q}}.

An orthogonal matrix {{mvar|Q}} is necessarily invertible (with inverse {{math|1=''Q''<sup>−1</sup> = ''Q''<sup>T</sup>}}), unitary ({{math|1=''Q''<sup>−1</sup> = ''Q''<sup>∗</sup>}}, where {{math|1=''Q''<sup>∗</sup>}} is the Hermitian adjoint, or conjugate transpose, of {{mvar|Q}}), and therefore normal ({{math|1=''Q''<sup>∗</sup>''Q'' = ''QQ''<sup>∗</sup>}}) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation.
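
These defining identities can be verified for any concrete orthogonal matrix. A small NumPy sketch (the rotation angle below is an arbitrary choice) checks {{math|1=''Q''<sup>T</sup>''Q'' = ''QQ''<sup>T</sup> = ''I''}}, {{math|1=''Q''<sup>T</sup> = ''Q''<sup>−1</sup>}}, and that the determinant has absolute value 1:
<syntaxhighlight lang="python">
import numpy as np

theta = 0.7   # arbitrary angle; Q is a rotation matrix as in the example above
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

I = np.eye(2)
print(np.allclose(Q.T @ Q, I), np.allclose(Q @ Q.T, I))   # True True
print(np.allclose(Q.T, np.linalg.inv(Q)))                 # True: transpose equals inverse
print(np.isclose(abs(np.linalg.det(Q)), 1.0))             # True: determinant is +1 or -1
</syntaxhighlight>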
The set of {{math|''n'' × ''n''}} orthogonal matrices forms a group, {{math|O(''n'')}}, known as the orthogonal group. The subgroup {{math|SO(''n'')}} consisting of orthogonal matrices with determinant +1 is called the special orthogonal group, and each of its elements is a '''special orthogonal matrix'''. As a linear transformation, every special orthogonal matrix acts as a rotation.

==Overview==
An orthogonal matrix is the real specialization of a unitary matrix, and thus always a normal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from any field. However, orthogonal matrices arise naturally from dot products, and for matrices of complex numbers that leads instead to the unitary requirement. Orthogonal matrices preserve the dot product, so, for vectors {{math|'''u'''}} and {{math|'''v'''}} in an {{mvar|n}}-dimensional real Euclidean space
<math display="block">{\mathbf u} \cdot {\mathbf v} = \left(Q {\mathbf u}\right) \cdot \left(Q {\mathbf v}\right) </math>
where {{mvar|Q}} is an orthogonal matrix. To see the inner product connection, consider a vector {{math|'''v'''}} in an {{mvar|n}}-dimensional real Euclidean space. Written with respect to an orthonormal basis, the squared length of {{math|'''v'''}} is {{math|'''v'''<sup>T</sup>'''v'''}}. If a linear transformation, in matrix form {{math|''Q'''''v'''}}, preserves vector lengths, then
<math display="block">{\mathbf v}^\mathrm{T}{\mathbf v} = (Q{\mathbf v})^\mathrm{T}(Q{\mathbf v}) = {\mathbf v}^\mathrm{T} Q^\mathrm{T} Q {\mathbf v} .</math>

Thus finite-dimensional linear isometries—rotations, reflections, and their combinations—produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent.

Orthogonal matrices are important for a number of reasons, both theoretical and practical. The {{math|''n'' × ''n''}} orthogonal matrices form a group under matrix multiplication, the orthogonal group denoted by {{math|O(''n'')}}, which—with its subgroups—is widely used in mathematics and the physical sciences. For example, the point group of a molecule is a subgroup of O(3). Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as {{mvar|QR}} decomposition. As another example, with appropriate normalization the discrete cosine transform (used in MP3 compression) is represented by an orthogonal matrix.

==Examples==
Below are a few examples of small orthogonal matrices and possible interpretations; a short numerical check of these examples follows the list.
*<math>
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix}</math> &emsp;&emsp; (identity transformation)
*<math>
\begin{bmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta \\
\end{bmatrix}</math> &emsp;&emsp; (rotation by ''θ'')
*<math>
\begin{bmatrix}
\cos 16.26^\circ & -\sin 16.26^\circ \\
\sin 16.26^\circ & \cos 16.26^\circ \\
\end{bmatrix} \approx \left[\begin{array}{rr}
0.96 & -0.28 \\
0.28 & 0.96 \\
\end{array}\right]</math> &emsp;&emsp; (rotation by 16.26°)
*<math>
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}</math> &emsp;&emsp; (reflection across ''x''-axis)
*<math>
\begin{bmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}</math> &emsp;&emsp; (permutation of coordinate axes)
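
The numeric matrices above can be checked in a few lines of NumPy (a sketch; the dictionary keys are only labels for the printout). Each matrix is orthogonal; the determinant is +1 for the identity and the rotation, and −1 for the reflection and for this particular permutation:
<syntaxhighlight lang="python">
import numpy as np

deg = np.deg2rad(16.26)
examples = {
    "identity":              np.eye(2),
    "rotation by 16.26 deg": np.array([[np.cos(deg), -np.sin(deg)],
                                       [np.sin(deg),  np.cos(deg)]]),
    "reflection (x-axis)":   np.array([[1.0,  0.0],
                                       [0.0, -1.0]]),
    "axis permutation":      np.array([[0, 0, 0, 1],
                                       [0, 0, 1, 0],
                                       [1, 0, 0, 0],
                                       [0, 1, 0, 0]], dtype=float),
}
for name, Q in examples.items():
    n = Q.shape[0]
    print(name, np.allclose(Q.T @ Q, np.eye(n)), round(np.linalg.det(Q)))
</syntaxhighlight>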
==Elementary constructions==

===Lower dimensions===
The simplest orthogonal matrices are the 1 × 1 matrices [1] and [−1], which we can interpret as the identity and a reflection of the real line across the origin.

The 2 × 2 matrices have the form
<math display="block">\begin{bmatrix}
p & t\\
q & u
\end{bmatrix},</math>
whose entries, for orthogonality, must satisfy the three equations
<math display="block">\begin{align}
1 & = p^2+t^2, \\
1 & = q^2+u^2, \\
0 & = pq+tu.
\end{align}</math>

In consideration of the first equation, without loss of generality let {{math|1=''p'' = cos ''θ''}}, {{math|1=''q'' = sin ''θ''}}; then either {{math|1=''t'' = −''q''}}, {{math|1=''u'' = ''p''}} or {{math|1=''t'' = ''q''}}, {{math|1=''u'' = −''p''}}. We can interpret the first case as a rotation by {{mvar|θ}} (where {{math|1=''θ'' = 0}} is the identity), and the second as a reflection across a line at an angle of {{math|{{sfrac|''θ''|2}}}}.

<math display="block">
\begin{bmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta \\
\end{bmatrix}\text{ (rotation), }\qquad
\begin{bmatrix}
\cos \theta & \sin \theta \\
\sin \theta & -\cos \theta \\
\end{bmatrix}\text{ (reflection)}
</math>

The special case of the reflection matrix with {{math|1=''θ'' = 90°}} generates a reflection about the line at 45° given by {{math|1=''y'' = ''x''}} and therefore exchanges {{mvar|x}} and {{mvar|y}}; it is a [[permutation matrix]], with a single 1 in each column and row (and otherwise 0):
<math display="block">\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}.</math>

The identity is also a permutation matrix.

A reflection is its own inverse, which implies that a reflection matrix is symmetric (equal to its transpose) as well as orthogonal. The product of two rotation matrices is a rotation matrix, and the product of two reflection matrices is also a rotation matrix.
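
These facts about 2 × 2 rotations and reflections can be confirmed numerically. In the sketch below (NumPy assumed; the two angles are arbitrary), the reflection matrix is symmetric and squares to the identity, and the product of two reflections is the rotation by the difference of their angles:
<syntaxhighlight lang="python">
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(theta):
    # reflection across the line making angle theta/2 with the x-axis
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

R = reflection(0.9)                        # arbitrary angle
print(np.allclose(R, R.T))                 # symmetric
print(np.allclose(R @ R, np.eye(2)))       # its own inverse
print(np.isclose(np.linalg.det(R), -1.0))  # determinant -1

# The product of two reflections is a rotation (by the difference of the angles)
P = reflection(0.9) @ reflection(0.2)
print(np.allclose(P, rotation(0.7)), np.isclose(np.linalg.det(P), 1.0))   # True True
</syntaxhighlight>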
===Higher dimensions===
Regardless of the dimension, it is always possible to classify orthogonal matrices as purely rotational or not, but for {{nowrap|3 × 3}} matrices and larger the non-rotational matrices can be more complicated than reflections. For example,
<math display="block">
\begin{bmatrix}
-1 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & -1
\end{bmatrix}\text{ and }
\begin{bmatrix}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & -1
\end{bmatrix}</math>

represent an ''inversion'' through the origin and a ''rotoinversion'' about the {{mvar|z}}-axis, respectively.

Rotations become more complicated in higher dimensions; they can no longer be completely characterized by one angle, and may affect more than one planar subspace. It is common to describe a 3 × 3 rotation matrix in terms of an axis and angle, but this only works in three dimensions. Above three dimensions two or more angles are needed, each associated with a plane of rotation.

However, we have elementary building blocks for permutations, reflections, and rotations that apply in general.

===Primitives===
The most elementary permutation is a transposition, obtained from the identity matrix by exchanging two rows. Any {{math|''n'' × ''n''}} permutation matrix can be constructed as a product of no more than {{math|''n'' − 1}} transpositions.

A Householder reflection is constructed from a non-null vector {{math|'''v'''}} as
<math display="block">Q = I - 2 \frac{{\mathbf v}{\mathbf v}^\mathrm{T}}{{\mathbf v}^\mathrm{T}{\mathbf v}} .</math>

Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude of {{math|'''v'''}}. This is a reflection in the hyperplane perpendicular to {{math|'''v'''}} (negating any vector component parallel to {{math|'''v'''}}). If {{math|'''v'''}} is a unit vector, then {{math|1=''Q'' = ''I'' − 2'''vv'''<sup>T</sup>}} suffices. A Householder reflection is typically used to simultaneously zero the lower part of a column. Any orthogonal matrix of size {{nowrap|''n'' × ''n''}} can be constructed as a product of at most {{mvar|n}} such reflections.
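
A minimal sketch of the Householder construction (assuming NumPy; the vectors below are arbitrary examples) checks that {{mvar|Q}} is orthogonal and symmetric, negates the component along {{math|'''v'''}}, and shows the typical use of reflecting a column onto a multiple of the first coordinate axis:
<syntaxhighlight lang="python">
import numpy as np

def householder(v):
    """Householder reflection Q = I - 2 v v^T / (v^T v) for a nonzero vector v."""
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / np.dot(v, v)

v = np.array([1.0, 2.0, 2.0])                                  # arbitrary non-null vector
Q = householder(v)
print(np.allclose(Q.T @ Q, np.eye(3)), np.allclose(Q, Q.T))    # orthogonal and symmetric
print(Q @ v)                                                   # ≈ -v: the component along v is negated

# Typical use: reflect a column x onto a multiple of e1, zeroing its lower entries
x = np.array([3.0, 1.0, 4.0])
u = x.copy()
u[0] += np.sign(x[0]) * np.linalg.norm(x)   # common sign choice that avoids cancellation
print(householder(u) @ x)                   # ≈ [-norm(x), 0, 0]
</syntaxhighlight>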
A Givens rotation acts on a two-dimensional (planar) subspace spanned by two coordinate axes, rotating by a chosen angle. It is typically used to zero a single subdiagonal entry. Any rotation matrix of size {{math|''n'' × ''n''}} can be constructed as a product of at most {{math|{{sfrac|''n''(''n'' − 1)|2}}}} such rotations. In the case of 3 × 3 matrices, three such rotations suffice; and by fixing the sequence we can thus describe all 3 × 3 rotation matrices (though not uniquely) in terms of the three angles used, often called Euler angles.
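
A Givens rotation can be sketched in the same spirit (NumPy assumed; the matrix below is an arbitrary example): build the 2 × 2 rotation from the two entries it must combine, embed it in the identity, and check that it zeros the targeted subdiagonal entry:
<syntaxhighlight lang="python">
import numpy as np

def givens(a, b):
    """Cosine/sine pair of a 2x2 rotation that sends (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

A = np.array([[6.0, 5.0, 0.0],      # arbitrary example matrix
              [5.0, 1.0, 4.0],
              [0.0, 4.0, 3.0]])

# Rotate rows 0 and 1 so that the subdiagonal entry A[1, 0] becomes zero
c, s = givens(A[0, 0], A[1, 0])
G = np.eye(3)
G[0, 0] = G[1, 1] = c
G[0, 1], G[1, 0] = s, -s
print(np.allclose(G.T @ G, np.eye(3)))   # G is orthogonal
print(np.round(G @ A, 3))                # entry (1, 0) is now zero
</syntaxhighlight>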
A Jacobi rotation has the same form as a Givens rotation, but is used to zero both off-diagonal entries of a 2 × 2 symmetric submatrix.

==Properties==

===Matrix properties===
A real square matrix is orthogonal [[if and only if]] its columns form an orthonormal basis of the Euclidean space {{math|'''R'''<sup>''n''</sup>}} with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of {{math|'''R'''<sup>''n''</sup>}}. It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name; they only satisfy {{math|1=''M''<sup>T</sup>''M'' = ''D''}}, with {{mvar|D}} a diagonal matrix.

The determinant of any orthogonal matrix is +1 or −1. This follows from basic facts about determinants, as follows:
<math display="block">1=\det(I)=\det\left(Q^\mathrm{T}Q\right)=\det\left(Q^\mathrm{T}\right)\det(Q)=\bigl(\det(Q)\bigr)^2 .</math>

The converse is not true; having a determinant of ±1 is no guarantee of orthogonality, even with orthogonal columns, as shown by the following counterexample.
<math display="block">\begin{bmatrix}
2 & 0 \\
0 & \frac{1}{2}
\end{bmatrix}</math>

With permutation matrices the determinant matches the signature, being +1 or −1 as the parity of the permutation is even or odd, for the determinant is an alternating function of the rows.

Stronger than the determinant restriction is the fact that an orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) modulus&nbsp;1.
===Group properties===
The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of all {{math|''n'' × ''n''}} orthogonal matrices satisfies all the axioms of a group. It is a compact Lie group of dimension {{math|{{sfrac|''n''(''n'' − 1)|2}}}}, called the orthogonal group and denoted by {{math|O(''n'')}}.

The orthogonal matrices whose determinant is +1 form a path-connected normal subgroup of {{math|O(''n'')}} of index 2, the special orthogonal group {{math|SO(''n'')}} of rotations. The quotient group {{math|O(''n'')/SO(''n'')}} is isomorphic to {{math|O(1)}}, with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only a coset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection map splits, {{math|O(''n'')}} is a semidirect product of {{math|SO(''n'')}} by {{math|O(1)}}. In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with 2 × 2 matrices. If {{mvar|n}} is odd, then the semidirect product is in fact a direct product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns. This follows from the property of determinants that negating a column negates the determinant, and thus negating an odd (but not even) number of columns negates the determinant.

Now consider {{math|(''n'' + 1) × (''n'' + 1)}} orthogonal matrices with bottom right entry equal to 1. The remainder of the last column (and last row) must be zeros, and the product of any two such matrices has the same form. The rest of the matrix is an {{math|''n'' × ''n''}} orthogonal matrix; thus {{math|O(''n'')}} is a subgroup of {{math|O(''n'' + 1)}} (and of all higher groups).

<math display="block">\begin{bmatrix}
  & & & 0\\
  & \mathrm{O}(n) & & \vdots\\
  & & & 0\\
  0 & \cdots & 0 & 1
\end{bmatrix}</math>

Since an elementary reflection in the form of a Householder matrix can reduce any orthogonal matrix to this constrained form, a series of such reflections can bring any orthogonal matrix to the identity; thus an orthogonal group is a reflection group. The last column can be fixed to any unit vector, and each choice gives a different copy of {{math|O(''n'')}} in {{math|O(''n'' + 1)}}; in this way {{math|O(''n'' + 1)}} is a bundle over the unit sphere {{math|''S''<sup>''n''</sup>}} with fiber {{math|O(''n'')}}.

Similarly, {{math|SO(''n'')}} is a subgroup of {{math|SO(''n'' + 1)}}; and any special orthogonal matrix can be generated by Givens plane rotations using an analogous procedure. The bundle structure persists: {{math|SO(''n'') ↪ SO(''n'' + 1) → ''S''<sup>''n''</sup>}}. A single rotation can produce a zero in the first row of the last column, and a series of {{math|''n'' − 1}} rotations will zero all but the last row of the last column of an {{math|''n'' × ''n''}} rotation matrix. Since the planes are fixed, each rotation has only one degree of freedom, its angle. By induction, {{math|SO(''n'')}} therefore has
<math display="block">(n-1) + (n-2) + \cdots + 1 = \frac{n(n-1)}{2}</math>
degrees of freedom, and so does {{math|O(''n'')}}.

Permutation matrices are simpler still; they form, not a Lie group, but only a finite group, the order {{math|''n''!}} symmetric group {{math|S<sub>''n''</sub>}}. By the same kind of argument, {{math|S<sub>''n''</sub>}} is a subgroup of {{math|S<sub>''n'' + 1</sub>}}. The even permutations produce the subgroup of permutation matrices of determinant +1, the order {{math|{{sfrac|''n''!|2}}}} alternating group.
===Canonical form===
More broadly, the effect of any orthogonal matrix separates into independent actions on orthogonal two-dimensional subspaces. That is, if {{mvar|Q}} is special orthogonal then one can always find an orthogonal matrix {{mvar|P}}, a (rotational) change of basis, that brings {{mvar|Q}} into block diagonal form:

<math display="block">P^\mathrm{T}QP = \begin{bmatrix}
R_1 & & \\ & \ddots & \\ & & R_k
\end{bmatrix}\ (n\text{ even}),
\ P^\mathrm{T}QP = \begin{bmatrix}
R_1 & & & \\ & \ddots & & \\ & & R_k & \\ & & & 1
\end{bmatrix}\ (n\text{ odd}),</math>

where the matrices {{math|''R''<sub>1</sub>, ..., ''R''<sub>''k''</sub>}} are 2 × 2 rotation matrices, and with the remaining entries zero. Exceptionally, a rotation block may be diagonal, {{math|±''I''}}. Thus, negating one column if necessary, and noting that a 2 × 2 reflection diagonalizes to a +1 and −1, any orthogonal matrix can be brought to the form
<math display="block">P^\mathrm{T}QP = \begin{bmatrix}
\begin{matrix}R_1 & & \\ & \ddots & \\ & & R_k\end{matrix} & 0 \\
0 & \begin{matrix}\pm 1 & & \\ & \ddots & \\ & & \pm 1\end{matrix} \\
\end{bmatrix}.</math>

The matrices {{math|''R''<sub>1</sub>, ..., ''R''<sub>''k''</sub>}} give conjugate pairs of eigenvalues lying on the unit circle in the complex plane; so this decomposition confirms that all eigenvalues have absolute value 1. If {{mvar|n}} is odd, there is at least one real eigenvalue, +1 or −1; for a 3 × 3 rotation, the eigenvector associated with +1 is the rotation axis.
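
Two consequences stated here, that every eigenvalue of an orthogonal matrix has modulus 1 and that odd dimension forces a real eigenvalue ±1, are easy to observe numerically (a NumPy sketch; the 5 × 5 orthogonal matrix below comes from the QR factorization of an arbitrary random matrix):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # some 5x5 orthogonal matrix

eigvals = np.linalg.eigvals(Q)
print(np.allclose(np.abs(eigvals), 1.0))           # all eigenvalues lie on the unit circle
print(eigvals[np.isclose(eigvals.imag, 0.0)])      # odd n: at least one real eigenvalue, +1 or -1
</syntaxhighlight>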
===Lie algebra===
Suppose the entries of {{mvar|Q}} are differentiable functions of {{mvar|t}}, and that {{math|1=''t'' = 0}} gives {{math|1=''Q'' = ''I''}}. Differentiating the orthogonality condition
<math display="block">Q^\mathrm{T} Q = I </math>
yields
<math display="block">\dot{Q}^\mathrm{T} Q + Q^\mathrm{T} \dot{Q} = 0 .</math>

Evaluation at {{math|1=''t'' = 0}} ({{math|1=''Q'' = ''I''}}) then implies
<math display="block">\dot{Q}^\mathrm{T} = -\dot{Q} .</math>

In Lie group terms, this means that the Lie algebra of an orthogonal matrix group consists of skew-symmetric matrices. Going the other direction, the matrix exponential of any skew-symmetric matrix is an orthogonal matrix (in fact, special orthogonal).

For example, the three-dimensional object physics calls angular velocity is a differential rotation, thus a vector in the Lie algebra <math>\mathfrak{so}(3)</math> tangent to {{math|SO(3)}}. Given {{math|1='''ω''' = (''xθ'', ''yθ'', ''zθ'')}}, with {{math|1='''v''' = (''x'', ''y'', ''z'')}} being a unit vector, the correct skew-symmetric matrix form of {{mvar|'''ω'''}} is
<math display="block">
\Omega = \begin{bmatrix}
0 & -z\theta & y\theta \\
z\theta & 0 & -x\theta \\
-y\theta & x\theta & 0
\end{bmatrix} .</math>

The exponential of this is the orthogonal matrix for rotation around axis {{math|'''v'''}} by angle {{mvar|θ}}; setting {{math|1=''c'' = cos {{sfrac|''θ''|2}}}}, {{math|1=''s'' = sin {{sfrac|''θ''|2}}}},
<math display="block">\exp(\Omega) = \begin{bmatrix}
1  -  2s^2  +  2x^2 s^2  &  2xy s^2  -  2z sc  &  2xz s^2  +  2y sc\\
2xy s^2  +  2z sc  &  1  -  2s^2  +  2y^2 s^2  &  2yz s^2  -  2x sc\\
2xz s^2  -  2y sc  &  2yz s^2  +  2x sc  &  1  -  2s^2  +  2z^2 s^2 
\end{bmatrix}.</math>
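
The claim that the matrix exponential of a skew-symmetric matrix is a rotation can be checked directly (a sketch assuming NumPy and SciPy's <code>expm</code>; the axis and angle are arbitrary choices). The exponential of the matrix Ω built below is special orthogonal and fixes the axis {{math|'''v'''}}:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm   # matrix exponential

theta = 1.2                          # arbitrary angle
v = np.array([1.0, 2.0, 2.0])
v /= np.linalg.norm(v)               # unit axis (x, y, z)
x, y, z = v

# Skew-symmetric Omega built from omega = theta * v, as in the text
Omega = theta * np.array([[0.0,  -z,   y],
                          [  z, 0.0,  -x],
                          [ -y,   x, 0.0]])

R = expm(Omega)
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))   # special orthogonal
print(np.allclose(R @ v, v))         # the axis v is fixed (eigenvalue +1)
</syntaxhighlight>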
==Numerical linear algebra==

===Benefits===
Numerical analysis takes advantage of many of the properties of orthogonal matrices for numerical linear algebra, and they arise naturally. For example, it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of bases; both take the form of orthogonal matrices. Having determinant ±1 and all eigenvalues of magnitude 1 is of great benefit for numeric stability. One implication is that the condition number is 1 (which is the minimum), so errors are not magnified when multiplying with an orthogonal matrix. Many algorithms use orthogonal matrices like Householder reflections and Givens rotations for this reason. It is also helpful that, not only is an orthogonal matrix invertible, but its inverse is available essentially free, by exchanging indices.

Permutations are essential to the success of many algorithms, including the workhorse Gaussian elimination with partial pivoting (where permutations do the pivoting). However, they rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list of {{mvar|n}} indices.

Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage. For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a full multiplication of order {{math|''n''<sup>3</sup>}} to a much more efficient order {{mvar|n}}. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly. (Following {{harvtxt|Stewart|1976}}, we do ''not'' store a rotation angle, which is both expensive and badly behaved.)

===Decompositions===
A number of important matrix decompositions {{harv|Golub|Van Loan|1996}} involve orthogonal matrices, including especially:

;{{mvar|QR}} decomposition : {{math|1=''M'' = ''QR''}}, {{mvar|Q}} orthogonal, {{mvar|R}} upper triangular
;Singular value decomposition : {{math|1=''M'' = ''U''Σ''V''<sup>T</sup>}}, {{mvar|U}} and {{mvar|V}} orthogonal, {{math|Σ}} diagonal matrix
;Eigendecomposition of a symmetric matrix (decomposition according to the spectral theorem) : {{math|1=''S'' = ''Q''Λ''Q''<sup>T</sup>}}, {{mvar|S}} symmetric, {{mvar|Q}} orthogonal, {{math|Λ}} diagonal
;Polar decomposition : {{math|1=''M'' = ''QS''}}, {{mvar|Q}} orthogonal, {{mvar|S}} symmetric positive-semidefinite
====Examples====
Consider an overdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write {{math|1=''A'''''x''' = '''b'''}}, where {{mvar|A}} is {{math|''m'' × ''n''}}, {{math|''m'' > ''n''}}.
A {{mvar|QR}} decomposition reduces {{mvar|A}} to upper triangular {{mvar|R}}. For example, if {{mvar|A}} is 5 × 3 then {{mvar|R}} has the form
<math display="block">R = \begin{bmatrix}
\cdot & \cdot & \cdot \\
0 & \cdot & \cdot \\
0 & 0 & \cdot \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}.</math>

The linear least squares problem is to find the {{math|'''x'''}} that minimizes {{math|{{norm|''A'''''x''' − '''b'''}}}}, which is equivalent to projecting {{math|'''b'''}} to the subspace spanned by the columns of {{mvar|A}}. Assuming the columns of {{mvar|A}} (and hence {{mvar|R}}) are independent, the projection solution is found from {{math|1=''A''<sup>T</sup>''A'''''x''' = ''A''<sup>T</sup>'''b'''}}. Now {{math|''A''<sup>T</sup>''A''}} is square ({{math|''n'' × ''n''}}) and invertible, and also equal to {{math|''R''<sup>T</sup>''R''}}. But the lower rows of zeros in {{mvar|R}} are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in Gaussian elimination (Cholesky decomposition). Here orthogonality is important not only for reducing {{math|1=''A''<sup>T</sup>''A'' = (''R''<sup>T</sup>''Q''<sup>T</sup>)''QR''}} to {{math|''R''<sup>T</sup>''R''}}, but also for allowing solution without magnifying numerical problems.
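
A minimal NumPy sketch of this use of the {{mvar|QR}} decomposition (the 5 × 3 system below is randomly generated purely for illustration): solve the small triangular system {{math|1=''R'''''x''' = ''Q''<sup>T</sup>'''b'''}} and compare with a general-purpose least-squares solver:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))          # overdetermined: m = 5 > n = 3
b = rng.standard_normal(5)

# Reduced QR: A = QR with Q (5x3) having orthonormal columns and R (3x3) upper triangular
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)          # solve the small triangular system R x = Q^T b

# Same minimizer as a general-purpose least-squares solver
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
</syntaxhighlight>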
In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful. With {{mvar|A}} factored as {{math|''U''Σ''V''<sup>T</sup>}}, a satisfactory solution uses the Moore–Penrose pseudoinverse, {{math|''V''Σ<sup>+</sup>''U''<sup>T</sup>}}, where {{math|Σ<sup>+</sup>}} merely replaces each non-zero diagonal entry with its reciprocal. Set {{math|'''x'''}} to {{math|''V''Σ<sup>+</sup>''U''<sup>T</sup>'''b'''}}.

The case of a square invertible matrix also holds interest. Suppose, for example, that {{mvar|A}} is a {{nowrap|3 × 3}} rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, so {{mvar|A}} has gradually lost its true orthogonality. A Gram–Schmidt process could orthogonalize the columns, but it is not the most reliable, nor the most efficient, nor the most invariant method. The polar decomposition factors a matrix into a pair, one of which is the unique ''closest'' orthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by any matrix norm invariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "Newton's method" approach due to {{harvtxt|Higham|1986}} (1990), repeatedly averaging the matrix with its inverse transpose. {{harvtxt|Dubrulle|1994}} has published an accelerated method with a convenient convergence test.

For example, consider a non-orthogonal matrix for which the simple averaging algorithm takes seven steps
<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix}
\rightarrow
\begin{bmatrix}1.8125 & 0.0625\\3.4375 & 2.6875\end{bmatrix}
\rightarrow \cdots \rightarrow
\begin{bmatrix}0.8 & -0.6\\0.6 & 0.8\end{bmatrix}</math>
and which acceleration trims to two steps (with {{mvar|γ}} = 0.353553, 0.565685).

<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix}
\rightarrow
\begin{bmatrix}1.41421 & -1.06066\\1.06066 & 1.41421\end{bmatrix}
\rightarrow
\begin{bmatrix}0.8 & -0.6\\0.6 & 0.8\end{bmatrix}</math>

Gram–Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.

<math display="block">\begin{bmatrix}3 & 1\\7 & 5\end{bmatrix}
\rightarrow
\begin{bmatrix}0.393919 & -0.919145\\0.919145 & 0.393919\end{bmatrix}</math>
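
The simple averaging iteration described above (not the accelerated variant) is only a few lines of NumPy. The sketch below starts from the same matrix as the worked example, prints the first iterate for comparison, and converges to the same orthogonal factor; the convergence tolerance is an arbitrary choice:
<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[3.0, 1.0],     # the non-orthogonal matrix from the example above
              [7.0, 5.0]])
print(0.5 * (Q + np.linalg.inv(Q).T))         # first iterate: [[1.8125 0.0625] [3.4375 2.6875]]

for steps in range(1, 50):
    Q_next = 0.5 * (Q + np.linalg.inv(Q).T)   # average with the inverse transpose
    if np.allclose(Q_next, Q, atol=1e-12):
        break
    Q = Q_next

print(steps)            # a handful of steps (the text reports seven for this matrix)
print(np.round(Q, 6))   # [[ 0.8 -0.6]
                        #  [ 0.6  0.8]]
</syntaxhighlight>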
===Randomization===
Some numerical applications, such as Monte Carlo methods and exploration of high-dimensional data spaces, require generation of uniformly distributed random orthogonal matrices. In this context, "uniform" is defined in terms of Haar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix. Orthogonalizing matrices with independent uniformly distributed random entries does not result in uniformly distributed orthogonal matrices, but the {{mvar|QR}} decomposition of independent normally distributed random entries does, as long as the diagonal of {{mvar|R}} contains only positive entries {{harv|Mezzadri|2006}}. {{harvtxt|Stewart|1980}} replaced this with a more efficient idea that {{harvtxt|Diaconis|Shahshahani|1987}} later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations). To generate an {{math|(''n'' + 1) × (''n'' + 1)}} orthogonal matrix, take an {{math|''n'' × ''n''}} one and a uniformly distributed unit vector of dimension {{math|''n'' + 1}}. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner).
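
The QR-based recipe described above can be sketched as follows (NumPy assumed; the sign correction forces the diagonal of {{mvar|R}} to be positive, as the text requires for Haar uniformity):
<syntaxhighlight lang="python">
import numpy as np

def random_orthogonal(n, rng):
    """Haar-distributed orthogonal matrix via the QR recipe described above."""
    Z = rng.standard_normal((n, n))     # independent normally distributed entries
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))      # flip columns so the diagonal of R is positive

rng = np.random.default_rng(3)
Q = random_orthogonal(4, rng)
print(np.allclose(Q.T @ Q, np.eye(4)))  # True
</syntaxhighlight>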
===Nearest orthogonal matrix===

The problem of finding the orthogonal matrix {{mvar|Q}} nearest a given matrix {{mvar|M}} is related to the orthogonal Procrustes problem. There are several different ways to get the unique solution, the simplest of which is taking the singular value decomposition of {{mvar|M}} and replacing the singular values with ones. Another method expresses {{mvar|Q}} explicitly but requires the use of a matrix square root:
<math display="block">Q = M \left(M^\mathrm{T} M\right)^{-\frac 1 2}</math>

This may be combined with the Babylonian method for extracting the square root of a matrix to give a recurrence which converges to an orthogonal matrix quadratically:
<math display="block">Q_{n + 1} = 2 M \left(Q_n^{-1} M + M^\mathrm{T} Q_n\right)^{-1}</math>
where {{math|1=''Q''<sub>0</sub> = ''M''}}.

These iterations are stable provided the condition number of {{mvar|M}} is less than three.

Using a first-order approximation of the inverse and the same initialization results in the modified iteration:

<math display="block">N_{n} = Q_n^\mathrm{T} Q_n</math>
<math display="block">P_{n} = \frac 1 2 Q_n N_{n}</math>
<math display="block">Q_{n + 1} = 2 Q_n + P_n N_n - 3 P_n</math>
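
For the SVD route, and as a cross-check of the explicit square-root formula above, here is a small sketch (assuming NumPy and SciPy's <code>sqrtm</code>; the input is the matrix from the earlier worked example, so the answer matches the orthogonal factor found there):
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import sqrtm   # matrix square root

M = np.array([[3.0, 1.0],        # the matrix from the earlier worked example
              [7.0, 5.0]])

# Nearest orthogonal matrix: take the SVD and replace the singular values with ones
U, _, Vt = np.linalg.svd(M)
Q_svd = U @ Vt

# The explicit formula Q = M (M^T M)^(-1/2) gives the same matrix
Q_formula = M @ np.linalg.inv(sqrtm(M.T @ M))

print(np.round(Q_svd, 6))               # [[ 0.8 -0.6]
                                        #  [ 0.6  0.8]]
print(np.allclose(Q_svd, Q_formula))    # True
</syntaxhighlight>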
==Spin and pin==
A subtle technical problem afflicts some uses of orthogonal matrices. Not only are the group components with determinant +1 and −1 not connected to each other, even the +1 component, {{math|SO(''n'')}}, is not simply connected (except for SO(1), which is trivial). Thus it is sometimes advantageous, or even necessary, to work with a covering group of SO(''n''), the spin group, {{math|Spin(''n'')}}. Likewise, {{math|O(''n'')}} has covering groups, the pin groups, Pin(''n''). For {{math|''n'' > 2}}, {{math|Spin(''n'')}} is simply connected and thus the universal covering group for {{math|SO(''n'')}}. By far the most famous example of a spin group is {{math|Spin(3)}}, which is nothing but {{math|SU(2)}}, or the group of unit quaternions.

The Pin and Spin groups are found within Clifford algebras, which themselves can be built from orthogonal matrices.

==Rectangular matrices==

If {{mvar|Q}} is not a square matrix, then the conditions {{math|1=''Q''<sup>T</sup>''Q'' = ''I''}} and {{math|1=''QQ''<sup>T</sup> = ''I''}} are not equivalent. The condition {{math|1=''Q''<sup>T</sup>''Q'' = ''I''}} says that the columns of {{mvar|Q}} are orthonormal. This can only happen if {{mvar|Q}} is an {{math|''m'' × ''n''}} matrix with {{math|''n'' ≤ ''m''}} (due to linear dependence). Similarly, {{math|1=''QQ''<sup>T</sup> = ''I''}} says that the rows of {{mvar|Q}} are orthonormal, which requires {{math|''n'' ≥ ''m''}}.

There is no standard terminology for these matrices. They are variously called "semi-orthogonal matrices", "orthonormal matrices", "orthogonal matrices", and sometimes simply "matrices with orthonormal rows/columns".
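
A small NumPy illustration of the asymmetry between the two conditions for a tall matrix with orthonormal columns (the 5 × 3 matrix below is randomly generated for illustration):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # 5x3 with orthonormal columns (m > n)

print(np.allclose(Q.T @ Q, np.eye(3)))   # True:  the columns are orthonormal
print(np.allclose(Q @ Q.T, np.eye(5)))   # False: Q Q^T is only the projection onto the column space
</syntaxhighlight>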
= Licensing =
Content obtained and/or adapted from:
* [https://en.wikipedia.org/wiki/Orthogonal_transformation Orthogonal transformation, Wikipedia] under a CC BY-SA license
* [https://en.wikipedia.org/wiki/Orthogonal_matrix Orthogonal matrix, Wikipedia] under a CC BY-SA license
