Determinants

In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix ''A'' is denoted det(''A''), det ''A'', or |''A''|.

In the case of a 2 × 2 matrix the determinant can be defined as

:<math>\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.</math>

Similarly, for a 3 × 3 matrix ''A'', its determinant is

:<math>\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.</math>

Each determinant of a 2 × 2 matrix in this equation is called a minor of the matrix ''A''. This procedure can be extended to give a recursive definition for the determinant of an ''n'' × ''n'' matrix, known as Laplace expansion.

Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a matrix, whose roots are the eigenvalues. In geometry, the signed ''n''-dimensional volume of an ''n''-dimensional parallelepiped is expressed by a determinant. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.

== 2 × 2 matrices ==

The determinant of a 2 × 2 matrix <math>\begin{pmatrix} a & b \\ c & d \end{pmatrix}</math> is denoted either by "det" or by vertical bars around the matrix, and is defined as

:<math>\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.</math>

For example,

:<math>\det \begin{pmatrix} 3 & 7 \\ 1 & -4 \end{pmatrix} = \begin{vmatrix} 3 & 7 \\ 1 & -4 \end{vmatrix} = 3 \cdot (-4) - 7 \cdot 1 = -19.</math>

== First properties ==

The determinant has several key properties that can be proved by direct evaluation of the definition for 2 × 2 matrices, and that continue to hold for determinants of larger matrices. They are as follows: first, the determinant of the identity matrix <math>\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}</math> is 1. Second, the determinant is zero if two rows are the same:

:<math>\begin{vmatrix} a & b \\ a & b \end{vmatrix} = ab - ba = 0.</math>

This holds similarly if the two columns are the same. Moreover,

:<math>\begin{vmatrix} a & b + b' \\ c & d + d' \end{vmatrix} = a(d + d') - c(b + b') = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & b' \\ c & d' \end{vmatrix}.</math>

Finally, if any column is multiplied by some number <math>r</math> (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:

:<math>\begin{vmatrix} r \cdot a & b \\ r \cdot c & d \end{vmatrix} = rad - brc = r(ad - bc) = r \cdot \begin{vmatrix} a & b \\ c & d \end{vmatrix}.</math>

== Definition ==

In the sequel, ''A'' is a square matrix with ''n'' rows and ''n'' columns, so that it can be written as

:<math>A = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{bmatrix}.</math>

The entries <math>a_{1,1}</math>, <math>a_{1,2}</math>, etc. are, for many purposes, real or complex numbers. The determinant is also defined for matrices whose entries are elements of more abstract algebraic structures known as commutative rings.

The determinant of ''A'' is denoted by det(''A''), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:

:<math>\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}.</math>

There are various equivalent ways to define the determinant of a square matrix ''A'', i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.

=== Leibniz formula ===

The Leibniz formula for the determinant of a 3 × 3 matrix is the following:

:<math>\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.</math>

The rule of Sarrus is a mnemonic for this formula: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration:

[[File:Schema sarrus-regel.png]]

This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions.

=== ''n'' × ''n'' matrices ===

The Leibniz formula for the determinant of an <math>n \times n</math>-matrix is a more involved, but related expression. It is an expression involving the notion of permutations and their signature. A permutation of the set <math>\{1, 2, \dots, n\}</math> is a function <math>\sigma</math> that reorders this set of integers. The value in the <math>i</math>-th position after the reordering <math>\sigma</math> is denoted by <math>\sigma(i)</math>. The set of all such permutations, the so-called symmetric group, is denoted <math>S_n</math>. The signature <math>\operatorname{sgn}(\sigma)</math> of a permutation <math>\sigma</math> is defined to be <math>+1</math> whenever the reordering given by <math>\sigma</math> can be achieved by successively interchanging two entries an even number of times, and <math>-1</math> whenever it can be achieved by an odd number of such interchanges. Given the matrix <math>A</math> and a permutation <math>\sigma</math>, the product

:<math>a_{1,\sigma(1)} \cdot a_{2,\sigma(2)} \cdots a_{n,\sigma(n)}</math>

is also written more briefly using Pi notation as

:<math>\prod_{i=1}^n a_{i,\sigma(i)}.</math>

Using these notions, the definition of the determinant using the Leibniz formula is then

:<math>\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i,\sigma(i)},</math>

a sum involving all permutations, where each summand is a product of entries of the matrix, multiplied with a sign depending on the permutation.

The following table unwinds these terms in the case <math>n = 3</math>. In the first column, a permutation is listed according to its values. For example, in the second row, the permutation <math>\sigma</math> satisfies <math>\sigma(1) = 1</math>, <math>\sigma(2) = 3</math>, and <math>\sigma(3) = 2</math>. It can be obtained from the standard order (1, 2, 3) by a single exchange (exchanging the second and third entry), so that its signature is <math>-1</math>.

{| class="wikitable"
|+ Permutations of <math>\{1, 2, 3\}</math> and their contribution to the determinant
|-
! Permutation !! <math>\operatorname{sgn}(\sigma)</math> !! <math>\operatorname{sgn}(\sigma)\, a_{1,\sigma(1)} a_{2,\sigma(2)} a_{3,\sigma(3)}</math>
|-
| 1, 2, 3 || <math>+1</math> || <math>+ a_{1,1} a_{2,2} a_{3,3}</math>
|-
| 1, 3, 2 || <math>-1</math> || <math>- a_{1,1} a_{2,3} a_{3,2}</math>
|-
| 3, 1, 2 || <math>+1</math> || <math>+ a_{1,3} a_{2,1} a_{3,2}</math>
|-
| 3, 2, 1 || <math>-1</math> || <math>- a_{1,3} a_{2,2} a_{3,1}</math>
|-
| 2, 3, 1 || <math>+1</math> || <math>+ a_{1,2} a_{2,3} a_{3,1}</math>
|-
| 2, 1, 3 || <math>-1</math> || <math>- a_{1,2} a_{2,1} a_{3,3}</math>
|}

The sum of the six terms in the third column then reads

:<math>\det(A) = a_{1,1} a_{2,2} a_{3,3} - a_{1,1} a_{2,3} a_{3,2} + a_{1,3} a_{2,1} a_{3,2} - a_{1,3} a_{2,2} a_{3,1} + a_{1,2} a_{2,3} a_{3,1} - a_{1,2} a_{2,1} a_{3,3}.</math>

This gives back the formula for 3 × 3 matrices above. For a general <math>n \times n</math>-matrix, the Leibniz formula involves <math>n!</math> (''n'' factorial) summands, each of which is a product of ''n'' entries of the matrix.

The Leibniz formula can also be expressed using a summation in which not only permutations, but all sequences of <math>n</math> indices in the range <math>1, \dots, n</math> occur. To do this, one uses the Levi-Civita symbol <math>\varepsilon_{i_1 \cdots i_n}</math> instead of the sign of a permutation:

:<math>\det(A) = \sum_{i_1,i_2,\ldots,i_n=1}^n \varepsilon_{i_1\cdots i_n} a_{1,i_1} \cdots a_{n,i_n},</math>

This gives back the formula above since the Levi-Civita symbol is zero if the indices <math>i_1, \dots, i_n</math> do not form a permutation.
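Although never used for large matrices in practice, the Leibniz formula can be evaluated directly by machine, which makes the structure of the sum concrete. The following Python sketch is illustrative only (the helper names <code>sign</code> and <code>det_leibniz</code> are ours, not a library API); it simply iterates over all <math>n!</math> permutations:

<syntaxhighlight lang="python">
from itertools import permutations
from math import prod

def sign(perm):
    """Signature of a permutation: +1 for an even number of inversions, -1 for odd."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Determinant via the Leibniz formula: a signed sum over all n! permutations."""
    n = len(A)
    if n == 0:
        return 1  # empty matrix: determinant 1 by convention
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))  # 54 (the example matrix used below)
</syntaxhighlight>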
== Properties of the determinant ==

===Characterization of the determinant===

The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an <math>n \times n</math>-matrix ''A'' as being composed of its <math>n</math> columns, so denoted as

:<math>A = \big ( a_1, \dots, a_n \big ),</math>

where the column vector <math>a_i</math> (for each ''i'') is composed of the entries of the matrix in the ''i''-th column.

# <math>\det\left(I\right) = 1</math>, where <math>I</math> is an identity matrix.
# The determinant is ''multilinear'': if the ''j''-th column of a matrix <math>A</math> is written as a linear combination <math>a_j = r \cdot v + w</math> of two column vectors ''v'' and ''w'' and a number ''r'', then the determinant of ''A'' is expressible as a similar linear combination:
#: <math>\begin{align}|A|
  &= \big | a_1, \dots, a_{j-1}, r \cdot v + w, a_{j+1}, \dots, a_n \big | \\
  &= r \cdot \big | a_1, \dots, v, \dots, a_n \big | + \big | a_1, \dots, w, \dots, a_n \big |
\end{align}</math>
# The determinant is ''alternating'': whenever two columns of a matrix are identical, its determinant is 0:
#: <math>| a_1, \dots, v, \dots, v, \dots, a_n| = 0.</math>

If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any <math>n \times n</math>-matrix ''A'' a number that satisfies these three properties. This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.

To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by property 3, since two columns are then equal) or else ±1 (by property 1, combined with the fact that interchanging two columns changes the sign, derived below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.
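As a quick sanity check, the three characterizing properties can be verified numerically with the <code>det_leibniz</code> sketch above (the matrices and the <code>with_col2</code> helper here are illustrative choices, not from the source):

<syntaxhighlight lang="python">
# Property 1: det(I) = 1.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert det_leibniz(I3) == 1

# Property 2 (multilinearity in the second column): a_j = r*v + w.
r, v, w = 5, [1, 2, 3], [4, 0, -1]
col = [r * v[i] + w[i] for i in range(3)]
def with_col2(c):  # a fixed matrix whose middle column is prescribed
    return [[7, c[0], 2], [0, c[1], 1], [3, c[2], 8]]
assert det_leibniz(with_col2(col)) == r * det_leibniz(with_col2(v)) + det_leibniz(with_col2(w))

# Property 3 (alternating): two identical columns give determinant 0.
assert det_leibniz([[1, 1, 4], [2, 2, 5], [3, 3, 6]]) == 0
</syntaxhighlight>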

===Immediate consequences===

These rules have several further consequences:

* The determinant is a homogeneous function, i.e.,
:<math>\det(cA) = c^n\det(A)</math> (for an <math>n \times n</math> matrix <math>A</math>).
* Interchanging any pair of columns of a matrix multiplies its determinant by −1. This follows from the determinant being multilinear and alternating (properties 2 and 3 above):
: <math>|a_1, \dots, a_j, \dots, a_i, \dots, a_n| = - |a_1, \dots, a_i, \dots, a_j, \dots, a_n|.</math>
: This formula can be applied iteratively when several columns are swapped. For example
:<math>|a_3, a_1, a_2, a_4, \dots, a_n| = - |a_1, a_3, a_2, a_4, \dots, a_n| = |a_1, a_2, a_3, a_4, \dots, a_n|.</math>
: Yet more generally, any permutation of the columns multiplies the determinant by the sign of the permutation.
* If some column can be expressed as a linear combination of the ''other'' columns (i.e. the columns of the matrix form a linearly dependent set), the determinant is 0. As a special case: if all entries of some column are zero, then the determinant of that matrix is 0.
* Adding a scalar multiple of one column to ''another'' column does not change the value of the determinant. This is a consequence of multilinearity and the alternating property: by multilinearity the determinant changes by a multiple of the determinant of a matrix with two equal columns, and that determinant is 0 because the determinant is alternating.
* If <math>A</math> is a triangular matrix, i.e. <math>a_{ij}=0</math> whenever <math>i>j</math> or, alternatively, whenever <math>i<j</math>, then its determinant equals the product of the diagonal entries:
:<math>\det(A) = a_{11} a_{22} \cdots a_{nn} = \prod_{i=1}^n a_{ii}.</math>
: Indeed, such a matrix can be reduced, by appropriately adding multiples of the columns with fewer nonzero entries to those with more entries, to a diagonal matrix (without changing the determinant). For such a matrix, using the linearity in each column reduces the determinant to a multiple of the determinant of the identity matrix, in which case the stated formula holds by the very first characterizing property of determinants. Alternatively, this formula can also be deduced from the Leibniz formula, since the only permutation <math>\sigma</math> which gives a non-zero contribution is the identity permutation.
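Two of these consequences, checked numerically with the <code>det_leibniz</code> sketch from above (the matrices are illustrative):

<syntaxhighlight lang="python">
# Swapping two columns flips the sign of the determinant.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
A_swapped = [[row[1], row[0], row[2]] for row in A]  # swap the first two columns
assert det_leibniz(A_swapped) == -det_leibniz(A)

# For a triangular matrix, the determinant is the product of the diagonal entries.
T = [[2, 9, 9], [0, 3, 9], [0, 0, 5]]
assert det_leibniz(T) == 2 * 3 * 5
</syntaxhighlight>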

====Example====

These characterizing properties and their consequences listed above are theoretically significant, but can also be used to compute determinants for concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following concrete example illustrates the computation of the determinant of the matrix <math>A</math> using that method:

:<math>A = \begin{bmatrix}
  -2 & -1 & 2 \\
  2 & 1 & 4 \\
  -3 & 3 & -1
\end{bmatrix}. </math>

{| class="wikitable"
|+ Column operations bringing <math>A</math> to upper triangular form
|-
| Matrix || <math>B = \begin{bmatrix}
  -3 & -1 & 2 \\
  3 & 1 & 4 \\
  0 & 3 & -1
\end{bmatrix} </math> ||
<math>C = \begin{bmatrix}
  -3 & 5 & 2 \\
  3 & 13 & 4 \\
  0 & 0 & -1
\end{bmatrix} </math>
||
<math>D = \begin{bmatrix}
  5 & -3 & 2 \\
  13 & 3 & 4 \\
  0 & 0 & -1
\end{bmatrix} </math>
||
<math>E = \begin{bmatrix}
  18 & -3 & 2 \\
  0 & 3 & 4 \\
  0 & 0 & -1
\end{bmatrix} </math>
|-
| Obtained by ||
add the second column to the first
||
add 3 times the third column to the second
||
swap the first two columns
||
add <math>-\frac{13}{3}</math> times the second column to the first
|-
| Determinant || <math>|A| = |B|</math> ||
<math>|B| = |C|</math>
||
<math>|D| = -|C|</math>
||
<math>|E| = |D|</math>
|}

Combining these equalities gives <math>|A| = -|E| = -18 \cdot 3 \cdot (-1) = 54.</math>
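The same reduction strategy is what makes determinants cheap to compute in practice. The following Python sketch (names are ours) performs Gaussian elimination with row operations instead of column operations, which is justified by the transpose property in the next subsection; it tracks the sign flips caused by swaps and runs in <math>O(n^3)</math> time rather than the <math>n!</math> terms of the Leibniz formula:

<syntaxhighlight lang="python">
def det_gauss(A):
    """Determinant by Gaussian elimination on rows, tracking sign flips from swaps."""
    M = [row[:] for row in A]  # work on a copy
    n = len(M)
    det = 1.0
    for k in range(n):
        # Find a pivot in column k; if none exists, the columns are dependent.
        pivot = next((r for r in range(k, n) if M[r][k] != 0), None)
        if pivot is None:
            return 0.0
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            det = -det  # a row swap flips the sign
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= factor * M[k][c]  # adding a multiple of a row: det unchanged
        det *= M[k][k]  # triangular matrix: product of the diagonal
    return det

print(det_gauss([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))  # 54.0
</syntaxhighlight>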

===Transpose===

The determinant of the transpose of <math>A</math> equals the determinant of ''A'':

:<math>\det\left(A^\textsf{T}\right) = \det(A)</math>.

This can be proven by inspecting the Leibniz formula. This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an ''n'' × ''n'' matrix as being composed of ''n'' rows, the determinant is an ''n''-linear function.
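This invariance is easy to confirm numerically with the earlier <code>det_leibniz</code> sketch (the <code>transpose</code> helper is ours):

<syntaxhighlight lang="python">
def transpose(A):
    """Transpose of a square matrix, to check det(A^T) == det(A)."""
    return [list(row) for row in zip(*A)]

A = [[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]
assert det_leibniz(transpose(A)) == det_leibniz(A)
</syntaxhighlight>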
=== Multiplicativity and matrix groups ===

The determinant is a ''multiplicative map'': for square matrices <math>A</math> and <math>B</math> of equal size, the determinant of a matrix product equals the product of their determinants:

:<math>\det(AB) = \det (A) \det (B)</math>

This key fact can be proven by observing that, for a fixed matrix <math>B</math>, both sides of the equation are alternating and multilinear as a function depending on the columns of <math>A</math>. Moreover, they both take the value <math>\det B</math> when <math>A</math> is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim.

A matrix <math>A</math> is invertible precisely if its determinant is nonzero. This follows from the multiplicativity of <math>\det</math> and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by

:<math>\det\left(A^{-1}\right) = \frac{1}{\det(A)} = [\det(A)]^{-1}</math>.

In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size <math>n</math>) forms a group known as the general linear group <math>\operatorname{GL}_n</math> (respectively, a subgroup called the special linear group <math>\operatorname{SL}_n \subset \operatorname{GL}_n</math>). More generally, the word "special" indicates the subgroup of another matrix group consisting of matrices of determinant one. Examples include the special orthogonal group (which if ''n'' is 2 or 3 consists of all rotation matrices), and the special unitary group.

The Cauchy–Binet formula is a generalization of that product formula for ''rectangular'' matrices. This formula can also be recast as a multiplicative formula for compound matrices whose entries are the determinants of all square submatrices of a given matrix.
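A direct numerical illustration of multiplicativity, reusing the <code>det_gauss</code> sketch above (the <code>matmul</code> helper and the random test matrices are ours):

<syntaxhighlight lang="python">
import random

def matmul(A, B):
    """Product of two n-by-n matrices, to test det(AB) == det(A) * det(B)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
B = [[random.randint(-3, 3) for _ in range(3)] for _ in range(3)]
assert abs(det_gauss(matmul(A, B)) - det_gauss(A) * det_gauss(B)) < 1e-9
</syntaxhighlight>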
=== Laplace expansion ===

Laplace expansion expresses the determinant of a matrix <math>A</math> in terms of determinants of smaller matrices, known as its minors. The minor <math>M_{i,j}</math> is defined to be the determinant of the <math>(n-1) \times (n-1)</math>-matrix that results from <math>A</math> by removing the <math>i</math>-th row and the <math>j</math>-th column. The expression <math>(-1)^{i+j}M_{i,j}</math> is known as a cofactor. For every <math>i</math>, one has the equality

:<math>\det(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij} M_{ij},</math>

which is called the ''Laplace expansion along the <math>i</math>-th row''. For example, the Laplace expansion along the first row (<math>i=1</math>) gives the following formula:

:<math>
  \begin{vmatrix}a&b&c\\ d&e&f\\ g&h&i\end{vmatrix} =
  a\begin{vmatrix}e&f\\ h&i\end{vmatrix} - b\begin{vmatrix}d&f\\ g&i\end{vmatrix} + c\begin{vmatrix}d&e\\ g&h\end{vmatrix}
</math>

Unwinding the determinants of these <math>2 \times 2</math>-matrices gives back the Leibniz formula mentioned above. Similarly, the ''Laplace expansion along the <math>j</math>-th column'' is the equality

:<math>\det(A)= \sum_{i=1}^n (-1)^{i+j} a_{ij} M_{ij}.</math>

Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the Vandermonde matrix

::<math>\left|\begin{array}{ccccc}
            1 &        1 &        1 & \cdots &        1 \\
          x_1 &      x_2 &      x_3 & \cdots &      x_n \\
        x_1^2 &    x_2^2 &    x_3^2 & \cdots &    x_n^2 \\
      \vdots &    \vdots &    \vdots & \ddots &    \vdots \\
    x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & \cdots & x_n^{n-1}
  \end{array}\right| =
  \prod_{1 \leq i < j \leq n} \left(x_j - x_i\right).
</math>

This determinant has been applied, for example, in the proof of Baker's theorem in the theory of transcendental numbers.
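Laplace expansion translates directly into a short recursive routine. The Python sketch below (function name is ours) expands along the first row; it takes roughly <math>n!</math> steps, so it is only practical for small or sparse matrices:

<syntaxhighlight lang="python">
def det_laplace(A):
    """Determinant by Laplace expansion along the first row (recursive)."""
    n = len(A)
    if n == 0:
        return 1  # convention for the empty matrix
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing; sparse rows are cheap
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # drop row 0 and column j
        total += (-1) ** j * A[0][j] * det_laplace(minor)  # (-1)**j is the cofactor sign
    return total

print(det_laplace([[-2, -1, 2], [2, 1, 4], [-3, 3, -1]]))  # 54
</syntaxhighlight>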
====Adjugate matrix====

The adjugate matrix <math>\operatorname{adj}(A)</math> is the transpose of the matrix of the cofactors, that is,

: <math>(\operatorname{adj}(A))_{ij} = (-1)^{i+j} M_{ji}.</math>

For every matrix, one has

: <math>(\det A) I = A\operatorname{adj}A = (\operatorname{adj}A)\,A. </math>

Thus the adjugate matrix can be used for expressing the inverse of a nonsingular matrix:

: <math>A^{-1} = \frac 1{\det A}\operatorname{adj}A. </math>
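The cofactor definition can be coded directly on top of the <code>det_laplace</code> sketch; the identity <math>(\det A) I = A\operatorname{adj}A</math> then yields the inverse. A minimal sketch with illustrative helper names:

<syntaxhighlight lang="python">
from fractions import Fraction

def adjugate(A):
    """Adjugate: transpose of the cofactor matrix, so entry (i, j) is the (j, i) cofactor."""
    n = len(A)
    def minor_det(i, j):  # determinant of A with row i and column j removed
        M = [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]
        return det_laplace(M)
    return [[(-1) ** (i + j) * minor_det(j, i) for j in range(n)] for i in range(n)]

def inverse(A):
    """A^{-1} = adj(A) / det(A), exact over the rationals; requires det(A) != 0."""
    d = Fraction(det_laplace(A))
    return [[Fraction(entry) / d for entry in row] for row in adjugate(A)]
</syntaxhighlight>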
=== Block matrices ===

The formula for the determinant of a <math>2 \times 2</math>-matrix above continues to hold, under appropriate further assumptions, for a block matrix, i.e., a matrix composed of four submatrices <math>A, B, C, D</math> of dimension <math>n \times n</math>, <math>n \times m</math>, <math>m \times n</math> and <math>m \times m</math>, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is

:<math>\det\begin{pmatrix}A& 0\\ C& D\end{pmatrix} = \det(A) \det(D) = \det\begin{pmatrix}A& B\\ 0& D\end{pmatrix}.</math>

If <math>A</math> is invertible (and similarly if <math>D</math> is invertible), one has

:<math>\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \det(A) \det\left(D - C A^{-1} B\right) .</math>

If <math>D</math> is a <math>1 \times 1</math>-matrix, this simplifies to <math>\det (A) \left(D - CA^{-1}B\right)</math>.

If the blocks are square matrices of the ''same'' size, further formulas hold. For example, if <math>C</math> and <math>D</math> commute (i.e., <math>CD=DC</math>), then

:<math>\det\begin{pmatrix}A& B\\ C& D\end{pmatrix} = \det(AD - BC).</math>

This formula has been generalized to matrices composed of more than <math>2 \times 2</math> blocks, again under appropriate commutativity conditions among the individual blocks.

For <math>A = D</math> and <math>B=C</math>, the following formula holds (even if <math>A</math> and <math>B</math> do not commute):

:<math>\det\begin{pmatrix}A& B\\ B& A\end{pmatrix} = \det(A - B) \det(A + B).</math>
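A quick numerical check of the block-triangular formula, assembling the block matrix explicitly (the blocks chosen here are illustrative):

<syntaxhighlight lang="python">
A = [[2, 1], [0, 3]]  # n x n block
D = [[1, 4], [2, 1]]  # m x m block
C = [[5, 6], [7, 8]]  # m x n block, arbitrary

# Assemble the block matrix [[A, 0], [C, D]] as a plain 4x4 matrix.
top = [rowA + [0] * len(D) for rowA in A]
bottom = [rowC + rowD for rowC, rowD in zip(C, D)]
M = top + bottom

assert det_laplace(M) == det_laplace(A) * det_laplace(D)
</syntaxhighlight>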
=== Sylvester's determinant theorem ===

Sylvester's determinant theorem states that for ''A'', an ''m'' × ''n'' matrix, and ''B'', an ''n'' × ''m'' matrix (so that ''A'' and ''B'' have dimensions allowing them to be multiplied in either order forming a square matrix):

:<math>\det\left(I_\mathit{m} + AB\right) = \det\left(I_\mathit{n} + BA\right),</math>

where ''I''<sub>''m''</sub> and ''I''<sub>''n''</sub> are the ''m'' × ''m'' and ''n'' × ''n'' identity matrices, respectively.

From this general result several consequences follow.

:a. For the case of a column vector ''c'' and a row vector ''r'', each with ''m'' components, the formula allows quick calculation of the determinant of a matrix that differs from the identity matrix by a matrix of rank 1:

::<math>\det\left(I_\mathit{m} + cr\right) = 1 + rc.</math>

:b. More generally, for any invertible ''m'' × ''m'' matrix ''X'',

::<math>\det(X + AB) = \det(X) \det\left(I_\mathit{n} + BX^{-1}A\right).</math>

:c. For a column and row vector as above:

:: <math>\det(X + cr) = \det(X) \det\left(1 + rX^{-1}c\right) = \det(X) + r\,\operatorname{adj}(X)\,c.</math>

:d. For square matrices <math>A</math> and <math>B</math> of the same size, the matrices <math>AB</math> and <math>BA</math> have the same characteristic polynomials (hence the same eigenvalues).
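The theorem is easy to test on rectangular matrices, reusing the <code>det_laplace</code> sketch; the helpers <code>matmul_rect</code> and <code>plus_identity</code> are ours:

<syntaxhighlight lang="python">
A = [[1, 2, 0], [0, 1, 3]]    # m x n with m = 2, n = 3
B = [[1, 0], [2, 1], [0, 4]]  # n x m

def matmul_rect(X, Y):
    """Product of a (p x q) and a (q x r) matrix."""
    p, q, r = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(q)) for j in range(r)] for i in range(p)]

def plus_identity(M):
    """I + M for a square matrix M."""
    return [[M[i][j] + (1 if i == j else 0) for j in range(len(M))] for i in range(len(M))]

# det(I_m + AB) == det(I_n + BA), a 2x2 determinant matching a 3x3 one.
assert det_laplace(plus_identity(matmul_rect(A, B))) == \
       det_laplace(plus_identity(matmul_rect(B, A)))
</syntaxhighlight>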
===Sum===

The determinant of the sum <math>A+B</math> of two square matrices of the same size is not in general expressible in terms of the determinants of ''A'' and of ''B''. However, for positive semidefinite matrices <math>A</math>, <math>B</math> and <math>C</math> of equal size,

:<math>\det(A + B + C) + \det(C) \geq \det(A + C) + \det(B + C),</math>

with the corollary

:<math>\det(A + B) \geq \det(A) + \det(B).</math>

==Resources==
* [https://www.khanacademy.org/math/linear-algebra/matrix-transformations/inverse-of-matrices/v/linear-algebra-rule-of-sarrus-of-determinants Rule of Sarrus of Determinants], Khan Academy

== Licensing ==
Content obtained and/or adapted from:
* [https://en.wikipedia.org/wiki/Determinant Determinant, Wikipedia] under a CC BY-SA license