Inverse of an n-by-n matrix
An n-by-n matrix A is the inverse of the n-by-n matrix B (and B the inverse of A) if BA = AB = I,
where I is the identity matrix.
The inverse of an n-by-n matrix can be calculated by creating an n-by-2n matrix which has the original matrix on the left and the identity matrix on the right.  Row reduce this matrix and the right half will be the inverse.  If the left half cannot be reduced to the identity (i.e., a row is formed with all zeroes as its entries), the matrix does not have an inverse.
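This procedure can be sketched in code. The following Python function is a minimal sketch (the name `invert` and the use of exact `Fraction` arithmetic are our choices, not part of the text): it augments the matrix with the identity, Gauss-Jordan reduces, and reads off the inverse from the right half, or reports failure when no pivot can be found.

```python
from fractions import Fraction

def invert(A):
    """Invert an n-by-n matrix by row reducing the augmented matrix [A | I].

    Returns the inverse as a list of rows of Fractions, or None if A is
    singular (the left half cannot be reduced to the identity).
    """
    n = len(A)
    # Build the n-by-2n matrix [A | I] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a pivot row at or below the diagonal, swapping if needed.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None          # a zero column: no inverse exists
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half is now the inverse.
    return [row[n:] for row in M]
```

For instance, `invert([[1, 1], [2, -1]])` reproduces the reduction worked out in Example 3, and a singular input such as `[[1, 1], [2, 2]]` returns `None`, matching Example 5.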
Example 1
Let  
We begin by expanding and partitioning A to include the identity matrix, and then proceed to row reduce A until we reach the identity matrix on the left-hand side.
 
 
 
The right half of the reduced matrix is then the inverse of the original matrix A.
Inverse of a Linear Transformation
We now consider how to represent the inverse of a linear map. We start by recalling some facts about function 
inverses. Some functions have no inverse, or have an inverse on the left side 
or right side only.
Definitions
Where $\pi:\mathbb{R}^3\to\mathbb{R}^2$ is the projection map

$$(x,y,z)\mapsto(x,y)$$

and $\eta:\mathbb{R}^2\to\mathbb{R}^3$ is the embedding

$$(x,y)\mapsto(x,y,0)$$

the composition $\pi\circ\eta$ is the identity map on $\mathbb{R}^2$.
We say $\pi$ is a left inverse map of $\eta$ or, what is the same thing, that $\eta$ is a right inverse map of $\pi$. However, composition in the other order, $\eta\circ\pi$, doesn't give the identity map; here is a vector that is not sent to itself under $\eta\circ\pi$:

$$(\eta\circ\pi)\,(0,0,1)=(0,0,0)$$
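These maps are small enough to check directly. A quick Python sketch (the function names `pi` and `eta` are just transliterations of the maps above):

```python
# The projection pi : R^3 -> R^2 and the embedding eta : R^2 -> R^3.

def pi(v):
    """Projection: (x, y, z) -> (x, y)."""
    x, y, z = v
    return (x, y)

def eta(w):
    """Embedding: (x, y) -> (x, y, 0)."""
    x, y = w
    return (x, y, 0)

# pi composed after eta is the identity on R^2 ...
assert pi(eta((4, -7))) == (4, -7)

# ... but eta composed after pi is not the identity on R^3:
assert eta(pi((0, 0, 1))) == (0, 0, 0)   # (0, 0, 1) is not sent to itself
```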
 
In fact, the projection $\pi$ has no left inverse at all. For, if $f$ were to be a left inverse of $\pi$ then we would have

$$(x,y,z)\;\stackrel{\pi}{\longmapsto}\;(x,y)\;\stackrel{f}{\longmapsto}\;(x,y,z)$$

for all of the infinitely many $z$'s. But no function $f$ can send a single argument to more than one value.
(An example of a function with no inverse on either side
is the zero transformation on $\mathbb{R}^2$.)
Some functions have a two-sided inverse map, another function that is the inverse of the first, both from the left and from the right. For instance, the map given by $\vec{v}\mapsto 2\vec{v}$ has the two-sided inverse $\vec{v}\mapsto\vec{v}/2$.
In this subsection we will focus on two-sided inverses. The appendix shows that a function has a two-sided inverse if and only if it is both one-to-one and onto. The appendix also shows that if a function $f$ has a two-sided inverse then it is unique, and so it is called "the" inverse, and is denoted $f^{-1}$. So our purpose in this subsection is, where a linear map $h$ has an inverse, to find the relationship between $\mathrm{Rep}_{B,D}(h)$ and $\mathrm{Rep}_{D,B}(h^{-1})$.
A matrix $G$ is a left inverse matrix of the matrix $H$ if $GH$ is the identity matrix. It is a right inverse matrix if $HG$ is the identity. A matrix $H$ with a two-sided inverse is an invertible matrix. That two-sided inverse is called the inverse matrix and is denoted $H^{-1}$.
Because of the correspondence between linear maps and matrices, statements about map inverses translate into statements about matrix inverses.
Lemmas and Theorems
- If a matrix has both a left inverse and a right inverse then the two are equal.
- A matrix is invertible if and only if it is nonsingular.
- Proof: (For both results.) Given a matrix $H$, fix spaces of appropriate dimension for the domain and codomain. Fix bases for these spaces. With respect to these bases, $H$ represents a map $h$. The statements are true about the map and therefore they are true about the matrix.
 
- A product of invertible matrices is invertible: if $G$ and $H$ are invertible and if $GH$ is defined then $GH$ is invertible and $(GH)^{-1}=H^{-1}G^{-1}$.
- Proof: (This is just like the prior proof except that it requires two maps.) Fix appropriate spaces and bases and consider the represented maps $g$ and $h$. Note that $h^{-1}g^{-1}$ is a two-sided map inverse of $gh$ since $(h^{-1}g^{-1})(gh)=h^{-1}(\mathrm{id})h=h^{-1}h=\mathrm{id}$ and $(gh)(h^{-1}g^{-1})=g(\mathrm{id})g^{-1}=gg^{-1}=\mathrm{id}$. This equality is reflected in the matrices representing the maps, as required.
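The product rule is easy to spot-check numerically. Below is a minimal Python sketch using the standard $2\times 2$ determinant formula for the individual inverses (the helper names `matmul` and `inv2` and the particular matrices are our choices for illustration):

```python
from fractions import Fraction

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the ad - bc formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

G = [[1, 1], [0, 1]]
H = [[2, 0], [1, 1]]

# (GH)^-1 equals H^-1 G^-1 -- note the reversed order of the factors.
assert inv2(matmul(G, H)) == matmul(inv2(H), inv2(G))

# The un-reversed order G^-1 H^-1 generally fails:
assert inv2(matmul(G, H)) != matmul(inv2(G), inv2(H))
```

The order reversal is the same phenomenon as with function composition: to undo "first $h$, then $g$" one must undo $g$ first.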
 
Here is the arrow diagram giving the relationship
between map inverses and matrix inverses. 
It is a special case
of the diagram for function composition and matrix multiplication.
 
Beyond its place in our general program of 
seeing how to represent map operations, 
another reason for our interest in inverses comes from solving
linear systems.
A linear system is equivalent to a matrix equation, as here.

$$\begin{array}{rcl}x_1+x_2&=&3\\2x_1-x_2&=&2\end{array}\qquad\Longleftrightarrow\qquad\begin{pmatrix}1&1\\2&-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}3\\2\end{pmatrix}$$
By fixing spaces and bases (e.g., $\mathbb{R}^2$, $\mathbb{R}^2$ and $\mathcal{E}_2$, $\mathcal{E}_2$), we take the matrix $H$ to represent some map $h$. Then solving the system is the same as asking: what domain vector $\vec{x}$ is mapped by $h$ to the result $\vec{d}\,$? If we could invert $h$ then we could solve the system by multiplying $\mathrm{Rep}_{D,B}(h^{-1})\cdot\mathrm{Rep}_{D}(\vec{d})$ to get $\mathrm{Rep}_{B}(\vec{x})$.
Example 2
We can find a left inverse for the matrix just given

$$\begin{pmatrix}1&1\\2&-1\end{pmatrix}$$

by guessing a left inverse $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and using Gauss' method to solve the resulting linear system.

$$\begin{array}{rcl}a+2b&=&1\\a-b&=&0\\c+2d&=&0\\c-d&=&1\end{array}$$

Answer: $a=1/3$, $b=1/3$, $c=2/3$, and $d=-1/3$.
This matrix is actually the two-sided inverse of the original matrix,
as can easily be checked.
With it we can solve the system above by
applying the inverse.

$$\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1/3&1/3\\2/3&-1/3\end{pmatrix}\begin{pmatrix}3\\2\end{pmatrix}=\begin{pmatrix}5/3\\4/3\end{pmatrix}$$
 
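As a concrete check of this computation, here is a short Python sketch with exact rational arithmetic (the helper name `apply` is ours, and we assume the system's right-hand side is $(3, 2)$ as discussed in this section):

```python
from fractions import Fraction as F

# Inverse of the coefficient matrix ((1, 1), (2, -1)), from the discussion.
H_inv = [[F(1, 3), F(1, 3)], [F(2, 3), F(-1, 3)]]

def apply(M, v):
    """Matrix-vector product."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# Solving H x = d takes just one multiplication by the inverse.
d = [3, 2]
x = apply(H_inv, d)
assert x == [F(5, 3), F(4, 3)]

# Check against the original equations x1 + x2 = 3 and 2*x1 - x2 = 2.
assert x[0] + x[1] == 3 and 2 * x[0] - x[1] == 2
```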
Why solve systems this way, when Gauss' method takes less arithmetic (this assertion can be made precise by counting the number of arithmetic operations, as computer algorithm designers do)? Beyond its conceptual appeal of fitting into our program of discovering how to represent the various map operations, solving linear systems by using the matrix inverse has at least two advantages.
First, once the work of finding an inverse has been done, solving a system with the same coefficients but different constants is easy and fast: if we change the entries on the right of the system above then we get a related problem

$$\begin{pmatrix}1&1\\2&-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}5\\1\end{pmatrix}$$

with a related solution method.

$$\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1/3&1/3\\2/3&-1/3\end{pmatrix}\begin{pmatrix}5\\1\end{pmatrix}=\begin{pmatrix}2\\3\end{pmatrix}$$
In applications, solving many systems having the same matrix of
coefficients is common.
Another advantage of inverses is that we can
explore a system's sensitivity to changes in the constants.
For example, tweaking the $3$ on the right of the system above to

$$\begin{pmatrix}1&1\\2&-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}3.01\\2\end{pmatrix}$$

can be solved with the inverse

$$\begin{pmatrix}1/3&1/3\\2/3&-1/3\end{pmatrix}\begin{pmatrix}3.01\\2\end{pmatrix}=\begin{pmatrix}5/3+0.01/3\\4/3+0.02/3\end{pmatrix}$$

to show that $x_1$ changes by $1/3$ of the tweak while $x_2$ moves by $2/3$ of that tweak. This sort of analysis is used, for example, to decide how accurately data must be specified in a linear model to ensure that the solution has a desired accuracy.
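The same sensitivity computation can be run in code (a sketch with exact arithmetic, assuming the tweak changes a right-hand constant $3$ to $3.01$; the helper name `apply` is ours):

```python
from fractions import Fraction as F

# Inverse of the coefficient matrix ((1, 1), (2, -1)).
H_inv = [[F(1, 3), F(1, 3)], [F(2, 3), F(-1, 3)]]

def apply(M, v):
    """Matrix-vector product."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

base    = apply(H_inv, [3, 2])                 # original constants
tweaked = apply(H_inv, [F(301, 100), 2])       # 3 tweaked to 3.01

# x1 moves by 1/3 of the 0.01 tweak, x2 by 2/3 of it:
assert tweaked[0] - base[0] == F(1, 100) * F(1, 3)
assert tweaked[1] - base[1] == F(1, 100) * F(2, 3)
```

Note that the sensitivities $1/3$ and $2/3$ are exactly the first column of the inverse: tweaking the $i$-th constant moves the solution along the $i$-th column of $H^{-1}$.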
We finish by describing the computational procedure
usually used to find the inverse matrix.
- A matrix is invertible if and only if it can be written as the product of elementary reduction matrices. The inverse can be computed by applying to the identity matrix the same row steps, in the same order, as are used to Gauss-Jordan reduce the invertible matrix.
- Proof: A matrix $H$ is invertible if and only if it is nonsingular and thus Gauss-Jordan reduces to the identity. This reduction can be done with elementary matrices

$$R_r\cdot R_{r-1}\cdots R_1\cdot H=I$$
- This equation gives the two halves of the result.
 
 
- First, elementary matrices are invertible and their inverses are also elementary. Applying $R_r^{-1}$ to the left of both sides of that equation, then $R_{r-1}^{-1}$, etc., gives $H$ as the product of elementary matrices $H=R_1^{-1}\cdots R_r^{-1}\cdot I$ (the $I$ is here to cover the trivial $r=0$ case).
 
 
- Second, matrix inverses are unique and so comparison of the above equation with $H^{-1}H=I$ shows that $H^{-1}=R_r\cdot R_{r-1}\cdots R_1\cdot I$. Therefore, applying $R_1$ to the identity, followed by $R_2$, etc., yields the inverse of $H$.
 
 
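This recipe can be checked concretely on the $2\times 2$ matrix of Example 3 below: encode each row step as an elementary matrix, and their product (applied to the identity) is the inverse. A Python sketch (the helper name `matmul` is ours):

```python
from fractions import Fraction as F

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1, 1], [2, -1]]

# The three row steps of Example 3 as elementary matrices:
R1 = [[1, 0], [-2, 1]]          # -2*row1 + row2
R2 = [[1, 0], [0, F(-1, 3)]]    # -1/3 * row2
R3 = [[1, -1], [0, 1]]          # -row2 + row1

# Applying them in order reduces H to the identity ...
assert matmul(R3, matmul(R2, matmul(R1, H))) == [[1, 0], [0, 1]]

# ... so their product, applied to the identity, is exactly H^-1.
H_inv = matmul(R3, matmul(R2, R1))
assert H_inv == [[F(1, 3), F(1, 3)], [F(2, 3), F(-1, 3)]]
```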
Example 3
To find the inverse of

$$\begin{pmatrix}1&1\\2&-1\end{pmatrix}$$
we do Gauss-Jordan reduction, meanwhile performing the same operations on
the identity.
For clerical convenience we write the matrix and the identity side-by-side,
and do the reduction steps together.
$$\begin{array}{rcl}\left(\begin{array}{cc|cc}1&1&1&0\\2&-1&0&1\end{array}\right)&\xrightarrow{-2\rho_1+\rho_2}&\left(\begin{array}{cc|cc}1&1&1&0\\0&-3&-2&1\end{array}\right)\\&\xrightarrow{-1/3\,\rho_2}&\left(\begin{array}{cc|cc}1&1&1&0\\0&1&2/3&-1/3\end{array}\right)\\&\xrightarrow{-\rho_2+\rho_1}&\left(\begin{array}{cc|cc}1&0&1/3&1/3\\0&1&2/3&-1/3\end{array}\right)\end{array}$$
This calculation has found the inverse.

$$\begin{pmatrix}1&1\\2&-1\end{pmatrix}^{-1}=\begin{pmatrix}1/3&1/3\\2/3&-1/3\end{pmatrix}$$
Example 4
This one happens to start with a row swap.
$$\begin{array}{rcl}\left(\begin{array}{ccc|ccc}0&3&-1&1&0&0\\1&0&1&0&1&0\\1&-1&0&0&0&1\end{array}\right)&\xrightarrow{\rho_1\leftrightarrow\rho_2}&\left(\begin{array}{ccc|ccc}1&0&1&0&1&0\\0&3&-1&1&0&0\\1&-1&0&0&0&1\end{array}\right)\\&\xrightarrow{-\rho_1+\rho_3}&\left(\begin{array}{ccc|ccc}1&0&1&0&1&0\\0&3&-1&1&0&0\\0&-1&-1&0&-1&1\end{array}\right)\\&\vdots&\\&\xrightarrow{}&\left(\begin{array}{ccc|ccc}1&0&0&1/4&1/4&3/4\\0&1&0&1/4&1/4&-1/4\\0&0&1&-1/4&3/4&-3/4\end{array}\right)\end{array}$$
Example 5
A non-invertible matrix is detected by the fact that the left half won't
reduce to the identity.
$$\left(\begin{array}{cc|cc}1&1&1&0\\2&2&0&1\end{array}\right)\xrightarrow{-2\rho_1+\rho_2}\left(\begin{array}{cc|cc}1&1&1&0\\0&0&-2&1\end{array}\right)$$
This procedure will find the inverse of a general $n\times n$ matrix.
The $2\times 2$ case is handy.
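For reference, that handy $2\times 2$ case is the familiar determinant formula:

$$\begin{pmatrix}a&b\\c&d\end{pmatrix}^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix},\qquad ad-bc\neq 0$$

It can be verified by multiplying either order of the product out to $(ad-bc)/(ad-bc)$ times the identity, and it agrees with Example 3: there $ad-bc=(1)(-1)-(1)(2)=-3$, giving $\frac{1}{-3}\begin{pmatrix}-1&-1\\-2&1\end{pmatrix}=\begin{pmatrix}1/3&1/3\\2/3&-1/3\end{pmatrix}$.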