The Column Space and Nullspace of a Linear Transformation

From Department of Mathematics at UTSA
Revision as of 20:17, 14 November 2021 by Khanh (talk | contribs)

Nullspace

The null space is one of the three important vector spaces associated with a matrix of order m x n (the other two being the column space and the row space). Null spaces also arise naturally in the study of linear transformations.

Range

Let T be a linear transformation from an m-dimensional vector space X to an n-dimensional vector space Y, let x1, x2, x3, ..., xm be a basis for X and let y1, y2, y3, ..., yn be a basis for Y, and consider its corresponding n × m matrix A = (aij), whose entries are defined by

T(xj) = a1j y1 + a2j y2 + ... + anj yn, for j = 1, 2, ..., m.

The image of X, T(X), is called the range of T. T(X) is clearly a subspace of Y.

Since any element x within X can be expressed as

x = c1x1 + c2x2 + ... + cmxm,

we have

T(x) = c1T(x1) + c2T(x2) + ... + cmT(xm),

implying that the range of T is the vector space spanned by the vectors T(xi), whose coordinates are given by the columns of the matrix. By a theorem proven earlier, the dimension of the vector space spanned by those vectors is equal to the maximum number of vectors that are linearly independent. Since the linear dependence of the columns of the matrix is the same as the linear dependence of the vectors T(xi), this dimension is equal to the maximum number of linearly independent columns, which is the rank. We have the following important conclusion:

The dimension of the range of a linear transformation is equal to the rank of its corresponding matrix.
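This conclusion can be checked computationally. The following Python sketch (using the sympy library, assuming it is available; the matrix is a made-up example, not one from the text) compares the dimension of the span of the columns with the rank:

```python
from sympy import Matrix

# A made-up example matrix (its second column is twice the first).
A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

# columnspace() returns a basis for the span of the columns,
# i.e. for the range of the transformation x -> Ax.
basis = A.columnspace()

# The dimension of the range equals the rank of the matrix.
print(len(basis))   # dimension of the range
print(A.rank())     # rank of A; the two numbers agree
```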

Null Space

The null space of an m x n matrix A is the set of all vectors x satisfying Ax = 0.

For example, consider the matrix:

The null space of this matrix consists of the set:

It may not be immediately obvious how we found this set, but it can readily be checked that any element of this set indeed gives the zero vector on being multiplied by A.

Null Space as a vector space

It is easy to show that the null space is in fact a vector space. If we identify an n x 1 column matrix with an element of the n-dimensional Euclidean space then the null space becomes a subspace of that space with the usual operations. The null space may also be treated as a subspace of the vector space of all n x 1 column matrices with matrix addition and scalar multiplication of a matrix as the two operations.

To show that the null space is indeed a vector space it is sufficient to show that

A(x + y) = 0 whenever Ax = 0 and Ay = 0,

and

A(cx) = 0 whenever Ax = 0, for any scalar c.

These are true due to the distributive law of matrix multiplication. The details of the proof are left to the reader as an exercise.
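The fact that the null space is closed under addition and scalar multiplication can be verified numerically for a concrete matrix. A minimal Python sketch using the sympy library (assuming it is available; the matrix and the null space elements are made-up examples):

```python
from sympy import Matrix

# A made-up matrix with a non-trivial null space:
# the second column is twice the first.
A = Matrix([[1, 2],
            [3, 6]])

x = Matrix([2, -1])    # a null space element: A*x = 0
y = Matrix([-4, 2])    # another null space element: A*y = 0
zero = Matrix([0, 0])

assert A * x == zero and A * y == zero

# Closure under addition and under scalar multiplication:
assert A * (x + y) == zero   # A(x + y) = Ax + Ay = 0
assert A * (5 * x) == zero   # A(cx) = c(Ax) = 0
print("null space closure verified")
```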

Properties

Null spaces of row equivalent matrices

If A and B are two row equivalent matrices then they share the same null space. This fact, a small theorem in its own right, can be proved as follows:

Suppose x is an element of the null space of A. Then Ax = 0. Also, since A is row equivalent to B, we have B = Ek...E2E1A, where each Ei is an elementary matrix. (Recall that an elementary matrix is the matrix obtained from I by performing a single elementary row operation.) Now,

Bx = Ek...E2E1Ax = Ek...E2E1 0 = 0,

and so x is in the null space of B as well. So the null space of A is contained in that of B. Similarly the null space of B is contained in that of A. It is now clear that A and B have the same null space.
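This little theorem can be illustrated in Python with sympy (assuming the library is available; the matrix and the elementary row operations below are made-up examples): we build B from A by multiplying with elementary matrices and check that each matrix annihilates the null space basis vectors of the other.

```python
from sympy import Matrix

# A made-up 2 x 3 matrix.
A = Matrix([[1, 2, 1],
            [2, 4, 0]])

# Elementary matrices: a row swap, then add -2 times the
# (new) first row to the second row.
E1 = Matrix([[0, 1], [1, 0]])
E2 = Matrix([[1, 0], [-2, 1]])
B = E2 * E1 * A   # B is row equivalent to A

zero = Matrix([0, 0])
# Every null space basis vector of A is killed by B, and vice versa.
for v in A.nullspace():
    assert B * v == zero
for v in B.nullspace():
    assert A * v == zero
print("A and B have the same null space")
```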

Basis of Null Space

As the null space of a matrix is a vector space, it is natural to wonder what its basis will be. Of course, since the null space of an m x n matrix A is a subspace of R^n, its basis can have at most n elements in it. The number of elements in the basis of the null space is important and is called the nullity of A. To find a basis of the null space of A we follow these steps:

  1. First convert the given matrix into row echelon form, say U.
  2. Next circle the first non-zero entry in each row (these are the pivots).
  3. Call the variable x1 a basic variable if the first column has a circled entry, and call it a free variable if the first column doesn't have a circled entry. Similarly call x2 basic if the second column has a circled entry and free otherwise. In this way name all n variables x1, x2, ..., xn.
  4. If for any i, xi is a free variable, then let si be the solution obtained by solving the system Ux = 0 where all the free variables are set to 0, except for xi which is set to 1. If xi is not a free variable, don't do anything.
  5. Repeat the above step for all the free variables, getting a vector si for each free variable xi in the process.
  6. The set of all these vectors si is the required basis.

The key point in the above algorithm was that A and U have the same null space. For a complete proof of why the algorithm works we refer the reader to the excellent text book given in the references by Hoffman and Kunze.
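The algorithm above can be sketched in Python. The sketch below uses sympy's reduced row echelon form (which makes reading off each solution direct) rather than a plain row echelon form; the matrix is a made-up example, and the routine is an illustration under those assumptions rather than the text's own implementation:

```python
from sympy import Matrix, zeros

def nullspace_basis(A):
    """Basis of the null space via the free-variable algorithm."""
    n = A.cols
    U, pivots = A.rref()   # pivot columns correspond to basic variables
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        v = zeros(n, 1)
        v[f, 0] = 1        # this free variable is 1, the others stay 0
        # In reduced row echelon form, row i reads
        #   x_{pivots[i]} + (sum over free j of U[i, j] * x_j) = 0,
        # so each basic variable can be read off directly.
        for i, p in enumerate(pivots):
            v[p, 0] = -U[i, f]
        basis.append(v)
    return basis

# A made-up example matrix (third row = first row + second row).
A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 2],
            [1, 2, 1, 3]])

basis = nullspace_basis(A)
for v in basis:
    assert A * v == zeros(3, 1)   # every basis vector solves Ax = 0
print(len(basis))   # the nullity of A
```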

Let us look at an example:

Suppose

The first step involves reducing A to its row echelon form U.

Now

We encircle the first non zero entries in each row by brackets:


Clearly two of the variables are free and the other two are basic. Now we shall solve the system Ux = 0 with the first free variable set to 1 and the second set to 0 to get the first basis vector. Thus we need to solve,

This reduces to the following system on matrix multiplication:

From here the values of the basic variables can be read off, giving the first basis vector. Similarly, setting the second free variable to 1 (and the first to 0) gives the second basis vector.

The set consisting of these two vectors is the basis of the null space, and the nullity of the matrix A is 2. In fact this method gives us a way to describe the null space as well: it is precisely the set of all linear combinations of the two basis vectors. (Why? Because any linear combination of solutions is also a solution.)

Implications of nullity being zero

The example given above gives no hint as to what happens when there are no free variables in the row echelon form of A. All we said in step 4 of our algorithm was that if a variable is not free then we do nothing. Following that logic, if no variable is free then we keep on doing nothing, leading to the conclusion that if no variable is free then the basis of the null space is the empty set. In that case we say that the nullity of A is 0. Note that the null space itself is not empty; it contains precisely one element, namely the zero vector.

Now suppose that A is any matrix of order m x n with columns C1, C2, ..., Cn. Each Ci is a vector in the m-dimensional space. If the nullity of A is zero, then it follows that Ax = 0 has only the zero vector as its solution.

More precisely,

x1C1 + x2C2 + ... + xnCn = 0

has only the trivial solution x1 = x2 = ... = xn = 0. This implies that nullity being zero makes it necessary for the columns of A to be linearly independent. By retracing our steps we can show that the converse is true as well.

Let us examine the special case of a square matrix, i.e. when m = n. Now if the nullity is zero then there is no free variable in the row reduced echelon form of the matrix A, which is say U. Hence each row contains a pivot, or a leading non-zero entry. In that case each of the n columns contains a pivot equal to 1, with zeros above and below it, so U must precisely be the identity matrix I. Conversely, if A is row equivalent to I then Ax = 0 and Ix = 0 have the same solutions, due to their being equivalent. Since Ix = 0 has only the trivial solution x = 0, so does Ax = 0. It follows that the null space of A is merely {0} and so the nullity of A is 0.

Thus nullity of A is 0 if and only if A is row equivalent to I.

Now if A is row equivalent to I then A = E1E2...Ek, where each Ei is an elementary matrix. Since a product of invertible matrices is invertible and each Ei is invertible, A is invertible. Conversely, if A is invertible, and U is its row reduced echelon form, then U = Ek...E2E1A, which is clearly invertible (by virtue of being a product of invertible matrices). Now a matrix containing a zero row can never be invertible (why?), so U has a pivot in each row. It follows that there are n pivots, all equal to 1, with zeros above and below them, and so U = I. Thus A is row equivalent to I.

In summary, A is row equivalent to I if and only if A is invertible.

We can collect the entire argument of this section to state the following:

Theorem: For a square matrix of order n, the following are equivalent:

  1. A is invertible.
  2. Nullity of A is 0.
  3. A is row equivalent to the identity matrix.
  4. Columns of A are linearly independent.
  5. The system Ax = 0 has only the trivial solution.
  6. A is a product of elementary matrices.

It will be a good exercise for the reader at this stage to try to rewrite the proof of the theorem in detail.
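Several of these equivalences can be checked side by side in Python with sympy (assuming the library is available; the two matrices below are made-up examples, one invertible and one singular):

```python
from sympy import Matrix, eye

# Made-up examples: A is invertible, S is singular.
A = Matrix([[2, 1], [1, 1]])
S = Matrix([[1, 2], [2, 4]])

# Invertible if and only if nullity is 0:
assert A.det() != 0 and len(A.nullspace()) == 0
assert S.det() == 0 and len(S.nullspace()) == 1

# Invertible if and only if row equivalent to the identity:
assert A.rref()[0] == eye(2)
assert S.rref()[0] != eye(2)

# Columns independent if and only if Ax = 0 is only trivially solvable:
assert A.rank() == 2
assert S.rank() == 1
print("equivalences verified")
```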

Exercises

  1. Evaluate null spaces and bases for:
  2. Show that null space of a matrix is a vector space.
  3. Prove the theorem regarding the invertibility of a square matrix. Also, by showing that A is invertible if and only if its transpose is, show that the condition that the rows are linearly independent can be added to the list.
  4. Is the solution set for Ax = b where b is a non zero vector (i.e. has at least one component non zero) a vector space? Give reasons.
  5. Let r be the number of basic variables associated with a square matrix A of order n (which is equal to the number associated with its row echelon form). Show that A is invertible if and only if r = n.

Column and Row Spaces

The column space is an important vector space used in studying an m x n matrix. If we consider multiplication by a matrix as a sort of transformation that the vectors undergo, then the null space and the column space are the two natural collections of vectors which need to be studied to understand how this transformation works. While the null space consists of those vectors which vanish under the action of the matrix (i.e. the solutions of Ax = 0), the column space corresponds to the transformed vectors themselves (i.e. all vectors Ax). It is the totality of all the vectors after the transformation. Those readers who have studied abstract algebra can relate the null space to the kernel of a homomorphism and the column space to the range.

Another important space associated with the matrix is the row space. Like its name suggests it is built entirely out of the rows of the matrix. We shall later see that the row space can be identified with the column space in a particular sense. In the special case of an invertible matrix, the row space and the column space are exactly equal.

The Column Space

Given an m x n matrix A the column space of A, denoted by C(A), is the collection of all the vectors formed by linear combinations of the columns of A.

More precisely, if the columns of A are C1, C2, ..., Cn, where each Ci is an m-dimensional vector, then

C(A) = {x1C1 + x2C2 + ... + xnCn : x1, x2, ..., xn are scalars}.

The column space is thus essentially built out of the columns of the matrix. It is the linear span of the columns and in that capacity is also a vector space with the standard operations. (Recall that the span of any collection of vectors is always a vector space.) Also, as the columns are vectors in the m-dimensional space, the column space is naturally a subspace of R^m.

Clearly if we let

x = (x1, x2, ..., xn)^T,

then we can say that

Ax = x1C1 + x2C2 + ... + xnCn.

Hence our definition can be modified to

C(A) = {Ax : x is a vector in R^n}.

This gives us the idea that the column space is precisely the set of transformed vectors under multiplication by the matrix.

Basis for the column space

We shall now prove a theorem regarding the basis for the column space of a matrix. This constructive proof will also allow us to state a very important theorem called the rank-nullity theorem as a corollary.

Theorem: The columns of the m x n matrix A corresponding to the basic variables form a basis of the column space of A.

Proof: First note that the column space of A is the span of all the columns C1, C2, ..., Cn. Take any vector s = (s1, s2, ..., sn) in the basis of the null space of A constructed earlier, say the one obtained by setting the free variable xi equal to 1 and all the other free variables equal to 0. Then as

As = s1C1 + s2C2 + ... + snCn = 0,

the column Ci is a linear combination of the columns associated with the basic variables. (Note that the contribution of the other free variable columns is 0 and that of the column Ci is 1.) In this way all the free variable associated columns are linear combinations of the basic variable associated columns. So the basic variable linked columns span the entire column space.

Now suppose that the row echelon form of A is U. If we take the basic variable columns, the submatrix obtained from A is A1 and the one obtained from U is U1. Clearly U1 is the row echelon form of A1 (see exercises) and so the two share the same null space. Now U1 has only basic variables associated with it, because we took only the basic variable columns from A to form A1, and so its nullity is zero. This tells us that U1x = 0 has only the trivial solution. Hence A1x = 0 also has only the trivial solution and so the nullity of A1 is zero as well. It follows that the columns of A1, which were the basic variable linked columns of A, are linearly independent.

Thus the basic variable columns are both linearly independent and spanning as a result of which they form a basis of the column space. Q.E.D
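The theorem can be illustrated computationally. The Python sketch below (using sympy, assuming it is available; the matrix is a made-up example) extracts the pivot (basic-variable) columns and checks that they are independent and span the column space:

```python
from sympy import Matrix

# A made-up example matrix.
A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 1],
            [3, 6, 1, 1]])

U, pivots = A.rref()   # pivots holds the basic-variable column indices

# The columns of A corresponding to the basic variables:
basis = [A.col(j) for j in pivots]
M = Matrix.hstack(*basis)

assert M.rank() == len(basis)   # the pivot columns are independent
assert M.rank() == A.rank()     # and they span the whole column space
print(pivots)
```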

Let us look at an example:

Suppose

The first step involves reducing A to its row echelon form U.

Now

We encircle the first non zero entries in each row by brackets:

Clearly two of the variables are basic and the other two are free.

So by the theorem, the basis of the column space of A consists of the two columns corresponding to the basic variables and is given by:

Note that the nullity of A is 2, which is equal to the difference between the total number of columns and the number of elements in the basis of the column space.

The Row Space

The row space, as the name suggests, is the space built out of the rows of the matrix. Given an m x n matrix A, the row space of A, denoted by R(A), is defined as the collection of all the vectors formed by linear combinations of the rows of A.

More precisely, if the rows of A are R1, R2, ..., Rm, where each Ri is an n-dimensional vector, then

R(A) = {y1R1 + y2R2 + ... + ymRm : y1, y2, ..., ym are scalars}.

The row space is thus the linear span of the rows and so is also a vector space with the standard operations. It is a subspace of R^n.

Basis of the row space

The basis of the row space of A consists of precisely the non-zero rows of U, where U is the row echelon form of A. This fact is derived by combining two results, which are:

  1. R(A) = R(U) if U is the row echelon form of A.
  2. The non zero rows of a matrix in row echelon form are linearly independent.

The proofs are outlined in the exercises.
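These two results can be checked together in Python with sympy (assuming the library is available; the matrix is a made-up example): the non-zero rows of the echelon form are independent, and stacking them with A does not increase the rank, so they span the same row space.

```python
from sympy import Matrix

# A made-up example matrix (second row = 2 times the first).
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

U, _ = A.rref()
# Keep only the non-zero rows of the echelon form:
rows = [U.row(i) for i in range(U.rows) if any(U.row(i))]
R = Matrix.vstack(*rows)

assert R.rank() == len(rows)                    # the rows are independent
assert Matrix.vstack(A, R).rank() == A.rank()   # they span the row space of A
print(len(rows))   # dimension of the row space
```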

Let us look at an example:

Suppose

If we take its row echelon form then we have,

The first three rows are non zero and so the basis of the row space is:

Rank of a matrix

Notice that in our example the basis of the row space has 3 elements, which is the same number as in the basis of the column space which we derived earlier. This is not a coincidence. It is in fact always true that the number of elements in the basis of the row space and the number in the basis of the column space are equal. The steps in the reasoning are as follows:

  1. The number of non zero rows in U is equal to the number of pivots (or leading non zero entries in each row) in U.
  2. The number of basic variables by their definition equals the number of pivot containing columns (and so equal the number of pivots).
  3. By the previous theorem the number of basic variables equals the number of elements in the basis of the column space.
  4. It follows that the bases have equal number of elements.

This common number of elements in the two bases is called the rank of the matrix. Since the rows of a matrix are the columns of its transpose, a matrix and its transpose have the same rank. The rank of a matrix A is often written as Rank (A), just as the nullity is written as Nullity (A).
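The equality of the ranks of a matrix and its transpose is easy to check computationally (a Python sketch using sympy, assuming it is available; the matrix is a made-up example):

```python
from sympy import Matrix

# A made-up example matrix.
A = Matrix([[1, 2, 0],
            [0, 1, 1]])

# The rows of A are the columns of A.T, so the two ranks coincide.
assert A.rank() == A.T.rank()
print(A.rank())
```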

The rank-nullity theorem

We now proceed to a very important theorem of linear algebra, called the rank-nullity theorem. Another form of this theorem will appear in the chapter on linear transformations. In our context, the theorem is:

For an m x n matrix A,

Rank (A) + Nullity (A) = n = Number of columns of A

The logic behind this theorem is clear. The rank, as we have seen earlier, corresponds to the number of basic variables and the nullity to the number of free variables. Since each of the n variables (one for each column) is either basic or free (but not both), the theorem follows.
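The rank-nullity theorem can be verified directly in Python with sympy (assuming the library is available; the matrix is a made-up example):

```python
from sympy import Matrix

# A made-up 3 x 5 example matrix (third row = first + second).
A = Matrix([[1, 2, 0, 1, 3],
            [0, 0, 1, 1, 1],
            [1, 2, 1, 2, 4]])

rank = A.rank()
nullity = len(A.nullspace())

# Rank (A) + Nullity (A) = number of columns of A
assert rank + nullity == A.cols
print(rank, nullity, A.cols)
```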

Readers acquainted with the first isomorphism theorem of groups can note that this theorem can be related with it. We shall later see how this is done.

Exercises

1. Evaluate bases for the column and row spaces for:

1.
2.
3.

2. Show that if A is an m x n matrix then the row space of A coincides with that of its row echelon form U.

Hint: Prove that after an elementary row transformation, every row of the new matrix is a linear combination of the rows of A, and vice versa.

3. Show that the non zero rows of a matrix in row echelon form are linearly independent.

Hint: Suppose c1R1 + c2R2 + ... + ckRk = 0, where the ci are scalars and the Ri are the non-zero rows. Now if the first pivot occurs at the (1, j)th position of the matrix then the jth component of each row other than R1 is 0. So c1 is zero. Similarly all the other ci's are also necessarily zero.

4. Combine the above two results to show that the non zero rows of a matrix in its row echelon form U form a basis of both R(A) and R(U).

5. Show that a square matrix A of order n is invertible if and only if its rank is n, if and only if R(A) = C(A) = R^n, if and only if Ax = b has a unique solution for each n-dimensional vector b.

6. Show that if A is an m x n and B an n x p matrix then:

1. C(AB) is contained in C(A).
2. R(AB) is contained in R(B).
Hint: For (1) note that (AB)x = A(Bx), and for (2) note that each row of AB is a linear combination of the rows of B.

7. A submatrix of a matrix A is a matrix obtained by deleting some rows and/or columns of A. Show that for every submatrix C of A, we have Rank (C) ≤ Rank (A).

Hint: Consider the matrix B formed by deleting the rows of A not in C. Then Rank (B) ≤ Rank (A) and Rank (C) ≤ Rank (B).

8. Show that an m x n matrix A of rank r has at least one r x r submatrix of rank r, that is, A has an invertible submatrix of order r.

Hint: Let B be the matrix consisting of r linearly independent row vectors of A. Since Rank (B) = r, we can take r linearly independent column vectors of B to get an r x r invertible submatrix C.

Licensing

Content obtained and/or adapted from: