Subspaces of Rn and Linear Independence


==Subspaces==

For any vector space, a subspace is a subset that is itself a vector space, under the inherited operations.

===Important Lemma on Subspaces===

For a nonempty subset <math>S</math> of a vector space, under the inherited operations, the following are equivalent statements.

  1. <math>S</math> is a subspace of that vector space
  2. <math>S</math> is closed under linear combinations of pairs of vectors: for any vectors <math>\vec{s}_1,\vec{s}_2\in S</math> and scalars <math>r_1,r_2</math> the vector <math>r_1\vec{s}_1+r_2\vec{s}_2</math> is in <math>S</math>
  3. <math>S</math> is closed under linear combinations of any number of vectors: for any vectors <math>\vec{s}_1,\ldots,\vec{s}_n\in S</math> and scalars <math>r_1,\ldots,r_n</math> the vector <math>r_1\vec{s}_1+\cdots+r_n\vec{s}_n</math> is in <math>S</math>.

Briefly, the way that a subset gets to be a subspace is by being closed under linear combinations.

Proof:
"The following are equivalent" means that each pair of statements are equivalent.
We will show this equivalence by establishing that . This strategy is suggested by noticing that and are easy and so we need only argue the single implication .
For that argument, assume that is a nonempty subset of a vector space and that is closed under combinations of pairs of vectors. We will show that is a vector space by checking the conditions.
The first item in the vector space definition has five conditions. First, for closure under addition, if then , as .
Second, for any , because addition is inherited from , the sum in equals the sum in , and that equals the sum in (because is a vector space, its addition is commutative), and that in turn equals the sum in . The argument for the third condition is similar to that for the second.
For the fourth, consider the zero vector of and note that closure of under linear combinations of pairs of vectors gives that (where is any member of the nonempty set ) is in ; showing that acts under the inherited operations as the additive identity of is easy.
The fifth condition is satisfied because for any , closure under linear combinations shows that the vector is in ; showing that it is the additive inverse of under the inherited operations is routine.


We usually show that a subset is a subspace with <math>(2)\implies (1)</math>.
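For readers who want to experiment, here is a minimal Python sketch, not part of the original text, of the pairwise-combination test in item 2: it draws random pairs from a sample of a candidate subset together with random scalars and checks that the combination still satisfies a membership predicate. The function name, the predicate, and the sample vectors are illustrative choices; the candidate set is taken to be the plane <math>x+y+z=0</math> from Example 1 below.

<syntaxhighlight lang="python">
import numpy as np

def closed_under_pair_combinations(sample_vectors, in_subset, trials=1000, seed=0):
    """Spot-check item 2: r1*s1 + r2*s2 should stay in the subset for random
    scalars r1, r2 and sample vectors s1, s2.  Passing is evidence, not a proof."""
    rng = np.random.default_rng(seed)
    vecs = np.asarray(sample_vectors, dtype=float)
    for _ in range(trials):
        s1, s2 = vecs[rng.integers(len(vecs), size=2)]   # pick two sample vectors
        r1, r2 = rng.normal(size=2)                      # pick two random scalars
        if not in_subset(r1 * s1 + r2 * s2):
            return False
    return True

def in_plane(v):
    # Membership predicate for the plane x + y + z = 0 (up to floating-point tolerance).
    return abs(v.sum()) < 1e-9

samples = [np.array([1.0, -1.0, 0.0]),
           np.array([2.0, 0.0, -2.0]),
           np.array([0.0, 3.0, -3.0])]
print(closed_under_pair_combinations(samples, in_plane))   # True
</syntaxhighlight>

A passing check like this is only evidence; the implication <math>(2)\implies (1)</math> proved above is what actually establishes the subspace property.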

===Example 1===

The plane <math>P=\{\begin{pmatrix} x \\ y \\ z \end{pmatrix} \,\big|\, x+y+z=0\}</math> is a subspace of <math>\mathbb{R}^3</math>. As specified in the definition, the operations are the ones inherited from the larger space, that is, vectors add in <math>P</math> as they add in <math>\mathbb{R}^3</math>

:<math>
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}=\begin{pmatrix} x_1+x_2 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}
</math>

and scalar multiplication is also the same as it is in <math>\mathbb{R}^3</math>. To show that <math>P</math> is a subspace, we need only note that it is a subset and then verify that it is a space. Checking that <math>P</math> satisfies the conditions in the definition of a vector space is routine. For instance, for closure under addition, just note that if the summands satisfy that <math>x_1+y_1+z_1=0</math> and <math>x_2+y_2+z_2=0</math> then the sum satisfies that <math>(x_1+x_2)+(y_1+y_2)+(z_1+z_2)=(x_1+y_1+z_1)+(x_2+y_2+z_2)=0</math>.
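The regrouping in that last computation can also be confirmed symbolically. The following SymPy snippet is our own illustration, not part of the original example, and it assumes the plane condition <math>x+y+z=0</math> as written above.

<syntaxhighlight lang="python">
import sympy as sp

x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2')

# Component-sum of the vector sum, and the regrouped form used in the example.
sum_of_components = (x1 + x2) + (y1 + y2) + (z1 + z2)
regrouped = (x1 + y1 + z1) + (x2 + y2 + z2)

# The difference simplifies to zero, so if each summand satisfies x + y + z = 0,
# then their sum does too.
print(sp.simplify(sum_of_components - regrouped) == 0)   # True
</syntaxhighlight>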

===Example 2===

The <math>x</math>-axis in <math>\mathbb{R}^2</math> is a subspace where the addition and scalar multiplication operations are the inherited ones.

:<math>
\{\begin{pmatrix} x \\ 0 \end{pmatrix} \,\big|\, x\in\mathbb{R}\}
</math>

As above, to verify that this is a subspace, we simply note that it is a subset and then check that it satisfies the conditions in the definition of a vector space. For instance, the two closure conditions are satisfied: (1) adding two vectors with a second component of zero results in a vector with a second component of zero, and (2) multiplying a scalar times a vector with a second component of zero results in a vector with a second component of zero.

===Example 3===

Another subspace of <math>\mathbb{R}^2</math> is

:<math>
\{\begin{pmatrix} 0 \\ 0 \end{pmatrix}\}
</math>

which is its trivial subspace.
Any vector space has a trivial subspace <math>\{\vec{0}\}</math>.

At the opposite extreme, any vector space has itself for a subspace. These two are the improper subspaces. Other subspaces are proper.

===Example 4===

The condition in the definition requiring that the addition and scalar multiplication operations must be the ones inherited from the larger space is important. Consider the subset <math>\{1\}</math> of the vector space <math>\mathbb{R}^1</math>. Under the operations <math>1+1=1</math> and <math>r\cdot 1=1</math> that set is a vector space, specifically, a trivial space. But it is not a subspace of <math>\mathbb{R}^1</math> because those aren't the inherited operations, since of course <math>\mathbb{R}^1</math> has <math>1+1=2</math>.

===Example 5===

All kinds of vector spaces, not just <math>\mathbb{R}^n</math>'s, have subspaces. The vector space of cubic polynomials <math>\{a+bx+cx^2+dx^3 \,\big|\, a,b,c,d\in\mathbb{R}\}</math> has a subspace comprised of all linear polynomials <math>\{m+nx \,\big|\, m,n\in\mathbb{R}\}</math>.

===Example 6===

This is a subspace of the <math>2 \times 2</math> matrices

:<math>
L=\{\begin{pmatrix} a & 0 \\ b & c \end{pmatrix} \,\big|\, a+b+c=0\}
</math>

(checking that it is nonempty and closed under linear combinations is easy).
To parametrize, express the condition as <math>a=-b-c</math>.

:<math>
L=\{\begin{pmatrix} -b-c & 0 \\ b & c \end{pmatrix} \,\big|\, b,c\in\mathbb{R}\}
=\{b\begin{pmatrix} -1 & 0 \\ 1 & 0 \end{pmatrix}+c\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \,\big|\, b,c\in\mathbb{R}\}
</math>

As above, we've described the subspace as a collection of unrestricted linear combinations (by coincidence, also of two elements).
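A quick numerical sanity check of this parametrization is possible. The NumPy sketch below is our own addition and assumes the two matrices shown above; it samples coefficients <math>b</math> and <math>c</math> and confirms that each resulting combination satisfies <math>a+b+c=0</math>.

<syntaxhighlight lang="python">
import numpy as np

# The two matrices from the parametrization above.
M1 = np.array([[-1.0, 0.0],
               [ 1.0, 0.0]])
M2 = np.array([[-1.0, 0.0],
               [ 0.0, 1.0]])

rng = np.random.default_rng(1)
for _ in range(5):
    b, c = rng.normal(size=2)
    M = b * M1 + c * M2          # a generic element of the subspace
    a = M[0, 0]                  # upper-left entry, which should equal -b - c
    assert abs(a + M[1, 0] + M[1, 1]) < 1e-12   # defining condition a + b + c = 0
print("all sampled combinations satisfy a + b + c = 0")
</syntaxhighlight>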


==Span==

The span (or linear closure) of a nonempty subset <math>S</math> of a vector space is the set of all linear combinations of vectors from <math>S</math>:

:<math>
[S]=\{c_1\vec{s}_1+\cdots+c_n\vec{s}_n \,\big|\, c_1,\ldots,c_n\in\mathbb{R} \text{ and } \vec{s}_1,\ldots,\vec{s}_n\in S\}
</math>

The span of the empty subset of a vector space is the trivial subspace. No notation for the span is completely standard. The square brackets used here are common, but so are "<math>\operatorname{span}(S)</math>" and "<math>\operatorname{sp}(S)</math>".
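In <math>\mathbb{R}^n</math>, deciding whether a particular vector lies in the span of a finite set comes down to solving a linear system. The Python helper below is a sketch of ours (the name <code>in_span</code> and the tolerance are our choices): it puts the spanning vectors into the columns of a matrix, solves in the least-squares sense, and accepts when the residual is essentially zero.

<syntaxhighlight lang="python">
import numpy as np

def in_span(vectors, target, tol=1e-10):
    """True when `target` is (numerically) a linear combination of `vectors`."""
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    t = np.asarray(target, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)   # best coefficients c_1, ..., c_n
    return bool(np.linalg.norm(A @ coeffs - t) < tol)

# (1, 2) is a combination of (1, 1) and (1, -1); (0, 0, 1) is not in the span of (1, 0, 0).
print(in_span([[1, 1], [1, -1]], [1, 2]))   # True
print(in_span([[1, 0, 0]], [0, 0, 1]))      # False
</syntaxhighlight>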

===Lemma===

In a vector space, the span of any subset is a subspace.

Proof:

Call the subset <math>S</math>. If <math>S</math> is empty then by definition its span is the trivial subspace. If <math>S</math> is not empty, then we need only check that the span <math>[S]</math> is closed under linear combinations. For a pair of vectors from that span, <math>\vec{v}=c_1\vec{s}_1+\cdots+c_n\vec{s}_n</math> and <math>\vec{w}=c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m</math>, a linear combination

:<math>
p\cdot(c_1\vec{s}_1+\cdots+c_n\vec{s}_n)+r\cdot(c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m)
=pc_1\vec{s}_1+\cdots+pc_n\vec{s}_n+rc_{n+1}\vec{s}_{n+1}+\cdots+rc_m\vec{s}_m
</math>

(<math>p</math>, <math>r</math> scalars) is a linear combination of elements of <math>S</math> and so is in <math>[S]</math> (possibly some of the <math>\vec{s}_i</math>'s forming <math>\vec{v}</math> equal some of the <math>\vec{s}_j</math>'s from <math>\vec{w}</math>, but it does not matter).

===Example 1===

The span of this set is all of <math>\mathbb{R}^2</math>.

:<math>
\{\begin{pmatrix} 1 \\ 1 \end{pmatrix},\begin{pmatrix} 1 \\ -1 \end{pmatrix}\}
</math>

To check this we must show that any member of <math>\mathbb{R}^2</math> is a linear combination of these two vectors. So we ask: for which vectors (with real components <math>x</math> and <math>y</math>) are there scalars <math>c_1</math> and <math>c_2</math> such that this holds?

:<math>
c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix}+c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}=\begin{pmatrix} x \\ y \end{pmatrix}
</math>

Gauss' method

:<math>\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
c_1  &+  &c_2  &=  &x  \\
c_1  &-  &c_2  &=  &y
\end{array}
&\xrightarrow[]{-\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
c_1  &+  &c_2    &=  &x  \\
     &   &-2c_2  &=  &-x+y
\end{array}
\end{array}
</math>

with back substitution gives <math>c_2=(x-y)/2</math> and <math>c_1=(x+y)/2</math>. These two equations show that for any <math>x</math> and <math>y</math> that we start with, there are appropriate coefficients <math>c_1</math> and <math>c_2</math> making the above vector equation true. For instance, for <math>x=1</math> and <math>y=2</math> the coefficients <math>c_2=-1/2</math> and <math>c_1=3/2</math> will do. That is, any vector in <math>\mathbb{R}^2</math> can be written as a linear combination of the two given vectors.
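As a final check, the same system can be handed to a numerical solver. This short NumPy snippet is our own illustration of the computation above for the particular choice <math>x=1</math>, <math>y=2</math>; it recovers the coefficients <math>c_1=3/2</math> and <math>c_2=-1/2</math> found by back substitution.

<syntaxhighlight lang="python">
import numpy as np

# Columns are the two spanning vectors; solve  c1*(1,1) + c2*(1,-1) = (x, y).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
x, y = 1.0, 2.0
c1, c2 = np.linalg.solve(A, np.array([x, y]))

print(c1, c2)   # 1.5 -0.5, i.e. c1 = (x + y)/2 and c2 = (x - y)/2
assert np.allclose([c1, c2], [(x + y) / 2, (x - y) / 2])
</syntaxhighlight>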