Subspaces
For any vector space, a subspace is a subset that is itself a vector space, under the inherited operations.
Lemma 1
For a nonempty subset $S$ of a vector space, under the inherited operations, the following are equivalent statements.
1. $S$ is a subspace of that vector space.
2. $S$ is closed under linear combinations of pairs of vectors: for any vectors $\vec{s}_1,\vec{s}_2\in S$ and scalars $r_1,r_2$ the vector $r_1\vec{s}_1+r_2\vec{s}_2$ is in $S$.
3. $S$ is closed under linear combinations of any number of vectors: for any vectors $\vec{s}_1,\ldots,\vec{s}_n\in S$ and scalars $r_1,\ldots,r_n$ the vector $r_1\vec{s}_1+\cdots+r_n\vec{s}_n$ is in $S$.
Briefly, the way that a subset gets to be a subspace is by being closed under linear combinations.
- Proof:
- "The following are equivalent" means that each pair of statements is equivalent.
$$(1)\iff(2)\qquad(2)\iff(3)\qquad(3)\iff(1)$$
- We will show this equivalence by establishing that $(1)\implies(3)\implies(2)\implies(1)$. This strategy is suggested by noticing that $(1)\implies(3)$ and $(3)\implies(2)$ are easy and so we need only argue the single implication $(2)\implies(1)$.
- For that argument, assume that $S$ is a nonempty subset of a vector space $V$ and that $S$ is closed under combinations of pairs of vectors. We will show that $S$ is a vector space by checking the conditions.
- The first item in the vector space definition has five conditions. First, for closure under addition, if $\vec{s}_1,\vec{s}_2\in S$ then $\vec{s}_1+\vec{s}_2\in S$, as $\vec{s}_1+\vec{s}_2=1\cdot\vec{s}_1+1\cdot\vec{s}_2$.
- Second, for any $\vec{s}_1,\vec{s}_2\in S$, because addition is inherited from $V$, the sum $\vec{s}_1+\vec{s}_2$ in $S$ equals the sum $\vec{s}_1+\vec{s}_2$ in $V$, and that equals the sum $\vec{s}_2+\vec{s}_1$ in $V$ (because $V$ is a vector space, its addition is commutative), and that in turn equals the sum $\vec{s}_2+\vec{s}_1$ in $S$. The argument for the third condition is similar to that for the second.
- For the fourth, consider the zero vector of $V$ and note that closure of $S$ under linear combinations of pairs of vectors gives that (where $\vec{s}$ is any member of the nonempty set $S$) $0\cdot\vec{s}+0\cdot\vec{s}=\vec{0}$ is in $S$; showing that $\vec{0}$ acts under the inherited operations as the additive identity of $S$ is easy.
- The fifth condition is satisfied because for any $\vec{s}\in S$, closure under linear combinations shows that the vector $0\cdot\vec{s}+(-1)\cdot\vec{s}=-\vec{s}$ is in $S$; showing that it is the additive inverse of $\vec{s}$ under the inherited operations is routine.
We usually show that a subset is a subspace with $(2)\implies(1)$.
Example 1
- The plane $P=\left\{\begin{pmatrix}x\\y\\z\end{pmatrix}\mid x+y+z=0\right\}$ is a subspace of $\mathbb{R}^3$. As specified in the definition, the operations are the ones inherited from the larger space, that is, vectors add in $P$ as they add in $\mathbb{R}^3$
$$\begin{pmatrix}x_1\\y_1\\z_1\end{pmatrix}+\begin{pmatrix}x_2\\y_2\\z_2\end{pmatrix}=\begin{pmatrix}x_1+x_2\\y_1+y_2\\z_1+z_2\end{pmatrix}$$
- and scalar multiplication is also the same as it is in $\mathbb{R}^3$. To show that $P$ is a subspace, we need only note that it is a subset and then verify that it is a space. Checking that $P$ satisfies the conditions in the definition of a vector space is routine. For instance, for closure under addition, just note that if the summands satisfy that $x_1+y_1+z_1=0$ and $x_2+y_2+z_2=0$ then the sum satisfies that $(x_1+x_2)+(y_1+y_2)+(z_1+z_2)=0$.
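As a quick numerical sanity check of that closure argument, here is a minimal sketch in Python with NumPy; the helper names and the random sampling scheme are ours, not part of the text.

```python
import numpy as np

def in_plane(v, tol=1e-9):
    # Membership test for the plane x + y + z = 0.
    return abs(v.sum()) < tol

def random_plane_vector(rng):
    # Choose x and y freely; z = -x - y forces membership in the plane.
    x, y = rng.normal(size=2)
    return np.array([x, y, -x - y])

rng = np.random.default_rng(0)
for _ in range(100):
    u, w = random_plane_vector(rng), random_plane_vector(rng)
    r1, r2 = rng.normal(size=2)
    # Closure under linear combinations of pairs: r1*u + r2*w stays in the plane.
    assert in_plane(r1 * u + r2 * w)
print("closure holds on all sampled pairs")
```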
Example 2
- The $x$-axis in $\mathbb{R}^2$ is a subspace where the addition and scalar multiplication operations are the inherited ones.
$$\begin{pmatrix}x_1\\0\end{pmatrix}+\begin{pmatrix}x_2\\0\end{pmatrix}=\begin{pmatrix}x_1+x_2\\0\end{pmatrix}\qquad r\cdot\begin{pmatrix}x\\0\end{pmatrix}=\begin{pmatrix}rx\\0\end{pmatrix}$$
- As above, to verify that this is a subspace, we simply note that it is a subset and then check that it satisfies the conditions in the definition of a vector space. For instance, the two closure conditions are satisfied: (1) adding two vectors with a second component of zero results in a vector with a second component of zero, and (2) multiplying a scalar times a vector with a second component of zero results in a vector with a second component of zero.
Example 3
- Another subspace of $\mathbb{R}^2$ is
$$\left\{\begin{pmatrix}0\\0\end{pmatrix}\right\}$$
- which is its trivial subspace.
- Any vector space has a trivial subspace $\{\vec{0}\}$.
At the opposite extreme, any vector space has itself for a subspace.
These two are the improper subspaces. Other subspaces are proper.
Example 4
The condition in the definition requiring that the addition and scalar multiplication operations must be the ones inherited from the larger space is important. Consider the subset $\{1\}$ of the vector space $\mathbb{R}^1$. Under the operations $1+1=1$ and $r\cdot 1=1$ that set is a vector space, specifically, a trivial space. But it is not a subspace of $\mathbb{R}^1$ because those aren't the inherited operations, since of course $\mathbb{R}^1$ has $1+1=2$.
Example 5
- All kinds of vector spaces, not just $\mathbb{R}^n$'s, have subspaces. The vector space of cubic polynomials $\{a+bx+cx^2+dx^3\mid a,b,c,d\in\mathbb{R}\}$ has a subspace comprised of all linear polynomials $\{m+nx\mid m,n\in\mathbb{R}\}$.
Example 6
- This is a subspace of the $2\times 2$ matrices
$$L=\left\{\begin{pmatrix}a&0\\b&c\end{pmatrix}\mid a+b+c=0\right\}$$
- (checking that it is nonempty and closed under linear combinations is easy).
- To parametrize, express the condition as $a=-b-c$.
$$L=\left\{\begin{pmatrix}-b-c&0\\b&c\end{pmatrix}\mid b,c\in\mathbb{R}\right\}=\left\{b\begin{pmatrix}-1&0\\1&0\end{pmatrix}+c\begin{pmatrix}-1&0\\0&1\end{pmatrix}\mid b,c\in\mathbb{R}\right\}$$
- As above, we've described the subspace as a collection of unrestricted linear combinations (by coincidence, also of two elements).
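To see the parametrization concretely, the following sketch (ours, using NumPy; `M1` and `M2` are just labels for the two displayed generators) confirms that a member of $L$ is exactly $b\cdot M_1+c\cdot M_2$.

```python
import numpy as np

# The two generators read off from the parametrization a = -b - c.
M1 = np.array([[-1.0, 0.0],
               [ 1.0, 0.0]])   # multiplies b
M2 = np.array([[-1.0, 0.0],
               [ 0.0, 1.0]])   # multiplies c

b, c = 3.0, -5.0
A = np.array([[-b - c, 0.0],
              [b, c]])          # a generic member of L, with a = -b - c
assert np.allclose(A, b * M1 + c * M2)               # A = b*M1 + c*M2
assert np.isclose(A[0, 0] + A[1, 0] + A[1, 1], 0.0)  # condition a + b + c = 0
```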
Span
The span (or linear closure) of a nonempty subset $S$ of a vector space is the set of all linear combinations of vectors from $S$.
$$[S]=\{c_1\vec{s}_1+\cdots+c_n\vec{s}_n\mid c_1,\ldots,c_n\in\mathbb{R}\text{ and }\vec{s}_1,\ldots,\vec{s}_n\in S\}$$
The span of the empty subset of a vector space is the trivial subspace. No notation for the span is completely standard. The square brackets used here are common, but so are "$\operatorname{span}(S)$" and "$\operatorname{sp}(S)$".
Lemma 2
In a vector space, the span of any subset is a subspace.
Proof:
- Call the subset $S$. If $S$ is empty then by definition its span is the trivial subspace. If $S$ is not empty, then we need only check that the span $[S]$ is closed under linear combinations. For a pair of vectors from that span, $\vec{v}=c_1\vec{s}_1+\cdots+c_n\vec{s}_n$ and $\vec{w}=c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m$, a linear combination
$$p\cdot(c_1\vec{s}_1+\cdots+c_n\vec{s}_n)+r\cdot(c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m)=pc_1\vec{s}_1+\cdots+pc_n\vec{s}_n+rc_{n+1}\vec{s}_{n+1}+\cdots+rc_m\vec{s}_m$$
- ($p$, $r$ scalars) is a linear combination of elements of $S$ and so is in $[S]$ (possibly some of the $\vec{s}$'s forming $\vec{v}$ equal some of the $\vec{s}$'s from $\vec{w}$, but it does not matter).
Example 7
The span of this set
$$\left\{\begin{pmatrix}1\\1\end{pmatrix},\begin{pmatrix}1\\-1\end{pmatrix}\right\}$$
is all of $\mathbb{R}^2$. To check this we must show that any member of $\mathbb{R}^2$ is a linear combination of these two vectors. So we ask: for which vectors (with real components $x$ and $y$)
$$\begin{pmatrix}x\\y\end{pmatrix}$$
are there scalars $c_1$ and $c_2$ such that this holds?
$$c_1\begin{pmatrix}1\\1\end{pmatrix}+c_2\begin{pmatrix}1\\-1\end{pmatrix}=\begin{pmatrix}x\\y\end{pmatrix}$$
Gauss' method
$$\begin{array}{rcl}\begin{array}{*{2}{rc}r}c_1&+&c_2&=&x\\c_1&-&c_2&=&y\end{array}&\xrightarrow{-\rho_1+\rho_2}&\begin{array}{*{2}{rc}r}c_1&+&c_2&=&x\\&&-2c_2&=&-x+y\end{array}\end{array}$$
with back substitution gives $c_2=(x-y)/2$ and $c_1=(x+y)/2$. These two equations show that for any $x$ and $y$ that we start with, there are appropriate coefficients $c_1$ and $c_2$ making the above vector equation true. For instance, for $x=1$ and $y=2$ the coefficients $c_2=-1/2$ and $c_1=3/2$ will do. That is, any vector in $\mathbb{R}^2$ can be written as a linear combination of the two given vectors.
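The same computation can be run numerically; here is a sketch using NumPy's linear solver in place of Gauss' method, on the worked instance $x=1$, $y=2$.

```python
import numpy as np

# Columns are the two spanning vectors (1,1) and (1,-1).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])

x, y = 1.0, 2.0                       # the worked instance from the text
c = np.linalg.solve(A, np.array([x, y]))
print(c)                              # [ 1.5 -0.5 ]  =  (c1, c2)
assert np.allclose(c, [(x + y) / 2, (x - y) / 2])
```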
Linear Independence
We first characterize when a vector can be removed from a set without changing the span of that set.
Lemma 3
Where $S$ is a subset of a vector space $V$,
$$[S]=[S\cup\{\vec{v}\}]\quad\text{if and only if}\quad\vec{v}\in[S]$$
for any $\vec{v}\in V$.
- Proof: The left to right implication is easy. If $[S]=[S\cup\{\vec{v}\}]$ then, since $\vec{v}\in[S\cup\{\vec{v}\}]$, the equality of the two sets gives that $\vec{v}\in[S]$.
- For the right to left implication assume that $\vec{v}\in[S]$ to show that $[S]=[S\cup\{\vec{v}\}]$ by mutual inclusion. The inclusion $[S]\subseteq[S\cup\{\vec{v}\}]$ is obvious. For the other inclusion $[S]\supseteq[S\cup\{\vec{v}\}]$, write an element of $[S\cup\{\vec{v}\}]$ as $d_0\vec{v}+d_1\vec{s}_1+\cdots+d_m\vec{s}_m$ and substitute $\vec{v}$'s expansion as a linear combination of members of the same set $S$. This is a linear combination of linear combinations and so distributing $d_0$ results in a linear combination of vectors from $S$. Hence each member of $[S\cup\{\vec{v}\}]$ is also a member of $[S]$.
- Example: In $\mathbb{R}^3$, where
$$\vec{v}_1=\begin{pmatrix}1\\0\\0\end{pmatrix},\quad\vec{v}_2=\begin{pmatrix}0\\1\\0\end{pmatrix},\quad\vec{v}_3=\begin{pmatrix}2\\1\\0\end{pmatrix}$$
- the spans $[\{\vec{v}_1,\vec{v}_2\}]$ and $[\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}]$ are equal since $\vec{v}_3=2\vec{v}_1+\vec{v}_2$ is in the span $[\{\vec{v}_1,\vec{v}_2\}]$.
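A rank comparison gives a quick computational confirmation: appending a vector that already lies in the span leaves the rank, and hence the span, unchanged. A sketch with NumPy (the variable names are ours):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = 2 * v1 + v2    # (2, 1, 0), a combination of v1 and v2

# Same rank with or without v3, so adding it does not enlarge the span.
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 2
```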
Lemma 3 says that if we have a spanning set then we can remove a $\vec{v}$ to get a new set $S$ with the same span if and only if $\vec{v}$ is a linear combination of vectors from $S$. Thus, under the second sense described above, a spanning set is minimal if and only if it contains no vectors that are linear combinations of the others in that set. We have a term for this important property.
Definition of Linear Independence
A subset of a vector space is linearly independent if none of
its elements is a linear combination of the others. Otherwise it is
linearly dependent.
Here is an important observation:
$$\vec{s}_0=c_1\vec{s}_1+c_2\vec{s}_2+\cdots+c_n\vec{s}_n$$
although this way of writing one vector as a combination of the others visually sets $\vec{s}_0$ off from the other vectors, algebraically there is nothing special in that equation about $\vec{s}_0$. For any $\vec{s}_i$ with a coefficient $c_i$ that is nonzero, we can rewrite the relationship to set off $\vec{s}_i$.
$$\vec{s}_i=(-c_1/c_i)\vec{s}_1+\cdots+(1/c_i)\vec{s}_0+\cdots+(-c_n/c_i)\vec{s}_n$$
When we don't want to single out any vector by writing it alone on one side of the equation, we will instead say that $\vec{s}_0,\vec{s}_1,\ldots,\vec{s}_n$ are in a linear relationship and write the relationship with all of the vectors on the same side. The next result rephrases the linear independence definition in this style. It gives what is usually the easiest way to compute whether a finite set is dependent or independent.
Lemma 4
A subset $S$ of a vector space is linearly independent if and only if for any distinct $\vec{s}_1,\ldots,\vec{s}_n\in S$ the only linear relationship among those vectors
$$c_1\vec{s}_1+\cdots+c_n\vec{s}_n=\vec{0}\qquad c_1,\ldots,c_n\in\mathbb{R}$$
is the trivial one: $c_1=0,\ldots,c_n=0$.
Proof: This is a direct consequence of the observation above.
- If the set $S$ is linearly independent then no vector $\vec{s}_i$ can be written as a linear combination of the other vectors from $S$ so there is no linear relationship where some of the $\vec{s}$'s have nonzero coefficients. If $S$ is not linearly independent then some $\vec{s}_i$ is a linear combination $\vec{s}_i=c_1\vec{s}_1+\cdots+c_{i-1}\vec{s}_{i-1}+c_{i+1}\vec{s}_{i+1}+\cdots+c_n\vec{s}_n$ of other vectors from $S$, and subtracting $\vec{s}_i$ from both sides of that equation gives a linear relationship involving a nonzero coefficient, namely the $-1$ in front of $\vec{s}_i$.
Example 8
In the vector space of two-wide row vectors, the two-element set $\{\begin{pmatrix}40&15\end{pmatrix},\begin{pmatrix}-50&25\end{pmatrix}\}$ is linearly independent. To check this, set
$$c_1\cdot\begin{pmatrix}40&15\end{pmatrix}+c_2\cdot\begin{pmatrix}-50&25\end{pmatrix}=\begin{pmatrix}0&0\end{pmatrix}$$
and solve the resulting system.
$$\begin{array}{*{2}{rc}r}40c_1&-&50c_2&=&0\\15c_1&+&25c_2&=&0\end{array}\;\xrightarrow{-(15/40)\rho_1+\rho_2}\;\begin{array}{*{2}{rc}r}40c_1&-&50c_2&=&0\\&&(175/4)c_2&=&0\end{array}$$
This shows that both $c_1$ and $c_2$ are zero. So the only linear relationship between the two given row vectors is the trivial relationship.
In the same vector space, $\{\begin{pmatrix}40&15\end{pmatrix},\begin{pmatrix}20&7.5\end{pmatrix}\}$ is linearly dependent since we can satisfy
$$c_1\cdot\begin{pmatrix}40&15\end{pmatrix}+c_2\cdot\begin{pmatrix}20&7.5\end{pmatrix}=\begin{pmatrix}0&0\end{pmatrix}$$
with $c_1=1$ and $c_2=-2$.
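Both checks reduce to a homogeneous linear system, so a rank computation settles them. A sketch with NumPy (full rank means only the trivial relationship exists):

```python
import numpy as np

# Columns are the row vectors written as coordinate columns.
independent = np.array([[40.0, -50.0],
                        [15.0,  25.0]])
dependent = np.array([[40.0, 20.0],
                      [15.0,  7.5]])

assert np.linalg.matrix_rank(independent) == 2   # only the trivial relationship
assert np.linalg.matrix_rank(dependent) == 1     # a nontrivial relationship exists
# The dependence from the text: 1*(40 15) + (-2)*(20 7.5) = (0 0).
assert np.allclose(np.array([40.0, 15.0]) - 2 * np.array([20.0, 7.5]), 0.0)
```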
Example 9
The set $\{1+x,1-x\}$ is linearly independent in $P_2$, the space of quadratic polynomials with real coefficients, because
$$0+0x+0x^2=c_1(1+x)+c_2(1-x)=(c_1+c_2)+(c_1-c_2)x+0x^2$$
gives
$$\begin{array}{rcl}\begin{array}{*{2}{rc}r}c_1&+&c_2&=&0\\c_1&-&c_2&=&0\end{array}&\xrightarrow{-\rho_1+\rho_2}&\begin{array}{*{2}{rc}r}c_1&+&c_2&=&0\\&&-2c_2&=&0\end{array}\end{array}$$
since polynomials are equal only if their coefficients are equal. Thus, the only linear relationship between these two members of $P_2$ is the trivial one.
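Because a member of $P_2$ is determined by its three coefficients, the polynomial check reduces to a vector check. A sketch (ours) representing $a+bx+cx^2$ as the array `[a, b, c]`:

```python
import numpy as np

p1 = np.array([1.0,  1.0, 0.0])   # coefficients of 1 + x   (constant, x, x^2)
p2 = np.array([1.0, -1.0, 0.0])   # coefficients of 1 - x

# Independent iff c1*p1 + c2*p2 = 0 forces c1 = c2 = 0,
# i.e. the 3x2 coefficient matrix has full column rank.
assert np.linalg.matrix_rank(np.column_stack([p1, p2])) == 2
```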
Example 10
In $\mathbb{R}^3$, where
$$\vec{v}_1=\begin{pmatrix}3\\4\\5\end{pmatrix},\quad\vec{v}_2=\begin{pmatrix}2\\9\\2\end{pmatrix},\quad\vec{v}_3=\begin{pmatrix}4\\18\\4\end{pmatrix}$$
the set $S=\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}$ is linearly dependent because this is a relationship
$$0\cdot\vec{v}_1+2\cdot\vec{v}_2-1\cdot\vec{v}_3=\vec{0}$$
where not all of the scalars are zero (the fact that some of the scalars are zero doesn't matter).
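Verifying the displayed relationship takes one line; a sketch with NumPy using the example's vectors:

```python
import numpy as np

v1 = np.array([3.0, 4.0, 5.0])
v2 = np.array([2.0, 9.0, 2.0])
v3 = np.array([4.0, 18.0, 4.0])

# The nontrivial relationship from the text: 0*v1 + 2*v2 - 1*v3 = 0.
assert np.allclose(0 * v1 + 2 * v2 - 1 * v3, np.zeros(3))
```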
Resources
- Subspaces, Interactive Linear Algebra from Georgia Tech
Licensing
Content obtained and/or adapted from: