Subspaces of Rn and Linear Independence

From Department of Mathematics at UTSA
==Subspaces==
One of the examples that led us to introduce the idea of a vector space was the solution set of a homogeneous system, such as a plane through the origin in <math>\mathbb{R}^3</math>: there, the vector space <math>\mathbb{R}^3</math> contains inside it another vector space, the plane.

For any vector space, a '''subspace''' is a subset that is itself a vector space, under the inherited operations.

===Lemma 1===
For a nonempty subset <math> S </math> of a vector space, under the inherited operations, the following are equivalent statements.

# <math> S </math> is a subspace of that vector space
# <math> S </math> is closed under linear combinations of pairs of vectors: for any vectors <math> \vec{s}_1,\vec{s}_2\in S </math> and scalars <math> r_1,r_2 </math> the vector <math> r_1\vec{s}_1+r_2\vec{s}_2 </math> is in <math> S </math>
# <math> S </math> is closed under linear combinations of any number of vectors: for any vectors <math> \vec{s}_1,\ldots,\vec{s}_n\in S </math> and scalars <math> r_1, \ldots,r_n </math> the vector <math> r_1\vec{s}_1+\cdots+r_n\vec{s}_n </math> is in <math> S </math>.

Briefly, the way that a subset gets to be a subspace is by being closed under linear combinations.
  
: Proof:
:: "The following are equivalent" means that each pair of statements is equivalent.

:::<math>
(1)\!\iff\!(2)
\qquad
(2)\!\iff\!(3)
\qquad
(3)\!\iff\!(1)
</math>

:: We will show this equivalence by establishing that <math> (1)\implies (3)\implies (2)\implies (1)</math>. This strategy is suggested by noticing that <math> (1)\implies (3) </math> and <math> (3)\implies (2) </math> are easy and so we need only argue the single implication <math> (2)\implies (1) </math>.

:: For that argument, assume that <math> S </math> is a nonempty subset of a vector space <math>V</math> and that <math>S</math> is closed under combinations of pairs of vectors. We will show that <math>S</math> is a vector space by checking the conditions.

:: The first item in the vector space definition has five conditions. First, for closure under addition, if <math> \vec{s}_1,\vec{s}_2\in S </math> then <math> \vec{s}_1+\vec{s}_2\in S </math>, as <math> \vec{s}_1+\vec{s}_2=1\cdot\vec{s}_1+1\cdot\vec{s}_2 </math>.

:: Second, for any <math> \vec{s}_1,\vec{s}_2\in S </math>, because addition is inherited from <math> V </math>, the sum <math> \vec{s}_1+\vec{s}_2 </math> in <math> S </math> equals the sum <math> \vec{s}_1+\vec{s}_2 </math> in <math> V </math>, and that equals the sum <math> \vec{s}_2+\vec{s}_1 </math> in <math> V </math> (because <math>V</math> is a vector space, its addition is commutative), and that in turn equals the sum <math> \vec{s}_2+\vec{s}_1 </math> in <math> S </math>. The argument for the third condition is similar to that for the second.

:: For the fourth, consider the zero vector of <math> V </math> and note that closure of <math>S</math> under linear combinations of pairs of vectors gives that (where <math> \vec{s} </math> is any member of the nonempty set <math> S </math>) <math> 0\cdot\vec{s}+0\cdot\vec{s}=\vec{0} </math> is in <math>S</math>; showing that <math> \vec{0} </math> acts under the inherited operations as the additive identity of <math> S </math> is easy.

:: The fifth condition is satisfied because for any <math> \vec{s}\in S </math>, closure under linear combinations shows that the vector <math> 0\cdot\vec{0}+(-1)\cdot\vec{s} </math> is in <math> S </math>; showing that it is the additive inverse of <math> \vec{s} </math> under the inherited operations is routine.

We usually show that a subset is a subspace with <math> (2)\implies (1) </math>.
===Example 1===
: The plane <math> P=\{\begin{pmatrix} x \\ y \\ z \end{pmatrix}\,\big|\, x+y+z=0\} </math> is a subspace of <math> \mathbb{R}^3 </math>. As specified in the definition, the operations are the ones inherited from the larger space; that is, vectors add in <math>P</math> as they add in <math>\mathbb{R}^3</math>

:: <math>
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}+\begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix}
=\begin{pmatrix} x_1+x_2 \\ y_1+y_2 \\ z_1+z_2 \end{pmatrix}
</math>

: and scalar multiplication is also the same as it is in <math>\mathbb{R}^3</math>. To show that <math>P</math> is a subspace, we need only note that it is a subset and then verify that it is a space. Checking that <math>P</math> satisfies the conditions in the definition of a vector space is routine. For instance, for closure under addition, just note that if the summands satisfy <math>x_1+y_1+z_1=0</math> and <math>x_2+y_2+z_2=0</math> then the sum satisfies <math>(x_1+x_2)+(y_1+y_2)+(z_1+z_2)=(x_1+y_1+z_1)+(x_2+y_2+z_2)=0</math>.
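The closure computation above can also be spot-checked numerically. The following Python sketch is illustrative only (the helper names `in_P` and `random_P_vector` are ours, not part of the text): it samples pairs of vectors from <math>P</math> and confirms that their linear combinations still satisfy <math>x+y+z=0</math>, which is condition (2) of Lemma 1.

```python
# Sketch: numerically spot-check that P = {(x, y, z) : x + y + z = 0}
# is closed under linear combinations of pairs of vectors.
import random

def in_P(v, tol=1e-9):
    """Membership test for the plane x + y + z = 0 (up to float tolerance)."""
    return abs(sum(v)) < tol

def random_P_vector():
    """Pick x and y freely; z = -x - y forces the vector into P."""
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    return (x, y, -x - y)

for _ in range(1000):
    s1, s2 = random_P_vector(), random_P_vector()
    r1, r2 = random.uniform(-10, 10), random.uniform(-10, 10)
    combo = tuple(r1 * a + r2 * b for a, b in zip(s1, s2))
    assert in_P(combo), "closure failed"
print("P is closed under all sampled linear combinations")
```

Of course a finite sample proves nothing; the one-line algebraic check in the example is the actual argument.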
===Example 2===
: The <math> x </math>-axis in <math> \mathbb{R}^2 </math> is a subspace where the addition and scalar multiplication operations are the inherited ones.

:: <math>
\begin{pmatrix} x_1 \\ 0 \end{pmatrix}+\begin{pmatrix} x_2 \\ 0 \end{pmatrix}
=\begin{pmatrix} x_1+x_2 \\ 0 \end{pmatrix}
\qquad
r\cdot\begin{pmatrix} x \\ 0 \end{pmatrix}
=\begin{pmatrix} rx \\ 0 \end{pmatrix}
</math>

: As above, to verify that this is a subspace, we simply note that it is a subset and then check that it satisfies the conditions in the definition of a vector space. For instance, the two closure conditions are satisfied: (1) adding two vectors with a second component of zero results in a vector with a second component of zero, and (2) multiplying a scalar times a vector with a second component of zero results in a vector with a second component of zero.
===Example 3===
: Another subspace of <math>\mathbb{R}^2</math> is

:: <math>
\{\begin{pmatrix} 0 \\ 0 \end{pmatrix}\}
</math>

: which is its '''trivial subspace'''.

Any vector space has a trivial subspace <math> \{\vec{0}\,\} </math>. At the opposite extreme, any vector space has itself for a subspace. {{anchor|improper}}These two are the '''improper''' subspaces. {{anchor|proper}}Other subspaces are '''proper'''.
  
===Example 4===
: The condition in the definition requiring that the addition and scalar multiplication operations must be the ones inherited from the larger space is important. Consider the subset <math> \{1\} </math> of the vector space <math> \mathbb{R}^1 </math>. Under the operations <math>1+1=1</math> and <math>r\cdot 1=1</math> that set is a vector space; specifically, it is a trivial space. But it is not a subspace of <math> \mathbb{R}^1 </math> because those aren't the inherited operations, since of course <math> \mathbb{R}^1 </math> has <math> 1+1=2 </math>.
 
 
===Example 5===
: All kinds of vector spaces, not just <math>\mathbb{R}^n</math>'s, have subspaces. The vector space of cubic polynomials <math> \{a+bx+cx^2+dx^3\,\big|\, a,b,c,d\in\mathbb{R}\} </math> has a subspace comprised of all linear polynomials <math> \{m+nx\,\big|\, m,n\in\mathbb{R}\} </math>.
: Another example of a subspace not taken from an <math>\mathbb{R}^n</math> is one from the examples following the definition of a vector space. The space of all real-valued functions of one real variable <math>f:\mathbb{R}\to \mathbb{R}</math> has a subspace of functions satisfying the restriction <math>(d^2\,f/dx^2)+f=0</math>.
 
  
===Example 6===
: This is a subspace of the <math> 2 \! \times \! 2 </math> matrices

::<math>
L=\{\begin{pmatrix}
a  &0  \\
b  &c
\end{pmatrix} \,\big|\, a+b+c=0\}
</math>

: (checking that it is nonempty and closed under linear combinations is easy).
: To parametrize, express the condition as <math>a=-b-c</math>.

::<math>
L
=\{\begin{pmatrix}
-b-c  &0 \\
b     &c
\end{pmatrix} \,\big|\, b,c\in\mathbb{R}\}
=\{b\begin{pmatrix}
-1  &0 \\
1   &0
\end{pmatrix}
+c\begin{pmatrix}
-1  &0 \\
0   &1
\end{pmatrix} \,\big|\, b,c\in\mathbb{R}\}
</math>

: As above, we've described the subspace as a collection of unrestricted linear combinations (by coincidence, also of two elements).

Parametrization is an easy technique, but it is important.
We shall use it often.
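The parametrization of a space like <math>L</math> can be checked mechanically: every choice of parameters must land in the set, and the defining condition must hold for the result. A Python sketch (the names `B1`, `B2`, `combo`, and `in_L` are ours, chosen for this illustration) for the two-matrix parametrization above:

```python
# Sketch: the parametrization of L = {[[a, 0], [b, c]] : a + b + c = 0}
# as b*B1 + c*B2, with basis-like matrices B1 and B2.
B1 = ((-1, 0), (1, 0))
B2 = ((-1, 0), (0, 1))

def combo(b, c):
    """Form the matrix b*B1 + c*B2 entrywise."""
    return tuple(tuple(b * x + c * y for x, y in zip(r1, r2))
                 for r1, r2 in zip(B1, B2))

def in_L(m):
    """Membership in L: top-right entry is 0 and a + b + c = 0."""
    (a, z), (b, c) = m
    return z == 0 and a + b + c == 0

# every choice of parameters b, c produces a member of L
for b in range(-3, 4):
    for c in range(-3, 4):
        assert in_L(combo(b, c))
print("every b*B1 + c*B2 lies in L")
```

This is just the algebra <math>(-b-c)+b+c=0</math> run over a grid of sample parameters.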
 
 
==Span==
The '''span''' (or '''linear closure''') of a nonempty subset <math> S </math> of a vector space is the set of all linear combinations of vectors from <math> S </math>.

:<math>
[S] =\{ c_1\vec{s}_1+\cdots+c_n\vec{s}_n
\,\big|\, c_1,\ldots,c_n\in\mathbb{R}
\text{ and } \vec{s}_1,\ldots,\vec{s}_n\in S \}
</math>

The span of the empty subset of a vector space is the trivial subspace, following the convention that a linear combination of no vectors sums to <math>\vec{0}</math>. No notation for the span is completely standard. The square brackets used here are common, but so are "<math>\mbox{span}(S)</math>" and "<math>\mbox{sp}(S)</math>".

===Lemma 2===
In a vector space, the span of any subset is a subspace.

Proof:
: Call the subset <math> S </math>. If <math> S </math> is empty then by definition its span is the trivial subspace. If <math> S</math> is not empty, then by Lemma 1 we need only check that the span <math> [S] </math> is closed under linear combinations. For a pair of vectors from that span, <math> \vec{v}=c_1\vec{s}_1+\cdots+c_n\vec{s}_n </math> and <math> \vec{w}=c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m </math>, a linear combination

:: <math>
p\cdot(c_1\vec{s}_1+\cdots+c_n\vec{s}_n)+
r\cdot(c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m)
=pc_1\vec{s}_1+\cdots+pc_n\vec{s}_n
+rc_{n+1}\vec{s}_{n+1}+\cdots+rc_m\vec{s}_m
</math>

: (<math> p </math>, <math> r </math> scalars) is a linear combination of elements of <math> S </math> and so is in <math> [S] </math> (possibly some of the <math>\vec{s}_i</math>'s forming <math>\vec{v}</math> equal some of the <math>\vec{s}_j</math>'s from <math>\vec{w}</math>, but it does not matter).
 
  
The converse of the lemma holds: any subspace is the span of some set, because a subspace is obviously the span of the set of its members. Thus a subset of a vector space is a subspace if and only if it is a span. This fits the intuition that a good way to think of a vector space is as a collection in which linear combinations are sensible. Taken together, Lemma 1 and Lemma 2 show that the span of a subset <math>S</math> of a vector space is the smallest subspace containing all the members of <math>S</math>.

===Example 7===
: The span of this set is all of <math>\mathbb{R}^2</math>.

:: <math>
\{\begin{pmatrix} 1 \\ 1 \end{pmatrix},\begin{pmatrix} 1 \\ -1 \end{pmatrix}\}
</math>

: To check this we must show that any member of <math>\mathbb{R}^2</math> is a linear combination of these two vectors. So we ask: for which vectors (with real components <math>x</math> and <math>y</math>) are there scalars <math>c_1</math> and <math>c_2</math> such that this holds?

:: <math>
c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix}+c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}
=\begin{pmatrix} x \\ y \end{pmatrix}
</math>

: Gauss' method

:: <math>
\begin{array}{*{2}{rc}r}
c_1 &+ &c_2 &= &x \\
c_1 &- &c_2 &= &y
\end{array}
\;\xrightarrow[]{-\rho_1+\rho_2}\;
\begin{array}{*{2}{rc}r}
c_1 &+ &c_2 &= &x \\
& &-2c_2 &= &-x+y
\end{array}
</math>

: with back substitution gives <math>c_2=(x-y)/2</math> and <math>c_1=(x+y)/2</math>. These two equations show that for any <math>x</math> and <math>y</math> that we start with, there are appropriate coefficients <math>c_1</math> and <math>c_2</math> making the above vector equation true. For instance, for <math>x=1</math> and <math>y=2</math> the coefficients <math>c_2=-1/2</math> and <math>c_1=3/2</math> will do. That is, any vector in <math>\mathbb{R}^2</math> can be written as a linear combination of the two given vectors.
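The two coefficient formulas can be verified by recombining: plug <math>c_1</math> and <math>c_2</math> back into the linear combination and confirm the target vector is recovered. A minimal Python sketch (the function name `coefficients` is ours):

```python
# Sketch: verify that c1 = (x+y)/2 and c2 = (x-y)/2 express any (x, y)
# as c1*(1, 1) + c2*(1, -1).
def coefficients(x, y):
    """Coefficients writing (x, y) over the spanning set {(1,1), (1,-1)}."""
    return (x + y) / 2, (x - y) / 2

for x, y in [(1, 2), (0, 0), (-3.5, 7), (10, -10)]:
    c1, c2 = coefficients(x, y)
    # recombine and compare with the target vector
    assert (c1 * 1 + c2 * 1, c1 * 1 + c2 * (-1)) == (x, y)

print(coefficients(1, 2))  # the worked case x=1, y=2 gives (1.5, -0.5)
```

This matches the worked case in the example: <math>c_1=3/2</math>, <math>c_2=-1/2</math>.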
==Linear Independence==
We first characterize when a vector can be removed from a set without changing the span of that set.

===Lemma 3===
Where <math> S </math> is a subset of a vector space <math>V</math>,

:<math>
[S]=[S\cup\{\vec{v}\}]
\quad\text{if and only if}\quad
\vec{v}\in[S]
</math>

for any <math>\vec{v}\in V</math>.

: Proof: The left to right implication is easy. If <math>[S]=[S\cup\{\vec{v}\}]</math> then, since <math> \vec{v}\in[S\cup\{\vec{v}\}] </math>, the equality of the two sets gives that <math> \vec{v}\in[S] </math>.

: For the right to left implication, assume that <math> \vec{v}\in [S] </math>; we show that <math> [S]=[S\cup\{\vec{v}\}] </math> by mutual inclusion. The inclusion <math> [S]\subseteq[S\cup\{\vec{v}\}] </math> is obvious. For the other inclusion <math> [S]\supseteq[S\cup\{\vec{v}\}] </math>, write an element of <math> [S\cup\{\vec{v}\}] </math> as <math> d_0\vec{v}+d_1\vec{s}_1+\dots+d_m\vec{s}_m </math>, expand <math> \vec{v} </math> as a linear combination <math> c_0\vec{t}_0+\dots+c_k\vec{t}_k </math> of members of <math> S </math>, and substitute to get <math> d_0(c_0\vec{t}_0+\dots+c_k\vec{t}_k)+d_1\vec{s}_1+\dots+d_m\vec{s}_m </math>. This is a linear combination of linear combinations, so distributing <math> d_0 </math> results in a linear combination of vectors from <math> S </math>. Hence each member of <math>[S\cup\{\vec{v}\}]</math> is also a member of <math>[S]</math>.

: Example: In <math> \mathbb{R}^3 </math>, where

:: <math>
\vec{v}_1=\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\quad
\vec{v}_2=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\quad
\vec{v}_3=\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}
</math>

: the spans <math> [\{\vec{v}_1,\vec{v}_2\}] </math> and <math> [\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}] </math> are equal, since <math> \vec{v}_3=2\vec{v}_1+\vec{v}_2 </math> is in the span <math> [\{\vec{v}_1,\vec{v}_2\}] </math>.
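That span equality can be made concrete with the substitution step from Lemma 3's proof: since <math>\vec{v}_3=2\vec{v}_1+\vec{v}_2</math>, any combination involving <math>\vec{v}_3</math> can be rewritten using only <math>\vec{v}_1</math> and <math>\vec{v}_2</math>. A Python sketch (the helper name `as_v1_v2` is ours):

```python
# Sketch: v3 = (2, 1, 0) lies in the span of v1 = (1, 0, 0) and
# v2 = (0, 1, 0), so adjoining it does not enlarge the span.
v1, v2, v3 = (1, 0, 0), (0, 1, 0), (2, 1, 0)

# exhibit v3 as a linear combination: v3 = 2*v1 + 1*v2
assert tuple(2 * a + 1 * b for a, b in zip(v1, v2)) == v3

def as_v1_v2(c1, c2, c3):
    """Coefficients rewriting c1*v1 + c2*v2 + c3*v3 over v1, v2 alone,
    by substituting v3 = 2*v1 + v2 and collecting terms."""
    return (c1 + 2 * c3, c2 + c3)

c1, c2, c3 = 4, -1, 5
lhs = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))
d1, d2 = as_v1_v2(c1, c2, c3)
rhs = tuple(d1 * a + d2 * b for a, b in zip(v1, v2))
assert lhs == rhs
print("span{v1, v2} == span{v1, v2, v3}")
```

The rewriting function is exactly the "distribute and collect" move in the proof, specialized to these three vectors.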
Lemma 3 says that if we have a spanning set then we can remove a <math>\vec{v}</math> to get a new set <math>S</math> with the same span if and only if <math>\vec{v}</math> is a linear combination of vectors from <math>S</math>. Thus a spanning set is minimal if and only if it contains no vectors that are linear combinations of the others in that set. We have a term for this important property.

===Definition of Linear Independence===
A subset of a vector space is '''linearly independent''' if none of its elements is a linear combination of the others. Otherwise it is '''linearly dependent'''.
  
Here is an important observation: although writing one vector as a combination of the others

:<math>
\vec{s}_0=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n
</math>

visually sets <math> \vec{s}_0 </math> off from the other vectors, algebraically there is nothing special in that equation about <math> \vec{s}_0 </math>. For any <math> \vec{s}_i </math> with a coefficient <math>c_i</math> that is nonzero, we can rewrite the relationship to set off <math> \vec{s}_i </math>

:<math>
\vec{s}_i=(1/c_i)\vec{s}_0+(-c_1/c_i)\vec{s}_1
+\dots+(-c_n/c_i)\vec{s}_n
</math>

(where the <math>\vec{s}_i</math> term is omitted from the right-hand side).

When we don't want to single out any vector by writing it alone on one side of the equation, we will instead say that <math>\vec{s}_0,\vec{s}_1,\dots,\vec{s}_n </math> are in a '''linear relationship''' and write the relationship with all of the vectors on the same side. The next result rephrases the linear independence definition in this style. It gives what is usually the easiest way to compute whether a finite set is dependent or independent.

===Lemma 4===
A subset <math> S </math> of a vector space is linearly independent if and only if for any distinct <math> \vec{s}_1,\dots,\vec{s}_n\in S </math> the only linear relationship among those vectors

:<math>
c_1\vec{s}_1+\dots+c_n\vec{s}_n=\vec{0}
\qquad c_1,\dots,c_n\in\mathbb{R}
</math>

is the trivial one: <math> c_1=0,\dots,\,c_n=0 </math>.

: Proof: This is a direct consequence of the observation above.

: If the set <math> S </math> is linearly independent then no vector <math>\vec{s}_i</math> can be written as a linear combination of the other vectors from <math>S</math>, so there is no linear relationship in which some of the <math>\vec{s}\,</math>'s have nonzero coefficients. If <math> S </math> is not linearly independent then some <math> \vec{s}_i </math> is a linear combination <math>\vec{s}_i=c_1\vec{s}_1+\dots+c_{i-1}\vec{s}_{i-1} +c_{i+1}\vec{s}_{i+1}+\dots+c_n\vec{s}_n</math> of other vectors from <math> S </math>, and subtracting <math>\vec{s}_i</math> from both sides of that equation gives a linear relationship involving a nonzero coefficient, namely the <math> -1 </math> in front of <math> \vec{s}_i </math>.
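For a pair of two-wide vectors, Lemma 4's criterion reduces to a determinant test: the homogeneous system <math>c_1\vec{u}+c_2\vec{v}=\vec{0}</math> has only the trivial solution exactly when the <math>2 \! \times \! 2</math> determinant is nonzero. A Python sketch (the function name `independent_2` is ours), using the row vectors from the example below:

```python
# Sketch: two vectors in R^2 are linearly independent iff the 2x2
# determinant of the matrix with those rows is nonzero, i.e. iff
# c1*u + c2*v = 0 forces c1 = c2 = 0 (Lemma 4's recipe).
def independent_2(u, v):
    """Determinant test for a pair of two-wide vectors."""
    return u[0] * v[1] - u[1] * v[0] != 0

assert independent_2((40, 15), (-50, 25))      # only the trivial relationship
assert not independent_2((40, 15), (20, 7.5))  # 1*u + (-2)*v = 0
print("determinant test agrees with the hand computations")
```

For more vectors, or vectors in higher dimensions, the same recipe is carried out by Gauss' method on the homogeneous system, as the examples do by hand.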
 +
 +
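Lemma 4 reduces the independence question to a homogeneous linear system: place the candidate vectors as the columns of a matrix and ask whether the only solution is the trivial one. As a quick illustration of that recipe (an addition for this page, not part of the WikiBooks text; the helper name `is_independent` is ours), the check amounts to a rank computation with NumPy:

```python
import numpy as np

def is_independent(vectors):
    """True when the only linear relationship c1*v1 + ... + cn*vn = 0
    among the given equal-length vectors is the trivial one, i.e. the
    matrix having these vectors as columns has full column rank."""
    a = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return bool(np.linalg.matrix_rank(a) == a.shape[1])

# The two-element set from Example 8 below:
print(is_independent([[40, 15], [-50, 25]]))  # True: only the trivial relationship
```

Floating-point rank is a numerical test; for badly scaled or nearly dependent vectors, exact row reduction over the rationals is the safer route.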
===Example 8===
In the vector space of two-wide row vectors, the two-element set <math> \{ \begin{pmatrix} 40 &15 \end{pmatrix},\begin{pmatrix} -50 &25 \end{pmatrix}\} </math> is linearly independent. To check this, set

:<math>
c_1\cdot\begin{pmatrix} 40 &15 \end{pmatrix}+c_2\cdot\begin{pmatrix} -50 &25 \end{pmatrix}=\begin{pmatrix} 0 &0 \end{pmatrix}
</math>

and solve the resulting system.

:<math>
\begin{array}{*{2}{rc}r}
40c_1  &-  &50c_2  &=  &0  \\
15c_1  &+  &25c_2  &=  &0
\end{array}
\;\xrightarrow[]{-(15/40)\rho_1+\rho_2}\;
\begin{array}{*{2}{rc}r}
40c_1  &-  &50c_2      &=  &0  \\
&  &(175/4)c_2  &=  &0
\end{array}
</math>

This shows that both <math> c_1 </math> and <math> c_2 </math> are zero, so the only linear relationship between the two given row vectors is the trivial relationship.

In the same vector space, <math> \{ \begin{pmatrix} 40 &15 \end{pmatrix},\begin{pmatrix} 20 &7.5 \end{pmatrix}\} </math> is linearly dependent since we can satisfy

:<math>
c_1\cdot\begin{pmatrix} 40 &15 \end{pmatrix}+c_2\cdot\begin{pmatrix} 20 &7.5 \end{pmatrix}=\begin{pmatrix} 0 &0 \end{pmatrix}
</math>

with <math> c_1=1 </math> and <math> c_2=-2 </math>.
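The arithmetic in the reduction of Example 8 can be replayed exactly. This sketch (an illustrative addition, using Python's standard `fractions` module) applies the same row operation <math>-(15/40)\rho_1+\rho_2</math> and recovers the <math>(175/4)c_2</math> coefficient:

```python
from fractions import Fraction as F

# Coefficient rows of the homogeneous system in Example 8:
#   40*c1 - 50*c2 = 0
#   15*c1 + 25*c2 = 0
row1 = [F(40), F(-50)]
row2 = [F(15), F(25)]

# The row operation -(15/40)*rho1 + rho2 eliminates c1 from the second row.
m = F(row2[0], row1[0])                       # multiplier 15/40
row2 = [b - m * a for a, b in zip(row1, row2)]

print(row2)  # [Fraction(0, 1), Fraction(175, 4)] -- so c2 = 0, and then c1 = 0
```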
===Example 9===
The set <math> \{1+x,1-x\} </math> is linearly independent in <math>\mathcal{P}_2 </math>, the space of quadratic polynomials with real coefficients, because

:<math>
0+0x+0x^2
=
c_1(1+x)+c_2(1-x)
=
(c_1+c_2)+(c_1-c_2)x+0x^2
</math>

gives

:<math>\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
c_1  &+  &c_2  &=  &0  \\
c_1  &-  &c_2  &=  &0
\end{array}
&\xrightarrow[]{-\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
c_1  &+  &c_2  &=  &0  \\
&  &-2c_2  &=  &0
\end{array}
\end{array}
</math>

since polynomials are equal only if their coefficients are equal. Thus, the only linear relationship between these two members of <math>\mathcal{P}_2</math> is the trivial one.
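The identification used in Example 9, a polynomial with its coefficient tuple, is easy to make concrete. This sketch (our illustration, with a hypothetical helper `combine`; the brute-force search over small integer scalars is a spot-check, not a proof) confirms that only <math>c_1=c_2=0</math> produces the zero polynomial:

```python
# Represent a member a0 + a1*x + a2*x^2 of P_2 by the tuple (a0, a1, a2);
# two polynomials are equal exactly when their coefficient tuples are equal.
def combine(c1, p, c2, q):
    """Coefficient tuple of the linear combination c1*p + c2*q."""
    return tuple(c1 * a + c2 * b for a, b in zip(p, q))

one_plus_x  = (1, 1, 0)   # 1 + x
one_minus_x = (1, -1, 0)  # 1 - x

# Collect every (c1, c2) pair in a small integer window that yields the
# zero polynomial; only the trivial pair should appear.
solutions = [(c1, c2)
             for c1 in range(-5, 6) for c2 in range(-5, 6)
             if combine(c1, one_plus_x, c2, one_minus_x) == (0, 0, 0)]
print(solutions)  # [(0, 0)]
```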
  
===Example 10===
In <math> \mathbb{R}^3 </math>, where

:<math>
\vec{v}_1=\begin{pmatrix} 3 \\ 4 \\ 5 \end{pmatrix}
\quad
\vec{v}_2=\begin{pmatrix} 2 \\ 9 \\ 2 \end{pmatrix}
\quad
\vec{v}_3=\begin{pmatrix} 4 \\ 18 \\ 4 \end{pmatrix}
</math>

the set <math> S=\{\vec{v}_1,\vec{v}_2,\vec{v}_3\} </math> is linearly dependent because this is a relationship

:<math>
0\cdot\vec{v}_1
+2\cdot\vec{v}_2
-1\cdot\vec{v}_3
=\vec{0}
</math>

where not all of the scalars are zero (the fact that some of the scalars are zero doesn't matter).
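The claimed relationship in Example 10 takes one line to verify. A minimal check (our illustration), computed coordinate by coordinate:

```python
v1 = (3, 4, 5)
v2 = (2, 9, 2)
v3 = (4, 18, 4)

# 0*v1 + 2*v2 - 1*v3, one coordinate at a time
relation = tuple(0 * a + 2 * b - 1 * c for a, b, c in zip(v1, v2, v3))
print(relation)  # (0, 0, 0): a nontrivial relationship, so S is dependent
```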
  
==Resources==
* [https://textbooks.math.gatech.edu/ila/subspaces.html Subspaces], Interactive Linear Algebra from Georgia Tech

== Licensing ==
Content obtained and/or adapted from:
* [https://en.wikibooks.org/wiki/Linear_Algebra/Definition_and_Examples_of_Linear_Independence Definition and Examples of Linear Independence, WikiBooks Linear Algebra] under a CC BY-SA license
* [https://en.wikibooks.org/wiki/Linear_Algebra/Subspaces_and_Spanning_sets Subspaces and Spanning Sets, WikiBooks Linear Algebra] under a CC BY-SA license
* [https://en.wikibooks.org/wiki/Linear_Algebra/Subspaces Subspaces, WikiBooks Linear Algebra] under a CC BY-SA license
Latest revision as of 20:15, 14 November 2021
