
BASES OF FINITE-DIMENSIONAL VECTOR SPACES
ROHAN RAMCHAND
Let S = {v1, v2, ..., vn} ⊆ V/F, where V is a vector space over a field F.
Definition 1. S is said to span V iff
∀ v ∈ V ∃ α1, α2, ..., αn ∈ F : v = Σ_{i=1}^{n} αi vi.
In other words, every vector in V can be written as a linear combination of the vectors
in S.
Definition 2. A set X = {u1, u2, ..., un} ⊆ V is linearly dependent if
∃ α1, α2, ..., αn ∈ F : αi0 ≠ 0 ∧ 0 = Σ_i αi ui.
In that case, since 0 = αi0 ui0 + Σ_{i≠i0} αi ui, we can solve for ui0:
ui0 = − Σ_{i≠i0} (αi / αi0) ui.
In other words, at least one vector in X can be written as a linear combination of the other vectors.
Definition 3. Let
δx(y) = 1 if y = x, and δx(y) = 0 otherwise.
δx is known as the delta function.
Example. Let F be a field and V = F^n a vector space over that field. Let ei = (δi(1), δi(2), ..., δi(n)).
The set S = {e1, e2, ..., en} is a basis of V.
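As a quick numerical illustration (a minimal sketch, not part of the original notes, assuming F = R and n = 4; the names delta, E, and v are illustrative), the standard basis vectors ei can be built directly from the delta function, and any vector is the linear combination of them whose coefficients are its own entries:

```python
import numpy as np

def delta(i, j):
    """Delta function: returns 1.0 if j == i, else 0.0."""
    return 1.0 if j == i else 0.0

n = 4
# e_i = (delta_i(1), delta_i(2), ..., delta_i(n)), with 1-based indices as in the notes.
E = [np.array([delta(i, j) for j in range(1, n + 1)]) for i in range(1, n + 1)]

# Any vector is the linear combination of the e_i whose coefficients are its own entries.
v = np.array([3.0, -1.0, 0.5, 2.0])
reconstruction = sum(v[i] * E[i] for i in range(n))
assert np.allclose(reconstruction, v)
```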
Example. Let R[X] be the vector space of real-valued polynomials over R. Let Sn = {1, x, x^2, ..., x^n} for some natural number n. Sn is linearly independent; however, it does not span R[X], since a polynomial of degree n + 1 (for instance, x^{n+1}) cannot be expressed as a linear combination of the elements of Sn.
Let R≤n[X] be the vector space of real-valued polynomials over R of degree at most n. Since any polynomial of degree at most n can be expressed as a linear
combination of the powers x^i with i ≤ n, Sn spans R≤n[X]. Therefore, since Sn spans R≤n[X] and is linearly independent, Sn is a basis of this vector space.
Additionally, the space R[X] does not have a finite spanning set: any finite set of polynomials has some maximum degree d, so its span cannot contain x^{d+1}.
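For concreteness, here is a small sketch (assuming F = R and n = 3; numpy and the names coords, xs are illustrative, not part of the notes) showing that the coordinates of a polynomial in R≤n[X] with respect to Sn are exactly its coefficients:

```python
import numpy as np

n = 3
# p(x) = 2 - x + 4x^3 lies in R_{<=3}[X]; its coordinates with respect to
# S_3 = {1, x, x^2, x^3} are simply its coefficients (2, -1, 0, 4).
coords = np.array([2.0, -1.0, 0.0, 4.0])

# Evaluate p two ways: directly, and as the linear combination sum_i coords[i] * x^i.
xs = np.linspace(-1.0, 1.0, 5)
direct = 2 - xs + 4 * xs**3
as_combination = sum(coords[i] * xs**i for i in range(n + 1))
assert np.allclose(direct, as_combination)
```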
Let X be a finite set and V = F(X) be the space of F-valued functions on X.
Theorem 1. The set SV = {δx : x ∈ X} is a basis of V.
Proof. Let f ∈ F(X) be an F-valued function on X, and let αx = f(x) for all x ∈ X. Then for any y ∈ X,
(Σ_{x∈X} f(x) δx)(y) = Σ_{x∈X} f(x) δx(y) = f(y)
(since δx(y) = 0 if x ≠ y, the only nonzero term in the sum is the one with x = y). Hence f = Σ_{x∈X} f(x) δx.
Therefore, SV spans V .
Let 0 be the zero function (i.e. the function with 0(x) = 0 for all x ∈ X), and suppose
Σ_{x∈X} αx δx = 0
for some coefficients αx ∈ F. Evaluating both sides at any x0 ∈ X gives
(Σ_{x∈X} αx δx)(x0) = 0(x0), i.e. αx0 = 0.
Therefore, the only way for a linear combination of the elements of SV to equal the zero function is for every coefficient to equal zero; therefore, SV is linearly independent.
Therefore, since SV both spans V and is linearly independent on V , it is a basis for V and
the proof is complete.
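The proof can be mirrored numerically. A minimal sketch, assuming F = R and a small concrete X (the names X, f, and delta are illustrative): any real-valued function on a finite set equals the sum of its values times the corresponding delta functions.

```python
# F = R and X = {"a", "b", "c"}; an F-valued function on X is a dict from X to R.
X = ["a", "b", "c"]
f = {"a": 2.0, "b": -1.0, "c": 0.5}

def delta(x):
    """The delta function delta_x on X, represented as a dict of its values."""
    return {y: (1.0 if y == x else 0.0) for y in X}

# f as the linear combination sum_{x in X} f(x) * delta_x, evaluated pointwise.
combination = {y: sum(f[x] * delta(x)[y] for x in X) for y in X}
assert combination == f  # S_V spans F(X), with coefficients alpha_x = f(x)
```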
The following theorems are properties of bases. The first is given without proof.
Theorem 2.
(1) Let S be a linearly independent set on a finite-dimensional vector space V and let S′ ⊆ S. Then S′ is linearly independent on V.
(2) Let S be a spanning set of a finite-dimensional vector space V and let S ⊆ S′ ⊆ V. Then S′ is a spanning set of V.
Theorem 3. Let V be a finite-dimensional vector space. Then V admits a basis.
The following lemmas are used in the proof of this theorem and are given without proof.
Lemma 1. Let V be a finite-dimensional vector space. Then there exists a finite set S ⊆ V that spans V.
Lemma 2. Let V be a finite-dimensional vector space and S a spanning set on V that is linearly dependent. Then there exists at least one element vi ∈ S such that S′ = S − {vi} still spans V.
Proof. Let S = {v1, v2, ..., vn} ⊆ V be a spanning set of a finite-dimensional space V.
(1) If S is linearly independent, the proof is complete.
(2) If S is linearly dependent, then
∃ i1 : vi1 = Σ_{i≠i1} αi vi.
Then define S′ = S − {vi1}. By Lemma 2, S′ spans V.
These cases are repeated as necessary until a linearly independent subset S∗ = S − {vi1, vi2, ..., vij}, for some j < n, remains. (S∗ is guaranteed to be non-empty, since the empty set cannot span a non-zero vector space.) Then S∗ spans V and is linearly independent, so it is a basis of V and the proof is complete.
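The proof of Theorem 3 is effectively an algorithm: repeatedly discard a vector that is a linear combination of the others until the remaining set is linearly independent. A minimal numerical sketch, assuming V = R^3 and using a rank test as the independence check (the function extract_basis and the example vectors are illustrative, not part of the notes):

```python
import numpy as np

def extract_basis(spanning):
    """Discard redundant vectors from a spanning set until it is linearly independent."""
    vectors = list(spanning)
    i = 0
    while i < len(vectors):
        others = vectors[:i] + vectors[i + 1:]
        # If dropping vectors[i] does not lower the rank, it is a linear
        # combination of the others and can be removed (Lemma 2).
        if others and np.linalg.matrix_rank(np.array(others)) == np.linalg.matrix_rank(np.array(vectors)):
            vectors = others
            i = 0  # restart the scan on the smaller set
        else:
            i += 1
    return vectors

S = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 0.0]),  # redundant: equals the sum of the first two
     np.array([0.0, 0.0, 1.0])]
basis = extract_basis(S)
assert np.linalg.matrix_rank(np.array(basis)) == len(basis) == 3
```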
Theorem 4. Let V be a finite-dimensional space with basis B = {v1, v2, ..., vn}. By definition,
∀ v ∈ V ∃ α1, α2, ..., αn ∈ F : v = Σ_{i≤n} αi vi.
These coefficients α1, α2, ..., αn are unique.
Proof. Let v ∈ V and assume its coefficients as defined above are non-unique:
v = Σ_{i≤n} αi vi = Σ_{i≤n} βi vi.
∴ Σ_{i≤n} (αi − βi) vi = 0
∴ ∀ i ≤ n : αi − βi = 0 (since B is linearly independent)
∴ ∀ i ≤ n : αi = βi.
We are now equipped to define the notion of the dimension of a vector space. However, we
will first need to prove a fundamental theorem of vector spaces.
Theorem 5. Let V be a finite-dimensional space. Let B1 = {v1, v2, ..., vm} and B2 = {u1, u2, ..., un} be bases of V of length m and n, respectively. Then m = n.
We will rely on the following lemma in the proof of this statement.
Lemma 3. (the Exchange Lemma) Let S = {v1, ..., vn} be a spanning set and let L = {u1, ..., um} be a linearly independent set on a finite-dimensional space V. Then m ≤ n.
Proof. Assume by contradiction that m > n. Let u1 be an element of the linearly independent set L. By definition of spanning,
u1 = Σ_i αi vi
for some α1, ..., αn, where at least one αi1 is nonzero. Then
vi1 = (1/αi1) u1 − Σ_{i≠i1} (αi/αi1) vi.
We will now “swap” the vector vi1 ∈ S with the vector u1 ∈ L: let S1 = (S − {vi1}) ∪ {u1}.
Since any vector v ∈ V can be written as
v = Σ_{i=1}^{n} βi vi = βi1 vi1 + Σ_{i≠i1} βi vi,
where βi is the corresponding coefficient for each vector vi ∈ S, and vi1 is in turn a linear combination of u1 and the vi with i ≠ i1, S1 spans V.
This process is then repeated n times (which is possible since n < m by assumption; at each step, some remaining vi carries a nonzero coefficient because the ui already swapped in are linearly independent). Then Sn = {u1, ..., un} is a spanning set by the argument above, and therefore un+1 = Σ_{i=1}^{n} αi ui for some coefficients αi. Since this implies that un+1 is linearly dependent on the other vectors of L, we have reached a contradiction.
Therefore, m ≤ n and the proof is complete.
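The exchange procedure can also be traced numerically. A minimal sketch, assuming V = R^3 (the particular sets S and L, and the helper spans_R3, are illustrative): at each step one spanning vector with a nonzero coefficient is swapped for a vector of L, and the exchanged set still spans.

```python
import numpy as np

def spans_R3(vectors):
    """Check whether a list of vectors spans R^3 via a rank test."""
    return np.linalg.matrix_rank(np.array(vectors)) == 3

S = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]   # spanning set, n = 3
L = [np.array([1.0, 1.0, 0.0]),
     np.array([0.0, 1.0, 1.0])]   # linearly independent set, m = 2 <= n

current = list(S)
protected = set()  # positions already holding some u_j from L
for u in L:
    # Write u in terms of the current spanning set; the lemma guarantees a
    # nonzero coefficient on some vector that is not an already-inserted u_j.
    coeffs = np.linalg.solve(np.array(current).T, u)
    swap_index = max((i for i in range(len(current)) if i not in protected),
                     key=lambda i: abs(coeffs[i]))
    current[swap_index] = u  # the "exchange" step
    protected.add(swap_index)
    assert spans_R3(current)  # the exchanged set still spans V
```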
We can now prove the theorem above.
Proof. Since B1 is a spanning set and B2 is a linearly independent set on V , n ≤ m.
However, B1 is a linearly independent set and B2 is a spanning set on V as well, so m ≤ n.
Therefore, m = n.
We have arrived, therefore, at the fundamental result that all bases of a finite-dimensional space have equal length. This length is known as the dimension of the space.
Definition 4. Let V be a finite-dimensional space with basis B = {v1, v2, ..., vn}. Then dim V = n.
We will now state two corollaries about the relationship of spanning sets and linearly
independent sets to bases. Both are provided without proof.
Corollary 1. Let S be a spanning set of a vector space V such that dim V = n. Then #S ≥ n.
Corollary 2. Let L be a linearly independent set over a vector space V such that dim V =
n. Then #L ≤ n.
With these corollaries, we can now state and prove two theorems about spanning sets of, and linearly independent sets on, a finite-dimensional space.
Theorem 6. Let S be a spanning set of a vector space V with dim V = n. Then if #S = n, S is a basis.
Proof. Assume by contradiction that S = {v1, ..., vn} is not a linearly independent set. Then ∃ i1 : vi1 = Σ_{i≠i1} αi vi. Then S1 = S − {vi1} is also a spanning set; however, #S1 = n − 1 < n, which contradicts Corollary 1. Therefore, S is linearly independent and spanning and is thus a basis.
Theorem 7. Let L be a linearly independent set on a vector space V with dim V = n.
Then if #L = n, L is a basis.
Proof. Assume by contradiction that L is not spanning. Then ∃ v ∈ V such that v cannot be expressed as a linear combination of the vectors in L. Then let L1 = L ∪ {v}; since v is not a linear combination of the vectors in L, L1 is linearly independent. However, #L1 = n + 1 > n, which contradicts Corollary 2; therefore, L is spanning and linearly independent and is thus a basis.
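In coordinates, both theorems reduce to a rank check. A minimal sketch, assuming V = R^3 (numpy and the example vectors are illustrative): n linearly independent vectors in an n-dimensional space form a basis, so every target vector has unique coordinates with respect to them.

```python
import numpy as np

n = 3
L = [np.array([1.0, 2.0, 0.0]),
     np.array([0.0, 1.0, 1.0]),
     np.array([1.0, 0.0, 1.0])]

A = np.array(L)
# #L == n and L is linearly independent (full rank), so by Theorem 7 it is a basis.
assert len(L) == n and np.linalg.matrix_rank(A) == n

# Spanning in action: any target vector has (unique) coordinates with respect to L.
target = np.array([4.0, -1.0, 2.0])
coords = np.linalg.solve(A.T, target)
assert np.allclose(sum(coords[i] * L[i] for i in range(n)), target)
```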
Ordered Bases
We now seek to induce some form of order on the notion of a basis.
Definition 5. Let B = {v1, ..., vn} be a basis of a finite-dimensional vector space V. Then the corresponding ordered basis of V is the n-tuple of vectors B⃗ = (v1, ..., vn).
One of the properties of a basis of a vector space V is that any vector in the space can be written as a linear combination of vectors in the basis (since a basis by definition spans the space). Let ϕB : V → F^n be the mapping from the vector space to the n-tuple of coefficients in F, such that for any v ∈ V,
v = Σ_{i=1}^{n} αi vi and ϕB(v) = (α1, ..., αn).
Since B is unordered, the mapping only produces an unordered set of coefficients, and the resulting tuple is therefore not unique; if, however, the ordered basis B⃗ is used instead, the mapping becomes unique.
Theorem 8. Let V be a finite-dimensional vector space over a field F and let B⃗ = (v1, ..., vn) be an ordered basis on V. Define the map
ϕB⃗ : V → F^n : ϕB⃗(v) = (α1, ..., αn)
such that
v = Σ_{i=1}^{n} αi vi.
Then ϕB⃗ is an isomorphism.
Proof. Let u, v ∈ V, such that u = Σ_{i=1}^{n} αi vi and v = Σ_{i=1}^{n} βi vi. Then ϕB⃗(u) = (α1, ..., αn) and ϕB⃗(v) = (β1, ..., βn). By definition of vector addition, u + v = Σ_{i=1}^{n} (αi + βi) vi; then
ϕB⃗(u + v) = (α1 + β1, ..., αn + βn) = (α1, ..., αn) + (β1, ..., βn) = ϕB⃗(u) + ϕB⃗(v).
Now let λ ∈ F be a scalar; then by definition of scalar multiplication, λu = λ Σ_{i=1}^{n} αi vi = Σ_{i=1}^{n} (λαi) vi. Then
ϕB⃗(λu) = (λα1, ..., λαn) = λ(α1, ..., αn) = λϕB⃗(u).
Therefore, ϕB⃗(u + v) = ϕB⃗(u) + ϕB⃗(v) and ϕB⃗(λu) = λϕB⃗(u); then ϕB⃗ is a linear transformation.
Let g : F^n → V be defined by g(α1, ..., αn) = α1 v1 + ... + αn vn. Then g ∘ ϕB⃗ = IdV and ϕB⃗ ∘ g = Id_{F^n}, where IdV and Id_{F^n} are the identity mappings on V and F^n, respectively. (The proof of this statement is left as an exercise.)
The mapping ϕB⃗ is therefore an isomorphism; moreover, it provides a bridge between the abstract vector space V and concrete coordinate computations in F^n, as illustrated in the following examples.
Example. Let V = R^2 over R, with B⃗ = ((1, 1), (1, −1)). Then for v = (x, y) ∈ V,
ϕB⃗(x, y) = (α, β) = ((x + y)/2, (x − y)/2).
This can also be expressed as
[α]   [ 1/2   1/2 ] [x]
[β] = [ 1/2  −1/2 ] [y]
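A short numerical check of this example (a sketch; numpy and the helper name phi are illustrative, not notation from the notes): computing ϕB⃗(x, y) amounts to solving a linear system in the basis vectors, and the result matches the closed form ((x + y)/2, (x − y)/2).

```python
import numpy as np

v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

def phi(v):
    """Coordinates of v with respect to the ordered basis (v1, v2)."""
    # Solve alpha * v1 + beta * v2 = v, i.e. the system whose columns are v1, v2.
    return np.linalg.solve(np.column_stack([v1, v2]), v)

v = np.array([3.0, -1.0])
alpha, beta = phi(v)
x, y = v
assert np.allclose((alpha, beta), ((x + y) / 2, (x - y) / 2))
```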
Example. Let V = R^2 and let B⃗θ = ((cos θ, sin θ), (− sin θ, cos θ)). ϕB⃗θ is left as an exercise to the reader. Its inverse is
ϕB⃗θ⁻¹(α, β) = (x, y)
= α(cos θ, sin θ) + β(− sin θ, cos θ)
= (α cos θ − β sin θ, α sin θ + β cos θ),
which can also be expressed as
[x]   [ cos θ  −sin θ ] [α]
[y] = [ sin θ   cos θ ] [β]