38 General Solutions to Homogeneous Linear Systems
In this chapter, we will develop the basic theory regarding solutions to standard first-order homogeneous N × N linear systems of differential equations. Fortunately, this theory is very similar
to that for single linear differential equations developed in chapters 12, 14 and 15. In fact, to
some extent, our discussion will be guided by what we already know about general solutions to
Nth-order linear differential equations. You should also expect to see significant use of a few
results from basic linear algebra.
Will we finally actually solve a few systems in this chapter? No, not really, but we will need
the theory developed here when we finally do start solving systems in the next chapter.
38.1 Basic Assumptions and Terminology
Throughout this chapter, N is some positive integer, (α, β) is some interval, and

                   [ p11(t)   p12(t)   · · ·   p1N(t) ]
    P = P(t)  =    [ p21(t)   p22(t)   · · ·   p2N(t) ]
                   [   ..       ..      ..       ..   ]          (38.1)
                   [ pN1(t)   pN2(t)   · · ·   pNN(t) ]
is a continuous N × N matrix-valued function on the interval (α, β) .
For now, our interest is just in the possible solutions to the homogeneous linear system
x′ = Px
(38.2)
over (α, β) . For brevity, we may just refer to this as “our system x′ = Px ” with the implicit
understanding that P is as just described. Along these same lines, let us simplify our verbiage
and agree that, in our discussion, the phrases “solution” and “solution to x′ = Px ” both mean
“solution over (α, β) to x′ = Px ”.
Also keep in mind that a solution x to this system is a vector-valued function on (α, β) ,

    x(t) = [x1(t) , x2(t) , . . . , xN(t)]^T ,
3/21/2014
Chapter & Page: 38–2
satisfying x′ = Px at every point in the interval (α, β) . Often, we will have several such
vector-valued functions. When we do, we will use superscripts to distinguish the different
vector-valued functions; that is, we will write the set of vector-valued functions as either

    { x1 , x2 , . . . , xM }      or      { x1(t) , x2(t) , . . . , xM(t) }
with

    x1(t) = [x11(t) , x21(t) , . . . , xN1(t)]^T ,
    x2(t) = [x12(t) , x22(t) , . . . , xN2(t)]^T ,
    . . .   and
    xM(t) = [x1M(t) , x2M(t) , . . . , xNM(t)]^T .

!◮Example 38.1: Consider the linear system of differential equations

    x′ = Px     with     P = [ 1    2 ]
                             [ 5   −2 ]  .
In this case P is a constant matrix and, hence, is a continuous 2×2 matrix-valued function
over the interval (−∞, ∞) . It is easily verified (see examples 36.2 and 37.1) that one pair of
solutions {x1 , x2} to this system is given by

    x1(t) = [ e^{3t} ]      and      x2(t) = [ −2e^{−4t} ]
            [ e^{3t} ]                       [  5e^{−4t} ]  ,
which we may write more simply as

    x1(t) = [ 1 ] e^{3t}      and      x2(t) = [ −2 ] e^{−4t}  .
            [ 1 ]                              [  5 ]
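A claim like “it is easily verified” can also be machine-checked. The following NumPy sketch (an illustrative addition, not part of the original example) confirms that both vector-valued functions satisfy x′ = Px , using the fact that the derivative of v e^{rt} is r v e^{rt} :

```python
import numpy as np

# P from example 38.1.
P = np.array([[1.0, 2.0],
              [5.0, -2.0]])

for t in np.linspace(-1.0, 1.0, 11):
    x1 = np.array([1.0, 1.0]) * np.exp(3 * t)      # proposed solution x1(t)
    x2 = np.array([-2.0, 5.0]) * np.exp(-4 * t)    # proposed solution x2(t)
    # d/dt of v e^{rt} is r v e^{rt}, so x' = Px reduces to r x = P x.
    assert np.allclose(3 * x1, P @ x1)
    assert np.allclose(-4 * x2, P @ x2)
```

The check reduces to the vector relations P[1, 1]^T = 3[1, 1]^T and P[−2, 5]^T = −4[−2, 5]^T , which is how such solution pairs are typically found.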
In the following, we will also be referring to ‘constants’, ‘vectors’ and, maybe, ‘constant
vectors’. Just to be clear: when we refer to something as being just a constant (not a constant
vector), we mean that something is a single real number. And if we refer to something as just
a vector or a constant vector, then that something is a column vector whose N components are
constants. So “ a is a vector” means a = [a1 , a2 , . . . , aN]^T with each ak being some single real
number.1
38.2 Deriving the Main Results
We’ll derive the main results, summarized in theorem 38.8, piece by piece in this section, culminating with a discussion of “fundamental sets of solutions”. Along the way, we will also develop
some of the concepts and terminology used in that theorem. Many of these will be concepts and
terms that you should recall from your study of linear algebra.
1 It is worth noting that the set of all such column vectors with N components is an N -dimensional vector space.
Immediate Results on Existence and Uniqueness
Our first lemma is simply a restatement of theorem 37.4 on page 37–19 with g = 0 .
Lemma 38.1 (existence and uniqueness of solutions)
Assume P is a continuous N × N matrix-valued function over the interval (α, β) , and let t0
and a be, respectively, a point in the interval (α, β) and a constant vector. Then the initial-value
problem
    x′ = Px     with     x(t0) = a ,

has exactly one solution over the interval (α, β) .
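As an illustrative sketch (not from the original text), one can watch this lemma in action by integrating the initial-value problem numerically; here P is borrowed from example 38.1, and a hand-rolled classical Runge–Kutta stepper is used so the example stays dependency-free:

```python
import numpy as np

P = np.array([[1.0, 2.0],
              [5.0, -2.0]])

def rk4(f, t0, x0, t1, n=2000):
    """Classical fourth-order Runge-Kutta integration of x' = f(t, x)."""
    t, x, h = t0, np.asarray(x0, float), (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Integrating from x(0) = [1, 1]^T reproduces the known solution [1, 1]^T e^{3t}.
x_at_1 = rk4(lambda t, x: P @ x, 0.0, [1.0, 1.0], 1.0)
assert np.allclose(x_at_1, np.exp(3.0) * np.array([1.0, 1.0]), rtol=1e-6)
```

The numerical solution lands (to high accuracy) on the one closed-form solution through that initial vector, which is what the uniqueness half of the lemma predicts.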
Linear Combinations and the Principle of Superposition
Recall that a linear combination of xk’s from any finite set

    { x1 , x2 , . . . , xM }

of either vectors or vector-valued functions is any expression of the form

    c1 x1 + c2 x2 + · · · + cM xM
where the ck ’s are constants. Keep in mind that, if the xk ’s are vector-valued functions on the
interval (α, β) , then
x = c1 x 1 + c2 x 2 + · · · + c M x M
means
    x(t) = c1 x1(t) + c2 x2(t) + · · · + cM xM(t)     for α < t < β .

Now suppose we have a linear combination c1 x1 + c2 x2 + · · · + cM xM in which each of
these xj’s is a solution to our linear system of differential equations; that is,

    dxj/dt = Pxj     for j = 1, 2, . . . , M .
Because of the linearity of differentiation and matrix multiplication, we then have

    d/dt [ c1 x1 + c2 x2 + · · · + cM xM ] = c1 dx1/dt + c2 dx2/dt + · · · + cM dxM/dt
                                           = c1 Px1 + c2 Px2 + · · · + cM PxM
                                           = P [ c1 x1 + c2 x2 + · · · + cM xM ] .
Cutting out the middle yields the systems version of superposition:
Lemma 38.2 (principle of superposition for systems)
If x1 , x2 , . . . and x M are all solutions to x′ = Px , then so is any linear combination of these
xk ’s .
Observe that, if {x1 , x2 , . . . , x M } is a set of solutions to our system x′ = Px and x is any
single solution equaling some linear combination of the xk ’s at one point t0 in (α, β) ,
x(t0 ) = c1 x1 (t0 ) + c2 x2 (t0 ) + · · · + c M x M (t0 ) ,
(38.3)
then

    x      and      c1 x1 + c2 x2 + · · · + cM xM
are both solutions to x′ = Px satisfying the same initial condition at t0 . But lemma 38.1 tells us
that there is only one solution to this initial-value problem. Hence, x and this linear combination
must be the same. That is,
x(t) = c1 x1 (t) + c2 x2 (t) + · · · + c M x M (t) for every value t in (α, β) .
(38.4)
This, along with the obvious fact that equation (38.4) implies equation (38.3), gives us our next
lemma.
Lemma 38.3
Let {x1 , x2 , . . . , x M } be any set of solutions to x′ = Px , where P is a continuous N × N
matrix-valued function on the interval (α, β) . Also let {c1 , c2 , . . . , c M } be a set of constants,
and let t0 be a point in the interval (α, β) . Then, for any solution x to x′ = Px ,
x(t0 ) = c1 x1 (t0 ) + c2 x2 (t0 ) + · · · + c M x M (t0 ) for one value t0 in (α, β)
if and only if
x(t) = c1 x1 (t) + c2 x2 (t) + · · · + c M x M (t) for every value t in (α, β) .
An application using the above lemmas is now in order. It will give you an idea of where
we are heading.
!◮Example 38.2: We already know that

    x1(t) = [ 1 ] e^{3t}      and      x2(t) = [ −2 ] e^{−4t}
            [ 1 ]                              [  5 ]

are both solutions (over (−∞, ∞) ) to

    x′ = Px     with     P = [ 1    2 ]
                             [ 5   −2 ]  .
The principle of superposition now assures us that, for any pair c1 and c2 of constants, the
linear combination

    c1 x1(t) + c2 x2(t) = c1 [ 1 ] e^{3t} + c2 [ −2 ] e^{−4t}
                             [ 1 ]             [  5 ]

is also a solution to our homogeneous system.
The obvious question now is whether every solution is given by a linear combination of
x1 and x2 . To answer that, let x(t) = [x(t), y(t)]^T be any single solution to x′ = Px , and
consider the problem of finding constants c1 and c2 such that

    x(t) = c1 x1(t) + c2 x2(t)     for −∞ < t < ∞ .
According to our last lemma, this problem is completely equivalent to the problem of finding
constants c1 and c2 such that
x(t0 ) = c1 x1 (t0 ) + c2 x2 (t0 )
for some t0 in (−∞, ∞) .
Letting t0 = 0 , and using the formulas for x1 and x2 , the last equation becomes

    [ x(0) ] = c1 [ 1 ] e^{3·0} + c2 [ −2 ] e^{−4·0} ,
    [ y(0) ]      [ 1 ]              [  5 ]

which we can rewrite as a pair of linear algebraic equations,

    x(0) = 1c1 − 2c2
    y(0) = 1c1 + 5c2 .
But this is clearly a solvable algebraic system of linear equations no matter what x(0) and
y(0) happen to be. In fact, as you can easily verify, the one and only one solution (c1 , c2) to
this system is given by

    c1 = (1/7)[ 5x(0) + 2y(0) ]      and      c2 = (1/7)[ y(0) − x(0) ] .
Thus, using these values for c1 and c2 , we have

    x(t) = c1 x1(t) + c2 x2(t)     for −∞ < t < ∞ .
So, at least for the system of differential equations being considered here, the answer to
the question of whether every solution is given by a linear combination of x1 and x2 is yes.
The above shows that, given any solution x , we can find one (and only one) corresponding
pair of constants (c1 , c2) such that

    x(t) = c1 x1(t) + c2 x2(t) = c1 [ 1 ] e^{3t} + c2 [ −2 ] e^{−4t} .
                                    [ 1 ]             [  5 ]
In other words, the above expression is a general solution to our system x′ = Px .
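The elimination carried out in this example is a two-equation linear solve, which NumPy performs directly. In the sketch below (an added illustration), the initial values x(0) and y(0) are hypothetical sample numbers:

```python
import numpy as np

x0, y0 = 3.0, -1.0   # hypothetical values of x(0) and y(0)

# Columns are x1(0) = [1, 1]^T and x2(0) = [-2, 5]^T.
X0 = np.array([[1.0, -2.0],
               [1.0,  5.0]])
c1, c2 = np.linalg.solve(X0, np.array([x0, y0]))

# Closed forms obtained by eliminating by hand from
#   x(0) = c1 - 2*c2  and  y(0) = c1 + 5*c2.
assert np.isclose(c1, (5 * x0 + 2 * y0) / 7)
assert np.isclose(c2, (y0 - x0) / 7)
```

Whatever numbers are used for x(0) and y(0) , the solve succeeds, mirroring the “no matter what x(0) and y(0) happen to be” remark above.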
As suggested in the above example, our goal is to show that, for any given P , every solution
x to x′ = Px can be written as a linear combination of solutions from some ‘fundamental set’
of solutions,

    { x1 , x2 , . . . , xM } .

Moreover, as illustrated in the above example, we can use lemma 38.3 to convert the problem
of finding constants c1 , c2 , . . . and cM such that

    x(t) = c1 x1(t) + c2 x2(t) + · · · + cM xM(t)     for α < t < β

to the problem of finding constants c1 , c2 , . . . and cM such that

    x(t0) = c1 x1(t0) + c2 x2(t0) + · · · + cM xM(t0)

for a single t0 . But remember that another lemma, lemma 38.1, assures us that there is a solution
x to our system of differential equations satisfying x(t0 ) = a for each vector a and each t0 in
(α, β) . Combining this fact with the results from lemma 38.3 gives our next lemma, which will
play a major role in our final derivations.
Lemma 38.4
Assume P is a continuous N × N matrix-valued function on the interval (α, β) . Let t0 be a
point in this interval, and let

    { x1 , x2 , . . . , xM }

be any set of solutions to x′ = Px . Then every solution x to x′ = Px can be written as a linear
combination of the xk’s if and only if every vector a can be written as a linear combination of
vectors from the set

    { x1(t0) , x2(t0) , . . . , xM(t0) } .
This lemma, along with a similar lemma concerning ‘linear independence’, will play a major
role in our final derivations. So let’s now bring back the basic notion of linear (in)dependence.
Linear Independence
Let

    { x1 , x2 , . . . , xM }
be either a set of vectors or a set of vector-valued functions on (α, β) . Recall that this set is said
to be linearly independent if and only if none of these xk’s can be written as a linear combination
of the other xk’s . Otherwise, we say the set is linearly dependent; that is, the set is linearly
dependent if and only if one of these xk’s can be written as a linear combination of the other xk’s .
Two quick observations:

1.  Any constant multiple of a single xk is a (very simple) linear combination of that xk .
    In particular, since 0 = 0xk , any set containing the zero vector or the zero vector-valued
    function is automatically linearly dependent.

2.  If we just have a pair {x1 , x2} , the concepts of linear (in)dependence simplify to the pair
    being linearly independent if and only if neither x1 nor x2 is a constant multiple of the
    other.
!◮Example 38.3: Consider the set {x1 , x2} of vector-valued functions from the last example,
where

    x1(t) = [ 1 ] e^{3t}      and      x2(t) = [ −2 ] e^{−4t} .
            [ 1 ]                              [  5 ]

Clearly, there is no constant C such that either

    [ 1 ] e^{3t} = C [ −2 ] e^{−4t}     for −∞ < t < ∞
    [ 1 ]            [  5 ]

or

    [ −2 ] e^{−4t} = C [ 1 ] e^{3t}     for −∞ < t < ∞ .
    [  5 ]             [ 1 ]

So this set of two vector-valued functions is linearly independent.
Similarly, consider the set of vectors {b1 , b2} given by the above vector-valued functions
at t = 0 ,

    b1 = [ 1 ] e^{3·0} = [ 1 ]      and      b2 = [ −2 ] e^{−4·0} = [ −2 ] .
         [ 1 ]           [ 1 ]                    [  5 ]            [  5 ]

Again, it should be clear that there is no constant C such that either

    [ 1 ] = C [ −2 ]      or      [ −2 ] = C [ 1 ] .
    [ 1 ]     [  5 ]              [  5 ]     [ 1 ]
So this set of two vectors is linearly independent.
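In matrix terms, b1 and b2 are linearly independent exactly when the matrix having them as columns has full column rank, which NumPy can test directly (an added check, not part of the original example):

```python
import numpy as np

# b1 = x1(0) and b2 = x2(0) from the example, placed in columns.
B = np.column_stack(([1.0, 1.0], [-2.0, 5.0]))

# Full column rank means the only solution of B c = 0 is c = 0,
# i.e. neither column is a multiple of the other.
assert np.linalg.matrix_rank(B) == 2
```

For larger sets of vectors this rank test scales far better than eyeballing constant multiples.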
At this point, let us recall a test for linear independence from your study of linear
algebra.2
Lemma 38.5 (a basic test for linear independence)
A set {x1 , x2 , . . . , x M } of vectors or vector-valued functions is linearly independent if and only
if the only choice of constants c1 , c2 , . . . and c M such that
    c1 x1 + c2 x2 + · · · + cM xM = 0

is

    c1 = c2 = · · · = cM = 0 .
Observe that if we have two linear combinations of the same xk’s equaling the same a ,

    a = c1 x1 + c2 x2 + · · · + cM xM      and      a = C1 x1 + C2 x2 + · · · + CM xM ,

then

    (c1 − C1)x1 + (c2 − C2)x2 + · · · + (cM − CM)xM = a − a = 0 .
From this, you should have no problem in verifying that the above test for linear independence
is equivalent to the following “test”.
Lemma 38.6 (alternative test for linear independence)
Let {x1 , x2 , . . . , x M } be a set of vectors or vector-valued functions. This set is linearly independent if and only if, for each a that can be written as a linear combination of the xk ’s , there is
only one choice of constants c1 , c2 , . . . and c M such that
    a = c1 x1 + c2 x2 + · · · + cM xM .
Now suppose

    { x1 , x2 , . . . , xM }
is a set of solutions over (α, β) to our system x′ = Px , and let t0 be in (α, β) . Lemma 38.3
tells us that any one solution x j is a linear combination of the other xk ’s if and only if the
corresponding vector x j (t0 ) is a linear combination of the other xk (t0 )’s . This observation is
worth writing down as a lemma in terms of linear independence.
2 If you don’t recall this test, see exercise 38.1 at the end of the chapter.
Lemma 38.7
Assume P is a continuous N × N matrix-valued function on the interval (α, β) . Let t0 be a
point in this interval, and let

    { x1 , x2 , . . . , xM }

be any set of M solutions to x′ = Px . Then this set is a linearly independent set of vector-valued
functions if and only if

    { x1(t0) , x2(t0) , . . . , xM(t0) }

is a linearly independent set of vectors.
Compare the above lemma with lemma 38.4. Both will play a major role in the following.
Fundamental Sets of Solutions
Basic Definition
We now define a fundamental set of solutions for x′ = Px to be any linearly independent set

    { x1 , x2 , . . . , xM }

of solutions to x′ = Px such that every solution to x′ = Px can be written as a linear combination
of the xj’s in this set. Note that, if the above is a fundamental set of solutions for x′ = Px , then

    x = c1 x1 + c2 x2 + · · · + cM xM

(with the ck’s being arbitrary constants) is a general solution for x′ = Px .
Describing Fundamental Sets of Solutions
It turns out that there are numerous alternative ways to describe fundamental sets. To see this,
let

    X = { x1 , x2 , . . . , xM }

be a set of solutions to x′ = Px where, as usual, P is a continuous N×N matrix-valued function
on an interval (α, β) . Take any point t0 in the interval, and let

    B = { b1 , b2 , . . . , bM }

be the set of vectors given by

    bk = xk(t0)     for k = 1, 2, . . . , M .
From our basic definition of a ‘fundamental set of solutions’ we know:
The set X is a fundamental set of solutions to x′ = Px if and only if X is a linearly
independent set of vector-valued functions such that any solution to x′ = Px can
be written as a linear combination of the xk ’s .
From lemmas 38.4 and 38.7, we know this last statement is completely equivalent to:
The set X is a fundamental set of solutions to x′ = Px if and only if B is a linearly
independent set of vectors such that any vector can be written as a linear combination
of the bk ’s .
Throwing in lemma 38.6 we get another equivalent statement:
The set X is a fundamental set of solutions to x′ = Px if and only if, for each
vector a = [a1 , a2 , . . . , aN]^T , there is one and only one choice of constants c1 , c2 ,
. . . and cM such that

    a = c1 b1 + c2 b2 + · · · + cM bM .
At this point, you probably realize that the last two statements are saying that the set of xk’s
is a fundamental set of solutions if and only if the set of bk’s is a ‘basis’ for the vector space of all
column vectors with N components. From linear algebra, we know that M , the number of
vectors in the set B , must equal N , the number of components in each column vector. Moreover,
from linear algebra, we know that any set of N linearly independent vectors will be a basis for
this space of column vectors.3 So either of the last two statements about X can be rephrased as
The set X is a fundamental set of solutions to x′ = Px if and only if M = N and
B is a linearly independent set of vectors.
Applying lemma 38.7 once again to the last statement yields:
The set X is a fundamental set of solutions to x′ = Px if and only if M = N and
X is linearly independent.
All of the above could be considered pieces of one big lemma. Rather than state that lemma
here, we will summarize the most relevant pieces in a major theorem in a page or two.
Existence of Fundamental Sets of Solutions
Finally, let us observe that fundamental sets of solutions do exist. After all, no matter what N
is, we can always find a linearly independent set of N vectors with N components,
    { b1 , b2 , . . . , bN } .

For example, if N = 3 we can use

    b1 = [ 1 ]        b2 = [ 0 ]       and      b3 = [ 0 ]
         [ 0 ] ,           [ 1 ]                     [ 0 ]  .
         [ 0 ]             [ 0 ]                     [ 1 ]
And for any point t0 in (α, β) and every bk , lemma 38.1 assures us that there is a solution xk
to x′ = Px satisfying xk (t0 ) = bk . As noted in the last paragraph above, it then follows that
    { x1 , x2 , . . . , xN }
is a fundamental set of solutions to our system of differential equations.
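The construction just described can be imitated numerically. The sketch below (an illustrative addition, using the explicit solutions of example 38.1 rather than a numerical ODE solver) builds solutions xk with xk(t0) = ek and confirms that the matrix of their values stays invertible away from t0 :

```python
import numpy as np

def u(t):  # eigen-style solution of example 38.1, growth rate 3
    return np.array([1.0, 1.0]) * np.exp(3 * t)

def v(t):  # eigen-style solution of example 38.1, growth rate -4
    return np.array([-2.0, 5.0]) * np.exp(-4 * t)

t0 = 0.0
S = np.column_stack((u(t0), v(t0)))

# For each standard basis vector e_k, pick the combination equal to e_k at t0.
C = np.linalg.solve(S, np.eye(2))     # column k holds the coefficients for x^k

def X(t):
    # Matrix whose k-th column is x^k(t); by construction X(t0) is the identity.
    return np.column_stack((u(t), v(t))) @ C

assert np.allclose(X(t0), np.eye(2))
for t in (-1.0, 0.5, 2.0):
    assert abs(np.linalg.det(X(t))) > 0.0   # columns stay independent
```

This mirrors the argument in the text: independence of the initial vectors at t0 carries over to every other t .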
3 An alternative derivation not using ‘basis’ of the fact that M = N is given in section 38.4.
38.3 The Main Result on General Solutions to Linear Systems
Looking back over the discussion on fundamental sets of solutions in the last section, you will
see that we have verified the following major theorem on general solutions to linear systems of
differential equations.
Theorem 38.8 (general solutions to homogeneous systems)
Let P be a continuous N × N matrix-valued function on an interval (α, β) , and consider the
system of differential equations x′ = Px . Then all the following statements hold:
1.  Fundamental sets of solutions over (α, β) for this system exist.

2.  Every fundamental set of solutions contains exactly N solutions.

3.  If {x1 , x2 , . . . , xN} is any linearly independent set of N solutions to x′ = Px on (α, β) ,
    then

    (a)  {x1 , x2 , . . . , xN} is a fundamental set of solutions for x′ = Px on (α, β) .

    (b)  A general solution to x′ = Px on (α, β) is given by

             x(t) = c1 x1(t) + c2 x2(t) + · · · + cN xN(t)

         where c1 , c2 , . . . and cN are arbitrary constants.

    (c)  Given any single point t0 in (α, β) and any constant vector a , there is exactly one
         ordered set of constants {c1 , c2 , . . . , cN} such that

             x(t) = c1 x1(t) + c2 x2(t) + · · · + cN xN(t)

         satisfies the initial condition x(t0) = a .
This theorem is the systems analog of theorem 14.2 on page 348 concerning general solutions
to single N th -order homogeneous linear differential equations. In fact, theorem 14.2 can be
considered a corollary to the above.
38.4 Wronskians and Identifying Fundamental Sets
As illustrated in the previous examples, determining whether a set of solutions is a fundamental
set for our problem x′ = Px is fairly easy when P is 2×2 . Our goal now is to come up with a
method for identifying a fundamental set of solutions that can be easily applied when P is
N × N , even when N > 2 .
Let us start by assuming we have a set

    { x1 , x2 , . . . , xM }
of vector-valued functions on the interval (α, β) , each with N components,

    x1(t) = [x11(t) , x21(t) , . . . , xN1(t)]^T ,
    x2(t) = [x12(t) , x22(t) , . . . , xN2(t)]^T ,
    . . .   and
    xM(t) = [x1M(t) , x2M(t) , . . . , xNM(t)]^T .
For the moment, we need not assume the xk ’s are solutions to our N ×N system of differential
equations, nor will we assume N = M .
A “Matrix/Vector” Formula for Linear Combinations
Observe:

    c1 x1 + c2 x2 + · · · + cM xM

        = c1 [x11 , x21 , . . . , xN1]^T + c2 [x12 , x22 , . . . , xN2]^T + · · · + cM [x1M , x2M , . . . , xNM]^T

        = [ x11 c1 + x12 c2 + · · · + x1M cM ]
          [ x21 c1 + x22 c2 + · · · + x2M cM ]
          [                ..                ]
          [ xN1 c1 + xN2 c2 + · · · + xNM cM ]

        = [ x11   x12   · · ·   x1M ] [ c1 ]
          [ x21   x22   · · ·   x2M ] [ c2 ]
          [  ..    ..    ..      ..  ] [ .. ]
          [ xN1   xN2   · · ·   xNM ] [ cM ]  .
That is, for α < t < β ,

    c1 x1(t) + c2 x2(t) + · · · + cM xM(t) = [X(t)] c

where

    X(t) = [ x11(t)   x12(t)   · · ·   x1M(t) ]
           [ x21(t)   x22(t)   · · ·   x2M(t) ]
           [   ..       ..      ..       ..   ]
           [ xN1(t)   xN2(t)   · · ·   xNM(t) ]

and

    c = [c1 , c2 , . . . , cM]^T .
The above N × M matrix-valued function X will be important to us. In general, we’ll
simply call it the matrix whose k th column is given by xk .
!◮Example 38.4: The matrix whose k th column is given by xk when

    x1(t) = [ e^{3t} ]      and      x2(t) = [ −2e^{−4t} ]
            [ e^{3t} ]                       [  5e^{−4t} ]
is

    X(t) = [ e^{3t}   −2e^{−4t} ]
           [ e^{3t}    5e^{−4t} ]  .

Observe that, indeed,

    [X(t)] c = [ e^{3t}   −2e^{−4t} ] [ c1 ]  =  [ c1 e^{3t} − 2c2 e^{−4t} ]
               [ e^{3t}    5e^{−4t} ] [ c2 ]     [ c1 e^{3t} + 5c2 e^{−4t} ]

             = c1 [ e^{3t} ] + c2 [ −2e^{−4t} ]  =  c1 x1(t) + c2 x2(t) .
                  [ e^{3t} ]      [  5e^{−4t} ]
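The identity [X(t)]c = c1 x1(t) + c2 x2(t) is easy to spot-check numerically; in this added sketch the constants are arbitrary test values:

```python
import numpy as np

def X(t):
    # Matrix whose k-th column is x^k(t), as in example 38.4.
    return np.array([[np.exp(3 * t), -2 * np.exp(-4 * t)],
                     [np.exp(3 * t),  5 * np.exp(-4 * t)]])

c = np.array([0.7, -1.3])   # arbitrary test constants
for t in (-0.5, 0.0, 1.2):
    combo = (c[0] * np.array([1.0, 1.0]) * np.exp(3 * t)
             + c[1] * np.array([-2.0, 5.0]) * np.exp(-4 * t))
    assert np.allclose(X(t) @ c, combo)   # matrix/vector form equals the sum
```

The same one-liner `X(t) @ c` replaces the explicit sum for systems of any size.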
Deriving a ‘Simple’ Test
Now assume these xk ’s are solutions to our system x′ = Px , and let t0 be any single value
in (α, β) . From lemmas 38.4, 38.7 and 38.6, we know (as noted on page 38–9 using slightly
different notation) that:
The set {x1 , x2 , . . . , xM} is a fundamental set of solutions to x′ = Px if and only if,
for each vector a = [a1 , a2 , . . . , aN]^T , there is one and only one choice of constants
c1 , c2 , . . . and cM such that

    c1 x1(t0) + c2 x2(t0) + · · · + cM xM(t0) = a .                    (38.5)
However, from the observations made just before our last example, we know that equation (38.5)
is equivalent to the algebraic system of N equations and M unknowns
    x11(t0)c1 + x12(t0)c2 + · · · + x1M(t0)cM = a1
    x21(t0)c1 + x22(t0)c2 + · · · + x2M(t0)cM = a2                      (38.6)
                    ..
    xN1(t0)c1 + xN2(t0)c2 + · · · + xNM(t0)cM = aN ,
which can also be written as the matrix/vector equation

    [X(t0)] c = a                                                       (38.7)

where c = [c1 , c2 , . . . , cM]^T and X(t) is the N×M matrix whose k th column is given by xk(t) .
But solving either algebraic system (38.6) or matrix/vector equation (38.7) is a classic
problem in linear algebra, and from linear algebra we know there is one and only one solution
c for each a if and only if

    M = N      and      X(t0) is invertible .

If these two conditions are both satisfied, then c can be determined from each a by

    c = [X(t0)]^{−1} a
where [X(t0 )]−1 is the inverse of matrix X(t0 ) . (In practice, though, a “row reduction” method
may be a more efficient way to find c .)
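For the 2×2 solution pair used in the earlier examples, both routes to c (explicit inversion, and a row-reduction style solve) look like this in NumPy; the vector a here is hypothetical sample data:

```python
import numpy as np

# X(0) built from the solution pair of example 38.4, plus made-up data a.
X0 = np.array([[1.0, -2.0],
               [1.0,  5.0]])
a = np.array([4.0, -3.0])

c_via_inverse = np.linalg.inv(X0) @ a    # c = [X(t0)]^{-1} a
c_via_solve = np.linalg.solve(X0, a)     # the "row reduction" route

assert np.allclose(c_via_inverse, c_via_solve)
assert np.allclose(X0 @ c_via_solve, a)  # both satisfy [X(t0)] c = a
```

In floating-point work, `solve` is generally preferred over forming the inverse, for both speed and numerical stability.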
Now, to make life easier, recall that there is a relatively simple test for determining if a given
square matrix M is invertible4 based on the matrix’s determinant, det(M) ; namely,

    M is invertible     ⇐⇒     det(M) ≠ 0 .

Thus, our set of M solutions is a fundamental set of solutions if and only if

    M = N      and      det(X(t0)) ≠ 0 .
Wronskians and Identifying Fundamental Sets
The last line above gives us a useful test for determining if a given set of solutions is a fundamental set of solutions. It also gives the author an excuse for introducing additional ‘standard’
terminology concerning any set

    { x1 , x2 , . . . , xN }

of N vector-valued functions on an interval (α, β) , with each xk having N components. The
Wronskian, W , of this set is the function on (α, β) given by

    W(t) = det(X(t))
where X is the matrix whose k th column is given by xk .
Using the ‘Wronskian’, we can now properly state the test we have just derived above.
Theorem 38.9 (Identifying Fundamental Sets of Solutions)
Let {x1 , x2 , . . . , xM} be a set of M solutions to x′ = Px , with P being a continuous N × N
matrix-valued function on an interval (α, β) . Then this set is a fundamental set of solutions for
x′ = Px if and only if both of the following hold:

1.  M = N .

2.  For any single t0 in (α, β) , W(t0) ≠ 0 , where W is the Wronskian of {x1 , x2 , . . . , xM} .
!◮Example 38.5: It is not hard to verify that three solutions (on (−∞, ∞) ) to

    x′ = Px     with     P = [ 2   −12/5   4/5 ]
                             [ 2    −3      1  ]
                             [ 0    6/5    8/5 ]
4 More terminology you should recall: M is singular ⇐⇒ M is not invertible, and M is nonsingular ⇐⇒ M is invertible.
are

    x1(t) = [ 1 ] e^{2t}        x2(t) = [  2 ] e^{−2t}       and      x3(t) = [ 3 ] e^{2t}  .
            [ 1 ] ,                     [  3 ] ,                              [ 1 ]
            [ 3 ]                       [ −1 ]                                [ 3 ]

The corresponding matrix whose k th column is given by xk is

    X(t) = [  e^{2t}   2e^{−2t}   3e^{2t} ]
           [  e^{2t}   3e^{−2t}    e^{2t} ]
           [ 3e^{2t}   −e^{−2t}   3e^{2t} ]

and the Wronskian is

    W(t) = det(X(t)) = |  e^{2t}   2e^{−2t}   3e^{2t} |
                       |  e^{2t}   3e^{−2t}    e^{2t} |
                       | 3e^{2t}   −e^{−2t}   3e^{2t} |  .

Computing out this determinant is not difficult, but not necessary. All we need to compute
is W(t0) for some convenient value t0 , say t0 = 0 :

    W(0) = det(X(0)) = | 1    2   3 |
                       | 1    3   1 |
                       | 3   −1   3 |

         = 1 |  3   1 |  −  2 | 1   1 |  +  3 | 1    3 |
             | −1   3 |       | 3   3 |       | 3   −1 |

         = 1[9 + 1] − 2[3 − 3] + 3[−1 − 9] = −20 .
Since W(0) ≠ 0 , the above theorem tells us that the set {x1 , x2 , x3} is a fundamental set of
solutions for the above system of differential equations. And that means

    x(t) = c1 [ 1 ] e^{2t} + c2 [  2 ] e^{−2t} + c3 [ 3 ] e^{2t}
              [ 1 ]             [  3 ]              [ 1 ]
              [ 3 ]             [ −1 ]              [ 3 ]

is a general solution to the 3×3 system x′ = Px being considered here.
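The determinant evaluated in this example takes one NumPy call to double-check (an added verification, not part of the original text):

```python
import numpy as np

# Values at t = 0 of the three solutions of example 38.5, placed in columns.
X0 = np.array([[1.0,  2.0, 3.0],
               [1.0,  3.0, 1.0],
               [3.0, -1.0, 3.0]])

W0 = np.linalg.det(X0)       # the Wronskian at t0 = 0
assert np.isclose(W0, -20.0)
```

For larger systems this is the practical way to apply theorem 38.9: evaluate each solution at one convenient point and take a determinant.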
By the way, the fact that we can choose t0 arbitrarily in (α, β) tells us that whether W (t0 )
is zero or not is totally independent of the choice of t0 . That gives us the following corollary.
Corollary 38.10
Assume P is a continuous N × N matrix-valued function on an interval (α, β) , and let W be
the Wronskian of a set of N solutions to x′ = Px . Then
    W(t0) ≠ 0     for one value t0 in (α, β)

if and only if

    W(t) ≠ 0     for every value t in (α, β) .

38.5 Fundamental Matrices
In the last section, we introduced the matrix-valued function X whose k th column is given by
the k th vector-valued function in a set

    { x1 , x2 , . . . , xN } .
In the future, we will refer to X as a fundamental matrix for x′ = Px if and only if the above
set is a fundamental set of solutions for x′ = Px . Fundamental matrices will play a role in some
of our later discussions.
!◮Example 38.6: In example 38.5, just above, we considered the problem

    x′ = Px     with     P = [ 2   −12/5   4/5 ]
                             [ 2    −3      1  ]
                             [ 0    6/5    8/5 ]  ,

and saw that the set {x1 , x2 , x3} with

    x1(t) = [ 1 ] e^{2t}        x2(t) = [  2 ] e^{−2t}       and      x3(t) = [ 3 ] e^{2t}
            [ 1 ] ,                     [  3 ] ,                              [ 1 ]
            [ 3 ]                       [ −1 ]                                [ 3 ]

is a fundamental set of solutions to the given problem x′ = Px . Hence, the matrix whose k th
column is given by xk ,

    X(t) = [  e^{2t}   2e^{−2t}   3e^{2t} ]
           [  e^{2t}   3e^{−2t}    e^{2t} ]
           [ 3e^{2t}   −e^{−2t}   3e^{2t} ]  ,

is a fundamental matrix for this problem.
Additional Exercises
38.1. Consider the two equations

    xM = C1 x1 + C2 x2 + · · · + CM−1 xM−1                              (38.8)

and

    c1 x1 + c2 x2 + · · · + cM xM = 0 ,                                 (38.9)
where {x1 , x2 , . . . , x M } is a set of vector-valued functions on an interval (α, β) .
a. Using simple algebra, show that equation (38.8) holds for some constants C1 , C2 ,
. . . and C M−1 if and only if equation (38.9) holds for some constants c1 , c2 , . . .
and cM with cM ≠ 0 .
b. Expand on the above and explain how it follows that at least one of the xk ’s must be a
linear combination of the other xk ’s if and only if equation (38.9) holds with at least
one of the ck ’s being nonzero.
c. Finish proving lemma 38.5 on page 38–7.
38.2. Consider the system

    x′ = y
    y′ = −4t^{−2} x + 3t^{−1} y .
a. Rewrite this system in matrix/vector form.
b. What are the largest intervals over which we can be sure solutions to this system exist?
c. Verify that

    x1(t) = [ t^2 ]      and      x2(t) = [ t^2 ln |t|       ]
            [ 2t  ]                       [ t (1 + 2 ln |t|) ]
are both solutions to this system.
d. Compute the Wronskian W (t) of the set of the above xk ’s at some convenient nonzero
point t = t0 (part of this problem is to choose a convenient point). What does this
value of W (t0 ) tell you?
e. Using the above, find the solution to the above system satisfying

    i. x(1) = [1, 0]^T          ii. x(1) = [0, 1]^T
38.3. Consider the system

    x′ =  0x + 2y − 2z
    y′ = −2x + 4y − 2z
    z′ =  2x + 2y − 4z .
a. Rewrite this system in matrix/vector form.
b. What is the largest interval over which we are sure solutions to this system exist?
c. Verify that

    x(t) = [ 1 ]        y(t) = [ 1 ] e^{−2t}       and      z(t) = [ 1 ] e^{2t}
           [ 1 ] ,             [ 1 ] ,                             [ 2 ]
           [ 1 ]               [ 2 ]                               [ 1 ]

are all solutions to this system.
d. Compute the Wronskian W(t) of the set {x, y, z} at some convenient point t = t0
(choosing a convenient point is part of the problem), and verify that the above {x, y, z}
is a fundamental set of solutions to the above system of differential equations.
38.4. Four solutions to

    x′ = [ 0   −2   0 ]
         [ 1    0   1 ] x
         [ 0   −2   0 ]

are

    x1(t) = [ cos(2t) ]        x2(t) = [  sin(2t) ]
            [ sin(2t) ] ,              [ −cos(2t) ] ,
            [ cos(2t) ]                [  sin(2t) ]

    x3(t) = [ −sin^2(t)     ]                   [  1 ]
            [ sin(t) cos(t) ]    and    x4(t) = [  0 ]  .
            [ cos^2(t)      ]                   [ −1 ]
Given this, determine which of the following are fundamental sets of solutions to the
given system:
a. {x1 , x2 }
b. {x1 , x4 }
c. {x1 , x2 , x3 }
d. {x1 , x2 , x4 }
e. {x1 , x3 , x4 }
f. {x1 , x2 , x3 , x4 }
38.5. Four solutions to

    x′ = [ −1   −1   2 ]
         [ −8    1   4 ] x
         [ −4   −1   5 ]

are

    x1(t) = [ 0 ] e^{3t}        x2(t) = [ 1 ] e^{3t}
            [ 2 ] ,                     [ 0 ] ,
            [ 1 ]                       [ 2 ]

    x3(t) = [  3 ] e^{3t}       and     x4(t) = [ 1 ] e^{−t}  .
            [ −4 ]                              [ 2 ]
            [  4 ]                              [ 1 ]
Given this, determine which of the following are fundamental sets of solutions to the
given system:
a. {x1 , x2 }
b. {x1 , x4 }
c. {x1 , x2 , x3 }
d. {x1 , x2 , x4 }
e. {x1 , x3 , x4 }
f. {x1 , x2 , x3 , x4 }
38.6. Traditionally (i.e., in most other texts), corollary 38.10 on page 38–14 is usually proven
by showing that the Wronskian W of a set of N solutions to an N×N system x′ = Px
satisfies the differential equation

    W′ = ( p11 + p22 + · · · + pNN ) W ,

and then solving this differential equation and verifying that the solution is nonzero over
the interval of interest if and only if it is nonzero at one point in the interval. Do this
yourself for the case where N = 2 .
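As a concrete illustration of the identity to be proved (added here as a hint, using the solution pair from example 38.1, for which the trace is p11 + p22 = 1 + (−2) = −1), a centered finite difference confirms W′ = (p11 + p22)W at sample points:

```python
import numpy as np

def W(t):
    # Wronskian of x1 = [1,1]^T e^{3t} and x2 = [-2,5]^T e^{-4t}.
    X = np.array([[np.exp(3 * t), -2 * np.exp(-4 * t)],
                  [np.exp(3 * t),  5 * np.exp(-4 * t)]])
    return np.linalg.det(X)

trace_P = -1.0   # p11 + p22 for P = [[1, 2], [5, -2]]
h = 1e-6
for t in (-1.0, 0.0, 0.7):
    dW = (W(t + h) - W(t - h)) / (2 * h)   # centered finite difference
    assert np.isclose(dW, trace_P * W(t), rtol=1e-4)
```

Here W(t) works out to 7e^{−t} in closed form, which indeed satisfies W′ = −W; the exercise asks for the general 2×2 argument.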
Some Answers to Some of the Exercises
WARNING! Most of the following answers were prepared hastily and late at night. They
have not been properly proofread! Errors are likely!
2a. [ x ]′  =  [    0         1     ] [ x ]
    [ y ]      [ −4t^{−2}   3t^{−1} ] [ y ]

2b. (−∞, 0) and (0, ∞)

2d. W(1) = 1 ≠ 0 (Hence {x1 , x2} is a fundamental set of solutions.)

2e i.  x(t) = x1(t) − 2x2(t) = [ t^2 (1 − 2 ln |t|) ]
                               [ −4t ln |t|         ]

2e ii. x(t) = x2(t) = [ t^2 ln |t|       ]
                      [ t (1 + 2 ln |t|) ]

3a. [ x ]′     [  0   2   −2 ] [ x ]
    [ y ]   =  [ −2   4   −2 ] [ y ]
    [ z ]      [  2   2   −4 ] [ z ]

3b. (−∞, ∞)

3d. W(0) = −1
4a. It is not a fundamental set – the set is too small.
4b. It is not a fundamental set – the set is too small.
4c. It is a fundamental set – there are three solutions in the set, and W(0) ≠ 0 .
4d. It is a fundamental set – there are three solutions in the set, and W(0) ≠ 0 .
4e. It is not a fundamental set – there are three solutions in the set, but W(0) = 0 .
4f. It is not a fundamental set – the set is too large.
5a. It is not a fundamental set – the set is too small.
5b. It is not a fundamental set – the set is too small.
5c. It is not a fundamental set – there are three solutions in the set, but W(0) = 0 .
5d. It is a fundamental set – there are three solutions in the set, and W(0) ≠ 0 .
5e. It is a fundamental set – there are three solutions in the set, and W(0) ≠ 0 .
5f. It is not a fundamental set – the set is too large.