7.4 Basic Theory of Systems of First Order Linear Equations

Introduction
The most general system of first order linear equations has the form
\[
\begin{aligned}
x_1' &= p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t),\\
x_2' &= p_{21}(t)x_1 + p_{22}(t)x_2 + \cdots + p_{2n}(t)x_n + g_2(t),\\
&\;\;\vdots\\
x_n' &= p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t).
\end{aligned}
\]
In matrix form:
\[
x' = P(t)x + g(t).
\]
Here x = (x_1, x_2, \dots, x_n)^T, g(t) = (g_1(t), g_2(t), \dots, g_n(t))^T, and the matrix P = (p_{ij}(t)) is the coefficient matrix. We first investigate the solutions of the homogeneous system
\[
x' = P(t)x. \tag{1}
\]
In this section, we are going to address these questions:
(1) If we have two or more solutions on hand, can we generate more solutions?
(2) Does the system have a fundamental set of solutions?
(3) How many "special" solutions do we need in order to form a fundamental set?
(4) How can we tell if a group of solutions forms a fundamental set?
(5) How can we actually find a fundamental set?
These questions are exactly the same as for the second order linear ODE in
Section 3.2. The answers are also almost the same. Recall that a solution of the
system (1) is a vector function:
\[
x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}.
\]
Theorem 1.1. If the vector functions x(1) , x(2) are solutions of the system (1),
then the linear combination c1 x(1) + c2 x(2) is also a solution for any constants c1
and c2 .
As a consequence, if we have k different solutions x(1), x(2), ..., x(k), then any linear combination of these solutions
\[
x = c_1 x^{(1)} + c_2 x^{(2)} + \cdots + c_k x^{(k)}
\]
also solves the system (1). This is called the principle of superposition, and it answers Question (1).
The following theorem answers Questions (2) and (3):
Theorem 1.2. If the vector functions x(1), x(2), ..., x(n) are solutions of the system (1) that are linearly independent at each point of the interval α < t < β, then these n solutions form a fundamental set of solutions. That is, every solution φ(t) can be expressed as a linear combination of x(1), x(2), ..., x(n):
\[
\varphi(t) = c_1 x^{(1)} + c_2 x^{(2)} + \cdots + c_n x^{(n)}.
\]
To answer Question (4), we will need our old friend, the Wronskian. If we have n solutions x(1), x(2), ..., x(n), we can form an n × n matrix X whose columns are the vectors x(1), x(2), ..., x(n):
\[
X(t) = \begin{pmatrix}
x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t)\\
x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t)\\
\vdots & \vdots & \ddots & \vdots\\
x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t)
\end{pmatrix}.
\]
Theorem 1.3. The solutions x(1), x(2), ..., x(n) are linearly independent at a point if and only if the Wronskian
\[
W[x^{(1)}, x^{(2)}, \dots, x^{(n)}](t) = \det X(t)
\]
is nonzero there. Moreover, the Wronskian is either identically zero or never vanishes on the interval (α, β).
This theorem tells us how to determine whether a group of solutions forms a fundamental set (i.e., is linearly independent at each point). In fact, by the second part of the theorem, it suffices to check the Wronskian of the solutions at a single point t.
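As an illustration (again with an assumed system, P = [[0, 1], [-1, 0]], whose solutions include x1(t) = (cos t, -sin t) and x2(t) = (sin t, cos t)): the Wronskian is the determinant of the matrix with these solutions as columns, which here equals cos²t + sin²t = 1 and so never vanishes:

```python
import numpy as np

# Wronskian of two assumed solutions x1(t) = (cos t, -sin t) and
# x2(t) = (sin t, cos t) of x' = [[0, 1], [-1, 0]] x: the determinant
# of the matrix X(t) whose columns are x1(t) and x2(t).
def wronskian(t):
    X = np.column_stack([
        [np.cos(t), -np.sin(t)],   # x1(t)
        [np.sin(t), np.cos(t)],    # x2(t)
    ])
    return np.linalg.det(X)

# det X(t) = cos^2 t + sin^2 t = 1 for every t, so checking one point
# (as the theorem permits) already shows the pair is a fundamental set.
for t in [0.0, 1.3, 4.0]:
    assert abs(wronskian(t) - 1.0) < 1e-12
print("Wronskian is identically 1")
```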
Finally, the following theorem answers Question (5): how, in practice, to find a fundamental set.
Theorem 1.4. Let
\[
e^{(1)} = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},\quad
e^{(2)} = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix},\quad
\dots,\quad
e^{(n)} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix};
\]
further, for each e(i), let x(i) be the unique solution of the system (1) with the initial condition
\[
x^{(i)}(t_0) = e^{(i)}.
\]
Then the solutions x(1), x(2), ..., x(n) form a fundamental set of solutions of the system (1).
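Theorem 1.4 suggests a practical recipe: solve the system n times, once from each standard basis vector, and the resulting solutions stay linearly independent. A rough numerical sketch (assuming a constant-coefficient example matrix and using a crude forward-Euler integrator, purely for illustration):

```python
import numpy as np

# Integrate x' = P x from all standard basis vectors e^(i) at once:
# the columns of X(t) are then the solutions x^(i) of Theorem 1.4.
# P is an assumed example matrix; forward Euler is a crude integrator
# used only to illustrate the idea.
P = np.array([[1.0, 1.0], [4.0, 1.0]])

X = np.eye(2)                  # X(0) = I, i.e. columns e^(1), e^(2)
dt = 1e-4
for _ in range(10_000):        # integrate from t = 0 to t = 1
    X = X + dt * (P @ X)       # one Euler step for every column

W = np.linalg.det(X)           # Wronskian at t = 1
print(W)                       # nonzero: the columns remain independent
```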
Example 1.5. Consider the system
\[
x' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix} x.
\]
Two solutions are given:
\[
x^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{3t},\qquad
x^{(2)}(t) = \begin{pmatrix} 1 \\ -2 \end{pmatrix} e^{-t}.
\]
Verify that they solve the system and that they form a fundamental set of solutions.
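The check in Example 1.5 can also be carried out numerically; a small sketch (the matrix and solutions are exactly those of the example, and only the test points are arbitrary):

```python
import numpy as np

# Example 1.5: P = [[1, 1], [4, 1]], with the given solutions
# x1(t) = (1, 2) e^{3t} and x2(t) = (1, -2) e^{-t}.
P = np.array([[1.0, 1.0], [4.0, 1.0]])

def x1(t):  return np.array([1.0, 2.0]) * np.exp(3 * t)
def dx1(t): return 3 * x1(t)     # d/dt of e^{3t} brings down a 3
def x2(t):  return np.array([1.0, -2.0]) * np.exp(-t)
def dx2(t): return -x2(t)        # d/dt of e^{-t} brings down a -1

for t in [0.0, 0.7, 1.5]:
    assert np.allclose(dx1(t), P @ x1(t))   # x1 solves x' = P x
    assert np.allclose(dx2(t), P @ x2(t))   # x2 solves x' = P x
    W = np.linalg.det(np.column_stack([x1(t), x2(t)]))
    assert not np.isclose(W, 0.0)           # Wronskian = -4 e^{2t} != 0
print("fundamental set verified")
```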