
Chapter 7: Systems of Linear Differential
Equations
Philip Gressman
University of Pennsylvania
The Beginning
Definition
Every vector function (with values in $\mathbb{R}^n$) which is $k$-times continuously differentiable on an interval $I$ is said to belong to $C^k(I, \mathbb{R}^n)$.
Basic Fact
The space $C^k(I, \mathbb{R}^n)$ is a vector space over the reals under pointwise addition and scalar multiplication. This vector space is infinite-dimensional.
Important Transformations
Differentiation maps $C^k(I, \mathbb{R}^n)$ to $C^{k-1}(I, \mathbb{R}^n)$ for $k > 0$ and maps $C^\infty(I, \mathbb{R}^n)$ to itself. If $A(t)$ is a $k$-times continuously differentiable matrix-valued function, then multiplication by $A$ on the left also maps $C^k(I, \mathbb{R}^n)$ to itself.
Linear Independence
Definition
Vector functions $x_1, \ldots, x_\ell$ are, as always, called linearly independent when the only constants $c_1, \ldots, c_\ell$ for which
$$c_1 x_1 + \cdots + c_\ell x_\ell = 0$$
are $c_1 = \cdots = c_\ell = 0$.
IMPORTANT: Remember that when we say that a vector
function equals zero, that means it equals the old-fashioned zero
vector at every single point.
Linear DEPENDENCE is hard to achieve: if $x_1, \ldots, x_\ell$ are linearly independent at even a single point, then as vector functions they are linearly independent. The converse is not true: they might even be linearly dependent at every point and still be linearly independent as vector functions.
Linear Independence
We say that time-dependent vectors $\vec{X}_1, \ldots, \vec{X}_n$ are linearly independent on an interval $I$ when the only constants $c_1, \ldots, c_n$ such that
$$c_1\vec{X}_1(t) + c_2\vec{X}_2(t) + \cdots + c_n\vec{X}_n(t) \equiv 0$$
on the entire interval are all zeros.
Examples:
Ind: $\begin{pmatrix} 1 \\ t \end{pmatrix}$, $\begin{pmatrix} 1+t \\ t \end{pmatrix}$.
Dep: three vectors whose entries are built from $\cos^2 t$, $\sin^2 t$, $\sin t$, $\cos t$, and $1$; the identity $\cos^2 t + \sin^2 t - 1 \equiv 0$ produces the dependence.
Ind: $\begin{pmatrix} t \\ 0 \end{pmatrix}$, $\begin{pmatrix} t^2 \\ 0 \end{pmatrix}$; these are linearly dependent at every fixed $t$, yet linearly independent as vector functions.
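To see the last pair concretely, here is a small sympy check (my own sketch, not from the slides): at each fixed $t$ the two vectors are parallel, yet $c_1 t + c_2 t^2 \equiv 0$ forces $c_1 = c_2 = 0$.

    import sympy as sp

    t, c1, c2 = sp.symbols('t c1 c2')
    x1 = sp.Matrix([t, 0])       # first vector function
    x2 = sp.Matrix([t**2, 0])    # second vector function

    # At every fixed t the 2x2 determinant vanishes: pointwise dependent.
    print(sp.Matrix.hstack(x1, x2).det())   # 0

    # As functions: c1*t + c2*t**2 must vanish identically, so every
    # polynomial coefficient must be zero.
    coeffs = sp.Poly(c1*t + c2*t**2, t).coeffs()
    print(sp.solve(coeffs, [c1, c2]))       # {c1: 0, c2: 0}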
The Wronskian
Given time-dependent column vectors $\vec{X}_1, \ldots, \vec{X}_n$, each of length $n$, we define the Wronskian to be the determinant of the matrix whose columns are exactly $\vec{X}_1, \ldots, \vec{X}_n$, i.e.,
$$W(\vec{X}_1, \ldots, \vec{X}_n) = \det(\vec{X}_1, \ldots, \vec{X}_n).$$
FACT: If the Wronskian is nonzero at even a single point, then $\vec{X}_1, \ldots, \vec{X}_n$ must be linearly independent. In fact, they might even be linearly independent when the Wronskian is always zero, but for solutions to first-order systems of ODEs this pathology does not happen.
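A minimal sympy sketch of the FACT (my example vectors, not from the slides):

    import sympy as sp

    t = sp.symbols('t')
    X1 = sp.Matrix([sp.exp(t), 0])
    X2 = sp.Matrix([0, sp.exp(2*t)])

    # Wronskian = determinant of the matrix whose columns are X1, X2.
    W = sp.Matrix.hstack(X1, X2).det()
    print(sp.simplify(W))   # exp(3*t): never zero, so X1, X2 are independent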
Linear ODE Systems: §7.1–7.2
First-Order Linear Systems
A first-order linear system may be written in the form
$$\frac{d}{dt}\vec{X} = A(t)\vec{X} + \vec{G}(t).$$
Here $A$ is an $n \times n$ matrix whose entries may or may not depend on $t$, $\vec{G}(t)$ is a column vector of length $n$ which is fully described in the problem itself, and $\vec{X}(t)$ is an unknown column vector of length $n$ whose entries may depend on $t$.
General Solutions
The general solution is a complete listing of all solution vectors.
Initial Value Problem
This asks for the specific solution for which $\vec{X}(0)$ is prescribed.
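As a computational aside (a sketch under assumed data: the matrix $A$ and forcing $\vec{G}$ below are my own illustrations), scipy can integrate such an IVP numerically:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, x):
        # X' = A(t) X + G(t); constant A and a simple forcing, for illustration.
        A = np.array([[0.0, 1.0], [-1.0, 0.0]])
        G = np.array([0.0, np.cos(t)])
        return A @ x + G

    sol = solve_ivp(rhs, (0.0, 10.0), y0=[1.0, 0.0])  # X(0) = (1, 0)
    print(sol.y[:, -1])   # approximate value of X(10)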
Important General Facts about First-Order Systems
• Higher-order systems of ODEs can always be recast as a system of first-order ODEs with more unknown functions (see the example below).
• Systems of ODEs can always be solved by elimination; however, this is a labor-intensive approach, since the unknown constants end up related to one another and you will have to do a lot of linear equation solving.
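Example for the first bullet (a standard conversion, stated here for concreteness): the second-order equation $y'' + 3y' + 2y = 0$ becomes a first-order system by setting $x_1 = y$ and $x_2 = y'$:
$$\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$
since $x_1' = x_2$ and $x_2' = y'' = -2y - 3y' = -2x_1 - 3x_2$.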
Homogeneous Equations
Definition
The equation $\frac{d}{dt}\vec{X} = A(t)\vec{X}$ is called homogeneous. If your equation is given as $\frac{d}{dt}\vec{X} = A(t)\vec{X} + \vec{G}(t)$ for some nonzero $\vec{G}(t)$, then the equation $\frac{d}{dt}\vec{X} = A(t)\vec{X}$ is called the associated homogeneous first-order system.
Superposition Principle
If $\vec{X}_1(t)$ and $\vec{X}_2(t)$ are solutions of the homogeneous ODE $\frac{d}{dt}\vec{X} = A(t)\vec{X}$, then $c_1\vec{X}_1 + c_2\vec{X}_2$ will also be a solution. The same can be said for any linear combination of any number of solutions (i.e., more than two solutions).
7.3: Theory of First-order Systems
Theorem: Existence and Uniqueness
The IVP $x(t_0) = x_0$, $x'(t) = A(t)x(t) + b(t)$, for $x$ and $b$ vector-valued functions of time and $A$ a matrix-valued function of time, has a unique $C^1$ solution on any interval $I$ containing $t_0$ on which $A$ and $b$ are continuous.
Consequences
Theorem: When $x(t)$ is a time-dependent vector in $\mathbb{R}^n$, the general solution of $x'(t) = A(t)x(t)$ on any interval is an $n$-dimensional vector space.
Theorem: Solutions $x_1, \ldots, x_n$ are linearly independent if and only if the Wronskian is never zero.
Theorem: If $x_p$ is any solution to $x' = Ax + b$, then the general solution to this ODE is given by $x = x_c + x_p$, where $x_c$ ranges over all solutions of the associated homogeneous equation.
Inhomogeneous Systems
Finding the general solution of an inhomogeneous system is only
slightly more difficult than solving a homogeneous one.
1. You must first find some solution $\vec{X}_p$. It is called a particular solution.
2. The general solution of the inhomogeneous system will always be of the form
$$\vec{X} = c_1\vec{X}_1 + \cdots + c_n\vec{X}_n + \vec{X}_p,$$
where $\vec{X}_1, \ldots, \vec{X}_n$ are a fundamental set of solutions (i.e., a complete set) for the associated homogeneous system.
Homogeneous Linear Systems: §7.4
We consider a system of ODEs of the form
$$\frac{d}{dt}\vec{X} = A\vec{X}$$
where $\vec{X}$ is a column vector of length $n$ and $A$ is an $n \times n$ matrix with constant entries. We begin by looking for very simple solutions, then use the superposition principle to describe the more complicated ones.
The Simplest Case
An example of a very simple solution is one whose direction does not change (only its magnitude does). Such a solution is expressible in the form
$$\vec{X}(t) = f(t)\vec{E}$$
where $\vec{E}$ is a constant vector and $f$ is some unknown scalar function of $t$.
When you assume that a solution has some special form, the assumption is known as an ansatz. This is a completely reasonable thing to do, and it is mathematically rigorous, because you might simply end up learning that no solutions of that form exist. We plug our ansatz
Our Ansatz
$$\vec{X}(t) = f(t)\vec{E}$$
into the equation and get
$$f'(t)\vec{E} = f(t)A\vec{E} \quad\Rightarrow\quad A\vec{E} = \frac{f'(t)}{f(t)}\,\vec{E}.$$
If the equation is to be true at all times, then $\frac{f'(t)}{f(t)}$ must be constant. Call the constant $\lambda$. We arrive at the eigenvector equation...
The Conclusion
If $\vec{E}$ is an eigenvector of $A$ with eigenvalue $\lambda$, then
$$\vec{X}(t) = Ce^{\lambda t}\vec{E}$$
solves the first-order system
$$\frac{d}{dt}\vec{X} = A\vec{X}.$$
Moreover, linearly independent eigenvectors give linearly independent solutions of the system.
If $A$ is $n \times n$ and has $n$ linearly independent eigenvectors $\vec{E}_1, \ldots, \vec{E}_n$ with eigenvalues $\lambda_1, \ldots, \lambda_n$, then the general solution of the system will be
$$\vec{X}(t) = C_1 e^{\lambda_1 t}\vec{E}_1 + \cdots + C_n e^{\lambda_n t}\vec{E}_n.$$
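A numerical sketch of this recipe (my own example matrix, not from the slides):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])       # eigenvalues 3 and -1
    lams, E = np.linalg.eig(A)       # columns of E are eigenvectors

    # General solution: X(t) = C1*e^{lam1 t} E1 + C2*e^{lam2 t} E2.
    def X(t, C1=1.0, C2=1.0):
        return C1*np.exp(lams[0]*t)*E[:, 0] + C2*np.exp(lams[1]*t)*E[:, 1]

    # Sanity check that X'(t) = A X(t), via a centered finite difference.
    h = 1e-6
    print((X(1.0 + h) - X(1.0 - h)) / (2*h) - A @ X(1.0))   # ~ (0, 0)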
Complex Eigenvalues
If $A$ is a real matrix with complex eigenvalue $\lambda = \alpha + i\beta$ and eigenvector $\vec{E} = \vec{E}_{\mathrm{re}} + i\vec{E}_{\mathrm{im}}$, then
$$\vec{X}(t) = e^{\alpha t + i\beta t}(\vec{E}_{\mathrm{re}} + i\vec{E}_{\mathrm{im}})$$
will be a solution. Since $A$ is real, this can only happen if the real part and the imaginary part are each solutions by themselves. We conclude that
$$\vec{X}_{\mathrm{re}}(t) = e^{\alpha t}(\cos\beta t)\vec{E}_{\mathrm{re}} - e^{\alpha t}(\sin\beta t)\vec{E}_{\mathrm{im}}$$
$$\vec{X}_{\mathrm{im}}(t) = e^{\alpha t}(\sin\beta t)\vec{E}_{\mathrm{re}} + e^{\alpha t}(\cos\beta t)\vec{E}_{\mathrm{im}}$$
are linearly independent real solutions of the system of ODEs.
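A worked instance (my own example): for $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ we have $\lambda = i$ (so $\alpha = 0$, $\beta = 1$) with eigenvector $\vec{E} = (1, -i)^T$, and the recipe gives $\vec{X}_{\mathrm{re}}(t) = (\cos t, \sin t)^T$ and $\vec{X}_{\mathrm{im}}(t) = (\sin t, -\cos t)^T$. A sympy check:

    import sympy as sp

    t = sp.symbols('t', real=True)
    A = sp.Matrix([[0, -1], [1, 0]])

    Xre = sp.Matrix([sp.cos(t), sp.sin(t)])
    Xim = sp.Matrix([sp.sin(t), -sp.cos(t)])

    # Both real solutions satisfy X' = A X identically.
    print(sp.simplify(Xre.diff(t) - A*Xre))   # Matrix([[0], [0]])
    print(sp.simplify(Xim.diff(t) - A*Xim))   # Matrix([[0], [0]])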
“Missing” Eigenvectors
If $A$ does not have $n$ linearly independent eigenvectors, the ansatz gives only a partial answer and we end up missing some solutions. We fix this by making a better ansatz (with increasing complexity depending on how bad the situation is). For example:
New Ansatz
$$\vec{X}(t) = e^{\lambda t}\vec{E}_2 + te^{\lambda t}\vec{E}.$$
Plug it into $\frac{d}{dt}\vec{X} = A\vec{X}$, and we get
$$e^{\lambda t}\left[\lambda\vec{E}_2 + (1 + \lambda t)\vec{E}\right] = e^{\lambda t}\left[A\vec{E}_2 + tA\vec{E}\right].$$
We must have $A\vec{E} = \lambda\vec{E}$ and $(A - \lambda I)\vec{E}_2 = \vec{E}$.
We must take $\vec{E}$ to be an eigenvector, but $\vec{E}_2$ satisfies a different equation and is called a generalized eigenvector.
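A small sketch (my example): the defective matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has the single eigenvector $\vec{E} = (1, 0)^T$ for $\lambda = 1$; solving $(A - \lambda I)\vec{E}_2 = \vec{E}$ gives, e.g., $\vec{E}_2 = (0, 1)^T$.

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, 1], [0, 1]])
    lam = 1
    E = sp.Matrix([1, 0])                    # eigenvector
    E2 = sp.Matrix([0, 1])                   # one solution of (A - I) E2 = E

    print((A - lam*sp.eye(2))*E2 - E)        # Matrix([[0], [0]])

    # The ansatz X(t) = e^{t} E2 + t e^{t} E then solves X' = A X.
    X = sp.exp(lam*t)*E2 + t*sp.exp(lam*t)*E
    print(sp.simplify(X.diff(t) - A*X))      # Matrix([[0], [0]])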
“Missing” Eigenvectors in General
General Ansatz
$$\vec{X}(t) = e^{\lambda t}\left(\frac{t^n}{n!}\vec{E}_n + \cdots + t\vec{E}_1 + \vec{E}_0\right)$$
Generalized Eigenvectors
The general ansatz will solve the system when
$$(A - \lambda I)\vec{E}_n = 0,$$
$$(A - \lambda I)\vec{E}_{n-1} = \vec{E}_n,$$
$$\vdots$$
$$(A - \lambda I)\vec{E}_0 = \vec{E}_1.$$
§7.8: Solution by diagonalization
Given the first-order system
$$\frac{d}{dt}\vec{X} = A\vec{X},$$
one useful technique you should be able to use is solution by diagonalization. Here the idea is like substitution: you assume $\vec{X} = P\vec{Y}$ for some matrix $P$ and then try to solve for $\vec{Y}$ instead of $\vec{X}$:
$$\frac{d}{dt}(P\vec{Y}) = A(P\vec{Y}) \quad\Rightarrow\quad \frac{d}{dt}\vec{Y} = P^{-1}AP\,\vec{Y}.$$
So if $A$ is diagonalizable, you can do the following:
1. Solve the system
$$\frac{d}{dt}\vec{Y} = D\vec{Y},$$
where $D = P^{-1}AP$ is the diagonalization of $A$ (the columns of $P$ are eigenvectors of $A$).
2. To solve the original system, simply set $\vec{X} = P\vec{Y}$.
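A sympy sketch of this recipe (my own example matrix):

    import sympy as sp

    t, c1, c2 = sp.symbols('t c1 c2')
    A = sp.Matrix([[1, 2], [2, 1]])

    P, D = A.diagonalize()     # A = P D P^{-1}; columns of P are eigenvectors

    # Decoupled system Y' = D Y: each entry solves y_k' = D[k,k] * y_k.
    Y = sp.Matrix([c1*sp.exp(D[0, 0]*t), c2*sp.exp(D[1, 1]*t)])
    X = P * Y                  # map back to the original variables

    print(sp.simplify(X.diff(t) - A*X))   # Matrix([[0], [0]])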
§7.8: Solution by exponentiation
Matrix exponentiation
$$e^{At} := I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots$$
Solution of the IVP
There is exactly one solution to the IVP
$$\frac{d}{dt}\vec{X}(t) = A\vec{X}(t), \qquad \vec{X}(0) = \vec{V},$$
and it equals $\vec{X}(t) = e^{At}\vec{V}$.
There are several tricks that you might use to carry out the infinite sum and write down a simple formula that equals $e^{At}$.
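In practice one can also evaluate $e^{At}$ numerically (a sketch, not from the slides; scipy's expm computes the matrix exponential):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    V = np.array([1.0, 0.0])     # initial condition X(0) = V

    t = 0.5
    print(expm(A * t) @ V)       # X(t) = e^{At} V; here (cos t, -sin t)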
§7.8: Solution by exponentiation
1. Use diagonalization to find a pattern for the powers $A, A^2, A^3, A^4, \ldots$. The exponential of a diagonal matrix is simply the exponential of each of the diagonal entries.
2. Solve it like a system of equations: you can write $e^{At} = b_0(t)I + b_1(t)A + \cdots + b_{n-1}(t)A^{n-1}$ for unknown functions $b_0, \ldots, b_{n-1}$. Often you can solve for these functions using the fact that
$$e^{\lambda t} = b_0(t) + b_1(t)\lambda + \cdots + b_{n-1}(t)\lambda^{n-1}$$
for each of the eigenvalues $\lambda$ (note that you will be able to solve when you have $n$ distinct eigenvalues).
3. For $2 \times 2$ matrices: if there is only one eigenvalue and only one eigenvector, then the matrix exponential will take the form
$$e^{At} = e^{\lambda t}\left[I + t(A - \lambda I)\right].$$
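A quick sympy check of the third trick (with a defective matrix of my own choosing):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[3, 1], [0, 3]])     # one eigenvalue (3), one eigenvector
    lam = 3

    lhs = (A*t).exp()                   # exact matrix exponential
    rhs = sp.exp(lam*t) * (sp.eye(2) + t*(A - lam*sp.eye(2)))
    print(sp.simplify(lhs - rhs))       # Matrix([[0, 0], [0, 0]])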
Phase Portraits
A phase portrait is a simultaneous plot of several solutions of an ODE. The axes are the coordinates of the solution vector and the time variable is suppressed.
[Figures: phase portraits for two eigenvalues $< 0$, two eigenvalues $> 0$, and eigenvalues of mixed signs. Pictures from Paul's Online Math Notes.]
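To draw such a portrait yourself (a sketch assuming matplotlib is available; the matrix is my own example with eigenvalues $-1$ and $-2$):

    import numpy as np
    import matplotlib.pyplot as plt

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2

    x, y = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
    u = A[0, 0]*x + A[0, 1]*y                  # first component of X'
    v = A[1, 0]*x + A[1, 1]*y                  # second component of X'

    plt.streamplot(x, y, u, v)
    plt.xlabel('x1'); plt.ylabel('x2')
    plt.title('Both eigenvalues negative: trajectories approach the origin')
    plt.show()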
Phase Portraits for Complex Eigenvalues
When the eigenvalue is $\lambda = \alpha + i\beta$:
[Figures: phase portraits for $\alpha < 0$, $\alpha = 0$, and $\alpha > 0$. Pictures from Paul's Online Math Notes.]
Phase Portraits for “Missing” Eigenvectors
[Figures: phase portraits for $\lambda < 0$ and $\lambda > 0$. Pictures from Paul's Online Math Notes.]
Inhomogeneous Systems/Undetermined Coefficients
Just like for single inhomogeneous ODEs, one can often make an
educated guess about the form of the particular solution:
$$\frac{d}{dt}\vec{X} = \begin{pmatrix} 2 & -1 \\ 0 & 3 \end{pmatrix}\vec{X} + \begin{pmatrix} 1 + e^{-t} \\ 2 \end{pmatrix} \quad\Rightarrow\quad \vec{X}_p = \vec{V}_1 + \vec{V}_2 e^{-t}$$
The structure of the method is still the same:
1. Expand the inhomogeneous terms to look like vectors times constants, exponentials, powers of $t$, and/or sines and cosines.
2. Use the tables from undetermined coefficients and/or your intuition to guess the form of the particular solution.
3. Instead of multiplying the terms in $\vec{X}_p$ by unknown constants, multiply by unknown vectors.
4. Try to solve for the unknown vectors. If it doesn't work, try including more terms in your guess with higher powers of $t$ attached.
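Carrying out the displayed example with sympy (a sketch; I write $\vec{V}_1 = (a_1, a_2)^T$ and $\vec{V}_2 = (b_1, b_2)^T$ for the unknown vectors):

    import sympy as sp

    t, s = sp.symbols('t s')
    a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')

    A = sp.Matrix([[2, -1], [0, 3]])
    G = sp.Matrix([1 + sp.exp(-t), 2])
    Xp = sp.Matrix([a1, a2]) + sp.Matrix([b1, b2])*sp.exp(-t)   # the guess

    # X' - A X - G must vanish for all t: the constant part and the
    # e^{-t} part of each component are separate equations (write s = e^{-t}).
    residual = (Xp.diff(t) - A*Xp - G).expand().subs(sp.exp(-t), s)

    eqs = []
    for r in residual:
        eqs += [r.coeff(s, 0), r.coeff(s, 1)]

    print(sp.solve(eqs, [a1, a2, b1, b2]))
    # {a1: -5/6, a2: -2/3, b1: -1/3, b2: 0}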