18.755 second problem set solutions
1. Suppose V is a finite-dimensional real vector space, and
φ: R → GL(V)
is a continuous homomorphism, and ǫ ∈ C_c^∞(R) is a smooth function of
compact support. For v ∈ V, define
v_\epsilon = \int_{-\infty}^{\infty} \epsilon(t)\,\phi(t)v\,dt.
(1) Prove that
\phi(s)v_\epsilon = \int_{-\infty}^{\infty} \epsilon(t-s)\,\phi(t)v\,dt.
In order to apply a linear transformation to an integral of a vector-valued continuous function over a compact set, you can apply the linear transformation under
the integral sign. This is a generalization of
A \int_a^b f(t)\,dt + B \int_a^b g(t)\,dt = \int_a^b (Af(t) + Bg(t))\,dt,
(from 18.01) with exactly the same proof. Since φ(s) is linear, we get
\phi(s)v_\epsilon = \int_{-\infty}^{\infty} \phi(s)[\epsilon(t)\phi(t)v]\,dt
= \int_{-\infty}^{\infty} \epsilon(t)[\phi(s)\phi(t)v]\,dt
= \int_{-\infty}^{\infty} \epsilon(t)[\phi(t+s)v]\,dt    (because φ is a homomorphism)
= \int_{-\infty}^{\infty} \epsilon(t'-s)[\phi(t')v]\,dt'    (change of variable t' = t + s).
This is the desired formula.
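As a quick numerical sanity check of this identity (my own illustration, not part of the required solution; the matrix A, the vector v, the shift s, and the bump function below are arbitrary test choices), one can approximate both sides by Riemann sums for φ(t) = exp(tA) on V = R^2:

    import math
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.3]])   # arbitrary 2x2 real matrix, so phi(t) = exp(tA)
    v = np.array([1.0, 2.0])                  # arbitrary vector in V = R^2
    s = 0.7                                   # arbitrary value of s

    def eps(t):
        # a smooth bump supported on (-1, 1); any smooth compactly supported function works
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

    ts = np.linspace(-3.0, 3.0, 2001)         # grid covering the supports of eps(t) and eps(t - s)
    dt = ts[1] - ts[0]

    v_eps = sum(eps(t) * (expm(t * A) @ v) for t in ts) * dt
    lhs = expm(s * A) @ v_eps
    rhs = sum(eps(t - s) * (expm(t * A) @ v) for t in ts) * dt
    print(np.linalg.norm(lhs - rhs))          # tiny: only Riemann-sum error remains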
(2) Prove that s ↦ φ(s)vǫ is a smooth map from R to V.
You learn in 18.100 that a Riemann integral of a two-variable function can be
differentiated under the integral sign with respect to the extra variable (for example,
Theorem 9.42 in Rudin's book). The conclusion is
\frac{d^m}{ds^m}[\phi(s)v_\epsilon] = \int_{-\infty}^{\infty} (-1)^m\, \epsilon^{(m)}(t-s)\,\phi(t)v\,dt
(with ǫ^{(m)} the m-th derivative of ǫ).
This exhibits all the derivatives of φ(s)vǫ as continuous functions of s.
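A finite-difference spot check of the m = 1 case (again my own illustration, with the same arbitrary choices as in the previous sketch): the numerical derivative of s ↦ φ(s)vǫ should agree with the integral of −ǫ′(t − s)φ(t)v.

    import math
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.3]])
    v = np.array([1.0, 2.0])
    s, h = 0.7, 1e-5

    def eps(t):
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

    def eps_prime(t):
        # derivative of the bump above: d/dt e^{-1/(1-t^2)} = (-2t/(1-t^2)^2) e^{-1/(1-t^2)}
        return (-2.0 * t / (1.0 - t * t) ** 2) * eps(t) if abs(t) < 1.0 else 0.0

    ts = np.linspace(-3.0, 3.0, 2001)
    dt = ts[1] - ts[0]

    def phi_v_eps(sigma):
        # phi(sigma) v_eps = integral of eps(t - sigma) phi(t) v dt, by part (1)
        return sum(eps(t - sigma) * (expm(t * A) @ v) for t in ts) * dt

    finite_diff = (phi_v_eps(s + h) - phi_v_eps(s - h)) / (2 * h)
    formula = sum(-eps_prime(t - s) * (expm(t * A) @ v) for t in ts) * dt
    print(np.linalg.norm(finite_diff - formula))   # small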
(3) Prove that φ is smooth.
A matrix-valued function like φ is smooth if and only if all the vector-valued
functions φv are smooth. (These are linear combinations of the columns of φ.)
Define
V^∞ = {v ∈ V | φ(s)v is smooth};
our job is to show that V^∞ = V. The second part of the problem showed that each
vector vǫ ∈ V^∞. Suppose we choose some norm ‖ · ‖ on V. For any ǫ′ > 0 (sorry
that I already used ǫ!) choose δ > 0 so small that
‖φ(t)v − v‖ < ǫ′        (|t| < δ);
this is possible by the continuity of φ, and the fact that φ(0) is the identity. Now
choose the function ǫ to be non-negative, supported on [−δ, δ], and satisfying
\int_{-\infty}^{\infty} \epsilon(t)\,dt = 1.
(This is easy to do: you can even write a formula for ǫ using e^{-1/x^2}.)
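One explicit choice (not spelled out in the solution; it is just a standard construction) is
\epsilon(t) = c_\delta\, e^{-1/(\delta^2 - t^2)} \ (|t| < \delta), \qquad \epsilon(t) = 0 \ (|t| \ge \delta), \qquad c_\delta = \Big(\int_{-\delta}^{\delta} e^{-1/(\delta^2 - u^2)}\,du\Big)^{-1};
the first factor extends smoothly by 0 across t = ±δ for the same reason that e^{-1/x^2} extends smoothly by 0 across x = 0, and the constant c_δ makes the integral equal to 1.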
Then
\|v_\epsilon - v\| = \left\| \int_{-\infty}^{\infty} \epsilon(t)[\phi(t)v - v]\,dt \right\|
\le \int_{-\infty}^{\infty} \epsilon(t)\,\|\phi(t)v - v\|\,dt
\le \epsilon' \int_{-\infty}^{\infty} \epsilon(t)\,dt = \epsilon'.
That is, we can make vǫ as close to v as we wish. This proves that the subspace
V^∞ of V is dense in V. But the only dense subspace of a finite-dimensional real
vector space is the whole space; so V^∞ = V, as we wished to show.
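A numerical companion to this estimate (my own illustration; A, v, and the δ values are arbitrary test choices): with a normalized bump of shrinking support, the averaged vector vǫ approaches v.

    import math
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-1.0, 0.3]])   # arbitrary matrix defining phi(t) = exp(tA)
    v = np.array([1.0, 2.0])

    def bump(t, delta):
        # smooth bump supported on (-delta, delta): a rescaled copy of e^{-1/(1-u^2)}
        u = t / delta
        return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

    for delta in [0.5, 0.1, 0.02]:
        ts = np.linspace(-delta, delta, 2001)
        dt = ts[1] - ts[0]
        weights = np.array([bump(t, delta) for t in ts])
        weights /= weights.sum() * dt              # normalize so the Riemann sum of eps is 1
        v_eps = sum(w * (expm(t * A) @ v) for w, t in zip(weights, ts)) * dt
        print(delta, np.linalg.norm(v_eps - v))    # shrinks as delta shrinks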
2. Suppose A is an n × n real matrix. Find necessary and sufficient
conditions on A for the one-parameter group {exp(tA) | t ∈ R} to be closed
in GL(n, R).
Here is the answer: exp(RA) fails to be closed if and only if A is diagonalizable as a
complex matrix, all the nonzero eigenvalues are purely imaginary numbers iyj,
and at least one of the ratios yj /yk is irrational. (In particular, if A has a nonzero
hyperbolic or nilpotent part in the sense of the theorem below, then exp(RA) is closed;
and in the purely imaginary diagonalizable case it is closed exactly when all the ratios
yj /yk are rational. The argument below proves both statements.)
The proof requires some detailed understanding of Jordan canonical form for
real matrices. I will just quote a useful version of this, without helping you find a
reference for exactly this statement.
Theorem. Suppose A is a linear transformation on a finite-dimensional real vector
space V. Then there is a unique decomposition
A = Ah + Ae + An
subject to the requirements
(1) the linear transformations Ah, Ae, and An commute with each other;
(2) the linear transformation Ah is diagonalizable with real eigenvalues;
(3) the linear transformation Ae is diagonalizable over C, with purely imaginary
eigenvalues; and
(4) the linear transformation An is nilpotent: An^N = 0 for some N > 0.
The subscripts h, e, and n stand for “hyperbolic,” “elliptic,” and “nilpotent.”
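Here is a small concrete instance (my own example, with arbitrarily chosen blocks; it is not part of the quoted theorem): the three pieces below commute, and because they commute, exp(tA) factors as the product of the three exponentials, which is exactly how the decomposition gets used below.

    import numpy as np
    from scipy.linalg import expm

    # Hand-built 4x4 example in which Ah, Ae, An commute pairwise by construction.
    Ah = np.diag([1.0, 1.0, -2.0, -2.0])                       # real eigenvalues
    Ae = np.array([[0, 3, 0, 0], [-3, 0, 0, 0],
                   [0, 0, 0, 0], [0, 0, 0, 0]], dtype=float)   # eigenvalues 3i, -3i, 0, 0
    An = np.array([[0, 0, 0, 0], [0, 0, 0, 0],
                   [0, 0, 0, 1], [0, 0, 0, 0]], dtype=float)   # nilpotent: An @ An = 0
    A = Ah + Ae + An

    t = 0.37                                                   # arbitrary
    lhs = expm(t * A)
    rhs = expm(t * Ah) @ expm(t * Ae) @ expm(t * An)
    print(np.allclose(lhs, rhs))                               # True, because the pieces commute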
Suppose f is a continuous map from R to a metric space. The image f (R) can
fail to be closed only if there is an unbounded sequence of real numbers ti such that
f (ti ) converges in the metric space. (You should think carefully about why this is
true: the proof is very short, but maybe not obvious.)
So if the image is not closed, then we can find an unbounded sequence ti so
that exp(ti A) is convergent in GL(n, R), and in particular is a bounded sequence
of matrices. By passing to a subsequence, we may assume that all ti have the same
sign. Since matrix inversion is a homeomorphism, exp(−ti A) is also a (convergent
and) bounded sequence of matrices. Perhaps replacing the sequence by its negative,
we may assume all ti > 0.
Now the Jordan decomposition (together with the fact that commuting matrices X and Y
satisfy exp(X + Y) = exp(X) exp(Y)) guarantees
exp(tA) = exp(tAh) exp(tAe) exp(tAn).
In appropriate coordinates the matrix Ae is block diagonal with blocks
\begin{pmatrix} 0 & y_j \\ -y_j & 0 \end{pmatrix}
(with yj ≠ 0) and zeros; so ‖exp(tAe)‖ is bounded. The power series for exp(tAn)
ends after the term t^N An^N /N!; so ‖exp(tAn)‖ has polynomial growth in t.
If Ah has a positive eigenvalue, then exp(tAh ) grows exponentially in t, so the
sequence exp(ti A) cannot be bounded. Similarly, if Ah has a negative eigenvalue,
then exp(−ti A) grows exponentially. The conclusion is that if the image is not
closed, then Ah = 0.
In exactly the same way, suppose An^N ≠ 0 but An^(N+1) = 0. Then exp(tAn) grows
like a polynomial of degree exactly N; so (because of the boundedness of exp(tAe))
we conclude that exp(tA) also grows like a polynomial of degree exactly N. The
conclusion is that if the image is not closed, then N = 0, which means An = 0.
We have shown that the image can fail to be closed only if A = Ae . In this case
the image is bounded; so it is closed if and only if it is compact. Suppose that the
eigenvalues of A = Ae are iyj as above, so that exp(tA) has diagonal blocks
\begin{pmatrix} \cos(ty_j) & \sin(ty_j) \\ -\sin(ty_j) & \cos(ty_j) \end{pmatrix}.
If all the ratios yj /y1 = pj /qj are rational, then it's easy to see that exp(tA) is
periodic with period dividing
(least common multiple of all qj) · (2π/y1);
so the image is a circle (or a point), and is closed.
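A numerical check of the rational case (my own illustration; the values y1 = 2, y2 = 3 are arbitrary): with y2/y1 = 3/2 the claimed period divides lcm(1, 2) · (2π/y1) = 2π, and exp at that time is indeed the identity.

    import numpy as np
    from scipy.linalg import expm, block_diag

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    y1, y2 = 2.0, 3.0                           # rational ratio y2/y1 = 3/2
    A = block_diag(y1 * J, y2 * J)
    T = 2 * (2 * np.pi / y1)                    # lcm of the denominators is 2
    print(np.allclose(expm(T * A), np.eye(4)))  # True: exp(tA) is periodic, the image is a circle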
Conversely, suppose that the image is compact. More or less the example done
in class shows that the group
\begin{pmatrix} \cos(ty_1) & \sin(ty_1) & 0 & 0 \\ -\sin(ty_1) & \cos(ty_1) & 0 & 0 \\ 0 & 0 & \cos(ty_2) & \sin(ty_2) \\ 0 & 0 & -\sin(ty_2) & \cos(ty_2) \end{pmatrix}
is compact if and only if y1 /y2 is rational. (I’m tired of typing, so I won’t write out
a proof.) By projecting the (assumed compact) {exp(tA)} on various collections of
four coordinates, and using “continuous image of compact is compact,” we deduce
that all the ratios yj /yk are rational, as we wished to show.
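Finally, a numerical illustration of the irrational case (my own sketch, not part of the solution; the ratio y2/y1 = √2 and the search bounds are arbitrary choices): the matrix with first block the identity and second block rotation by π lies in the closure of {exp(tA)} but, since √2 is irrational, never equals any exp(tA), which is exactly how closedness fails.

    import numpy as np
    from scipy.linalg import expm, block_diag

    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = block_diag(1.0 * J, np.sqrt(2.0) * J)            # irrational ratio y2/y1 = sqrt(2)

    # Target: first block the identity, second block rotation by pi.  Since sqrt(2) is
    # irrational, no exp(tA) equals this matrix, but exp(2*pi*k*A) comes arbitrarily
    # close to it for suitable integers k, so the image is not closed.
    target = block_diag(np.eye(2), expm(np.pi * J))

    for bound in [10, 100, 1000, 10000]:
        k = min(range(1, bound), key=lambda j: abs((j * np.sqrt(2.0)) % 1.0 - 0.5))
        dist = np.linalg.norm(expm(2 * np.pi * k * A) - target)
        print(bound, k, dist)                            # dist keeps shrinking as the search range grows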