RF 1

Random fields, Fall 2014
fMRI brain scan
Nobel Prize 2006 to John C. Mather and George F. Smoot
“for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation”
Ocean waves
The oceans cover 72% of the earth’s surface. They are essential for life on earth, and of huge economic importance through fishing, transportation, and oil and gas extraction.
PET brain scan
The course
• Kolmogorov existence theorem, separable processes,
measurable processes
• Stationarity and isotropy
• Orthogonal and spectral representations
• Geometry
• Exceedance sets
• Rice formula
• Slepian models
Literature
An unfinished manuscript “Applications of RANDOM FIELDS AND
GEOMETRY: Foundations and Case Studies” by Robert Adler, Jonathan
Taylor, and Keith Worsley.
Complementary literature:
“Level sets and extrema of random processes and fields” by Jean-Marc Azaïs and Mario Wschebor, Wiley, 2009
“Asymptotic Methods in the Theory of Gaussian Processes” by Vladimir
Piterbarg, American Mathematical Society, ser. Translations of
Mathematical Monographs, Vol. 148, 1995
“Random Fields and Geometry” by Robert Adler and Jonathan Taylor, Springer, 2007
Slides for ATW ch 2, p 24-39
Exercises: 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.8.5, 2.8.6 + exercises in the slides
Stochastic convergence (assumed known)
Almost sure convergence: X_n →^{a.s.} X
Mean square convergence: X_n →^{L²} X
Convergence in probability: X_n →^{P} X
Convergence in distribution: X_n ⇒ X, F_n →^{d} F, P_n →^{d} P, …

• X_n →^{a.s.} X ⇒ X_n →^{P} X
• X_n →^{L²} X ⇒ X_n →^{P} X
• X_n →^{P} X ⇒ X_n →^{d} X
• X_n →^{P} X plus uniform integrability ⇒ X_n →^{L²} X
• X_n →^{P} X ⇒ there is a subsequence {n_k} with X_{n_k} →^{a.s.} X
• The random variables don’t really mean anything for convergence in distribution. In particular, the X_n and X don’t need to be defined on the same probability space, and don’t need to have a simultaneous distribution.
Random field
T: parameter space. In this course T is R^N for some N ≥ 1, or a subset (e.g. a box, a sphere, or the surface of a sphere) of R^N.
R^d: value space.

An (N, d) random field is a collection (or family) of random variables
{f_t ; t ∈ T}
where T is a set of dimension N and the f_t (or f(t)) take values in R^d.
Or, a random function with values in (R^d)^T. Or, a probability measure on (R^d)^T.

A realisation (or sample function, or sample path, or sample field, or observation, or trajectory, or …) is the function
f_·(ω): T → R^d, t ↦ f_t(ω)
for ω fixed. Two examples below:
Microscopy image of tablet coating
Thresholded Gaussian field
Terminology
random variable = stochastic variable
random element = stochastic element
random vector = stochastic vector
random field = stochastic field
random process = stochastic process
Finite dimensional distributions
The finite-dimensional distribution functions of an (N, d) random field {f_t} are defined as
F_{t_1,…,t_n}(x_1, …, x_n) = P(f_{t_1} ≤ x_1, …, f_{t_n} ≤ x_n)
and the family of finite-dimensional distribution functions is the set
{F_{t_1,…,t_n}(x_1, …, x_n); t_1, …, t_n ∈ T, x_1, …, x_n ∈ R^d, n ≥ 1}
This family has the following obvious properties:
Symmetry: it is not changed under a simultaneous permutation of t_1, …, t_n and x_1, …, x_n
Consistency: F_{t_1,…,t_n}(x_1, …, x_{n−1}, ∞) = F_{t_1,…,t_{n−1}}(x_1, …, x_{n−1})
Example of symmetry: F_{t_1,t_2}(x_1, x_2) = F_{t_2,t_1}(x_2, x_1)
Example of consistency: Marginal distributions may be obtained from bivariate distributions,
F_t(x) = F_{t,s}(x, ∞)
Three sample paths of a (1,1) random field. F_{2,5,8}(x_1, x_2, x_3) is the probability to obtain a sample path which passes through all three vertical lines.
(In a more general theory one uses, instead of finite-dimensional distribution functions, the probabilities of cylinder sets,
P(f_{t_1} ∈ B_1, …, f_{t_n} ∈ B_n).)
(Daniell-)Kolmogorov extension theorem
To any symmetric and consistent family of finite-dimensional
distributions
{F_{t_1,…,t_n}(x_1, …, x_n); t_1, …, t_n ∈ T, x_1, …, x_n ∈ R^d, n ≥ 1}
there exists a probability triple (Ω, B, P) and an (N, d) random field {f_t; t ∈ T} which has these finite-dimensional distributions.
In the proof one takes
Ω = (R^d)^T, B = (B(R^d))^T,
and P as the measure on B which is uniquely determined by the finite-dimensional distributions. Thus an element of Ω is a function f: T → R^d which maps a point t ∈ T to the value f(t). The field is defined as
{f_t(ω) = f(t); t ∈ T}
Limitations of Kolmogorov’s theorem
Many interesting sets, such as the set
C = {ω; f_t(ω) is a continuous function of t}
do not belong to B = (B(R^d))^T, and hence, in Kolmogorov’s construction, the probability of such events is not defined.
One important way around this problem is to make a direct construction of the field on some other probability space (Ω, B, P) where the interesting sets belong to B, say C ∈ B, so that their probabilities, say P(C), are well defined. And then, more fields are obtained as functions of the already constructed field!
Modifications
A field {g_t; t ∈ T} is a modification of the field {f_t; t ∈ T} if
P(g_t = f_t) = 1, ∀t ∈ T
It is obvious (!) that g_t has the same finite-dimensional distributions as f_t.
The other common way to circumvent the limitation is to construct, on Kolmogorov’s (Ω, B, P), a modification of f_t which has the desired properties, say continuity.
Whether this is possible or not (of course) depends on which finite-dimensional distributions one is interested in. E.g. if they correspond to a Brownian motion it is possible; if they correspond to a Poisson process, it isn’t.
Doob’s separability
A field {f_t; t ∈ T} is separable if there is a countable subset S ⊂ T and a null set Λ ∈ (B(R^d))^T such that for every closed set B ⊂ R^d and open set I ⊂ T it holds that
{f_t(ω) ∈ B, ∀t ∈ S ∩ I} ⇒ ω ∈ Λ or {f_t(ω) ∈ B, ∀t ∈ I}
A separable modification of a field always exists (at least for N = d = 1?), and it can be seen that e.g. if a continuous modification of a field exists, then the separable modification is continuous.
Example of modification: Ω = [0,1], B is the Borel sets on [0,1], P is Lebesgue measure, f_t(ω) = 0 for all t, ω, and
g_t(ω) = 0 if t ≠ ω, 1 if t = ω
Measurable fields
A field {f_t; t ∈ T} is measurable if for almost all ω the sample path (function)
f_·(ω): T → R^d, t ↦ f_t(ω)
is (Borel) measurable (holds e.g. if the field is a.s. continuous). It then follows that the function of two variables f_t(ω) is measurable with respect to the product sigma-algebra B × B(T), and one can then define integrals like ∫_T h(f_t) dt and use Fubini’s theorem for calculations like
E ∫_T h(f_t) dt = ∫_T E(h(f_t)) dt
(above we have assumed that B and B(T) are complete)
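A quick Monte Carlo illustration of this Fubini step, a sketch of my own rather than anything from ATW: take f to be the Gaussian cosine process introduced later in these slides, h(x) = x² and T = [0, 5], so that E(h(f_t)) = σ² for all t and the right-hand side equals σ²·|T|. All parameter values below are arbitrary.

import numpy as np

# Monte Carlo illustration of  E[ integral_T h(f_t) dt ] = integral_T E[h(f_t)] dt
# for the Gaussian cosine process f(t) = xi*cos(lam*t) + xi2*sin(lam*t), h(x) = x^2.
# Since E(f_t^2) = sigma^2 for every t, the right-hand side is sigma^2 * |T|.
rng = np.random.default_rng(0)
sigma, lam, T_end = 1.0, 2.0, 5.0
t = np.linspace(0.0, T_end, 1000)

integrals = []
for _ in range(20000):
    xi, xi2 = rng.normal(0.0, sigma, size=2)
    f = xi * np.cos(lam * t) + xi2 * np.sin(lam * t)
    integrals.append(np.mean(f**2) * T_end)   # Riemann approximation of integral_T f_t^2 dt

print(np.mean(integrals))    # estimates E[ integral_T f_t^2 dt ]
print(sigma**2 * T_end)      # integral_T E(f_t^2) dt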
ATW basically say that it is nice if one has seen the concepts
of Kolmogorov extension, modification, and Doob
separability, but that this has been taken care of once and for
all by Kolmogorov, Doob and others, and that we shouldn’t
worry about it any more in this course. And this is right (I
hope).
However, things are different for the theory of ”Empirical Processes”, so far the most efficient and high-tech tool for finding asymptotic distributions of statistical estimators. In this theory, such ”measurability problems” pose important technical problems, and have shaped much of the entire theory. Empirical process theory is closely related to the methods used to prove continuity and differentiability in this course.
Gaussian fields (ATW p. 25-28)
A random vector X = (X_1, …, X_d) has a multivariate Gaussian distribution iff one of the following conditions holds:
• ⟨α, X⟩ ≜ Σ_{i=1}^d α_i X_i has a univariate normal distribution for all α ∈ R^d.
• There exist a vector m ∈ R^d and a non-negative definite matrix C such that for all θ ∈ R^d
φ(θ) = E(e^{i⟨θ,X⟩}) = e^{i⟨θ,m⟩ − ½ θCθ′}
If C is positive definite and X has the probability density
(2π)^{−d/2} |C|^{−1/2} e^{−½ (x−m) C^{−1} (x−m)′}
then X is Gaussian.
Here m = E(X) and C = Cov(X). Similarly if the X_i ∈ R^d.
We write X ~ N_d(m, C) if X has a d-variate Gaussian distribution with mean m and covariance matrix C.
Exercises (the first is (2.2.5), the second Exercise 2.8.2):
(i) If X ~ N_d(m, C) and A is a d × d matrix, then XA ~ N_d(mA, A′CA)
(ii) If X = (X_1, X_2) with X_1 = (X_1, …, X_n), X_2 = (X_{n+1}, …, X_d), with mean vectors m_1 and m_2 and covariance matrix
C = [ C_{1,1}  C_{1,2}
      C_{2,1}  C_{2,2} ],
then the conditional distribution of X_1 given X_2 is n-variate normal with mean
m_{1|2} = m_1 + (X_2 − m_2) C_{2,2}^{−1} C_{2,1}
and covariance matrix
C_{1|2} = C_{1,1} − C_{1,2} C_{2,2}^{−1} C_{2,1}
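A minimal numpy sketch of exercise (ii), in the row-vector convention used above; the partition sizes and all numerical values are made up purely for illustration.

import numpy as np

# Conditional distribution of X1 given X2 for X = (X1, X2) ~ N_d(m, C):
#   m_{1|2} = m1 + (x2 - m2) C22^{-1} C21,   C_{1|2} = C11 - C12 C22^{-1} C21
m1 = np.array([0.0, 1.0])            # mean of X1 (here n = 2)
m2 = np.array([0.5])                 # mean of X2 (here d - n = 1)
C11 = np.array([[2.0, 0.3],
                [0.3, 1.0]])
C12 = np.array([[0.4],
                [0.2]])              # Cov(X1, X2), shape (2, 1)
C21 = C12.T                          # Cov(X2, X1)
C22 = np.array([[1.5]])

x2 = np.array([1.2])                 # an observed value of X2

m_cond = m1 + (x2 - m2) @ np.linalg.solve(C22, C21)
C_cond = C11 - C12 @ np.linalg.solve(C22, C21)

print(m_cond)   # conditional mean of X1 given X2 = x2
print(C_cond)   # conditional covariance of X1 given X2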
A Gaussian random field is hence, by the Kolmogorov theorem, determined by its means and covariances.
Conversely, it also follows from the Kolmogorov theorem that given a function
m: T → R
and a non-negative definite function
C: T × T → R
there exists an (N, 1) Gaussian random field which has m as mean function and C as covariance function.
(N, d) Gaussian random fields for d > 1 are the same, one just has to use more general notation.
Gaussian related fields (ATW p. 28-30)
An (N, d) Gaussian related field {f(t); t ∈ T} is defined from an (N, k) Gaussian field {g(t); t ∈ T} using a function F: R^k → R^d by the formula
f(t) = F(g(t)) = F(g_1(t), …, g_k(t)).
Examples:
• Instantaneous function of a Gaussian field: k = d and F is invertible
• χ²-field: d = 1 and F(x) = Σ_{i=1}^k x_i²
• t-field: d = 1 and F(x) = x_1 (k−1)^{1/2} / (Σ_{i=2}^k x_i²)^{1/2}
• F-field: d = 1, k = m + n and F(x) = (m Σ_{i=1}^n x_i²) / (n Σ_{i=n+1}^{n+m} x_i²)
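For intuition, a small sketch (my own, not ATW’s) of a discrete χ²-field on a grid: k independent Gaussian fields g_i are produced here by smoothing white noise and normalising to unit variance (an illustrative construction only), then squared and summed.

import numpy as np
from scipy.ndimage import gaussian_filter

# A chi^2-field on a grid: f(t) = sum_{i=1}^k g_i(t)^2 for k independent
# (approximately stationary) Gaussian fields g_i with unit variance.
rng = np.random.default_rng(1)
k, n = 3, 256                        # degrees of freedom, grid size

g = []
for _ in range(k):
    w = rng.normal(size=(n, n))      # white noise on the grid
    s = gaussian_filter(w, sigma=5)  # smooth it to get a Gaussian field
    g.append(s / s.std())            # normalise to (empirical) unit variance
g = np.array(g)

f = np.sum(g**2, axis=0)             # the chi^2_k field F(g(t)) = sum_i g_i(t)^2
print(f.shape, f.mean())             # the mean should be close to k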
Stationarity and isotropy (ATW p. 30-31)
Weak stationarity: A random field is weakly stationary if
• m(t) ≜ E(f(t)) is constant
• C(s, t) ≜ E{(f(s) − m(s))′(f(t) − m(t))} only depends on t − s
Weak isotropy: A random field is weakly isotropic if C(s, t) only depends on |t − s|
A random field is strictly stationary if the joint distribution of (f(t_1 + τ), …, f(t_n + τ)) doesn’t depend on τ, for all n ≥ 1, t_1, …, t_n ∈ R^N.
A random field is strictly isotropic if it is stationary and the joint distribution of (f(t_1), …, f(t_n)) is invariant under rotations, for all n ≥ 1, t_1, …, t_n ∈ R^N.
Weak is the same as strict for real Gaussian fields.
”Weak” is sometimes instead called ”second order”.
Abuse of notation:
For weakly stationary fields one writes C(s, t) = C(t − s)
For isotropic fields one writes C(s, t) = C(|t − s|)
Cosine processes and fields (ATW p. 32-36)
Cosine process (a (1,1) field):
f(t) ≜ ξ cos λt + ξ′ sin λt = R cos(λt − θ)
where ξ and ξ′ are uncorrelated and have the same distribution, and (for convenience?) mean 0, and R² = ξ² + ξ′², and θ = arctan(ξ′/ξ). R is ”amplitude”, θ is ”phase”, and λ is ”angular frequency”. Then
E(f(t)) = 0
and
C(s, t) = E{f(s)f(t)}
 = E{(ξ cos λs + ξ′ sin λs)(ξ cos λt + ξ′ sin λt)}
 = E(ξ²)(cos λs cos λt + sin λs sin λt)
 = E(ξ²) cos λ(t − s)
λ in the cosine process is “angular frequency”. Sometimes one instead writes f(t) = R cos(2πωt + θ); ω then is “frequency”.
If ξ, ξ′ are Gaussian with variance σ², then R² is exponential with mean 2σ² (why?), so that P(R ≥ u) = exp(−u²/(2σ²)), and θ is independent of R and uniformly distributed on [0, 2π) (do the calculation!).
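A quick numpy check of the ”(why?)” and the ”(do the calculation!)” above; σ and u are arbitrary illustration values.

import numpy as np

# Empirical check: R^2 = xi^2 + xi'^2 is exponential with mean 2*sigma^2, so that
# P(R >= u) = exp(-u^2/(2*sigma^2)), and theta (taken in [0, 2*pi)) is uniform.
rng = np.random.default_rng(2)
sigma, u = 1.3, 2.0
xi, xi2 = rng.normal(0.0, sigma, size=(2, 200000))

R2 = xi**2 + xi2**2
theta = np.arctan2(xi2, xi) % (2 * np.pi)

print(R2.mean(), 2 * sigma**2)                                     # should be close
print((np.sqrt(R2) >= u).mean(), np.exp(-u**2 / (2 * sigma**2)))   # P(R >= u)
print(np.histogram(theta, bins=8, range=(0, 2 * np.pi))[0])        # roughly equal counts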
The following are of central interest in the course:
• N_u = N_u(f, T) ≜ #{t ∈ T; f(t) = u and df(t)/dt > 0}
• P(sup_{0≤t≤T} f(t) ≥ u)
For a Gaussian cosine process, and Ψ(u) ≜ P(N(0,1) > u),
P(sup_{0≤t≤T} f(t) ≥ u) = P(f(0) ≥ u) + P(f(0) < u, N_u ≥ 1)
 = Ψ(u/σ) + P(f(0) < u, N_u ≥ 1)
If T ≤ π/λ then
P(f(0) < u, N_u ≥ 1) = P(N_u ≥ 1) = P(N_u = 1),
and N_u = 1 iff both R ≥ u and θ falls in an interval of length λT (requires some thinking: draw a picture). Since these two events are independent,
P(sup_{0≤t≤T} f(t) ≥ u) = Ψ(u/σ) + (λT/(2π)) × e^{−u²/(2σ²)}
If T > 2π/λ, then sup_{0≤t≤T} f(t) ≥ u iff R ≥ u, so that
P(sup_{0≤t≤T} f(t) ≥ u) = P(R ≥ u) = e^{−u²/(2σ²)}
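A Monte Carlo sanity check of the two formulas above, a sketch of my own; σ, λ, u, the grid and the number of replications are arbitrary choices, and Ψ is computed with scipy’s standard normal survival function.

import numpy as np
from scipy.stats import norm

# Check of: P(sup f >= u) = Psi(u/sigma) + (lam*T/(2*pi)) * exp(-u^2/(2*sigma^2))  for T <= pi/lam,
# and       P(sup f >= u) = exp(-u^2/(2*sigma^2))                                 for T > 2*pi/lam.
rng = np.random.default_rng(3)
sigma, lam, u, reps = 1.0, 2.0, 1.5, 20000

def sup_prob(T_end):
    t = np.linspace(0.0, T_end, 400)
    xi = rng.normal(0.0, sigma, size=(reps, 2))
    f = xi[:, :1] * np.cos(lam * t) + xi[:, 1:] * np.sin(lam * t)
    return (f.max(axis=1) >= u).mean()        # grid approximation of P(sup f >= u)

T1 = np.pi / lam                              # a case with T <= pi/lam
print(sup_prob(T1),
      norm.sf(u / sigma) + lam * T1 / (2 * np.pi) * np.exp(-u**2 / (2 * sigma**2)))

T2 = 3 * np.pi / lam                          # a case with T > 2*pi/lam
print(sup_prob(T2), np.exp(-u**2 / (2 * sigma**2)))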
Without assuming Gaussianity, for any differentiable stochastic process (i.e. (1,1)-field) we get the important general bound
P(sup_{0≤t≤T} f(t) ≥ u) = P(f(0) ≥ u) + P(f(0) < u, N_u ≥ 1)
 ≤ P(f(0) ≥ u) + P(N_u ≥ 1)
 ≤ P(f(0) ≥ u) + E(N_u)
Cosine field (an (N, 1) field):
f(t) = f(t_1, …, t_N) ≜ (1/√N) Σ_{k=1}^N f_k(λ_k t_k),
where
f_k(t) = ξ_k cos t + ξ_k′ sin t
and the ξ_k and ξ_k′ are uncorrelated and have the same distribution, with mean 0.
If T = ∏_{k=1}^N [0, T_k], then taking the supremum first over t_1, then over t_2, then … we get that
sup_{0≤t≤T} f(t) = (1/√N) Σ_{k=1}^N sup_{0≤t≤T_k} f_k(λ_k t).
If the ξ_k and ξ_k′ are Gaussian, and T_k ≤ π/λ_k, k = 1, …, N, this gives an explicit (but complicated) formula.
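A small numerical check of the last supremum identity for N = 2 (my own sketch; all parameter values are arbitrary): the supremum of the cosine field over the box equals the normalised sum of the one-dimensional suprema.

import numpy as np

# Check that  sup_{t in T} f(t) = (1/sqrt(N)) * sum_k sup_{0<=t<=T_k} f_k(lam_k*t)
# for the cosine field with N = 2, f(t1,t2) = (f_1(lam_1*t1) + f_2(lam_2*t2))/sqrt(2).
rng = np.random.default_rng(4)
lam = np.array([1.0, 2.5])
T = np.array([1.2, 0.8])            # the box [0, T_1] x [0, T_2]
xi, xi2 = rng.normal(size=(2, 2))   # xi_k and xi'_k for k = 1, 2

def fk(k, t):
    return xi[k] * np.cos(t) + xi2[k] * np.sin(t)

t1 = np.linspace(0.0, T[0], 500)
t2 = np.linspace(0.0, T[1], 500)
field = (fk(0, lam[0] * t1)[:, None] + fk(1, lam[1] * t2)[None, :]) / np.sqrt(2)

lhs = field.max()                                             # sup over the grid
rhs = (fk(0, lam[0] * t1).max() + fk(1, lam[1] * t2).max()) / np.sqrt(2)
print(lhs, rhs)                                               # the two values agree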
Orthogonal expansions (ATW p. 36-39)
An orthogonal expansion of an (N, 1) field is an expression
f(t) = Σ_{n=1}^∞ ξ_n φ_n(t),
with ξ_n uncorrelated centered (i.e. E(ξ_n) = 0, just for convenience) random variables with E(ξ_n²) = σ_n², and φ_n non-random orthogonal functions T → R. (For (N, d) fields the ξ_n are matrices and the φ_n are vector valued.)
The moment functions then are E(f(t)) = 0 and
C(s, t) = E(f(s)f(t)) = Σ_{n=1}^∞ σ_n² φ_n(s) φ_n(t)
V(t) ≜ E(f(t)²) = Σ_{n=1}^∞ σ_n² φ_n(t)²
Important for theory, applications, and computation.
A Gaussian field with continuous covariance always has a Reproducing Kernel Hilbert Space (RKHS) orthogonal expansion. Loosely it is obtained as follows: set
S = {u: T → R; u(·) = Σ_{i=1}^n a_i C(s_i, ·), a_i real, s_i ∈ T, n ≥ 1},
and define an inner product on S by
(u, v) = (Σ_{i=1}^n a_i C(s_i, ·), Σ_{j=1}^m b_j C(t_j, ·)) = Σ_{i=1}^n Σ_{j=1}^m a_i b_j C(s_i, t_j)
”Reproducing kernel” comes from
(u, C(t, ·)) = (Σ_{i=1}^n a_i C(s_i, ·), C(t, ·)) = Σ_{i=1}^n a_i C(s_i, t) = u(t)
If C(s, t) is positive definite, then ‖u‖ = (u, u)^{1/2} is a norm and one can define the RKHS H(S) as the closure of S in this norm. If {φ_n} is a complete orthonormal system in H(S), and the ξ_n are N(0,1), then the field {f(t)} has an orthogonal expansion
f(t) =_d Σ_{n=1}^∞ ξ_n φ_n(t)
This is important, but not always easy to handle. We will next briefly describe a somewhat more concrete expansion, the Karhunen-Loève expansion, and then, in much more detail, the by far most important orthogonal expansions, the spectral representations, which express the field as a sum of cosine processes.
The Karhunen-Loève expansion applies to the case when T is a compact set in R^N. Let the operator C: L²(T) → L²(T) be defined by
(Cψ)(t) = ∫_T C(s, t) ψ(s) ds
and let λ_1 ≥ λ_2 ≥ ⋯ be its eigenvalues and ψ_1, ψ_2, … the corresponding orthonormal eigenfunctions. It can be shown that {√λ_n ψ_n} is an orthonormal system in the RKHS H(C), and hence
f(t) =_d Σ_{n=1}^∞ ξ_n √λ_n ψ_n(t).
In general the convergence is in mean square. For continuous fields, the sum also converges a.s.
Again it may be difficult to find the eigenvalues and eigenfunctions, but discretization often works; see the sketch below.
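As a concrete illustration of that last remark, a sketch of the discretized Karhunen-Loève expansion on T = [0, 1], with the covariance C(s, t) = exp(−|s − t|) chosen purely for illustration: the eigenpairs of the integral operator are approximated by those of the covariance matrix times the grid spacing.

import numpy as np

# Discretised Karhunen-Loeve expansion on T = [0, 1].  The integral operator
# (C psi)(t) = int_T C(s,t) psi(s) ds is approximated by the matrix C_ij * dt,
# whose eigenpairs approximate (lambda_n, psi_n).
rng = np.random.default_rng(5)
n = 500
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

Cmat = np.exp(-np.abs(t[:, None] - t[None, :]))     # C(s, t) on the grid
lam, psi = np.linalg.eigh(Cmat * dt)                # eigenpairs of the discretised operator
lam, psi = lam[::-1], psi[:, ::-1]                  # sort: lambda_1 >= lambda_2 >= ...
psi = psi / np.sqrt(dt)                             # normalise so that int psi_n^2 dt = 1

m = 100                                             # truncation level of the expansion
xi = rng.normal(size=m)                             # independent N(0,1) coefficients
f = psi[:, :m] @ (xi * np.sqrt(lam[:m]))            # f(t) = sum_n xi_n sqrt(lambda_n) psi_n(t)
# f is one simulated sample path of the field on the grid.

# The truncated expansion reproduces the covariance; compare at one pair of points:
C_trunc = (psi[:, :m] * lam[:m]) @ psi[:, :m].T
print(C_trunc[0, n // 2], Cmat[0, n // 2])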