Applied Mathematical Sciences, Vol. 8, 2014, no. 90, 4469 - 4496
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ams.2014.46415
Application of the Local Markov Approximation
Method for the Analysis of Information Processes
Processing Algorithms with Unknown
Discontinuous Parameters
O. V. Chernoyarov, Sai Si Thu Min, A. V. Salnikova
Department of Radio Engineering Devices
National Research University “Moscow Power Engineering Institute”
Moscow, Russia
B. I. Shakhtarin
Department of Self-contained Information and Control Systems
Bauman Moscow State University
Moscow, Russia
A. A. Artemenko
Department of Bionics and Statistical Radiophysics
Lobachevsky State University of Nizhni Novgorod
Nizhni Novgorod, Russia
Copyright © 2014 O. V. Chernoyarov, Sai Si Thu Min, A. V. Salnikova, B. I. Shakhtarin and
A. A. Artemenko. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
Abstract
We consider the local Markov approximation method for determining the accuracy characteristics of statistical analysis algorithms for information processes with unknown discontinuous parameters in the presence of Gaussian distortions, and we illustrate the use of the stated approach in practical applications for analyzing the operating efficiency of detectors and measurers of quasi-deterministic and random signals.
Keywords: Discontinuous signal parameter, maximum likelihood method, local
Markov approximation method, signal detection and measuring characteristics,
statistical modeling.
1 Introduction
The problem of the statistical analysis of information processes under conditions of parametric prior uncertainty has wide applications in radio engineering, medicine, technical diagnostics, financial statistics, etc. As is well known [1-3 et al.], the optimal (according to the maximum likelihood method) processing algorithm for an information process (signal) $s(t, l_0)$ with unknown parameter $l_0$, distorted by random interference, generates an output proportional to the functional of the likelihood ratio (FLR) or its logarithm. The logarithm of the FLR, $M(l)$, is a function of the current value $l$ of the parameter $l_0$ and is formed over the whole prior interval $\Lambda = [L_1, L_2]$ of its definition. As a result, the signal detection problem can be solved by comparing the maximum (supremum) of the logarithm of the FLR, $M_m = \sup M(l)$, with a threshold $c$ chosen according to the accepted optimality criterion:
$$ M_m \ \mathop{\gtrless}\limits_{0}^{1}\ c \,, \qquad (1) $$
or the problem of measuring the information signal parameter $l_0$ can be solved by accepting as its estimate the position of the greatest maximum of the decision statistic $M(l)$:
$$ l_m = \arg\sup_{l \in [L_1, L_2]} M(l) \,. \qquad (2) $$
However, in deciding whether one or another processing algorithm is applicable, it is not enough to establish the degree of its optimality. The final decision should be based on an analysis of the performance of the specific algorithm, using quantitative characteristics of its operation. Besides, in the majority of real situations some of the prior data may turn out to be inexact, and the real operating conditions of the devices may deviate from those assumed a priori. The working capacity of the synthesized processing algorithms under changed conditions can be assessed only by analyzing the algorithms.
If the unknown parameter $l_0$ is continuous [2] (i.e., the logarithm of the FLR is mean-square differentiable at least twice), then the characteristics of the processing algorithm
can be found by means of the small-parameter method [3]. However, in some practical problems a more adequate description of real information processes is provided by discontinuous models [2, 4, 5 et al.]. In this case the realizations of the logarithm of the FLR are non-differentiable with respect to the current value of the unknown parameter in any probabilistic sense. Consequently, it is not even possible to calculate the potential accuracy of the processing algorithm (for example, the Cramer-Rao bound).
The purpose of the present work is to illustrate a technique for the statistical analysis of processing algorithms for signals with unknown discontinuous parameters in the presence of random distortions.
2 Definition of Characteristics of Signal Detection and
Discontinuous Parameter Estimation by the Local Markov
Approximation Method
When a quasi-deterministic [1-3] or random (Gaussian) [4, 5] signal with an unknown discontinuous parameter is observed against Gaussian distortions, the logarithm of the FLR $M(l)$ is a Gaussian or asymptotically Gaussian (with increasing signal-to-noise ratio (SNR)) random process [2, 3, 5]. We designate $S(l) = \langle M(l) \rangle$ and $N(l) = M(l) - \langle M(l) \rangle$ as the signal and noise functions of the logarithm of the FLR [2, 3], so that
$$ M(l) = S(l) + N(l) \,. \qquad (3) $$
Here $\langle \cdot \rangle$ indicates averaging over all possible realizations of the observable data.
According to [2, 5], the following approximation is valid for the signal function:
$$ S(l) = S_0 \max\big(0, 1 - |l - l_0|\big) + S_N \,, \qquad (4) $$
where the components $S_0$ and $S_N$ characterize the accumulated (output) energy of the useful signal and of the distortions, respectively. The noise function $N(l)$, like $M(l)$, is a Gaussian or asymptotically Gaussian centered random process whose covariance function $B(l_1, l_2) = \langle N(l_1) N(l_2) \rangle$ under $|l_i - l_0| \le 1$, $i = 1, 2$ can be presented in the form [2, 5, 15]
Bl1 , l2    2N max 0,1  l1  l2   S2   2N 


 max0,1  min0, l1  l0 , l2  l0   max 0, l1  l0 , l2  l0  
(5)
 1  l1  l2  g min l1  l0 , l2  l0  , l1  l0 l2  l0   0 ;
 S2 
l1  l0 l2  l0   0 ;
 1  l1  l2 ,


where g  S2   2N S2 . If max  l1  l0 , l 2  l0   1 then
Bl1 , l 2    2N max 0,1  l1  l 2  .
(6)
Values S2 ,  2N describe dispersions of the process N l  in signal l  S
and noise l  N   \ S areas of an interval of possible values of the parameter
l0 accordingly. We understand subintervals
S   l 0  1, l 0  1  , N  L1 , l 0  1  l 0  1, L 2 
(7)
to be signal and noise areas within which the signal function (4) is distinct from
S N (i.e., depends on true value of unknown signal parameter l0 ) or is equal to
S N (i.e., does not depends on value l0 ).
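For reference, the signal function (4) and the covariance function (5)-(6) can be evaluated numerically as in the following sketch; the parameter values passed in ($S_0$, $S_N$, $\sigma_S^2$, $\sigma_N^2$, $l_0$) are assumptions of the caller.

```python
# A sketch of the model functions (4)-(6); all parameter values are assumptions.
import numpy as np

def signal_function(l, l0, S0, SN):
    # Eq. (4): S(l) = S0*max(0, 1 - |l - l0|) + SN
    return S0 * np.maximum(0.0, 1.0 - np.abs(l - l0)) + SN

def noise_covariance(l1, l2, l0, sigma_S2, sigma_N2):
    # Eqs. (5), (6); g = (sigma_S^2 - sigma_N^2)/sigma_S^2
    g = (sigma_S2 - sigma_N2) / sigma_S2
    if max(abs(l1 - l0), abs(l2 - l0)) > 1.0:          # outside the signal area, Eq. (6)
        return sigma_N2 * max(0.0, 1.0 - abs(l1 - l2))
    if (l1 - l0) * (l2 - l0) >= 0.0:                   # same side of l0, Eq. (5)
        return sigma_S2 * (1.0 - abs(l1 - l2) - g * min(abs(l1 - l0), abs(l2 - l0)))
    return sigma_S2 * max(0.0, 1.0 - abs(l1 - l2))     # opposite sides of l0, Eq. (5)
```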
To start with, we consider the characteristics of the measurer (2). According to [3], the most general and complete (in a probabilistic sense) characteristic of the estimate $l_m$ is its conditional (under fixed $l_0$) probability density $w(x|l_0)$, which can be written as
$$ w(x|l_0) = P_0\, w_0(x|l_0) + (1 - P_0)\, w_a(x|l_0) \,. \qquad (8) $$
Here
$$ P_0 = P\big(l_m \in \Gamma_S\big) \qquad (9) $$
is the reliable-estimate probability, and $w_0(x|l_0)$ and $w_a(x|l_0)$ are the conditional probability densities of the reliable and anomalous estimates $l_m$, respectively. By a reliable estimate we understand an estimate $l_m$ located within the interval $\Gamma_S$ (7). If $l_m \in \Gamma_N$ then, following [2, 3], we call such an estimate and the corresponding estimation error anomalous. Anomalous errors have to be allowed for if the prior interval length $m = L_2 - L_1$ of possible values of the parameter $l_0$ is much greater than the length of the reliable estimate interval $\Gamma_S$ [2, 3], i.e.
$$ m/2 \gg 1 \,. \qquad (10) $$
Let us assume that the output (power) SNR $z$ for the algorithm (2) is sufficiently great:
$$ z^2 = \big[S(l_0) - S_N\big]^2 \big/ \big\langle N^2(l_0) \big\rangle = S_0^2 \big/ \sigma_S^2 \gg 1 \,. \qquad (11) $$
Then, according to [2], the estimate $l_m$ converges in mean square to the true value of the estimated parameter $l_0$ as $z^2$ increases. Therefore, to determine the characteristics of the reliable estimate $l_m$ under $z^2 \gg 1$, it is enough to investigate the behavior of the functional $M(l)$ in a small neighborhood $L_\delta$ of the point $l = l_0$: $L_\delta = [\,l_0 - \delta,\ l_0 + \delta\,]$, so that $l_m \in L_\delta$. Here $\delta \ll 1$ is some value limiting the neighborhood of the point $l = l_0$. Taking the last remark into account, we introduce the functional
$$ \Delta(l) = \big[M(l) - M(x)\big]\big/\sigma_S \,, \qquad l, x \in L_\delta \qquad (12) $$
and present the distribution function $F_0(x|l_0) = \int_{L_1}^{x} w_0(x'|l_0)\, dx'$ of the reliable estimate $l_m$ in the form
$$ F_0(x|l_0) = P\big(l_m < x\big) = P\Big[\max_{l \le x} M(l) \ge \max_{l > x} M(l)\Big] = P\Big[\max_{l \le x} \Delta(l) \ge \max_{l > x} \Delta(l)\Big] \,. \qquad (13) $$
The probability $P\big[\max_{l \le x} \Delta(l) \ge \max_{l > x} \Delta(l)\big]$ in Eq. (13) can be found with the help of the two-dimensional distribution function $F_2(u, \upsilon, x)$ or probability density $w_2(u, \upsilon, x)$ of the absolute maxima of the functional (12):
$$ F_2(u, \upsilon, x) = \int_0^u \int_0^\upsilon w_2(u', \upsilon', x)\, d\upsilon'\, du' = P\Big[\max_{l \le x} \Delta(l) < u, \ \max_{l > x} \Delta(l) < \upsilon\Big] \,. \qquad (14) $$
Here it has been taken into account that $\max \Delta(l) \ge 0$ by definition. Indeed, comparing Eqs. (13) and (14), we get
$$ F_0(x|l_0) = \int_0^\infty \int_0^u w_2(u, \upsilon, x)\, d\upsilon\, du = \int_0^\infty \left.\frac{\partial F_2(u, \upsilon, x)}{\partial u}\right|_{\upsilon = u} du \,. \qquad (15) $$
 u
According to Eqs. (4)-(6), (12),  l  is Gaussian or asymptotically Gaussian
random process with covariance function in form of
B l1 , l2    l1    l1   l2    l2   
 min l1  x , l2  x  , l1  x l2  x   0 ;
 2  g  
l1  x l2  x   0 .
 0,
(16)
This implies
1) realizations of the process  l  in the intervals l0  , x  , x, l0   are not
correlated and therefore they are statistically independent, as being Gaussian
(asymptotic Gaussian);
2) within each of the intervals l0  , x  , x, l0   conditions of the Doob’s
theorem in the wording [6] are satisfied, so  l  is Markov random process of
diffusion type. Under l  x the drift K1 and diffusion K 2 coefficients of the
process  l  are determined as
K 1  lim
 l  l    l   l 
l 0
l
 z , l  l0 ;

  z , l  l0 ;
(17)
l  l   l 
2
K 2  lim
 l 
 2  g.
l
Making use of the asymptotic statistical independence of the values of the random process $\Delta(l)$ in the intervals $[\,l_0 - \delta, x\,]$, $(\,x, l_0 + \delta\,]$, we now rewrite the probability
(14) in the form
F2 u, , x   P1x u  P2 x  .
(18)
Here P1x u   P  max  l   u  , P2 x   P  max  l    . Then taking into
 l  x

 l  x

account Eq. (15) the distribution function F0 x l0  of the reliable estimate l m
can be expressed as
F0 x l 0  

 P2x u  dP1x u  ,
x  l0  , l0   .
(19)
0
We calculate the probabilities $P_{1x}(u)$ and $P_{2x}(\upsilon)$ using the Markov properties of the process $\Delta(l)$. For this purpose we introduce the random process
$$ \eta(l) = \upsilon - \Delta(l) \,, \qquad \upsilon > 0 \,, \qquad (20) $$
on the interval $l \in [\,x,\ l_0 + \delta\,]$ and express the probability $P_{2x}(\upsilon)$ as follows:
$$ P_{2x}(\upsilon) = P\Big[\max_{x \le l \le l_0 + \delta} \Delta(l) < \upsilon\Big] = P\Big[\min_{x \le l \le l_0 + \delta} \eta(l) > 0\Big] = \int_0^\infty w_{2x}(y, l_0 + \delta)\, dy \,, \qquad \upsilon > 0 \,. \qquad (21) $$
Here $w_{2x}(y, l_0 + \delta)$ is the probability density that the process $\eta(l)$, starting at the moment $l = x$ from the value $\eta(x) = \upsilon$, reaches the value $\eta(l_0 + \delta) = y$ at the moment $l = l_0 + \delta$ while remaining within the interval $(0, \infty)$ over the whole interval $[\,x,\ l_0 + \delta\,]$.
According to Eq. (20), the process $\eta(l)$, like $\Delta(l)$, is a Markov random process of diffusion type with drift coefficient $\tilde K_1 = -K_1$ and diffusion coefficient $\tilde K_2 = K_2$, where $K_1$ and $K_2$ are defined by Eqs. (17). The probability density $w_{2x}(y, l_0 + \delta)$ can then be found from the solution of the direct Fokker-Planck-Kolmogorov equation [7, 8]
$$ \frac{\partial w_{2x}(y, l)}{\partial l} + \frac{\partial}{\partial y}\big[\tilde K_1\, w_{2x}(y, l)\big] - \frac{1}{2} \frac{\partial^2}{\partial y^2}\big[\tilde K_2\, w_{2x}(y, l)\big] = 0 \qquad (22) $$
with the initial condition
$$ w_{2x}(y, l)\big|_{l = x} = \delta(y - \upsilon) \qquad (23) $$
and the boundary conditions
$$ w_{2x}(y, l)\big|_{y = 0} = 0 \,, \qquad w_{2x}(y, l)\big|_{y \to \infty} = 0 \,. \qquad (24) $$
Solving Eq. (22), we distinguish two cases.
1) $x \ge l_0$.
This condition means that the drift and diffusion coefficients remain constant over the interval $[\,x,\ l_0 + \delta\,]$, whereupon Eq. (22) can be rewritten as follows:
w 2 x y, l 
w 2 x y, l  K 2  2 w 2 x y, l 
.
(25)
 K1

l
y
2
y 2
The solution of Eq. (25) can be received by the method of characteristic function
[7, 8]. Taking into account the starting condition (23), we have as a result
   K1 l  x   y2 
1
( 0)
w 2 x y, l  
exp 
.
2 K 2 l  x 
2K 2 l  x 


or substituting an explicit form of coefficients K1 and K 2 from Eq. (17)
w (20x) y, l  
    z l  x   y 2 
1
exp 
.
22  g l  x  
22  g l  x 

(26)
The upper index “0” of the probability density (26) means that boundary
conditions (24), while solving Eq. (25), were not imposed so far.
In order to find the solution of Eq. (25) with the boundary conditions, we use the method of reflection with sign inversion [7, 8]. According to this method, the solution of Eq. (25) with an absorbing barrier placed at the point $y = C$ (which corresponds to the condition $w_{2x}(y, l)\big|_{y = C} = 0$) can be written in the form
$$ w_{2x}(y, l) = w_{2x}^{(0)}(y, l) - \exp\!\left[-\frac{2 \tilde K_1 (\upsilon - C)}{\tilde K_2}\right] w_{2x}^{(0)}(2C - y, l) \,. \qquad (27) $$
Then, substituting Eq. (26) into Eq. (27) and setting, according to Eq. (24), $C = 0$, we finally obtain
$$ w_{2x}(y, l) = \frac{1}{\sqrt{2\pi (2 - g)(l - x)}} \left\{ \exp\left[-\frac{\big(\upsilon + z(l - x) - y\big)^2}{2(2 - g)(l - x)}\right] - \exp\!\left(-\frac{2\upsilon z}{2 - g}\right) \exp\left[-\frac{\big(\upsilon - z(l - x) + y\big)^2}{2(2 - g)(l - x)}\right] \right\} . \qquad (28) $$
Using Eq. (28) in Eq. (21), where $l = l_0 + \delta$, and carrying out the integration, for the probability $P_{2x}(\upsilon)$ we get
$$ P_{2x}(\upsilon) = \Phi\!\left[\frac{\upsilon + z(l_0 + \delta - x)}{\sqrt{(2 - g)(l_0 + \delta - x)}}\right] - \exp\!\left(-\frac{2\upsilon z}{2 - g}\right) \Phi\!\left[\frac{-\upsilon + z(l_0 + \delta - x)}{\sqrt{(2 - g)(l_0 + \delta - x)}}\right] . \qquad (29) $$
Here $\Phi(x) = \dfrac{1}{\sqrt{2\pi}} \displaystyle\int_{-\infty}^{x} \exp\!\big(-t^2/2\big)\, dt$ is the probability integral [9].
2) x  l0
In this case, according to Eq. (17), within the interval x, l0   the drift
coefficient K1 varies stepwise in a point l  l0 . Thereupon, we divide an
interval x, l0   into two subintervals: x, l0  and l0 ,l0   . As it has been
shown above, the solution of Eq. (22) with starting and boundary conditions (23),
(24) for $l \in [\,x,\ l_0\,]$ (where $K_1 = z$, $K_2 = 2 - g$) has the form
$$ \tilde w_{2x}(y, l) = \frac{1}{\sqrt{2\pi (2 - g)(l - x)}} \left\{ \exp\left[-\frac{\big(\upsilon - z(l - x) - y\big)^2}{2(2 - g)(l - x)}\right] - \exp\!\left(\frac{2\upsilon z}{2 - g}\right) \exp\left[-\frac{\big(\upsilon + z(l - x) + y\big)^2}{2(2 - g)(l - x)}\right] \right\} . \qquad (30) $$
To determine the probability density $w_{2x}(y, l)$ at the moment $l = l_0 + \delta$, we find the solution of Eq. (22) within the interval $l \in [\,l_0,\ l_0 + \delta\,]$, where $K_1 = -z$, $K_2 = 2 - g$, the boundary conditions are defined by Eqs. (24), and the probability density (30) at the moment $l = l_0$ must be taken as the initial condition. Then, following [7, 8], we obtain
$$ w_{2x}(y, l) = \frac{1}{\sqrt{2\pi \tilde K_2 (l - l_0)}} \int_0^\infty \tilde w_{2x}(y', l_0) \left\{ \exp\left[-\frac{\big(y' + \tilde K_1 (l - l_0) - y\big)^2}{2 \tilde K_2 (l - l_0)}\right] - \exp\!\left(-\frac{2 y' \tilde K_1}{\tilde K_2}\right) \exp\left[-\frac{\big(y' - \tilde K_1 (l - l_0) + y\big)^2}{2 \tilde K_2 (l - l_0)}\right] \right\} dy' , $$
or, substituting the explicit expressions for $\tilde K_1 = z$, $\tilde K_2 = 2 - g$ and $\tilde w_{2x}(y', l_0)$ (30),
$$ w_{2x}(y, l) = \frac{1}{2\pi (2 - g) \sqrt{(l - l_0)(l_0 - x)}} \int_0^\infty \left\{ \exp\left[-\frac{\big(\upsilon - z(l_0 - x) - y'\big)^2}{2(2 - g)(l_0 - x)}\right] - \exp\!\left(\frac{2\upsilon z}{2 - g}\right) \exp\left[-\frac{\big(\upsilon + z(l_0 - x) + y'\big)^2}{2(2 - g)(l_0 - x)}\right] \right\} \times $$
$$ \times \left\{ \exp\left[-\frac{\big(y' + z(l - l_0) - y\big)^2}{2(2 - g)(l - l_0)}\right] - \exp\!\left(-\frac{2 y' z}{2 - g}\right) \exp\left[-\frac{\big(y' - z(l - l_0) + y\big)^2}{2(2 - g)(l - l_0)}\right] \right\} dy' . \qquad (31) $$
Now using Eq. (31) in Eq. (21) at $l = l_0 + \delta$ and carrying out the integration over the variable $y$, we have for the probability $P_{2x}(\upsilon)$:
$$ P_{2x}(\upsilon) = \frac{\exp\!\big[-z^2 (l_0 - x)\big/2(2 - g)\big]}{\sqrt{2\pi (2 - g)(l_0 - x)}} \int_0^\infty \exp\!\left[\frac{z(\upsilon - y)}{2 - g}\right] \left\{ \exp\left[-\frac{(\upsilon - y)^2}{2(2 - g)(l_0 - x)}\right] - \exp\left[-\frac{(\upsilon + y)^2}{2(2 - g)(l_0 - x)}\right] \right\} \times $$
$$ \times \left\{ \Phi\!\left[\frac{z\delta + y}{\sqrt{(2 - g)\delta}}\right] - \exp\!\left(-\frac{2 y z}{2 - g}\right) \Phi\!\left[\frac{z\delta - y}{\sqrt{(2 - g)\delta}}\right] \right\} dy \,. \qquad (32) $$
Here the prime of the integration variable is omitted.
In consequence of the symmetry of the signal function $S(l)$ (4) of the functional $M(l)$ with respect to the point $l = l_0$ and the stationarity of its noise function $N(l)$ (5), it is easy to establish that the probability $P_{1x}(u)$ is connected with the probability $P_{2x}(u)$ by the relation $P_{1x}(u) = P_{2,\,2 l_0 - x}(u)$. Then, with Eqs. (29), (32) in mind, it can be written for $x \le l_0$ as
$$ P_{1x}(u) = \Phi\!\left[\frac{u + z(x - l_0 + \delta)}{\sqrt{(2 - g)(x - l_0 + \delta)}}\right] - \exp\!\left(-\frac{2 u z}{2 - g}\right) \Phi\!\left[\frac{z(x - l_0 + \delta) - u}{\sqrt{(2 - g)(x - l_0 + \delta)}}\right] \qquad (33) $$
and for $x \ge l_0$ as
$$ P_{1x}(u) = \frac{\exp\!\big[-z^2 (x - l_0)\big/2(2 - g)\big]}{\sqrt{2\pi (2 - g)(x - l_0)}} \int_0^\infty \exp\!\left[\frac{z(u - y)}{2 - g}\right] \left\{ \exp\left[-\frac{(u - y)^2}{2(2 - g)(x - l_0)}\right] - \exp\left[-\frac{(u + y)^2}{2(2 - g)(x - l_0)}\right] \right\} \times $$
$$ \times \left\{ \Phi\!\left[\frac{z\delta + y}{\sqrt{(2 - g)\delta}}\right] - \exp\!\left(-\frac{2 y z}{2 - g}\right) \Phi\!\left[\frac{z\delta - y}{\sqrt{(2 - g)\delta}}\right] \right\} dy \,. \qquad (34) $$
Eqs. (29), (32)-(34) allow us to find the distribution function $F_0(x|l_0)$ of the reliable estimate $l_m$. Thus, if $x \le l_0$, then, by substituting Eqs. (32), (33) into Eq. (19), we get
$$ F_0(x|l_0) = P_m(l_0 - x) \,, \qquad (35) $$
where
$$ P_m(v) = \frac{\exp\!\big[-z^2 v\big/2(2 - g)\big]}{\sqrt{2\pi (2 - g)\, v}} \int_0^\infty \int_0^\infty \exp\!\left[\frac{z(u - y)}{2 - g}\right] \left\{ \exp\left[-\frac{(u - y)^2}{2(2 - g)v}\right] - \exp\left[-\frac{(u + y)^2}{2(2 - g)v}\right] \right\} \left\{ \Phi\!\left[\frac{z\delta + y}{\sqrt{(2 - g)\delta}}\right] - \exp\!\left(-\frac{2 y z}{2 - g}\right) \Phi\!\left[\frac{z\delta - y}{\sqrt{(2 - g)\delta}}\right] \right\} \times $$
$$ \times \left\{ \frac{2 z}{2 - g} \exp\!\left(-\frac{2 u z}{2 - g}\right) \Phi\!\left[\frac{z(\delta - v) - u}{\sqrt{(2 - g)(\delta - v)}}\right] + \frac{2}{\sqrt{2\pi (2 - g)(\delta - v)}} \exp\left[-\frac{\big(z(\delta - v) + u\big)^2}{2(2 - g)(\delta - v)}\right] \right\} du\, dy \,. \qquad (36) $$
When $x > l_0$, in order to find the distribution function $F_0(x|l_0)$, we use the expression
$$ F_0(x|l_0) = 1 - \int_0^\infty P_{1x}(u)\, dP_{2x}(u) \,, \qquad (37) $$
which is obtained from Eq. (19) by integration by parts. Then substitution of the probabilities $P_{2x}(u)$ (29) and $P_{1x}(u)$ (34) into Eq. (37) leads to
$$ F_0(x|l_0) = 1 - P_m(x - l_0) \,. \qquad (38) $$
Uniting Eqs. (35) and (38), we can finally write
$$ F_0(x|l_0) = \begin{cases} P_m(l_0 - x), & l_0 - \delta \le x \le l_0\,; \\ 1 - P_m(x - l_0), & l_0 < x \le l_0 + \delta\,. \end{cases} \qquad (39) $$
Let us consider the behavior of the distribution function $F_0(x|l_0)$ (39) as $z^2 \to \infty$. In Eq. (36) we use the asymptotic formula for the probability integral [9]:
$$ \Phi(x)\big|_{x \gg 1} \approx 1 - \frac{\exp(-x^2/2)}{\sqrt{2\pi}\, x} \qquad (40) $$
and neglect higher-order infinitesimal terms compared with $z$. Then, for the function (36), the following approximation is valid:
$$ P_m(v) \approx \frac{2 z \exp\!\big[-z^2 v\big/2(2 - g)\big]}{\sqrt{2\pi v (2 - g)^3}} \int_0^\infty \int_0^\infty \exp\!\left[-\frac{z(u + y)}{2 - g}\right] \left\{ \exp\left[-\frac{(u - y)^2}{2(2 - g)v}\right] - \exp\left[-\frac{(u + y)^2}{2(2 - g)v}\right] \right\} \left[1 - \exp\!\left(-\frac{2 y z}{2 - g}\right)\right] du\, dy \,, $$
or, after the integration is carried out,
$$ P_m(v) = \left(\frac{5}{2} + \frac{2 z^2 v}{2 - g}\right)\left[1 - \Phi\!\left(z\sqrt{\frac{v}{2 - g}}\right)\right] - \frac{3}{2} \exp\!\left(\frac{4 z^2 v}{2 - g}\right)\left[1 - \Phi\!\left(3 z\sqrt{\frac{v}{2 - g}}\right)\right] - z\sqrt{\frac{2 v}{\pi (2 - g)}}\, \exp\!\left[-\frac{z^2 v}{2(2 - g)}\right] . \qquad (41) $$
According to Eq. (41), the function $P_m(v)$ is distinct from zero only in a small neighborhood of the point $v = 0$. By analogy with [2], this allows us to extend the approximation (39), (41) to the whole real axis without loss of accuracy:
$$ F_0(x|l_0) = \begin{cases} P_m(l_0 - x), & -\infty < x \le l_0\,; \\ 1 - P_m(x - l_0), & l_0 < x < \infty\,. \end{cases} \qquad (42) $$
We use Eq. (42) for the distribution function $F_0(x|l_0)$ with finite but large $z$. Then the conditional probability density $w_0(x|l_0)$, bias $b_0(l_m|l_0)$ and variance $V_0(l_m|l_0)$ of the reliable estimate $l_m$ are determined as
$$ w_0(x|l_0) = \frac{d F_0(x|l_0)}{dx} = \frac{2 z^2}{2 - g} \left\{ 3 \exp\!\left[\frac{4 z^2 |x - l_0|}{2 - g}\right] \left[1 - \Phi\!\left(3 z \sqrt{\frac{|x - l_0|}{2 - g}}\right)\right] + \Phi\!\left(z \sqrt{\frac{|x - l_0|}{2 - g}}\right) - 1 \right\} , \qquad x \in (-\infty, \infty) \,, \qquad (43) $$
$$ b_0(l_m|l_0) = \big\langle l_m - l_0 \big\rangle = \int_{-\infty}^{\infty} (x - l_0)\, w_0(x|l_0)\, dx = 0 \,, \qquad (44) $$
$$ V_0(l_m|l_0) = \big\langle (l_m - l_0)^2 \big\rangle = \int_{-\infty}^{\infty} (x - l_0)^2\, w_0(x|l_0)\, dx = \frac{13 (2 - g)^2}{8 z^4} \,. \qquad (45) $$
The accuracy of formulas (41)-(45) increases with z.
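The asymptotic variance (45) can also be checked by direct statistical modeling of the local behavior of the functional near $l = l_0$: per Eqs. (16), (17), the centered increments of $\Delta(l)$ behave as a Brownian motion with diffusion coefficient $2 - g$ and a triangular drift of slope $z$. The sketch below (with illustrative grid and trial counts, not taken from the paper) estimates the variance of the position of the maximum and compares it with $13(2-g)^2/8z^4$.

```python
# A Monte Carlo sketch (not from the paper): the reliable-estimate error behaves as
# the argmax of a two-sided Brownian motion with drift -z|l - l0| and diffusion 2 - g
# (Eqs. (16)-(17)); its variance is compared with Eq. (45). Grid step, neighborhood
# size and trial count are illustrative assumptions.
import numpy as np

def simulated_estimate_variance(z=10.0, g=0.0, delta=0.25, dl=1e-4, n_trials=10000, seed=1):
    rng = np.random.default_rng(seed)
    offsets = np.arange(-delta, delta + dl, dl)            # values of l - l0
    drift = -z * np.abs(offsets)                           # signal part of Delta(l)
    mid = offsets.size // 2                                 # index of l = l0
    errors = np.empty(n_trials)
    for i in range(n_trials):
        steps = rng.normal(0.0, np.sqrt((2.0 - g) * dl), size=offsets.size - 1)
        walk = np.concatenate(([0.0], np.cumsum(steps)))
        walk -= walk[mid]                                   # reference the walk to its value at l0
        errors[i] = offsets[np.argmax(drift + walk)]        # estimation error l_m - l0
    return errors.var()

if __name__ == "__main__":
    z, g = 10.0, 0.0
    print("simulated:", simulated_estimate_variance(z, g))
    print("Eq. (45) :", 13.0 * (2.0 - g) ** 2 / (8.0 * z ** 4))
```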
Now let us calculate the probability $P_0$ (9) of the reliable estimate $l_m$. For this purpose we introduce the functional $\tilde M(l) = M(l) - S_N$ and the random variables
$$ H_N = \sup_{l \in \Gamma_N} \tilde M(l) \,, \qquad H_S = \sup_{l \in \Gamma_S} \tilde M(l) \,, \qquad (46) $$
where $S_N$, $\Gamma_S$, $\Gamma_N$ are defined by Eqs. (4), (7). Then the probability $P_0$ (9) can be presented as
$$ P_0 = P\big(H_S > H_N\big) \,. \qquad (47) $$
According to Eqs. (5), (6), the correlation interval of the random process $N(l)$ (and of $\tilde M(l)$ too) does not exceed 1. Thus, if the condition (10) is satisfied, then the probability of occurrence of the greatest maximum of the functional $\tilde M(l)$ within the interval $[\,l_0 - 2,\ l_0 - 1\,] \cup [\,l_0 + 1,\ l_0 + 2\,]$ can be neglected in comparison with the probability of occurrence of the greatest maximum of $\tilde M(l)$ within the interval $[\,L_1,\ l_0 - 2\,] \cup [\,l_0 + 2,\ L_2\,]$.
Since the functional $\tilde M(l)$ is Gaussian (asymptotically Gaussian), its values, and consequently the random variables $H_N$, $H_S$ (46), in the intervals $[\,L_1,\ l_0 - 2\,] \cup [\,l_0 + 2,\ L_2\,]$ and $[\,l_0 - 1,\ l_0 + 1\,]$ are approximately statistically independent. We designate
$$ F_N(\kappa) = P\big(H_N < \kappa\, \sigma_N\big) \,, \qquad F_S(\kappa) = P\big(H_S < \kappa\, \sigma_S\big) \qquad (48) $$
as the distribution functions of the random variables $H_N/\sigma_N$ and $H_S/\sigma_S$, respectively. Owing to the statistical independence of $H_N$ and $H_S$, similarly to Eq. (19), for the probability $P_0$ (47) of the reliable estimate $l_m$ we can write
$$ P_0 = \int F_N(\kappa\, r)\, dF_S(\kappa) \,, \qquad (49) $$
where $r = \sigma_S/\sigma_N$ and the integration is carried out over all possible values of $\kappa$.
Let us find the probabilities $F_N(\kappa)$ and $F_S(\kappa)$. According to Eqs. (3), (4), (48),
$$ F_N(\kappa) = P\big[\tilde M(l) < \kappa\, \sigma_N\big] = P\big[N(l) < \kappa\, \sigma_N\,, \ l \in [\,L_1,\ l_0 - 1\,) \cup (\,l_0 + 1,\ L_2\,]\big] \,. \qquad (50) $$
Here N l  is Gaussian (asymptotically Gaussian) stationary random process with
zero mathematical expectation and covariance function (6). Then on the basis of
the results [2, 10] for the function FN   (50) the following approximation can
be carried out

 m
  2 
 ,   1;
exp 
 exp 
 2 
FN    
(51)
2





  1.
 0,
Accuracy of Eq. (51) increases with m and .
Let us pass to the definition of the probability $F_S(\kappa)$. As noted above, when condition (11) is satisfied, the estimate $l_m$ can be considered to lie in the small $\delta$-neighborhood of the point $l = l_0$. Then the distribution function $F_S(\kappa)$ can be presented in the form
$$ F_S(\kappa) = P\big[\Delta_0(l) < \kappa - \Delta_0\,, \ l \in [\,l_0 - \delta,\ l_0 + \delta\,]\big] \,. \qquad (52) $$
Here
$$ \Delta_0(l) = \Delta(l)\big|_{x = l_0} = \big[M(l) - M(l_0)\big]\big/\sigma_S \,, \qquad \Delta_0 = \big[M(l_0) - S_N\big]\big/\sigma_S \,, \qquad (53) $$
and $\Delta(l)$ is defined by Eq. (12). As follows from Eq. (16), realizations of the random process $\Delta_0(l)$ are approximately statistically independent in the intervals $[\,l_0 - \delta,\ l_0\,]$ and $(\,l_0,\ l_0 + \delta\,]$. Then for $F_S(\kappa)$ (52) we have
$$ F_S(\kappa) = \Big\langle P\Big[\max_{l_0 - \delta \le l \le l_0} \Delta_0(l) < \kappa - \Delta_0\Big]\ P\Big[\max_{l_0 < l \le l_0 + \delta} \Delta_0(l) < \kappa - \Delta_0\Big] \Big\rangle \,. \qquad (54) $$
According to Eqs. (2)-(5), and since the functional $M(l)$ is Gaussian (asymptotically Gaussian), the random variable $\Delta_0$ (53) is a Gaussian (asymptotically Gaussian) random variable with mathematical expectation $z$ and unit variance. We designate
$$ F_1(\kappa) = P\Big[\max_{l_0 - \delta \le l \le l_0} \Delta_0(l) < \kappa\Big] \,, \qquad F_2(\kappa) = P\Big[\max_{l_0 < l \le l_0 + \delta} \Delta_0(l) < \kappa\Big] \,, \qquad (55) $$
and
$$ w_{\Delta_0}(x) = \frac{1}{\sqrt{2\pi}} \exp\!\left[-\frac{(x - z)^2}{2}\right] $$
as the probability density of the random variable $\Delta_0$. Then the distribution function (54) can be expressed as
$$ F_S(\kappa) = \int_{-\infty}^{\infty} F_1(\kappa - y)\, F_2(\kappa - y)\, w_{\Delta_0}(y)\, dy \,. \qquad (56) $$
Let us find the probabilities $F_1(\kappa)$, $F_2(\kappa)$ (55). For this purpose we introduce the random process $\eta_0(l) = \kappa - \Delta_0(l)$, defined on the interval $[\,l_0,\ l_0 + \delta\,]$, and, by analogy with Eq. (21), we write the probability $F_2(\kappa)$ as
$$ F_2(\kappa) = P\Big[\min_{l_0 \le l \le l_0 + \delta} \eta_0(l) > 0\Big] = \int_0^\infty w_2(y, l_0 + \delta)\, dy \,, $$
where $w_2(y, l) = w_{2x}(y, l)\big|_{x = l_0}$ and $w_{2x}(y, l_0 + \delta)$ is the solution of Eq. (22) with the conditions (23), (24). It is easy to show that $F_2(\kappa) = P_{2x}(\kappa)\big|_{x = l_0}$, where $P_{2x}(\cdot)$ is defined by Eq. (29). Then for the function $F_2(\kappa)$ we have
$$ F_2(\kappa) = \Phi\!\left[\frac{\kappa + z\delta}{\sqrt{(2 - g)\delta}}\right] - \exp\!\left(-\frac{2 \kappa z}{2 - g}\right) \Phi\!\left[\frac{z\delta - \kappa}{\sqrt{(2 - g)\delta}}\right] . \qquad (57) $$
Owing to the symmetry of the statistical properties (4), (5) of the functional $M(l)$ (3) with respect to the point $l = l_0$, the probabilities $F_1(\kappa)$, $F_2(\kappa)$ are connected by the relation
$$ F_1(\kappa) = F_2(\kappa) \,. \qquad (58) $$
As a result, taking into account Eqs. (56)-(58), for the function $F_S(\kappa)$ (56) we find
$$ F_S(\kappa) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\!\left[-\frac{(y - z)^2}{2}\right] \left\{ \Phi\!\left[\frac{\kappa - y + z\delta}{\sqrt{(2 - g)\delta}}\right] - \exp\!\left[-\frac{2 z (\kappa - y)}{2 - g}\right] \Phi\!\left[\frac{z\delta - (\kappa - y)}{\sqrt{(2 - g)\delta}}\right] \right\}^2 dy \,. \qquad (59) $$
Let us consider the behavior of the function $F_S(\kappa)$ (59) for the case of sufficiently great SNR (11). Similarly to Eq. (39), using the asymptotic representation (40) of the probability integral as $z^2 \to \infty$ and neglecting higher-order infinitesimal terms compared with $z$, we have
$$ F_S(\kappa) \approx \frac{1}{\sqrt{2\pi}} \int_0^\infty \left[1 - \exp\!\left(-\frac{2 z \vartheta}{2 - g}\right)\right]^2 \exp\!\left[-\frac{(\vartheta - \kappa + z)^2}{2}\right] d\vartheta \,, $$
or, after the integration is carried out,
$$ F_S(\kappa) = \Phi(\kappa - z) - 2 \exp\!\left[\frac{\psi^2 z^2}{2} + \psi z (z - \kappa)\right] \Phi\big[\kappa - z(\psi + 1)\big] + \exp\!\big[2 \psi^2 z^2 + 2 \psi z (z - \kappa)\big]\, \Phi\big[\kappa - z(2\psi + 1)\big] \,. \qquad (60) $$
Here $\psi = 2/(2 - g)$.
Substituting Eqs. (51), (60) into Eq. (49), for the probability $P_0$ of the reliable estimate $l_m$ we find
$$ P_0 = \frac{2 \psi z}{r} \int_1^\infty \exp\!\left[-\frac{m x}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2}\right)\right] \left\{ \exp\!\left[\frac{\psi^2 z^2}{2} + \psi z\!\left(z - \frac{x}{r}\right)\right] \Phi\!\left[\frac{x}{r} - z(\psi + 1)\right] - \exp\!\left[2 \psi^2 z^2 + 2 \psi z\!\left(z - \frac{x}{r}\right)\right] \Phi\!\left[\frac{x}{r} - z(2\psi + 1)\right] \right\} dx \,. \qquad (61) $$
Now we can write down the expressions for the characteristics of the signal parameter estimate $l_m$ with anomalous errors taken into account. According to [2, 3], when condition (10) is satisfied and $M(l)$ (or $N(l)$) is a stationary Gaussian random process, the following approximation can be used for the probability density $w_a(x|l_0)$:
$$ w_a(x|l_0) = \begin{cases} 1/m\,, & x \in \Lambda\,; \\ 0\,, & x \notin \Lambda\,. \end{cases} \qquad (62) $$
Substituting Eqs. (43), (61), (62) into Eq. (8), by analogy with Eqs. (44), (45), for the conditional bias $b(l_m|l_0)$ and variance $V(l_m|l_0)$ of the estimate $l_m$ with anomalous errors we have
$$ b(l_m|l_0) = P_0\, b_0(l_m|l_0) + (1 - P_0)\, b_a(l_m|l_0) = (1 - P_0)\, b_a(l_m|l_0) \,, \qquad V(l_m|l_0) = P_0\, V_0(l_m|l_0) + (1 - P_0)\, V_a(l_m|l_0) \,. \qquad (63) $$
Here $b_0(l_m|l_0)$, $V_0(l_m|l_0)$ are defined by Eqs. (44), (45), and
$$ b_a(l_m|l_0) = \int_{L_1}^{L_2} (x - l_0)\, w_a(x|l_0)\, dx = \frac{L_2 + L_1}{2} - l_0 \,, $$
$$ V_a(l_m|l_0) = \int_{L_1}^{L_2} (x - l_0)^2\, w_a(x|l_0)\, dx = \frac{L_2^2 + L_1 L_2 + L_1^2}{3} - (L_2 + L_1)\, l_0 + l_0^2 $$
denote the conditional bias and variance of the anomalous estimate $l_m$, respectively. Hence, in general, the estimate $l_m$ is conditionally biased. The accuracy of the formulas (61), (63) increases with $m$ (10) and $z$ (11).
The results obtained above allow us to write down directly the expressions for the characteristics of the detector (1) of the information signal $s(t, l_0)$ with unknown discontinuous parameter $l_0$ against Gaussian distortions $\xi(t)$. The type I (false alarm) and type II (signal missing) error probabilities [1, 2, 5, 10] will be used as the detection characteristics. We confine ourselves to the practically important case when the prior interval length $m$ essentially exceeds the length of the reliable estimate interval $\Gamma_S$ (7), i.e., the condition (10) is satisfied.
Firstly, we assume that the useful signal $s(t, l_0)$ is absent. Then the false-alarm probability $\alpha$ can be presented in the form
$$ \alpha = P\Big[\sup_{l \in [L_1, L_2]} M(l) > c \ \Big|\ x(t) = \xi(t)\Big] = 1 - F_N(c) \,, \qquad (64) $$
where $F_N(c) = P\big[\sup_{l \in [L_1, L_2]} M(l) < c \ \big|\ x(t) = \xi(t)\big]$. If Eq. (10) holds, then
$$ F_N(c) \approx P\left[\frac{M(l_m) - S_N}{\sigma_N} < \frac{c - S_N}{\sigma_N}\,, \ l_m \in \Gamma_N\right] , $$
and, taking into account Eqs. (3), (4), (6), for the function $F_N(c)$ the approximation (51) can be used with the substitution of
$$ u = (c - S_N)\big/\sigma_N \qquad (65) $$
in place of $\kappa$. Therefore, for the false-alarm probability we have
$$ \alpha = \begin{cases} 1 - \exp\!\left[-\dfrac{m u}{\sqrt{2\pi}} \exp\!\left(-\dfrac{u^2}{2}\right)\right], & u \ge 1\,; \\[2mm] 1\,, & u < 1\,. \end{cases} \qquad (66) $$
Now let us assume that the useful signal $s(t, l_0)$ is present at the detector input. Then the missing probability is determined as
$$ \beta = P\Big[\sup_{l \in [L_1, L_2]} M(l) < c \ \Big|\ x(t) = s(t, l_0) + \xi(t)\Big] = P\left[\frac{M(l_m) - S_N}{\sigma_S} < \frac{c - S_N}{\sigma_S}\right] . \qquad (67) $$
Using the condition of approximate statistical independence of the random variables (46), for the probability (67) we can write
$$ \beta = F_N(u)\, F_S(u/r) \,, $$
or, expressing the functions (51), (60) in explicit form,
$$ \beta = \exp\!\left[-\frac{m u}{\sqrt{2\pi}} \exp\!\left(-\frac{u^2}{2}\right)\right] \Big\{ \Phi\big(u/r - z\big) - 2 \exp\!\big[\psi^2 z^2/2 + \psi z (z - u/r)\big]\, \Phi\big[u/r - z(\psi + 1)\big] + \exp\!\big[2\psi^2 z^2 + 2\psi z (z - u/r)\big]\, \Phi\big[u/r - z(2\psi + 1)\big] \Big\} \qquad (68) $$
if $u \ge 1$, and $\beta = 0$ if $u < 1$. The accuracy of formula (68) increases with $u$, $m$, $z$.
Let us now consider the application of the local Markov approximation method to the determination of the accuracy characteristics of specific detectors and measurers of discontinuous signals.
3 Reception of the Quasideterministic Video Pulse with Unknown
Appearance Time
Let an additive mixture of the form
$$ x(t) = s(t, \lambda_0) + n(t) + \nu(t) \,, \qquad t \in [0, T] \qquad (69) $$
be accessible to observation. Here
1 , x  1 2 ,
 t  0 
st,  0   aI
(70)
 , I x   
0 , x  1 2 ,
  
is the useful signal (video pulse) with amplitude a, duration τ and unknown
appearance time  0 , while n t  is Gaussian white noise with one-sided spectral
density N 0 , and t  are correlated distortions. As a model of correlated
distortions, we choose the stationary centered Gaussian random process,
possessing spectral density [4, 5, 11]
G    2  I   ,
where Ω – bandwidth, γ – value of spectral density (intensity) of the process t  .
In radio engineering appendices white noise n t  describes internal instrument
noises. And an unintentional (interburst) interference which has passed through
input filter (preselector) of receiving device or barrage jamming can afford
examples of random distortions t  [11].
With the observable realization (69), it is necessary to estimate the parameter $\lambda_0$, whose values lie within the prior interval $[\Lambda_1, \Lambda_2]$. It is assumed that the condition $0 \le \Lambda_1 - \tau/2 < \Lambda_2 + \tau/2 \le T$ is satisfied, so that the pulse (70) is always situated within the observation interval $[0, T]$.
If
$$ \mu = \Omega \tau/4\pi \gg 1 \,, \qquad (71) $$
then the logarithm of the FLR can be written as [12]
$$ M(\lambda) = \frac{2a}{N_0 + \gamma} \int_{\lambda - \tau/2}^{\lambda + \tau/2} x(t)\, dt - \frac{\gamma}{N_0 (N_0 + \gamma)} \int_0^T y^2(t)\, dt - \frac{a^2 \tau}{N_0 + \gamma} - \mu \ln\!\left(1 + \frac{\gamma}{N_0}\right) . \qquad (72) $$
Here $y(t) = \int_{-\infty}^{\infty} x(t')\, h(t - t')\, dt'$, and $h(t)$ is the function whose spectrum $H(\omega)$ satisfies the condition $|H(\omega)|^2 = I(\omega/\Omega)$.
In accordance with Eq. (72), the maximum likelihood estimate (MLE) $\lambda_m$ of the appearance time of the signal (70) is determined as
$$ \lambda_m = \arg\sup_{\lambda \in [\Lambda_1, \Lambda_2]} M_0(\lambda) \,, \qquad M_0(\lambda) = \int_{\lambda - \tau/2}^{\lambda + \tau/2} x(t)\, dt \,. \qquad (73) $$
Let us define the characteristics of the estimate (73). For this purpose we introduce the dimensionless parameter $l = \lambda/\tau$, designate $l_0 = \lambda_0/\tau$ and, following Eq. (3), present the sufficient statistic $M_0(\lambda)$ (73) as the sum of the signal $S(l) = \langle M_0(\lambda) \rangle$ and noise $N(l) = M_0(\lambda) - \langle M_0(\lambda) \rangle$ functions: $M_0(\lambda) = S(l) + N(l)$. Carrying out the averaging over all possible realizations of the observable data (69)
under fixed $\lambda_0$, we find from Eq. (73) that the signal function $S(l)$ and the covariance function $B(l_1, l_2) = \langle N(l_1) N(l_2) \rangle$ of the noise function are determined according to Eqs. (4)-(6) with $S_0 = a\tau$, $S_N = 0$, $\sigma_S^2 = \sigma_N^2 = (N_0 + \gamma)\tau/2$ (so that $g = 0$). Then Eqs. (63), applied to the conditional bias and variance of the normalized MLE $l_m = \lambda_m/\tau$ with anomalous errors taken into account, take the form
$$ b(l_m|l_0) = (1 - P_0)\left[\frac{L_2 + L_1}{2} - l_0\right] , \qquad V(l_m|l_0) = P_0\, V_0(l_m|l_0) + (1 - P_0)\left[\frac{L_2^2 + L_1 L_2 + L_1^2}{3} - (L_2 + L_1)\, l_0 + l_0^2\right] , \qquad (74) $$
where
$$ V_0(l_m|l_0) = 13\big/2 z^4 \,, \qquad (75) $$
$$ P_0 = 2 z \exp\!\left(\frac{3 z^2}{2}\right) \int_1^\infty \exp\!\left[-\frac{m x}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2}\right)\right] \exp(-z x) \left[\Phi(x - 2z) - \exp\!\left(\frac{z(5z - 2x)}{2}\right) \Phi(x - 3z)\right] dx \qquad (76) $$
are the conditional variance and the probability of the reliable estimate $l_m$, obtained on the basis of Eqs. (45), (61), $z^2 = 2 a^2 \tau\big/(N_0 + \gamma)$ is the output SNR (11) for the algorithm (73), $L_{1,2} = \Lambda_{1,2}/\tau$, and $m = L_2 - L_1$.
For the detection of the signal (70), in accordance with Eqs. (1), (72), the detector should compare the maximum of the functional $M_0(\lambda)$ (73) with the threshold $c$ determined on the basis of the accepted optimality criterion. It is easy to see that the false-alarm and missing probabilities can be found from Eqs. (66), (68), where
$$ u = c \sqrt{2\big/(N_0 + \gamma)\tau} \,, \qquad (77) $$
$\psi = 1$, $r = 1$, and $m$, $z$ are defined in the same way as in Eq. (76). In particular, for $u \ge 1$ the missing probability is
$$ \beta = \exp\!\left[-\frac{m u}{\sqrt{2\pi}} \exp\!\left(-\frac{u^2}{2}\right)\right] \Big\{ \Phi(u - z) - 2 \exp\!\left[\frac{z(3z - 2u)}{2}\right] \Phi(u - 2z) + \exp\!\big[2 z (2z - u)\big]\, \Phi(u - 3z) \Big\} . \qquad (78) $$
In order to establish the applicability range of the asymptotically exact formulas (66), (74)-(78), statistical computer modeling of the measuring and detection algorithms for the signal (70) with unknown appearance time was carried out. During the modeling, for specified values of $\mu$ (71), $z$, $q_\nu = \gamma/N_0$ and $l_0$, samples of the normalized functional $\tilde M_0(l) = M_0(\lambda)\big/\sqrt{N_0 \tau}$ were formed within the interval $[L_1, L_2]$ with digitization step $\Delta l = 0.01$ as follows:
$$ \tilde M_0(l) = z \sqrt{\frac{1 + q_\nu}{2}}\, \max\big(0, 1 - |l - l_0|\big) + \Delta \sum_{k = K_{\min}}^{K_{\max} - 1} \left(\tilde\nu_k + \frac{\tilde\alpha_k}{\sqrt{2\Delta}}\right) . \qquad (79) $$
In Eq. (79) it is designated: $K_{\min} = \mathrm{int}\big[(l - l_0 - 1/2)/\Delta\big]$, $K_{\max} = \mathrm{int}\big[(l - l_0 + 1/2)/\Delta\big]$, $\mathrm{int}[\cdot]$ is the integer part, $\tilde\nu_k = \tilde\nu(l_0 + k\Delta)$ are samples of the normalized process $\tilde\nu(\tilde t\,) = \nu(\tilde t \tau)\sqrt{\tau/N_0}$, $\tilde\alpha_k = \sqrt{2/\Delta} \int_{l_0 + k\Delta}^{l_0 + (k+1)\Delta} \tilde n(\tilde t\,)\, d\tilde t$ are independent Gaussian random numbers with the characteristics $\langle \tilde\alpha_k \rangle = 0$, $\langle \tilde\alpha_k^2 \rangle = 1$, $\tilde n(\tilde t\,) = n(\tilde t \tau)\sqrt{\tau/N_0}$ is the normalized Gaussian white noise, $\tilde t = t/\tau$ is normalized time, and $\Delta$ is the sampling step, chosen so that the mean square error of the stepwise approximation $\tilde\nu(\tilde t\,) \approx \tilde\nu_k$, $l_0 + k\Delta \le \tilde t < l_0 + (k+1)\Delta$ of the process $\tilde\nu(\tilde t\,)$ does not exceed 10 % [5, 13], i.e.
$$ \varepsilon = \sqrt{2\big[1 - R_\nu(\Delta/2)\big]} \le 0.1 \,. \qquad (80) $$
Here $R_\nu(\tilde t\,) = \sin\!\big(2\pi\mu \tilde t\,\big)\big/2\pi\mu \tilde t$ is the correlation coefficient of the process $\tilde\nu(\tilde t\,)$.
The inequality (80) is satisfied, in particular, if
$$ \Delta \le 0.05/\mu \,. \qquad (81) $$
In this case the neighboring samples $\tilde\nu_k$, $\tilde\nu_{k+1}$ have the correlation coefficient $R_\nu(\Delta) \approx 0.98$.
The samples $\tilde\nu_k$ of the process $\tilde\nu(\tilde t\,)$ were formed on the basis of a sequence of independent Gaussian random numbers by the moving summation method [5, 13]:
$$ \tilde\nu_k = \sqrt{q_\nu} \sum_{j = 0}^{2p + 1} C_j\, \beta_{j + k} \,, \qquad C_j = \frac{\sin\!\big[2\pi\mu\Delta (j - p)\big]}{\pi (j - p)\sqrt{2\Delta}} \,, \qquad (82) $$
where $\beta_j$ are independent Gaussian random numbers with zero mathematical expectations and unit variances.
The number of summands in the sum (82) was chosen from the condition [5, 13]
$$ \big(\sigma_{\tilde\nu}^2 - \sigma_{\tilde\nu_k}^2\big)\big/\sigma_{\tilde\nu}^2 \le \Theta \,. \qquad (83) $$
Here $\sigma_{\tilde\nu}^2 = \mu q_\nu$ and $\sigma_{\tilde\nu_k}^2 = q_\nu \sum_{j = 0}^{2p + 1} C_j^2$ are the variances of the process $\tilde\nu(\tilde t\,)$ and of its formed samples, respectively, and $\Theta \ll 1$ is the maximum allowed relative deviation of the variance of the generated samples from the variance of the modeled process. We limit ourselves to $\Theta = 0.1$ [5]. Then the inequality (83) holds if
$$ p \ge 103 \,. \qquad (84) $$
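The moving summation (82) can be sketched in code as follows; the helper generates one realization of the correlated-distortion samples with the step chosen per (81), and its output variance can be compared with $\sigma_{\tilde\nu}^2 = \mu q_\nu$ from (83). The parameter values are illustrative assumptions.

```python
# A sketch of the moving summation method, Eq. (82): samples of the normalized
# correlated distortion nu_k are obtained by filtering independent Gaussian numbers
# with the truncated sin(x)/x kernel C_j; mu, q_nu, p and n_samples are assumptions.
import numpy as np

def correlated_noise_samples(n_samples: int, mu: float, q_nu: float, p: int = 103, seed: int = 0):
    rng = np.random.default_rng(seed)
    delta = 0.05 / mu                                           # sampling step, Eq. (81)
    j = np.arange(0, 2 * p + 2)                                 # j = 0, ..., 2p+1
    x = 2.0 * np.pi * mu * delta * (j - p)
    c = np.sinc(x / np.pi) * 2.0 * mu * np.sqrt(delta / 2.0)    # C_j = sin(x)/(pi*(j-p)*sqrt(2*delta))
    beta = rng.standard_normal(n_samples + 2 * p + 1)           # independent Gaussian numbers beta_j
    nu = np.sqrt(q_nu) * np.convolve(beta, c, mode="valid")     # Eq. (82)
    return nu, delta

# nu, _ = correlated_noise_samples(200000, mu=50.0, q_nu=1.0)
# print(nu.var(), 50.0 * 1.0)   # compare with sigma^2 = mu*q_nu, Eq. (83)
```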
The Gaussian numbers $\tilde\alpha_k$, $\beta_j$ with parameters (0,1) were formed from sequences $\theta_n$, $\vartheta_n$ of independent random numbers uniformly distributed within the interval [0,1] by the Cornish-Fisher method [14]:
$$ \zeta_i = Z_i + \frac{Z_i^3 - 3 Z_i}{20 N} \,, \qquad Z_i = \sqrt{\frac{12}{N}} \sum_{n = 1}^{N} \big(\theta_{N(i-1)+n} - 0.5\big) \,, \qquad (85) $$
where $\zeta_i$ stands for one of the sequences $\tilde\alpha_k$, $\beta_j$, and $\theta_n$ is the corresponding sequence of uniform numbers. The number of summands $N$ in the sum (85), following [14], was chosen equal to 5. With these choices the mean square error of the step approximation (79) of the realization of the functional $\tilde M_0(l)$ does not exceed 5 %.
For each realization of $\tilde M_0(l)$ generated by Eqs. (79), (82), (85), the normalized estimate $l_m$ was determined according to Eqs. (1), (2), and the decision on the presence or absence of the useful signal (70) in the realization of the observable data was made. The experimental detection and measuring characteristics were then found.
In Figs. 1-3 some results of the statistical modeling are presented together with the corresponding theoretical dependences. Each experimental value was obtained by processing at least $3 \cdot 10^4$ realizations of $\tilde M_0(l)$ (79) with $l_0 = (L_1 + L_2)/2$, $L_1 = 1/2$, $L_2 = m + 1/2$, $\mu = 50$, $q_\nu = 1$. Thus, with probability 0.9 the confidence interval boundaries deviate from the experimental values by no more than 10...15 %.
In Fig. 1 the solid line represents the dependence (74) of the normalized variance $\tilde V_l = 12 V(l_m|l_0)/m^2$ of the estimate $l_m$ on the SNR $z$, taking into account anomalous errors, for $m = 20$. The analogous dependence (75) of the normalized variance $\tilde V_{0l} = 12 V_0(l_m|l_0)/m^2$ of the reliable MLE $l_m$ is drawn by the dashed line. The experimental values of the estimate variances $\tilde V_l$, $\tilde V_{0l}$ are designated by squares and crosses, respectively. In Fig. 2 the theoretical dependence of the false-alarm probability (66), where $u$ is the normalized threshold (77), is traced by the solid line. The length of the reduced interval $m$ (76) is taken equal to 20. The experimental values of the false-alarm probability are designated by squares. In Fig. 3 the theoretical dependence of the missing probability (78) is plotted for $m = 20$. The threshold $c$ was defined from Eqs. (66), (77) by the Neyman-Pearson criterion, according to the specified level of false-alarm probability $\alpha = 0.01$. Experimental values of the missing probability are designated by squares.
Fig. 1. Normalized variance of
appearance time estimate.
Fig. 2. False-alarm probability.
As follows from Figs. 1-3, the theoretical dependences for the probabilities $\alpha$ (66), (77) and $\beta$ (78) and for the variance $V(l_m|l_0)$ (74) approximate the experimental data well for SNR $z > 0$, while the theoretical dependence for $V_0(l_m|l_0)$ (75) does so for $z > 3$. For $z < 3$ the theoretical dependence for $V_0(l_m|l_0)$ (75) deviates from the experimental data, since formula (75) was obtained without allowing for the finite length of the prior definition interval $[L_1, L_2]$ of the parameter $l_0$. Therefore, when the conditional variance $V_0(l_m|l_0)$ of the reliable MLE $l_m$ becomes commensurable with or greater than the value $m^2/12$, the error of Eq. (75) increases essentially. For a more exact approximation of the variance $V_0(l_m|l_0)$ in the domain of small SNR $z$ the following expression can be used:
$$ V_0(l_m|l_0) = \min\big[V_{0\max},\ V_0(l_m|l_0)\big] \,. \qquad (86) $$
Here $V_{0\max} = 1/3$ is the variance of a random value distributed uniformly over the interval $\Gamma_S$ (7) of the reliable estimate $l_m$.
It should also be noted that with decreasing $z$, when the SNR $z \le 7...8$, the probability of anomalous errors $P_a = 1 - P_0$ (76) increases considerably and tends towards 1. This leads to an abrupt (in comparison with the case of a reliable estimate) increase of the variance of the MLE $l_m$. With increasing $z$, when $z > 7...8$, the values of the variances $V(l_m|l_0)$ (74) and $V_0(l_m|l_0)$ (75) almost coincide, and the estimate of the appearance time of the video pulse (70) becomes reliable with probability close to 1.
4 Reception of the Random Radio Pulse with Unknown
Appearance Time
Now let us present the useful signal $s(t, \lambda_0)$ as a random radio pulse, i.e., a realization segment of a stationary centered high-frequency random process $\xi(t)$ [4, 5]:
$$ s(t, \lambda_0) = \xi(t)\, I\!\left(\frac{t - \lambda_0}{\tau}\right) . \qquad (87) $$
Here $\lambda_0$ is the appearance time, $\tau$ is the duration of the pulse, and $I(x)$ is the unit duration indicator (70). We take the spectral density of the pulse substructure $\xi(t)$ in the form [4, 5]
$$ G_\xi(\omega) = \frac{\pi D}{\Omega_1}\left[I\!\left(\frac{\omega - \vartheta}{\Omega_1}\right) + I\!\left(\frac{\omega + \vartheta}{\Omega_1}\right)\right] , $$
where $\vartheta$ is the band center, $\Omega_1$ is the bandwidth, and $D$ is the variance of the process $\xi(t)$.
As before, we approximate the internal instrument noise by Gaussian white noise $n(t)$ with one-sided spectral density $N_0$. As a model of the correlated distortions $\nu(t)$ we choose a stationary centered Gaussian random process possessing the spectral density [4, 5, 11]
$$ G_\nu(\omega) = \frac{\gamma}{2}\left[I\!\left(\frac{\omega - \vartheta}{\Omega_2}\right) + I\!\left(\frac{\omega + \vartheta}{\Omega_2}\right)\right] , \qquad \Omega_2 \ge \Omega_1 \,. $$
Here $\Omega_2$ is the bandwidth and $\gamma$ is the value (intensity) of the spectral density of the process $\nu(t)$.
Let us consider the case when the signal duration $\tau$ is much greater than the correlation time of the process $\xi(t)$ (the fluctuations of the process $\xi(t)$ are "fast"), so that
$$ \mu_1 = \Omega_1 \tau/2\pi \gg 1 \,. \qquad (88) $$
The appearance time $\lambda_0$ of the signal (87) is unknown a priori and takes values from the prior interval $[\Lambda_1, \Lambda_2]$. The observation interval boundaries $[0, T]$ satisfy the condition $0 \le \Lambda_1 - \tau/2 < \Lambda_2 + \tau/2 \le T$, i.e., the pulse (87) is always located within the interval $[0, T]$.
Synthesizing the measurer of the appearance time of the pulse (87) by the maximum likelihood method and taking into account the condition (88), we write the logarithm of the FLR as follows [15]:
$$ M(\lambda) = \frac{d\, M_\tau(\lambda)}{(N_0 + \gamma)(N_0 + \gamma + d)} + \frac{\gamma\, M_T}{N_0 (N_0 + \gamma)} - \mu_1 \ln\!\left(1 + \frac{d}{N_0 + \gamma}\right) - (K - 1) \ln\!\left(1 + \frac{\gamma}{N_0}\right) . \qquad (89) $$
In Eq. (89) it is designated: $d = 2\pi D/\Omega_1$, $K = T \Omega_2/2\pi$,
$$ M_\tau(\lambda) = \int_{\lambda - \tau/2}^{\lambda + \tau/2} y_1^2(t)\, dt \,, \qquad M_T = \int_0^T y_2^2(t)\, dt \,, \qquad (90) $$
and y i t   


x t  h i t  t  dt  , i  1,2 , where h i t  is the function which
spectrum H i  satisfies to a condition H i ω   I  ω  Ω i   I  ω  Ω i  ,
2
and x t   st,  0   n t   t  is realization of the observable data.
Then MLE λ m of the parameter  0 is determined as
λ m  arg sup M λ   arg sup M τ λ  .
(91)
λ Λ 1,Λ 2 
λΛ 1,Λ 2 
In order to calculate the characteristics of the estimate $\lambda_m$ (91), following Eq. (3) and using the dimensionless values $l = \lambda/\tau$, $l_0 = \lambda_0/\tau$, we present the sufficient statistic $M_\tau(\lambda)$ (90) as the sum of the signal $S(l) = \langle M_\tau(\lambda) \rangle$ and noise $N(l) = M_\tau(\lambda) - \langle M_\tau(\lambda) \rangle$ functions: $M_\tau(\lambda) = S(l) + N(l)$. If the condition (88) holds, then it can be shown [15] that the signal function $S(l)$ is defined by Eq. (4) with
$$ S_0 = D\tau \qquad \text{and} \qquad S_N = \tau\big(E_N + E_\gamma\big) \,, \qquad (92) $$
where $E_N = N_0 \Omega_1/2\pi$, $E_\gamma = \gamma \Omega_1/2\pi$ are the in-band average powers of the noise $n(t)$ and the hindrance $\nu(t)$. The noise function $N(l)$ is an asymptotically (as $\mu_1 \to \infty$) Gaussian random process [5] with zero mathematical expectation and covariance function of the form (5), (6) with [15]
$$ \sigma_S^2 = \tau^2 E_N^2 \big(1 + q_\nu + q\big)^2\big/\mu_1 \,, \qquad \sigma_N^2 = \tau^2 E_N^2 \big(1 + q_\nu\big)^2\big/\mu_1 \,, \qquad (93) $$
$$ g = q\big(2 + 2 q_\nu + q\big)\big/\big(1 + q_\nu + q\big)^2 \,, \qquad q = D/E_N \,, \qquad q_\nu = \gamma/N_0 \,. $$
Therefore the conditional bias and variance of the normalized MLE $l_m = \lambda_m/\tau$ (91), with anomalous errors taken into account, can be found from Eqs. (63) as
$$ b(l_m|l_0) = (1 - P_0)\left[\frac{L_2 + L_1}{2} - l_0\right] , \qquad V(l_m|l_0) = P_0\, V_0(l_m|l_0) + (1 - P_0)\left[\frac{L_2^2 + L_1 L_2 + L_1^2}{3} - (L_2 + L_1)\, l_0 + l_0^2\right] , \qquad (94) $$
where $V_0(l_m|l_0)$ and $P_0$ are the conditional variance and the probability of the reliable estimate $l_m$ calculated by means of Eqs. (45), (61) with $g$ from (93) and
$$ z^2 = \mu_1 q^2\big/\big(1 + q_\nu + q\big)^2 \,, \qquad \psi = 2\big(1 + q_\nu + q\big)^2\Big/\Big[\big(1 + q_\nu\big)^2 + \big(1 + q_\nu + q\big)^2\Big] \,, \qquad (95) $$
$$ r = \big(1 + q_\nu + q\big)\big/\big(1 + q_\nu\big) \,, \qquad m = L_2 - L_1 \,, \qquad L_{1,2} = \Lambda_{1,2}/\tau \,. $$
For the detection of the signal (87), in accordance with Eqs. (1), (89), the detector should compare the maximum of the functional $M_\tau(\lambda)$ (90) with the threshold $c$ determined on the basis of the accepted optimality criterion. It is easy to see that the false-alarm and missing probabilities can be found from Eqs. (66), (68), where
$$ u = (c - S_N)\big/\sigma_N \qquad (96) $$
and $S_N$, $\sigma_N$, $z$, $\psi$, $r$, $m$ are defined by Eqs. (92), (93), (95).
The experimental characteristics of the measurer and detector of the random pulse with unknown appearance time were found by statistical computer modeling. To reduce the computational burden during the formation of the samples of the sufficient statistic $M_\tau(\lambda)$, it was supposed that the narrow-band condition $\vartheta \gg \Omega_1$ for the process $\xi(t)$ is satisfied. This allows us to represent the function $y_1(t)$ (90) through its low-frequency quadratures [13] and to form the sufficient statistic $M_\tau(\lambda)$ (90) as the sum of two independent random processes:
$$ M_\tau(\lambda) = \big[M_1(\lambda) + M_2(\lambda)\big]\big/2 \,, \qquad M_j(\lambda) = \int_{\lambda - \tau/2}^{\lambda + \tau/2} y_{1j}^2(t)\, dt \,, \qquad j = 1, 2 \,, \qquad (97) $$
$$ y_{1j}(t) = \int_{-\infty}^{\infty} x_j(t')\, h_0(t - t')\, dt' \,, \qquad x_j(t) = s_j(t) + n_j(t) + \nu_j(t) \,. $$
Here $s_j(t) = \xi_j(t)\, I\big((t - \lambda_0)/\tau\big)$; $\xi_j(t)$, $n_j(t)$, $\nu_j(t)$ are statistically independent centered Gaussian random processes with the spectral densities $G_{\xi 0}(\omega) = (2\pi D/\Omega_1)\, I(\omega/\Omega_1)$, $N_0$ and $G_{\nu 0}(\omega) = \gamma\, I(\omega/\Omega_2)$, respectively, while the spectrum $H_0(\omega)$ of the function $h_0(t)$ satisfies the condition $|H_0(\omega)|^2 = I(\omega/\Omega_1)$.
During the modeling, within the interval $[L_1, L_2]$ with discretization step $\Delta\tilde t = 0.05/\mu_1$ in normalized time $\tilde t = t/\tau$, samples $\tilde y_{jn} = \tilde y_{1j}(l_0 + n\Delta\tilde t\,)$ of the normalized random process realizations $\tilde y_{1j}(\tilde t\,) = y_{1j}(\tilde t \tau)\big/\sqrt{N_0}$, $j = 1, 2$ (90) were formed, possessing the correlation coefficient $R_{yj}(\tilde t\,) = \sin\!\big(\pi\mu_1 \tilde t\,\big)\big/\pi\mu_1 \tilde t$. Thus, according to Eq. (80), the mean square errors of the step approximations $\tilde y_{1j}(\tilde t\,) \approx \tilde y_{jn}$, $l_0 + n\Delta\tilde t \le \tilde t < l_0 + (n+1)\Delta\tilde t$ of the continuous realizations $\tilde y_{1j}(\tilde t\,)$ did not exceed 10 %. With Eqs. (97) in mind, we obtain the normalized sufficient statistic $\tilde M_\tau(l) = M_\tau(\lambda)/N_0$ (90) in the form
$$ \tilde M_\tau(l) = \frac{\Delta\tilde t}{2} \sum_{n = N_{\min}}^{N_{\max} - 1} \big(\tilde y_{1n}^2 + \tilde y_{2n}^2\big) \,. \qquad (98) $$
Here $N_{\min} = \mathrm{int}\big[(l - l_0 - 1/2)/\Delta\tilde t\,\big]$, $N_{\max} = \mathrm{int}\big[(l - l_0 + 1/2)/\Delta\tilde t\,\big]$.
The samples of the processes $\tilde y_{jn}$, $j = 1, 2$ were generated from sequences of independent Gaussian random numbers by the moving summation method [13], as described in [15]:
$$ \tilde y_{jn} = \sqrt{\frac{\Delta_1}{2}} \sum_{r = R_{\min}}^{R_{\max} - 1} \tilde\xi_{jr}\, H^1_{nr} + \frac{1}{2p} \sum_{r = n - p}^{n + p - 1} H^1_{nr} \sum_{s = r\Delta_1/\Delta_2}^{(r + 1)\Delta_1/\Delta_2} \left(\frac{\alpha_{js}}{\sqrt{\Delta_2/2}} + \tilde\nu_{js}\right) , \qquad \tilde\xi_{jr} = \frac{1}{\pi}\sqrt{\frac{q}{\Delta_1}} \sum_{m = 0}^{2p + 1} H^1_{mp}\, \beta_{j(m + r)} \,, \qquad (99) $$
$$ \tilde\nu_{js} = \frac{1}{\pi}\sqrt{\frac{q_\nu}{\Delta_2}} \sum_{m = 0}^{2p + 1} H^2_{mp}\, \chi_{j(m + s)} \,. \qquad (100) $$
Here $R_{\min} = \max(-R,\, n - p)$, $R_{\max} = \min(R,\, n + p)$, $R = \mathrm{int}\big[1/2\Delta_1\big]$, $\tilde\xi_{jr} = \tilde\xi_j(l_0 + r\Delta_1)$, $\tilde\nu_{js} = \tilde\nu_j(l_0 + s\Delta_2)$; $\Delta_1$, $\Delta_2$ are the discretization steps of the normalized processes $\tilde\xi_j(\tilde t\,) = \xi_j(\tilde t \tau)\sqrt{\tau/N_0}$ and $\tilde\nu_j(\tilde t\,) = \nu_j(\tilde t \tau)\sqrt{\tau/N_0}$, respectively; $\alpha_{jm}$, $\beta_{jm}$, $\chi_{jm}$ are independent Gaussian random numbers with zero mathematical expectations and unit variances, $H^i_{mp} = \sin\!\big[\pi\mu_i \Delta_i (m - p)\big]\big/(m - p)$, $i = 1, 2$, $\mu_2 = \Omega_2 \tau/2\pi$.
The discretization steps $\Delta_1$, $\Delta_2$ in Eq. (99) are chosen from a condition of the type (80), where $R_\nu(\tilde t\,)$ is replaced by the correlation coefficient $R_{\xi j}(\tilde t\,) = \sin\!\big(\pi\mu_1 \tilde t\,\big)\big/\pi\mu_1 \tilde t$ or $R_{\nu j}(\tilde t\,) = \sin\!\big(\pi\mu_2 \tilde t\,\big)\big/\pi\mu_2 \tilde t$ of the process $\tilde\xi_j(\tilde t\,)$ or $\tilde\nu_j(\tilde t\,)$, respectively. Then, by analogy with Eq. (81), the values $\Delta_i$, $i = 1, 2$ can be taken equal to $\Delta_i = 0.05/\mu_i$. In the sums (100) the number of summands corresponds to the value $p' = p \ge 103$. According to a relation similar to Eq. (83), this keeps the relative deviation of the variances of the generated samples $\tilde\xi_{jr}$, $\tilde\nu_{js}$ from the variances of the modeled processes within 5 %, as in Eq. (84). The formation of the independent Gaussian numbers with parameters (0,1) was carried out following Eq. (85). As a result, the mean square error of the step approximation of the continuous realization of the functional $\tilde M_\tau(l)$ obtained on the basis of Eqs. (98)-(100) did not exceed 10 % at a digitization step $\Delta l = 0.01$.
In Figs. 4-6 some results of the statistical modeling are presented together with the corresponding theoretical dependences. Each experimental value was obtained by processing at least $10^4$ realizations of $\tilde M_\tau(l)$ (98) with $l_0 = (L_1 + L_2)/2$, $L_1 = 1/2$, $L_2 = m + 1/2$, $\Omega_1 = \Omega_2$, $q_\nu = 0.5$. Thus, with probability 0.9 the confidence interval boundaries deviate from the experimental values by no more than 10...15 %.
Fig. 3. Missing probability.
Fig. 4. Normalized variance of
appearance time estimate.
Fig. 5. False-alarm probability.
Fig. 6. Missing probability.
In Fig. 4 the solid lines represent the dependences (94) of the normalized variance $\tilde V_l = 12 V(l_m|l_0)/m^2$ of the estimate $l_m$ on the parameter $q$ (93), taking into account anomalous errors, for $m = 20$. The analogous dependences (45), (93), (95) of the normalized variance $\tilde V_{0l} = 12 V_0(l_m|l_0)/m^2$ of the reliable MLE $l_m$ are drawn by dashed lines. Curves 1 are calculated for $\mu_1 = 50$, curves 2 for 100, curves 3 for 200. The experimental values of the estimate variances $\tilde V_l$, $\tilde V_{0l}$ are designated by squares, crosses, rhombuses and by pluses, circles, triangles for $\mu_1 = 50$, 100 and 200, respectively. In Fig. 5 the theoretical dependence of the false-alarm probability (66), where $u$ is the normalized threshold (96), is traced by the solid line. The length of the reduced interval $m$ (95) is taken equal to 20. The experimental values of the false-alarm probability are designated by squares, crosses and rhombuses for $\mu_1 = 50$, 100 and 200. Finally, in Fig. 6 the theoretical dependences of the missing probability calculated using Eqs. (68), (95), (96) are plotted for $m = 20$. Curve 1 corresponds to $\mu_1 = 50$, curve 2 to 100, curve 3 to 200. The threshold $c$ was defined from Eqs. (66), (96) by the Neyman-Pearson criterion according to the specified level of false-alarm probability $\alpha = 0.01$. Experimental values of the missing probability are designated by squares, crosses and rhombuses for $\mu_1 = 50$, 100, 200, respectively.
As follows from Figs. 4-6, the theoretical dependences for the probabilities $\alpha$ (66), (96) and $\beta$ (68), (95), (96) and for the variances $V_0(l_m|l_0)$ (45), (93), (95), $V(l_m|l_0)$ (94) approximate the experimental data well if, at least, $\mu_1 \ge 50$, $q \ge 0.1$ and $z \ge 1...1.5$. With decreasing $q$, when the SNR $z \le 3...4$, the probability of anomalous errors $P_a = 1 - P_0$ (61), (95) increases considerably and tends towards 1. This leads to an abrupt (in comparison with the case of a reliable estimate) increase of the variance of the MLE $l_m$. With increasing $q$, when $z > 3...4$, the values of the variances $V_0(l_m|l_0)$ and $V(l_m|l_0)$ almost coincide, and the estimate of the appearance time of the pulse (87) becomes reliable with probability close to 1.
The divergence of the theoretical and experimental dependences for $V_0(l_m|l_0)$ at $z \le 1...1.5$ is caused by the fact that the finite length of the prior interval $[L_1, L_2]$ of possible values of the parameter $l_0$ was not taken into account in deriving expression (45). Therefore, when the conditional variance of the reliable MLE $l_m$ becomes commensurable with or greater than the value $m^2/12$, the accuracy of formula (45) worsens essentially. In order to improve the approximation of the mean square error of the appearance time estimate under small SNR, a transformation similar to Eq. (86) can be used. A deviation of the theoretical dependences (45), (93), (95) (and, therefore, (94)) from the corresponding experimental values is also observed in the case of great SNR, when $q \ge 2...3$. This is because the formulas (89), (92), (93) for the functional $M(\lambda)$ and its characteristics were obtained under the assumption that quantities of the order of the correlation time of the random substructure $\xi(t)$ of the pulse (87) are negligible. Hence, when the variance of the MLE $l_m$ decreases to values of the order of $\mu_1^{-2}$, the error of formulas (45), (94), (95) becomes considerable.
5 Conclusion
To determine the operating efficiency of optimal (maximum likelihood) receiving devices for signals with unknown discontinuous parameters, a method based on the approximation of the increments of the decision statistics by a Markov random process (the local Markov approximation method) can be used. With the help of this approach, closed analytical expressions can be found for the characteristics of detectors and measurers of discontinuous quasi-deterministic and Gaussian random signals, which describe the corresponding experimental data well over a wide range of output signal-to-noise ratios. The obtained results make it possible to assess theoretically the appropriateness of the practical application of one or another processing algorithm for discontinuous signals in each specific case.
Acknowledgements. The reported study was supported by Russian Foundation
for Basic Research (research projects No. 13-08-00735a, 13-08-97538) and
Russian Science Foundation (research project No. 14-29-00208).
References
[1] H.L. van Trees, Detection, Estimation, and Modulation Theory. Part I, Wiley,
New York, 1971.
[2] A.P. Trifonov, Yu.S. Shinakov, Joint Discrimination of Signals and
Estimation of Their Parameters Against Background (in Russian), Radio i
Svyaz', Moscow, 1986.
[3] E.I. Kulikov, A.P. Trifonov, Estimation of Signal Parameters Against
Hindrances (in Russian), Sovetskoe Radio, Moscow, 1978.
[4] Applied Theory of Random Processes and Fields (in Russian) / Edited by
K.K. Vasilyev, V.A. Omelychenko, Ulyanovsk State Technical University,
Ulyanovsk, 1995.
[5] A.P. Trifonov, E.P. Nechaev, V.I. Parfenov, Detection of Stochastic Signals
with Unknown Parameters (in Russian), Voronezh State University,
Voronezh, 1991.
[6] T. Kailath, Some integral equations with "nonrational" kernels, IEEE
Transactions on Information Theory, 12 (1966), 442-447.
[7] E.B. Dynkin, Theory of Markov Processes, Dover Publications Inc., New
York, 2006.
[8] V.I. Tikhonov, M.A. Mironov, Markov Processes (in Russian), Sovetskoe
Radio, Moscow, 1977.
[9] M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions with
Formulas, Graphs and Mathematical Tables. National Bureau of Standards.
Applied Mathematics Series 55, USA, 1964.
[10] Signal Detection Theory (in Russian) / Edited by P.A. Bakut, Radio i Svyaz',
Moscow, 1984.
[11] V.D. Dobykin, A.I. Kupriyanov, V.G. Ponomarev, L.N. Shustov, Electronic
Warfare. Digital Storing and Reproduction of Radio Signals and
Electromagnetic Waves (in Russian), Vuzovskaya Kniga, Moscow, 2009.
[12] O.V. Chernoyarov, The efficiency of reception of a random pulse signal with
unknown parameters under the conditions of detuned duration.
Telecommunications and Radio Engineering, 1 (2013), 1-23.
[13] V.V. Bykov, Numerical Modeling in Statistical Radio Engineering (in
Russian), Sovetskoe Radio, Moscow, 1971.
[14] L. Devroye, Non-uniform Random Variate Generation. Springer-Verlag,
1986.
[15] Information and Communication Systems and Technologies: Problems and
Perspectives (in Russian) / Edited by A.V. Babkin, Polytechnic University
Publisher, St.-Petersburg, 2007.
Received: June 11, 2014