Filomat 28:4 (2014), 859–870
DOI 10.2298/FIL1404859S
Published by Faculty of Sciences and Mathematics, University of Niš, Serbia
Available at: http://www.pmf.ni.ac.rs/filomat

Exponential Inequality for ρ̃-Mixing Sequences and its Applications

Aiting Shen, Huayan Zhu, Ying Zhang
School of Mathematical Sciences, Anhui University, Hefei 230601, P.R. China
Abstract. An exponential inequality and a complete convergence result for ρ̃-mixing sequences are given. By using the exponential inequality, we study the asymptotic approximation of inverse moments for ρ̃-mixing sequences, which generalizes the corresponding result for independent sequences.
1. Introduction
Let {Zn , n ≥ 1} be a sequence of independent nonnegative random variables with finite second moments.
Denote
\[
X_n = \frac{1}{B_n}\sum_{i=1}^{n} Z_i \qquad \text{and} \qquad B_n^2 = \sum_{i=1}^{n}\operatorname{Var} Z_i. \tag{1.1}
\]
We will show that under suitable conditions the following equivalence relation holds, namely,
\[
E(a + X_n)^{-r} \sim (a + EX_n)^{-r}, \quad n \to \infty, \tag{1.2}
\]
where a > 0 and r > 0 are arbitrary real numbers. Here and below, c_n ∼ d_n means c_n d_n^{-1} → 1 as n → ∞.
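As a quick illustration of what (1.2) asserts, the ratio of its two sides can be estimated by simulation. The following sketch is ours and not part of the original argument; the exponential distribution, the values a = 1, r = 2 and the sample sizes are arbitrary choices.

```python
# Monte Carlo sketch of (1.2) for independent nonnegative summands (illustration only).
import numpy as np

rng = np.random.default_rng(0)
a, r, reps = 1.0, 2.0, 20_000

for n in [16, 100, 400]:
    # Z_1, ..., Z_n i.i.d. Exp(1): Var Z_i = 1, so B_n^2 = n and EX_n = n / sqrt(n) = sqrt(n).
    Z = rng.exponential(1.0, size=(reps, n))
    Xn = Z.sum(axis=1) / np.sqrt(n)          # X_n from (1.1)
    lhs = np.mean((a + Xn) ** (-r))          # estimate of E(a + X_n)^{-r}
    rhs = (a + np.sqrt(n)) ** (-r)           # (a + EX_n)^{-r}
    print(f"n={n:4d}  ratio of E(a+X_n)^-r to (a+EX_n)^-r ≈ {lhs / rhs:.4f}")
```

The printed ratios drift toward 1 as n grows, in line with (1.2).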
Inverse moments arise in many practical applications. For example, they may be applied in Stein estimation and post-stratification (see Wooff [1] and Pittenger [2]) and in evaluating risks of estimators and powers of tests (see Marciniak and Wesolowski [3] and Fujioka [4]). In addition, they also appear in reliability theory (see Gupta and Akman [5]), life testing (see Mendenhall and Lehman [6]), insurance and financial mathematics (see Ramsay [7]), complex systems (see Jurlewicz and Weron [8]), and so on.
Under a certain asymptotic-normality condition on X_n, relation (1.2) was established in Theorem 2.1 of Garcia and Palacios [9]. Unfortunately, that theorem is not true under the suggested assumptions, as pointed out by Kaluszka and Okolewski [10]. The latter authors established (1.2) by modifying the assumptions as follows:
(i) r < 3 (r < 4, in the i.i.d. case);
2010 Mathematics Subject Classification. 60E15; 62E20; 62G20
Keywords. ρ̃-mixing sequence; inverse moment; asymptotic approximation.
Received: 16 April 2013; Revised: 31 January 2014; Accepted: 25 June 2014
Communicated by Svetlana Jankovic
Supported by the National Natural Science Foundation of China (11201001, 11171001, 11126176), the Natural Science Foundation
of Anhui Province (1208085QA03, 1308085QA03, 1408085QA02), the Youth Science Research Fund of Anhui University, the Students
Innovative Training Project of Anhui University (201410357118) and the Students Science Research Training Program of Anhui
University (kyxl2013003).
Email addresses: [email protected] (Aiting Shen), [email protected] (Huayan Zhu), [email protected] (Ying
Zhang)
(ii) EX_n → ∞, EZ_n^3 < ∞;
(iii) (L_c condition) ∑_{i=1}^{n} E|Z_i − EZ_i|^c / B_n^c → 0 (c = 3).
Hu et al. [11] considered weaker conditions: EZ_n^{2+δ} < ∞, where Z_n satisfies the L_{2+δ} condition and 0 < δ ≤ 1.
For more details about the inverse moment, one can refer to Wu et al. [12], Wang et al. [13], Sung [14], Shen [15], Shen et al. [16], and so forth. The main purpose of this paper is to extend the asymptotic approximation of inverse moments for independent sequences to the case of ρ̃-mixing sequences. It is easily seen that the key to the proof of the asymptotic approximation of inverse moments is an exponential inequality. So in Section 2, we first give an exponential inequality and a complete convergence result for ρ̃-mixing sequences. In Section 3, we study the asymptotic approximation of inverse moments for ρ̃-mixing sequences by using the exponential inequality.

Firstly, we give the definition of a ρ̃-mixing sequence and some useful lemmas.
Let {X_n, n ≥ 1} be a sequence of random variables defined on a fixed probability space (Ω, F, P). Let n and m be positive integers. Write F_n^m = σ(X_i, n ≤ i ≤ m) and F_S = σ(X_i, i ∈ S ⊂ ℕ). Given σ-algebras B, R in F, let
\[
\rho(\mathcal{B}, \mathcal{R}) = \sup_{X \in L_2(\mathcal{B}),\, Y \in L_2(\mathcal{R})} \frac{|EXY - EX\,EY|}{\sqrt{\operatorname{Var}(X)\cdot\operatorname{Var}(Y)}}. \tag{1.3}
\]
Define the ρ-mixing and ρ̃-mixing coefficients by
\[
\rho(n) = \sup_{k \geq 1} \rho(\mathcal{F}_1^k, \mathcal{F}_{k+n}^{\infty}), \quad n \geq 0, \tag{1.4}
\]
\[
\tilde{\rho}(n) = \sup\left\{\rho(\mathcal{F}_S, \mathcal{F}_T) : \text{finite subsets } S, T \subset \mathbb{N} \text{ such that } \operatorname{dist}(S, T) \geq n\right\}, \quad n \geq 0. \tag{1.5}
\]
Definition 1.1. A sequence {X_n, n ≥ 1} of random variables is said to be ρ-mixing if ρ(n) ↓ 0 as n → ∞. A sequence {X_n, n ≥ 1} of random variables is said to be ρ̃-mixing if there exists k ∈ ℕ such that ρ̃(k) < 1.
Remark 1.1. We point out that ρ̃-mixing is similar to ρ-mixing, but the two are quite different. In fact, the ρ̃-mixing coefficient (1.5) resembles the ρ-mixing (maximal correlation) coefficient (1.4), which is obtained from (1.5) by restricting the index sets to subsets S of [1, k] and subsets T of [n + k, ∞), n, k ∈ ℕ. In addition, in the definition of ρ̃-mixing, ρ̃(k) < 1 for some k ∈ ℕ is needed, while in the definition of ρ-mixing, ρ(n) ↓ 0 as n → ∞ is needed. Bryc and Smolenski [17] pointed out that even in the stationary case, it may happen that ρ̃(1) < 1 while lim_{k→∞} ρ̃(k) ≠ 0. In this case, ρ̃-mixing is more general than ρ-mixing. For more details about the difference between ρ-mixing and ρ̃-mixing, one can refer to Bradley [18], Utev and Peligrad [19], and so on.
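A concrete example to keep in mind (ours, not from the paper): an m-dependent sequence, such as a moving average of order m of i.i.d. innovations, satisfies ρ̃(n) = 0 for every n > m and is therefore ρ̃-mixing; m = 0 recovers the independent case. The window length, the weights and the uniform innovations in the sketch below are arbitrary.

```python
# Generate an m-dependent (hence rho~-mixing) sequence as a moving average (illustration only).
import numpy as np

def m_dependent_sequence(n, m, rng):
    """X_k = (eps_k + ... + eps_{k+m}) / (m + 1) with i.i.d. Uniform(0, 1) innovations;
    groups of X's whose indices are separated by more than m share no innovations,
    hence they are independent and rho~(n) = 0 for n > m."""
    eps = rng.uniform(0.0, 1.0, size=n + m)
    window = np.ones(m + 1) / (m + 1)
    return np.convolve(eps, window, mode="valid")   # length n

rng = np.random.default_rng(1)
X = m_dependent_sequence(n=1000, m=3, rng=rng)
print(X[:5], X.mean())                              # sample mean is close to 1/2
```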
The concept of ρ̃-mixing was introduced by Bradley [20]. It is easily seen that the class of ρ̃-mixing sequences contains independent sequences as a special case. Hence, studying the limiting behavior of ρ̃-mixing sequences is of great interest. For more details about ρ̃-mixing random variables, one can refer to Utev and Peligrad [19], Zhu [21], Wu and Jiang [22-24], Wang et al. [25], Zhou et al. [26], Wu [27], and so forth.
The following lemmas are useful.
The first one is a moment inequality for ρ̃-mixing random variables with exponent 2.

Lemma 1.1. Let {X_n, n ≥ 1} be a sequence of ρ̃-mixing random variables with EX_n = 0 and EX_n^2 < ∞ for each n ≥ 1. Then for any a ≥ 0 and n ≥ 1,
\[
E\left(\sum_{i=a+1}^{a+n} X_i\right)^2 \leq \left(1 + 2\sum_{k=1}^{n}\tilde{\rho}(k)\right)\sum_{i=a+1}^{a+n} EX_i^2. \tag{1.6}
\]
Proof. It follows from the definition of a ρ̃-mixing sequence that
\[
\begin{aligned}
E\left(\sum_{i=a+1}^{a+n} X_i\right)^2 &= \sum_{i=a+1}^{a+n} EX_i^2 + 2\sum_{a+1 \leq i < j \leq a+n} E(X_i X_j)\\
&\leq \sum_{i=a+1}^{a+n} EX_i^2 + 2\sum_{a+1 \leq i < j \leq a+n}\tilde{\rho}(j-i)\left(EX_i^2\right)^{1/2}\left(EX_j^2\right)^{1/2}\\
&\leq \sum_{i=a+1}^{a+n} EX_i^2 + \sum_{k=1}^{n-1}\sum_{i=a+1}^{a+n-k}\tilde{\rho}(k)\left(EX_i^2 + EX_{k+i}^2\right)\\
&\leq \sum_{i=a+1}^{a+n} EX_i^2 + 2\sum_{k=1}^{n}\tilde{\rho}(k)\sum_{i=a+1}^{a+n} EX_i^2\\
&= \left(1 + 2\sum_{k=1}^{n}\tilde{\rho}(k)\right)\sum_{i=a+1}^{a+n} EX_i^2.
\end{aligned}
\]
This completes the proof of the lemma. □
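For a quick numerical sanity check of (1.6) (an illustration of ours with arbitrary choices), consider the centred 1-dependent sequence X_k = (ε_k + ε_{k+1} − 1)/2 with ε_j i.i.d. Uniform(0, 1). Here ρ̃(k) = 0 for k ≥ 2, and since every ρ-coefficient is at most 1 we may replace the unknown ρ̃(1) by 1, which only enlarges the right-hand side of (1.6).

```python
# Check E(sum X_i)^2 against (1.6) with rho~(1) replaced by its trivial upper bound 1.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 20, 200_000
eps = rng.uniform(0.0, 1.0, size=(reps, n + 1))
X = (eps[:, :-1] + eps[:, 1:] - 1.0) / 2.0      # EX_k = 0, EX_k^2 = 1/24, 1-dependent
lhs = np.mean(X.sum(axis=1) ** 2)               # Monte Carlo estimate of E(sum_{i=1}^n X_i)^2
rhs = (1 + 2 * 1.0) * n / 24.0                  # (1 + 2*rho~(1)) * sum_i EX_i^2 with rho~(1) <= 1
print(f"E(S_n^2) ≈ {lhs:.3f}  <=  {rhs:.3f}")
```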
The next one is the Rosenthal type maximal inequality for ρ̃-mixing random variables, which was obtained by Utev and Peligrad [19].

Lemma 1.2. Let {X_n, n ≥ 1} be a sequence of ρ̃-mixing random variables with EX_i = 0 and E|X_i|^p < ∞ for some p ≥ 2 and every i ≥ 1. Then there exists a positive constant C depending only on p such that
\[
E\left(\max_{1\leq j\leq n}\left|\sum_{i=1}^{j} X_i\right|^p\right) \leq C\left[\sum_{i=1}^{n} E|X_i|^p + \left(\sum_{i=1}^{n} EX_i^2\right)^{p/2}\right].
\]
Throughout the paper, let {X_n, n ≥ 1} and {Z_n, n ≥ 1} be sequences of random variables defined on a fixed probability space (Ω, F, P). For a random variable X, denote ‖X‖_r = (E|X|^r)^{1/r}, r > 0. C denotes a positive constant which may differ from place to place.
2. Exponential Inequality and Complete Convergence for ρ̃-Mixing Sequences

In this section, denote S_n = ∑_{i=1}^{n} X_i and Δ_n^2 = ∑_{i=1}^{n} EX_i^2 for each n ≥ 1.
Theorem 2.1. Let {X_n, n ≥ 1} be a sequence of ρ̃-mixing random variables with EX_n = 0 and |X_n| ≤ d < ∞ a.s. for each n ≥ 1. Then for any ε > 0 and n ≥ 1,
\[
P(S_n > \varepsilon) \leq C_1\exp\left\{-\frac{\varepsilon^2}{C_2(4\Delta_n^2 + nd\varepsilon)}\right\}, \tag{2.1}
\]
\[
P(S_n < -\varepsilon) \leq C_1\exp\left\{-\frac{\varepsilon^2}{C_2(4\Delta_n^2 + nd\varepsilon)}\right\}, \tag{2.2}
\]
\[
P(|S_n| > \varepsilon) \leq 2C_1\exp\left\{-\frac{\varepsilon^2}{C_2(4\Delta_n^2 + nd\varepsilon)}\right\}, \tag{2.3}
\]
where C_1 = exp{1 + ρ̃(m+1) + (n−4m)/(4m)}, C_2 = 8(1 + 2∑_{k=1}^{m} ρ̃(k)), and 1 ≤ m ≤ n is some positive integer.
Proof. For fixed n ≥ 1, by 1 ≤ m ≤ n we can see that there exists a nonnegative integer l ≤ n such that
\[
2lm \leq n < 2(l+1)m. \tag{2.4}
\]
For random variables X_1, X_2, ..., X_n, we construct the following random variable sequences:
\[
\begin{aligned}
Y_1 &= X_1 + X_2 + \cdots + X_m, & Z_1 &= X_{m+1} + X_{m+2} + \cdots + X_{2m},\\
Y_2 &= X_{2m+1} + X_{2m+2} + \cdots + X_{3m}, & Z_2 &= X_{3m+1} + X_{3m+2} + \cdots + X_{4m},\\
&\quad\vdots & &\quad\vdots\\
Y_l &= X_{2(l-1)m+1} + \cdots + X_{(2l-1)m}, & Z_l &= X_{(2l-1)m+1} + \cdots + X_{2lm},
\end{aligned}
\]
\[
Y_{l+1} = \begin{cases} 0, & \text{if } 2lm \geq n,\\ X_{2lm+1} + \cdots + X_n, & \text{if } 2lm < n. \end{cases}
\]
If 2lm > n, we assume that X_{n+1}, X_{n+2}, ..., X_{2lm} above are all zero. Obviously,
\[
S_n = \sum_{i=1}^{n} X_i = \sum_{i=1}^{l+1} Y_i + \sum_{i=1}^{l} Z_i. \tag{2.5}
\]
For any 0 < t ≤ 1/(4md), it follows from (2.4) that
\[
|tY_{l+1}| \leq t(n - 2lm)d \leq 2tmd < 1 \quad \text{a.s.}
\]
By (2.5), Markov's inequality and Hölder's inequality, we have
\[
\begin{aligned}
P(S_n > \varepsilon) &= P(tS_n > t\varepsilon) \leq \exp\{-t\varepsilon\}E\exp\{tS_n\}\\
&= \exp\{-t\varepsilon\}E\left[\exp\{tY_{l+1}\}\exp\left\{t\sum_{i=1}^{l} Y_i\right\}\exp\left\{t\sum_{i=1}^{l} Z_i\right\}\right]\\
&\leq \exp\{1-t\varepsilon\}E\left[\exp\left\{t\sum_{i=1}^{l} Y_i\right\}\exp\left\{t\sum_{i=1}^{l} Z_i\right\}\right]\\
&\leq \exp\{1-t\varepsilon\}\left[E\exp\left\{2t\sum_{i=1}^{l} Y_i\right\}\right]^{1/2}\left[E\exp\left\{2t\sum_{i=1}^{l} Z_i\right\}\right]^{1/2}.
\end{aligned} \tag{2.6}
\]
Denote t_{i1} = 2(i−1)m + 1, t_{i2} = (2i−1)m and Δ(i) = ∑_{j=t_{i1}}^{t_{i2}} EX_j^2 for i = 1, 2, ..., l. It follows from Lemma 1.1 that
\[
EY_i^2 \leq \left(1 + 2\sum_{k=1}^{m}\tilde{\rho}(k)\right)\Delta(i). \tag{2.7}
\]
By EY_i = 0, |4tY_i| ≤ 4tmd ≤ 1 a.s. and 1 + x ≤ e^x (x ≥ 0), we can see that
\[
\begin{aligned}
E\left(e^{2tY_i}\right)^2 = Ee^{4tY_i} &= 1 + \sum_{j=2}^{\infty}\frac{E(4tY_i)^j}{j!}\\
&\leq 1 + \frac{E(4tY_i)^2}{2!}\left(1 + \frac{1}{3} + \frac{1}{4\times 3} + \frac{1}{5\times 4\times 3} + \frac{1}{6\times 5\times 4\times 3} + \cdots\right)\\
&\leq 1 + \frac{E(4tY_i)^2}{2!}\left(1 + \frac{1}{3} + \frac{1}{3^2} + \frac{1}{3^3} + \frac{1}{3^4} + \cdots\right)\\
&= 1 + \frac{E(4tY_i)^2}{2!}\cdot\frac{1}{1-\frac{1}{3}} \leq 1 + 16t^2 EY_i^2\\
&\leq \exp\left\{16t^2 EY_i^2\right\} \leq \exp\left\{16t^2\left(1 + 2\sum_{k=1}^{m}\tilde{\rho}(k)\right)\Delta(i)\right\},
\end{aligned}
\]
which implies that
\[
\left\|\exp\{2tY_i\}\right\|_2 = \left[E\left(e^{2tY_i}\right)^2\right]^{1/2} \leq \exp\left\{C_2 t^2\Delta(i)\right\}, \quad i = 1, 2, \cdots, l. \tag{2.8}
\]
Together with the definition of a ρ̃-mixing sequence and 1 + x ≤ e^x (x ≥ 0), it follows that
\[
\begin{aligned}
E\exp\left\{2t\sum_{i=1}^{l} Y_i\right\} &= E\left[\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\exp\{2tY_l\}\right]\\
&\leq E\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\,E\exp\{2tY_l\} + \tilde{\rho}(m+1)\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2\left\|\exp\{2tY_l\}\right\|_2\\
&\leq \left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2\left\|\exp\{2tY_l\}\right\|_2 + \tilde{\rho}(m+1)\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2\left\|\exp\{2tY_l\}\right\|_2\\
&= \left(1 + \tilde{\rho}(m+1)\right)\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2\left\|\exp\{2tY_l\}\right\|_2\\
&\leq \left(1 + \tilde{\rho}(m+1)\right)\exp\left\{C_2 t^2\Delta(l)\right\}\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2\\
&\leq \exp\left\{\tilde{\rho}(m+1) + C_2 t^2\Delta(l)\right\}\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2.
\end{aligned}
\]
By the generalized C-S inequality (Kuang [28, p.6]), we can get that
\[
\begin{aligned}
\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2 &\leq \prod_{i=1}^{l-1}\left\|\exp\{2tY_i\}\right\|_{2(l-1)} = \prod_{i=1}^{l-1}\left[E\exp\{4(l-1)tY_i\}\right]^{\frac{1}{2(l-1)}}\\
&= \prod_{i=1}^{l-1}\left[E\exp\{4tY_i\}\exp\{4tY_i(l-2)\}\right]^{\frac{1}{2(l-1)}}\\
&\leq \prod_{i=1}^{l-1}\left[\exp\{l-2\}E\left(e^{2tY_i}\right)^2\right]^{\frac{1}{2(l-1)}}\\
&\leq \left[\prod_{i=1}^{l-1}\exp\{l-2\}\exp\left\{16t^2\left(1 + 2\sum_{k=1}^{m}\tilde{\rho}(k)\right)\Delta(i)\right\}\right]^{\frac{1}{2(l-1)}}\\
&= \exp\left\{\frac{l-2}{2}\right\}\exp\left\{\frac{8t^2}{l-1}\left(1 + 2\sum_{k=1}^{m}\tilde{\rho}(k)\right)\sum_{i=1}^{l-1}\Delta(i)\right\}\\
&= \exp\left\{\frac{l-2}{2}\right\}\exp\left\{\frac{1}{l-1}C_2 t^2\sum_{i=1}^{l-1}\Delta(i)\right\}.
\end{aligned}
\]
Therefore,
\[
\begin{aligned}
E\exp\left\{2t\sum_{i=1}^{l} Y_i\right\} &\leq \exp\left\{\tilde{\rho}(m+1) + C_2 t^2\Delta(l)\right\}\left\|\exp\left\{2t\sum_{i=1}^{l-1} Y_i\right\}\right\|_2\\
&\leq \exp\left\{\tilde{\rho}(m+1) + \frac{l-2}{2} + C_2 t^2\Delta_n^2\right\}\\
&\leq \exp\left\{\tilde{\rho}(m+1) + \frac{n-4m}{4m} + C_2 t^2\Delta_n^2\right\}.
\end{aligned} \tag{2.9}
\]
Similarly, we also have
\[
E\exp\left\{2t\sum_{i=1}^{l} Z_i\right\} \leq \exp\left\{\tilde{\rho}(m+1) + \frac{n-4m}{4m} + C_2 t^2\Delta_n^2\right\}. \tag{2.10}
\]
It follows from (2.6), (2.9) and (2.10) that
\[
P(S_n > \varepsilon) \leq C_1\exp\left\{-t\varepsilon + C_2 t^2\Delta_n^2\right\}. \tag{2.11}
\]
Since {−X_n, n ≥ 1} is also a sequence of ρ̃-mixing random variables with E(−X_n) = 0 and |−X_n| ≤ d < ∞ a.s. for each n ≥ 1, it follows from (2.11) that
\[
P(S_n < -\varepsilon) = P(-S_n > \varepsilon) \leq C_1\exp\left\{-t\varepsilon + C_2 t^2\Delta_n^2\right\}. \tag{2.12}
\]
Combining (2.11) and (2.12) yields
\[
P(|S_n| > \varepsilon) = P(S_n > \varepsilon) + P(S_n < -\varepsilon) \leq 2C_1\exp\left\{-t\varepsilon + C_2 t^2\Delta_n^2\right\}. \tag{2.13}
\]
Take t = 2ε/(C_2(4Δ_n^2 + ndε)). It is easy to check that
\[
C_2 = 8\left(1 + 2\sum_{k=1}^{m}\tilde{\rho}(k)\right) \geq 8, \qquad tmd \leq \frac{2\varepsilon}{C_2(4\Delta_n^2 + nd\varepsilon)}\cdot nd \leq \frac{1}{4}.
\]
Therefore, (2.11) implies that
\[
\begin{aligned}
P(S_n > \varepsilon) &\leq C_1\exp\left\{-\frac{2\varepsilon^2}{C_2(4\Delta_n^2 + nd\varepsilon)} + \frac{2C_2\Delta_n^2\varepsilon}{C_2(4\Delta_n^2 + nd\varepsilon)}\cdot\frac{2\varepsilon}{C_2(4\Delta_n^2 + nd\varepsilon)}\right\}\\
&\leq C_1\exp\left\{-\frac{2\varepsilon^2}{C_2(4\Delta_n^2 + nd\varepsilon)}\left[1 - \frac{2\Delta_n^2}{4\Delta_n^2 + nd\varepsilon}\right]\right\}\\
&\leq C_1\exp\left\{-\frac{\varepsilon^2}{C_2(4\Delta_n^2 + nd\varepsilon)}\right\},
\end{aligned}
\]
which implies (2.1). Similarly, we can obtain inequalities (2.2) and (2.3) from (2.12) and (2.13), respectively. This completes the proof of the theorem. □
Theorem 2.2. Let {X_n, n ≥ 1} be a sequence of ρ̃-mixing random variables with EX_n = 0 and |X_n| ≤ d < ∞ a.s. for each n ≥ 1. Assume that ∑_{n=1}^{∞} ρ̃(n) < ∞ and ∑_{n=1}^{∞} EX_n^2 < ∞. Then for any r > 1,
\[
n^{-r} S_n \to 0 \ \text{completely}, \tag{2.14}
\]
and consequently n^{-r} S_n → 0 a.s.
Proof. For any n ≥ 1, we can choose a positive integer m such that n − 4m ≤ 0, which implies that C_1 < ∞. Thus, by Theorem 2.1, for any ε > 0 we obtain
\[
\begin{aligned}
\sum_{n=1}^{\infty} P(|S_n| > n^r\varepsilon) &\leq 2C_1\sum_{n=1}^{\infty}\exp\left\{-\frac{n^{2r}\varepsilon^2}{C_2\left(4\sum_{i=1}^{n} EX_i^2 + ndn^r\varepsilon\right)}\right\}\\
&\leq 2C_1\sum_{n=1}^{\infty}\exp\left\{-\frac{n^{2r}\varepsilon^2}{C_2\left(4\sum_{i=1}^{\infty} EX_i^2 + n^{1+r}d\varepsilon\right)}\right\}\\
&\leq C + C\sum_{n=1}^{\infty}\exp\left(-Cn^{r-1}\right) < \infty.
\end{aligned}
\]
This completes the proof of the theorem. □
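To see numerically how the bound of Theorem 2.1 produces the complete convergence of Theorem 2.2, the sketch below (ours, with arbitrary choices) takes independent X_i = Y_i/i with Y_i uniform on [−1, 1]: then ρ̃(k) = 0, |X_i| ≤ d = 1 and ∑_i EX_i^2 = π²/18 < ∞, and with m = ⌈n/4⌉ the constants reduce to C_1 ≤ e and C_2 = 8. The terms of the series bounding ∑_n P(|S_n| > ε₀ n^r) decay roughly exponentially in n for r = 2, so their partial sums stabilize, which is exactly (2.14); the early terms exceed one only because the constants are crude.

```python
# Partial sums of the Theorem 2.1 bound at eps = eps0 * n^r (illustration only).
import numpy as np

r, eps0, d = 2.0, 0.5, 1.0
C2 = 8.0
total = 0.0
for n in range(1, 201):
    m = -(-n // 4)                              # ceil(n/4); any 1 <= m <= n is allowed, this makes n - 4m <= 0
    C1 = np.exp(1.0 + (n - 4 * m) / (4 * m))    # rho~ vanishes for independent X_i
    Delta2 = sum(1.0 / (3.0 * i * i) for i in range(1, n + 1))   # sum of EX_i^2
    eps = eps0 * n**r
    total += 2 * C1 * np.exp(-eps**2 / (C2 * (4 * Delta2 + n * d * eps)))
    if n in (10, 50, 100, 200):
        print(f"n={n:4d}  partial sum of the bound on P(|S_n| > eps0*n^r) = {total:.4f}")
```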
3. Asymptotic Approximation of Inverse Moments for Nonnegative ρ̃-Mixing Sequences

In this section, we study the asymptotic approximation of inverse moments for nonnegative ρ̃-mixing random variables with non-identical distributions. The first result is based on the exponential inequality that we established in Section 2.
Theorem 3.1. Let {Z_n, n ≥ 1} be a nonnegative ρ̃-mixing sequence with ∑_{n=1}^{∞} ρ̃(n) < ∞. Suppose that
(i) EZ_n^2 < ∞ for all n ≥ 1;
(ii) EX_n → ∞, where X_n is defined by (1.1);
(iii) for some η > 0,
\[
R_n(\eta) := B_n^{-2}\sum_{i=1}^{n} E\{Z_i^2 I(Z_i > \eta B_n)\} \to 0, \quad n \to \infty;
\]
(iv) for some t ∈ (0, 1) and any positive constants a, r, C,
\[
\lim_{n\to\infty}(a + EX_n)^r\exp\left\{-C\cdot\frac{(EX_n)^t}{n}\right\} = 0.
\]
Then (1.2) holds for any a > 0 and r > 0.
Proof. Firstly, let us decompose X_n as
\[
X_n = U_n + V_n, \tag{3.1}
\]
where
\[
U_n = B_n^{-1}\sum_{i=1}^{n} Z_i I(Z_i \leq \eta B_n), \qquad V_n = B_n^{-1}\sum_{i=1}^{n} Z_i I(Z_i > \eta B_n), \tag{3.2}
\]
and denote
\[
\tilde{\mu}_n = EU_n, \qquad \tilde{B}_n^2 = \sum_{i=1}^{n}\operatorname{Var}\{Z_i I(Z_i \leq \eta B_n)\}. \tag{3.3}
\]
From (3.2) and condition (iii), it can be seen that
\[
EV_n \leq \frac{1}{\eta B_n^2}\sum_{i=1}^{n} E\{Z_i^2 I(Z_i > \eta B_n)\} \to 0, \quad \text{as } n \to \infty. \tag{3.4}
\]
Thus, EX_n = EU_n + EV_n ∼ μ̃_n by condition (ii). Therefore, (1.2) will be proved if we show that
\[
E(a + X_n)^{-r} \sim (a + \tilde{\mu}_n)^{-r}. \tag{3.5}
\]
By Jensen's inequality, we have
\[
E(a + X_n)^{-r} \geq (a + EX_n)^{-r}. \tag{3.6}
\]
Therefore,
\[
\liminf_{n\to\infty}(a + \tilde{\mu}_n)^r E(a + X_n)^{-r} \geq \liminf_{n\to\infty}(a + \tilde{\mu}_n)^r(a + EX_n)^{-r} = 1. \tag{3.7}
\]
It is easily seen that
\[
\begin{aligned}
\tilde{B}_n^2 &= \sum_{i=1}^{n}\left\{E[Z_i I(Z_i \leq \eta B_n)]^2 - [EZ_i I(Z_i \leq \eta B_n)]^2\right\}\\
&= \sum_{i=1}^{n}\left\{E[Z_i - Z_i I(Z_i > \eta B_n)]^2 - [EZ_i - EZ_i I(Z_i > \eta B_n)]^2\right\}\\
&= B_n^2 + 2\sum_{i=1}^{n} EZ_i\cdot EZ_i I(Z_i > \eta B_n) - B_n^2 R_n(\eta) - \sum_{i=1}^{n}[EZ_i I(Z_i > \eta B_n)]^2,
\end{aligned}
\]
hence
\[
\left|\tilde{B}_n^2 B_n^{-2} - 1\right| \leq 2B_n^{-2}\sum_{i=1}^{n} EZ_i\cdot EZ_i I(Z_i > \eta B_n) + R_n(\eta) + B_n^{-2}\sum_{i=1}^{n}[EZ_i I(Z_i > \eta B_n)]^2. \tag{3.8}
\]
By Jensen’s inequality and condition (iii), we have
B−2
n
n
X
[EZi I(Zi > ηBn )]2 ≤ B−2
n
n
X
i=1
EZ2i I(Zi > ηBn ) = Rn (η) → 0.
(3.9)
i=1
By condition (iii) again and (3.4),
\[
\begin{aligned}
B_n^{-2}\sum_{i=1}^{n} EZ_i\cdot EZ_i I(Z_i > \eta B_n) &\leq B_n^{-2}\sum_{i=1}^{n} EZ_i I(Z_i \leq \eta B_n)\cdot EZ_i I(Z_i > \eta B_n) + B_n^{-2}\sum_{i=1}^{n}[EZ_i I(Z_i > \eta B_n)]^2\\
&\leq \eta B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n) + B_n^{-2}\sum_{i=1}^{n} EZ_i^2 I(Z_i > \eta B_n)\\
&= \eta EV_n + R_n(\eta) \to 0, \quad \text{as } n \to \infty.
\end{aligned} \tag{3.10}
\]
Therefore, B̃_n^2 ∼ B_n^2 follows from (3.8)-(3.10) immediately, which implies that B̃_n ∼ B_n. For t ∈ (0, 1), where t is defined in condition (iv), denote
\[
E(a + X_n)^{-r} = Q_1 + Q_2, \tag{3.11}
\]
where
\[
Q_1 = E(a + X_n)^{-r} I(U_n < \tilde{\mu}_n - \tilde{\mu}_n^t), \tag{3.12}
\]
\[
Q_2 = E(a + X_n)^{-r} I(U_n \geq \tilde{\mu}_n - \tilde{\mu}_n^t). \tag{3.13}
\]
Since X_n ≥ U_n, it follows that
\[
Q_2 \leq E(a + X_n)^{-r} I(X_n \geq \tilde{\mu}_n - \tilde{\mu}_n^t) \leq (a + \tilde{\mu}_n - \tilde{\mu}_n^t)^{-r}.
\]
Therefore,
\[
\limsup_{n\to\infty}(a + \tilde{\mu}_n)^r Q_2 \leq 1 \tag{3.14}
\]
from the fact that μ̃_n → ∞ as n → ∞. By X_n ≥ 0, we have
\[
Q_1 = E(a + X_n)^{-r} I(U_n < \tilde{\mu}_n - \tilde{\mu}_n^t) \leq a^{-r} EI(U_n < \tilde{\mu}_n - \tilde{\mu}_n^t) = a^{-r} P(U_n < \tilde{\mu}_n - \tilde{\mu}_n^t).
\]
In the following, we estimate the probability P(U_n < μ̃_n − μ̃_n^t). For fixed n ≥ 1, denote
\[
W_i = -Z_i I(Z_i \leq \eta B_n) + EZ_i I(Z_i \leq \eta B_n), \quad 1 \leq i \leq n; \tag{3.15}
\]
then W_1/B_n, W_2/B_n, ..., W_n/B_n are ρ̃-mixing random variables and
\[
P(U_n < \tilde{\mu}_n - \tilde{\mu}_n^t) = P\left(\sum_{i=1}^{n}\frac{W_i}{B_n} > \tilde{\mu}_n^t\right).
\]
Obviously,
\[
C_2 = 8\left(1 + 2\sum_{k=1}^{m}\tilde{\rho}(k)\right) \leq 8\left(1 + 2\sum_{k=1}^{\infty}\tilde{\rho}(k)\right) < \infty.
\]
For any n ≥ 1, we can choose a positive integer m such that n − 4m ≤ 0, which implies that C_1 = exp{1 + ρ̃(m+1) + (n−4m)/(4m)} < ∞.
It is easy to check that
\[
\sum_{i=1}^{n}\frac{EW_i^2}{B_n^2} = \frac{\tilde{B}_n^2}{B_n^2} \to 1, \quad n \to \infty, \qquad \left|\frac{W_i}{B_n}\right| \leq 2\eta, \quad 1 \leq i \leq n.
\]
By Theorem 2.1, we can get
\[
\begin{aligned}
P(U_n < \tilde{\mu}_n - \tilde{\mu}_n^t) &= P\left(\sum_{i=1}^{n}\frac{W_i}{B_n} > \tilde{\mu}_n^t\right)\\
&\leq C_1\exp\left\{-\frac{\tilde{\mu}_n^{2t}}{C_2\left(4\sum_{i=1}^{n} EW_i^2/B_n^2 + n\cdot 2\eta\cdot\tilde{\mu}_n^t\right)}\right\}\\
&\leq C_1\exp\left\{-C\cdot\frac{\tilde{\mu}_n^t}{n}\right\}
\end{aligned}
\]
for all n sufficiently large.
By condition (iv) and EX_n ∼ μ̃_n, we have
\[
\lim_{n\to\infty}(a + \tilde{\mu}_n)^r Q_1 \leq \lim_{n\to\infty} C(a + EX_n)^r\exp\left\{-C\cdot\frac{(EX_n)^t}{n}\right\} = 0. \tag{3.16}
\]
Together with (3.11), (3.14) and (3.16), we obtain
\[
\limsup_{n\to\infty}(a + \tilde{\mu}_n)^r E(a + X_n)^{-r} \leq 1. \tag{3.17}
\]
Combining (3.7) and (3.17), we get (3.5), which implies (1.2). The desired result is obtained. □
Remark 3.1. If {Z_n^2, n ≥ 1} is uniformly integrable, Z_n ≥ 0 and B_n^2 ≥ Cn, then for any η > 0, R_n(η) → 0 as n → ∞. In fact,
\[
R_n(\eta) = B_n^{-2}\sum_{i=1}^{n} E\{Z_i^2 I(Z_i > \eta B_n)\} \leq \frac{n}{B_n^2}\sup_{i\geq 1} EZ_i^2 I(Z_i > \eta B_n) \leq C\sup_{i\geq 1} EZ_i^2 I(Z_i > \eta B_n) \to 0, \quad \text{as } n \to \infty.
\]
Remark 3.2. The result of Theorem 3.1 for nonnegative ρ-mixing random variables with non-identical distributions was obtained by Shen et al. [16, Theorem 3.1]. As stated in Remark 1.1, ρ-mixing and ρ̃-mixing are similar but different, and ρ̃-mixing is more general than ρ-mixing. Hence, Theorem 3.1 of this paper extends the corresponding result of Shen et al. [16] for ρ-mixing random variables to the case of ρ̃-mixing random variables.
Remark 3.3. We point out that condition (iv) in Theorem 3.1 has no specific meaning; it is just a technical condition. If the exponential inequality used as a tool in Theorem 3.1 is replaced by a Rosenthal type maximal inequality, we can show that (1.2) holds under very mild conditions and condition (iv) in Theorem 3.1 can be removed. The result is as follows.
Theorem 3.2. Let 0 < s < 1 and let {Z_n, n ≥ 1} be a sequence of nonnegative ρ̃-mixing random variables. Let {M_n, n ≥ 1} and {a_n, n ≥ 1} be sequences of positive constants such that a_n ≥ C for all n sufficiently large, where C is a positive constant. Denote X_n = M_n^{-1}∑_{k=1}^{n} Z_k and μ_n = EX_n. Suppose that the following conditions hold:
(i) EZ_n < ∞ for all n ≥ 1;
(ii) μ_n → ∞ as n → ∞;
(iii) for some η > 0,
\[
\frac{\sum_{k=1}^{n} EZ_k I(Z_k > \eta M_n\mu_n^s/a_n)}{\sum_{k=1}^{n} EZ_k} \to 0 \quad \text{as } n \to \infty.
\]
Then (1.2) holds for all real numbers a > 0 and r > 0.
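Before the proof, here is a small simulation (ours, with arbitrary parameter choices) of the conclusion for a dependent, nonnegative example covered by Theorem 3.2: Z_k = (ε_k + ε_{k+1})/2 with ε_j i.i.d. Uniform(0, 1) is 1-dependent and hence ρ̃-mixing, and with M_n = √n, a_n ≡ 1 and any 0 < s < 1 we have μ_n = √n/2 → ∞, while boundedness of the Z_k makes the ratio in condition (iii) vanish for all large n.

```python
# Monte Carlo sketch of (1.2) for a 1-dependent nonnegative sequence (illustration only).
import numpy as np

rng = np.random.default_rng(4)
a, r, reps = 1.0, 1.0, 20_000
for n in [25, 100, 400]:
    eps = rng.uniform(0.0, 1.0, size=(reps, n + 1))
    Z = (eps[:, :-1] + eps[:, 1:]) / 2.0            # Z_k = (eps_k + eps_{k+1})/2, bounded by 1
    Xn = Z.sum(axis=1) / np.sqrt(n)                 # X_n = M_n^{-1} sum Z_k with M_n = sqrt(n)
    mu_n = np.sqrt(n) / 2.0                         # EX_n = n * EZ_1 / sqrt(n), EZ_1 = 1/2
    lhs = np.mean((a + Xn) ** (-r))                 # estimate of E(a + X_n)^{-r}
    rhs = (a + mu_n) ** (-r)                        # (a + EX_n)^{-r}
    print(f"n={n:4d}  E(a+X_n)^-r ≈ {lhs:.5f}   (a+EX_n)^-r = {rhs:.5f}   ratio = {lhs/rhs:.4f}")
```

The printed ratios approach 1, matching (1.2) with EX_n = μ_n.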
Proof. Noting that f(x) = (a + x)^{-r} is a convex function of x on [0, ∞), by Jensen's inequality we have
\[
E(a + X_n)^{-r} \geq (a + EX_n)^{-r}, \tag{3.18}
\]
which implies that
\[
\liminf_{n\to\infty}(a + EX_n)^r E(a + X_n)^{-r} \geq 1. \tag{3.19}
\]
To prove (1.2), it is enough to show that
\[
\limsup_{n\to\infty}(a + EX_n)^r E(a + X_n)^{-r} \leq 1. \tag{3.20}
\]
In order to prove (3.20), we only need to show that for all δ ∈ (0, 1),
\[
\limsup_{n\to\infty}(a + EX_n)^r E(a + X_n)^{-r} \leq (1-\delta)^{-r}. \tag{3.21}
\]
By condition (iii), for all δ ∈ (0, 1) there exists a positive integer n(δ) such that
\[
\sum_{k=1}^{n} EZ_k I(Z_k > \eta M_n\mu_n^s/a_n) \leq \frac{\delta}{2}\sum_{k=1}^{n} EZ_k, \quad n \geq n(\delta). \tag{3.22}
\]
Let
\[
U_n = M_n^{-1}\sum_{k=1}^{n} Z_k I(Z_k \leq \eta M_n\mu_n^s/a_n) =: M_n^{-1}\sum_{k=1}^{n} Z'_{nk}
\]
and
\[
E(a + X_n)^{-r} = E(a + X_n)^{-r} I(U_n \geq \mu_n - \delta\mu_n) + E(a + X_n)^{-r} I(U_n < \mu_n - \delta\mu_n) =: Q_1 + Q_2. \tag{3.23}
\]
For Q_1, we have by the fact that X_n ≥ U_n,
\[
Q_1 \leq E(a + X_n)^{-r} I(X_n \geq \mu_n - \delta\mu_n) \leq (a + \mu_n - \delta\mu_n)^{-r}, \tag{3.24}
\]
which implies by condition (ii) that
\[
\limsup_{n\to\infty}(a + \mu_n)^r Q_1 \leq \limsup_{n\to\infty}(a + \mu_n)^r(a + \mu_n - \delta\mu_n)^{-r} = (1-\delta)^{-r}. \tag{3.25}
\]
For Q_2, we have by (3.22) that, for n ≥ n(δ),
\[
\mu_n - EU_n = M_n^{-1}\sum_{k=1}^{n} EZ_k I(Z_k > \eta M_n\mu_n^s/a_n) \leq \delta\mu_n/2. \tag{3.26}
\]
Hence, by (3.26), Markov’s inequality, Lemma 1.2 and Cr inequality, we have for any p ≥ 2 and all n ≥ n(δ)
that,
Q2 ≤ a−α P Un < µn − δµn
= a−α P EUn − Un > δµn − (µn − EUn )
≤ a−α P EUn − Un > δµn /2
p
n X
0
0
−p −p Znk − EZnk ≤ a−α P |Un − EUn | > δµn /2 ≤ Cµn Mn E k=1


p/2
n
n
X
X


−p 
p
2
s
 + Cµ−p M−p
≤ Cµn M−2
EZ
I(Z
≤
ηM
µ
/a
)
EZk I(Zk ≤ ηMn µsn /an )
n n n 
k
n
n
n
k

k=1
≤
k=1

p/2
n
X


−p 
s
Cµn M−1
EZk I(Zk ≤ ηMn µsn /an )
n µn /an
k=1
s(p−1)
−p
≤
=
p−1
+Cµn M−1
n µn
/an
−p
Cµn
p/2
h
µsn /an
· µn
n
X
EZk I(Zk ≤ ηMn µsn /an )
k=1
s(p−1) p−1
+ µn
/an
· µn
h −(1−s)p/2 p/2
i
−(1−s)(p−1) p−1
C µn
/an + µn
/an
−(1−s)p/2
i
−(1−s)(p−1)
+ Cµn
.
o
p
2α
and noting that p − 1 ≥ 2 , we have by (3.27) that
Taking p > max 2, 1−s
h −(1−s)p/2
i
−(1−s)(p−1)
+ Cµn
= 0.
lim sup(a + µn )α Q2 ≤ lim sup(a + µn )α Cµn
≤
Cµn
(3.27)
n
n→∞
(3.28)
n→∞
Hence, (3.21) follows from (3.25) and (3.28) immediately. This completes the proof of the theorem. ]
Remark 3.4. When M_n = B_n and a_n = μ_n^s, condition (iii) in Theorem 3.2 is weaker than condition (iii) in Theorem 3.1. Actually, if for some η > 0,
\[
R_n(\eta) := B_n^{-2}\sum_{i=1}^{n} EZ_i^2 I(Z_i > \eta B_n) \to 0, \quad n \to \infty,
\]
then
\[
B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n) \leq \eta^{-1} B_n^{-2}\sum_{i=1}^{n} EZ_i^2 I(Z_i > \eta B_n) \to 0, \quad n \to \infty,
\]
which implies by μ_n → ∞ that
\[
\frac{\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n)}{\sum_{i=1}^{n} EZ_i} = \frac{B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n)}{\mu_n} \to 0, \quad n \to \infty,
\]
i.e., condition (iii) in Theorem 3.2 holds.
Acknowledgements. The authors are most grateful to the Editor Svetlana Jankovic and anonymous referee
for careful reading of the manuscript and valuable suggestions which helped to improve an earlier version
of this paper.
References
[1] D.A. Wooff, Bounds on reciprocal moments with applications and developments in Stein estimation and post-stratification,
Journal of the Royal Statistical Society- Series B, 47 (1985) 362–371.
[2] A.O. Pittenger, Sharp mean-variance bounds for Jensen-type inequalities, Statistics & Probability Letters, 10 (1990) 91–94.
[3] E. Marciniak, J. Wesolowski, Asymptotic Eulerian expansions for binomial and negative binomial reciprocals, Proceedings of the
American Mathematical Society, 127 (1999) 3329–3338.
[4] T. Fujioka, Asymptotic approximations of the inverse moment of the non-central chi-squared variable, Journal of The Japan
Statistical Society, 31 (2001) 99–109.
[5] R.C. Gupta, O. Akman, Statistical inference based on the length-biased data for the inverse Gaussian distribution, Statistics, 31
(1998) 325–337.
[6] W. Mendenhall, E.H. Lehman, An approximation to the negative moments of the positive binomial useful in life-testing,
Technometrics, 2 (1960) 227–242.
[7] C.M. Ramsay, A note on random survivorship group benefits, ASTIN Bulletin, 23 (1993) 149–156.
[8] A. Jurlewicz, K. Weron, Relaxation of dynamically correlated clusters, Journal of Non-Crystalline Solids, 305 (2002) 112–121.
[9] N.L. Garcia, J.L. Palacios, On inverse moments of nonnegative random variables, Statistics & Probability Letters, 53 (2001)
235–239.
[10] M. Kaluszka, A. Okolewski, On Fatou-type lemma for monotone moments of weakly convergent random variables, Statistics &
Probability Letters, 66 (2004) 45–50.
[11] S.H. Hu, G.J. Chen, X.J. Wang, E.B. Chen, On inverse moments of nonnegative weakly convergent random variables, Acta
Mathematicae Applicatae Sinica, 30 (2007) 361–367.
[12] T.J. Wu, X.P. Shi, B.Q. Miao, Asymptotic approximation of inverse moments of nonnegative random variables, Statistics &
Probability Letters, 79 (2009) 1366–1371.
[13] S.H. Sung, On inverse moments for a class of nonnegative random variables. Journal of Inequalities and Applications, 2010 (2010)
13 pages.
[14] X.J. Wang, S.H. Hu, W.Z. Yang, N.X. Ling, Exponential inequalities and inverse moment for NOD sequence, Statistics & Probability
Letters, 80 (2010) 452–461.
[15] A.T. Shen, A note on the inverse moments for nonnegative ρ-mixing random variables, Discrete Dynamics in Nature and Society,
2011 (2011) 8 pages.
[16] A.T. Shen, Y. Shi, W.J. Wang, B. Han, Bernstein-type inequality for weakly dependent sequence and its applications, Revista Matemática Complutense, 25 (2012) 97–108.
[17] W. Bryc, W. Smolenski, Moment conditions for almost sure convergence of weakly correlated random variables, Proceedings of
the American Mathematical Society, 119 (1993) 629–635.
[18] R.C. Bradley, Equivalent Mixing Conditions for Random Fields, The Annals of Probability, 21 (1993) 1921–1926.
[19] S. Utev, M. Peligrad, Maximal inequalities and an invariance principle for a class of weakly dependent random variables, Journal
of Theoretical Probability, 16 (2003) 101–115.
[20] R.C Bradley, On the spectral density and asymptotic normality of weakly dependent random fields, Journal of Theoretical
Probability, 5 (1992) 355–374.
[21] M.H. Zhu, Strong laws of large numbers for arrays of rowwise ρ∗ -mixing random variables, Discrete Dynamics in Nature and
Society, 2007 (2007) 6 pages.
[22] Q.Y. Wu, Y.Y. Jiang, Some strong limit theorems for ρ̃-mixing sequences of random variables, Statistics & Probability Letters, 78 (2008) 1017–1023.
[23] Q.Y. Wu, Y.Y. Jiang, Some strong limit theorems for weighted product sums of ρ̃-mixing sequences of random variables, Journal of Inequalities and Applications, 2009 (2009) 10 pages.
[24] Q.Y. Wu, Y.Y. Jiang, Chover-type laws of the k-iterated logarithm for ρ̃-mixing sequences of random variables, Journal of Mathematical Analysis and Applications, 366 (2010) 435–443.
[25] X.J. Wang, S.H. Hu, Y. Shen, W.Z. Yang, Some new results for weakly dependent random variable sequences, Chinese Journal of
Applied Probability and Statistics, 26 (2010), 637–648.
[26] X.C. Zhou, C.C. Tan, J.G. Lin, On the strong laws for weighted sums of ρ∗ -mixing random variables, Journal of Inequalities and
Applications, Volume 2011 (2011) 8 pages.
[27] Q.Y. Wu, Further study strong consistency of estimator in linear model for ρ̃-mixing random sequences, Journal of Systems Science and Complexity, 24 (2011) 969–980.
[28] J.C. Kuang, Applied Inequality, 3rd ed., Shandong Science and Technology Press, Jinan, 2003.