
RACSAM
DOI 10.1007/s13398-016-0323-1

ORIGINAL PAPER

Complete convergence of moving average process based
on widely orthant dependent random variables

Xinran Tao1 · Yi Wu1 · Hao Xia1 · Xuejun Wang1

Received: 11 April 2016 / Accepted: 23 August 2016
© Springer-Verlag Italia 2016

Abstract  In this paper, we investigate the complete convergence of the moving average process $X_n = \sum_{i=-\infty}^{\infty} a_i Y_{i+n}$, $n \ge 1$, where $\{Y_i, -\infty < i < +\infty\}$ is a doubly infinite sequence of random variables and $\{a_n, -\infty < n < +\infty\}$ is an absolutely summable sequence of real numbers. The results obtained in the paper extend and improve the corresponding ones in Qiu and Chen (Acta Math Sci 35A(4):756–768, 2015) from the extended negatively dependent (END) setting to the widely orthant dependent (WOD) setting.

Keywords  Complete convergence · Moving average process · Widely orthant dependent random variables

Mathematics Subject Classification  60F15

1 Introduction

Let $\{Y_i, -\infty < i < +\infty\}$ be a doubly infinite sequence of random variables defined on the same probability space $(\Omega, \mathcal{F}, P)$, and let $\{a_n, -\infty < n < +\infty\}$ be an absolutely summable sequence of real numbers. Let

$$X_n = \sum_{i=-\infty}^{\infty} a_i Y_{i+n}, \quad n \ge 1. \tag{1.1}$$

Supported by the National Natural Science Foundation of China (11501004, 11526033, 11671012), the
Natural Science Foundation of Anhui Province (1508085J06, 1608085QA02), the Key Projects for
Academic Talent of Anhui Province (gxbjZD2016005), the Quality Engineering Project of Anhui Province
(2015jyxm045), and the Quality Improvement Projects for Undergraduate Education of Anhui University
(ZLTS2015035, ZLTS2015138).

B Xuejun Wang
[email protected]
1 School of Mathematical Sciences, Anhui University, Hefei 230601, People’s Republic of China

{X n , n ≥ 1} is said to be a moving average process generated by {Yi , −∞ < i < +∞}.
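As a quick illustration (not from the paper), the process (1.1) can be simulated by truncating the doubly infinite sum. The geometric weights a_i = 2^(−|i|) and i.i.d. standard normal innovations below are assumptions of this sketch, chosen only so that the weight sequence is absolutely summable; the paper's innovations are dependent (WOD), not independent.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 20  # truncation level for the infinite sum (illustrative choice)
N = 5   # number of X_n values to compute

# Absolutely summable weights a_i = 2^(-|i|), |i| <= K (an assumption of this sketch)
a = {i: 0.5 ** abs(i) for i in range(-K, K + 1)}

# Innovations Y_j for j = 1-K, ..., N+K; i.i.d. N(0,1) stands in for the
# dependent innovations of the paper.
Y = {j: rng.standard_normal() for j in range(1 - K, N + K + 1)}

# X_n = sum_i a_i * Y_{i+n}, n >= 1, truncated at |i| <= K
X = [sum(a[i] * Y[i + n] for i in range(-K, K + 1)) for n in range(1, N + 1)]

# The neglected tail is controlled by the tail sum of |a_i|, which is tiny here
tail = sum(0.5 ** i for i in range(K + 1, 10 * K))
print(X, tail)
```

Since the weights decay geometrically, the truncation error is bounded by the tail sum of |a_i|, which is what makes the infinite series in (1.1) well defined almost surely.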


For the moving average process $\{X_n, n \ge 1\}$, when $\{Y_i, -\infty < i < +\infty\}$ is a sequence of independent and identically distributed (i.i.d.) random variables, Ibragimov [1] established the central limit theorem, Burton and Dehling [2] obtained a large deviation principle, and Li et al. [3] gave the complete convergence result for $\{X_n, n \ge 1\}$. Zhang [4] and Li and Zhang [5] extended the complete convergence of moving average processes from the i.i.d. setting to ϕ-mixing and negatively associated (NA, for short) sequences, respectively. For more works on the complete convergence of moving average processes, one can refer to [6–8], among others.
Recently, Qiu and Chen [9] obtained a complete convergence result for moving average processes generated by extended negatively dependent (END, for short) sequences.

Theorem A  Let α and p be positive constants such that α > 1/2, p > 1 and αp > 1. Let $\{Y_n, -\infty < n < +\infty\}$ be a doubly infinite sequence of END random variables stochastically dominated by a random variable Y, and EY_n = 0 if α ≤ 1. Let $\{a_n, -\infty < n < +\infty\}$ be an absolutely summable sequence of real numbers and $\{X_n, n \ge 1\}$ be a moving average process defined as (1.1). Denote $S_m = \sum_{i=1}^{m} X_i$. If $E|Y|^p < \infty$, then for all ε > 0,

$$\sum_{n=1}^{\infty} n^{\alpha p - 2}\, P\left(\max_{1 \le m \le n} |S_m| > \varepsilon n^{\alpha}\right) < \infty.$$

For p = 1, Qiu and Chen [9] obtained the following result.

Theorem B  Let α > 1 and let $\{Y_n, -\infty < n < +\infty\}$ be a doubly infinite sequence of END random variables stochastically dominated by a random variable Y. Let $\{a_n, -\infty < n < +\infty\}$ be an absolutely summable sequence of real numbers and $\{X_n, n \ge 1\}$ be a moving average process defined as (1.1). Denote $S_m = \sum_{i=1}^{m} X_i$. If $E|Y| \log |Y| < \infty$, then for all ε > 0,

$$\sum_{n=1}^{\infty} n^{\alpha - 2}\, P\left(\max_{1 \le m \le n} |S_m| > \varepsilon n^{\alpha}\right) < \infty.$$

In this paper, we will improve and extend Theorems A and B from the END setting to a more general setting, namely the widely orthant dependent (WOD) setting, whose concept was introduced in Wang et al. [10].

Definition 1.1  For the random variables $\{X_n, n \ge 1\}$, if there exists a finite real sequence $\{g_U(n), n \ge 1\}$ satisfying, for each n ≥ 1 and for all $x_i \in (-\infty, \infty)$, 1 ≤ i ≤ n,

$$P(X_1 > x_1, X_2 > x_2, \ldots, X_n > x_n) \le g_U(n) \prod_{j=1}^{n} P(X_j > x_j),$$

then we say that $\{X_n, n \ge 1\}$ are widely upper orthant dependent (WUOD); if there exists a finite real sequence $\{g_L(n), n \ge 1\}$ satisfying, for each n ≥ 1 and for all $x_i \in (-\infty, \infty)$, 1 ≤ i ≤ n,

$$P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) \le g_L(n) \prod_{j=1}^{n} P(X_j \le x_j),$$

then we say that $\{X_n, n \ge 1\}$ are widely lower orthant dependent (WLOD). If they are both WUOD and WLOD, then we say that $\{X_n, n \ge 1\}$ are widely orthant dependent (WOD), and $g_U(n), g_L(n)$, n ≥ 1, are called dominating coefficients.
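To see how the dominating coefficients work, note that independence gives the defining inequalities with equality, while the END case corresponds to a single constant bound (a standard observation, sketched here):

```latex
% Independent case: equality holds, so WOD with g_U(n) = g_L(n) = 1:
P(X_1 > x_1, \ldots, X_n > x_n) = \prod_{j=1}^{n} P(X_j > x_j).
% END case: one constant M \ge 1 works for every n,
% i.e. g_U(n) = g_L(n) = M for all n:
P(X_1 > x_1, \ldots, X_n > x_n) \le M \prod_{j=1}^{n} P(X_j > x_j),
\qquad
P(X_1 \le x_1, \ldots, X_n \le x_n) \le M \prod_{j=1}^{n} P(X_j \le x_j).
```

WOD thus strictly generalizes END by letting the dominating coefficients grow with n instead of being a fixed constant.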

Denote $g(n) = \max\{g_U(n), g_L(n)\}$; then g(n) ≥ 1. It is easily seen that the WOD structure includes independent random variables, NA random variables, negatively orthant dependent (NOD, for short) random variables, negatively superadditive dependent (NSD, for short) random variables and END random variables as special cases. Hence, studying probability limit theory and statistical inference based on WOD random variables is of great interest. Some probability limit properties and applications of WOD random variables have been obtained; one can refer to Wang et al. [10], Wang and Cheng [11], Chen et al. [12], Shen [13,14], Shen et al. [15], Yang et al. [16], and Wang et al. [17,18], among others.
To end this section, let us recall the concept of a slowly varying function.

Definition 1.2  The real-valued function l, positive and measurable on (0, ∞), is said to be slowly varying at infinity if for each λ > 0,

$$\lim_{x \to \infty} \frac{l(\lambda x)}{l(x)} = 1.$$
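For a concrete instance, l(x) = log x is slowly varying at infinity (and so is any positive constant function), which is why both choices appear in Remark 3.1 below:

```latex
\lim_{x\to\infty}\frac{\log(\lambda x)}{\log x}
  = \lim_{x\to\infty}\frac{\log\lambda + \log x}{\log x} = 1
  \quad \text{for each fixed } \lambda > 0.
```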

Throughout this paper, the symbols $C, C_1, C_2, \ldots$ represent positive constants which may vary in different places. Denote log x = ln max{x, e}, and let I(A) be the indicator function of the set A. ⌊x⌋ stands for the integer part of x.
This work is organized as follows: some preliminary lemmas are provided in Sect. 2; the results on complete convergence and their proofs are stated in Sect. 3.

2 Preliminary lemmas

The following lemmas will be useful in proving the main results. The first one is a basic property of stochastic domination, which can be found in Wu [19] or Shen et al. [20], for instance.

Lemma 2.1  Let $\{Y_i, -\infty < i < +\infty\}$ be a sequence of random variables stochastically dominated by a random variable Y, i.e., there exists some positive constant C such that

$$P(|Y_i| > x) \le C\, P(|Y| > x), \quad \forall x > 0,\ \forall -\infty < i < +\infty.$$

Then for all v > 0, x > 0 and −∞ < i < +∞,

(i) $E|Y_i|^v I(|Y_i| \le x) \le C\, E|Y|^v I(|Y| \le x) + C x^v P(|Y| > x)$;
(ii) $E|Y_i|^v I(|Y_i| > x) \le C\, E|Y|^v I(|Y| > x)$.
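For intuition, part (i) follows from the tail-probability representation of the truncated moment; the following is a sketch of the standard computation (not from the paper itself):

```latex
E|Y_i|^v I(|Y_i| \le x)
  = \int_0^{x^v} P\bigl(|Y_i|^v I(|Y_i|\le x) > t\bigr)\,dt
  \le \int_0^{x^v} P\bigl(|Y_i| > t^{1/v}\bigr)\,dt
  \le C \int_0^{x^v} P\bigl(|Y| > t^{1/v}\bigr)\,dt \\
  = C\, E\min\{|Y|^v, x^v\}
  = C\, E|Y|^v I(|Y| \le x) + C\, x^v P(|Y| > x).
```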

The following two lemmas are basic properties for WOD random variables, which can be
found in Wang et al. [17].

Lemma 2.2 Let {Yn , n ≥ 1} be a sequence of WOD random variables. If { f n (·), n ≥ 1} are
all nondecreasing (or all nonincreasing), then { f n (Yn ), n ≥ 1} are still WOD.

Lemma 2.3  Let v ≥ 2 and let $\{Y_n, n \ge 1\}$ be a sequence of WOD random variables with $E|Y_n|^v < \infty$ and $EY_n = 0$ for each n ≥ 1. Then there exist positive constants $C_1(v)$ and $C_2(v)$ depending only on v such that

$$E\left|\sum_{i=j+1}^{j+n} Y_i\right|^v \le C_1(v) \sum_{i=j+1}^{j+n} E|Y_i|^v + C_2(v)\, g(n) \left(\sum_{i=j+1}^{j+n} E|Y_i|^2\right)^{v/2}.$$

Adopting the method used in Stout [21], we can obtain the following result easily by
Lemma 2.3.

Lemma 2.4  Let v ≥ 2 and let $\{Y_n, n \ge 1\}$ be a sequence of WOD random variables with $E|Y_n|^v < \infty$ and $EY_n = 0$ for each n ≥ 1. Then there exist positive constants $C_1(v)$ and $C_2(v)$ depending only on v such that

$$E\left(\max_{1 \le m \le n}\left|\sum_{i=j+1}^{j+m} Y_i\right|^v\right) \le C_1(v)(\log n)^v \sum_{i=j+1}^{j+n} E|Y_i|^v + C_2(v)(\log n)^v g(n)\left(\sum_{i=j+1}^{j+n} E|Y_i|^2\right)^{v/2}.$$

The following lemma is a basic property of slowly varying functions. For the proof, one can refer to Bai and Su [22].

Lemma 2.5  If h is slowly varying at infinity, then

(i) $\lim_{x\to\infty} h(x+u)/h(x) = 1$ for each u > 0;
(ii) $\lim_{k\to\infty} \sup_{2^k \le x < 2^{k+1}} h(x)/h(2^k) = 1$;
(iii) $\lim_{x\to\infty} x^{\delta} h(x) = \infty$ and $\lim_{x\to\infty} x^{-\delta} h(x) = 0$ for each δ > 0;
(iv) $C_1 2^{kr} h(\varepsilon 2^k) \le \sum_{j=1}^{k} 2^{jr} h(\varepsilon 2^j) \le C_2 2^{kr} h(\varepsilon 2^k)$ for every r > 0, ε > 0 and positive integer k;
(v) $C_3 2^{kr} h(\varepsilon 2^k) \le \sum_{j=k}^{\infty} 2^{jr} h(\varepsilon 2^j) \le C_4 2^{kr} h(\varepsilon 2^k)$ for every r < 0, ε > 0 and positive integer k.

By the definition of slowly varying function and Lemma 2.5 above, we can obtain the following result.

Lemma 2.6  If h is slowly varying at infinity, then for positive integer m, we have

(i) $\sum_{n=1}^{m} n^{s} h(n) \le C m^{s+1} h(m)$ for s > −1;
(ii) $\sum_{n=m}^{\infty} n^{s} h(n) \le C m^{s+1} h(m)$ for s < −1;
(iii) $\sum_{n=1}^{m} n^{-1} h(n) \le C \log m \cdot h(m)$.

Proof  Since the proofs of (i) and (ii) have been presented in Zhou [7], we only give the proof of (iii). By the definition of slowly varying function and Lemma 2.5 (ii), we have

$$\sum_{n=1}^{m} n^{-1} h(n) \le C \sum_{i=1}^{\lfloor \log_2 m \rfloor + 1} \sum_{n=2^{i-1}}^{2^{i}} n^{-1} h(n) \le C \sum_{i=1}^{\lfloor \log_2 m \rfloor + 1} h(2^{i}) \le C \log m \cdot h\bigl(2^{\lfloor \log_2 m \rfloor + 1}\bigr) \le C \log m \cdot h(m).$$

This completes the proof of the lemma.

The last one is indispensable in proving our main results.



Lemma 2.7  Let Y be a random variable and let α, β, p, D be positive constants with αp > 1. Suppose that l(n) is a slowly varying function, n ≥ 1. If $E|Y|^p l(|Y|^{1/\alpha}) < \infty$, then we have

(i) $\sum_{n=1}^{\infty} n^{\alpha p - 1} l(n)\, P(|Y| > n^{\alpha}) \le C\, E|Y|^p l(|Y|^{1/\alpha})$;
(ii) if p < β, then $\sum_{n=1}^{\infty} n^{\alpha(p-\beta)-1} l(n)\, E|Y|^{\beta} I(|Y| \le D n^{\alpha}) \le C\, E|Y|^p l(|Y|^{1/\alpha})$;
(iii) if p > β, then $\sum_{n=1}^{\infty} n^{\alpha(p-\beta)-1} l(n)\, E|Y|^{\beta} I(|Y| > n^{\alpha}) \le C\, E|Y|^p l(|Y|^{1/\alpha})$;
(iv) if $E\{|Y|^p l(|Y|^{1/\alpha}) \log |Y|\} < \infty$, then $\sum_{n=1}^{\infty} n^{-1} l(n)\, E|Y|^p I(|Y| > n^{\alpha}) \le C\, E\{|Y|^p l(|Y|^{1/\alpha}) \log |Y|\}$.

Proof  The inequalities above can be obtained from Lemma 2.6 by standard computations; we prove them one by one.


(i)
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p-1} l(n)\, P(|Y| > n^{\alpha})
&= \sum_{n=1}^{\infty} n^{\alpha p-1} l(n) \sum_{j=n}^{\infty} P\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&= \sum_{j=1}^{\infty} P\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr) \sum_{n=1}^{j} n^{\alpha p-1} l(n)\\
&\le C \sum_{j=1}^{\infty} j^{\alpha p} l(j)\, P\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&\le C \sum_{j=1}^{\infty} E|Y|^p\, l\bigl(|Y|^{1/\alpha}\bigr)\, I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&\le C\, E|Y|^p l\bigl(|Y|^{1/\alpha}\bigr);
\end{aligned}$$



(ii)
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha(p-\beta)-1} l(n)\, E|Y|^{\beta} I(|Y| \le D n^{\alpha})
&= \sum_{n=1}^{\infty} n^{\alpha(p-\beta)-1} l(n) \sum_{j=1}^{n} E|Y|^{\beta} I\bigl(D(j-1)^{\alpha} < |Y| \le D j^{\alpha}\bigr)\\
&= \sum_{j=1}^{\infty} E|Y|^{\beta} I\bigl(D(j-1)^{\alpha} < |Y| \le D j^{\alpha}\bigr) \sum_{n=j}^{\infty} n^{\alpha(p-\beta)-1} l(n)\\
&\le C \sum_{j=1}^{\infty} j^{\alpha(p-\beta)} l(j)\, E|Y|^{\beta} I\bigl(D(j-1)^{\alpha} < |Y| \le D j^{\alpha}\bigr)\\
&\le C \sum_{j=1}^{\infty} E|Y|^p\, l\bigl(|Y|^{1/\alpha}\bigr)\, I\bigl(D(j-1)^{\alpha} < |Y| \le D j^{\alpha}\bigr)\\
&\le C\, E|Y|^p l\bigl(|Y|^{1/\alpha}\bigr);
\end{aligned}$$



(iii)
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha(p-\beta)-1} l(n)\, E|Y|^{\beta} I(|Y| > n^{\alpha})
&= \sum_{n=1}^{\infty} n^{\alpha(p-\beta)-1} l(n) \sum_{j=n}^{\infty} E|Y|^{\beta} I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&= \sum_{j=1}^{\infty} E|Y|^{\beta} I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr) \sum_{n=1}^{j} n^{\alpha(p-\beta)-1} l(n)\\
&\le C \sum_{j=1}^{\infty} j^{\alpha(p-\beta)} l(j)\, E|Y|^{\beta} I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&\le C \sum_{j=1}^{\infty} E|Y|^p\, l\bigl(|Y|^{1/\alpha}\bigr)\, I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&\le C\, E|Y|^p l\bigl(|Y|^{1/\alpha}\bigr);
\end{aligned}$$



(iv)
$$\begin{aligned}
\sum_{n=1}^{\infty} n^{-1} l(n)\, E|Y|^p I(|Y| > n^{\alpha})
&= \sum_{n=1}^{\infty} n^{-1} l(n) \sum_{j=n}^{\infty} E|Y|^p I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&= \sum_{j=1}^{\infty} E|Y|^p I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr) \sum_{n=1}^{j} n^{-1} l(n)\\
&\le C \sum_{j=1}^{\infty} \log j \cdot l(j)\, E|Y|^p I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\\
&\le C \sum_{j=1}^{\infty} E\bigl\{|Y|^p\, l\bigl(|Y|^{1/\alpha}\bigr) \log |Y|\, I\bigl(j^{\alpha} < |Y| \le (j+1)^{\alpha}\bigr)\bigr\}\\
&\le C\, E\bigl\{|Y|^p l\bigl(|Y|^{1/\alpha}\bigr) \log |Y|\bigr\}.
\end{aligned}$$

The proof is completed.

3 Main results

In this section, we will present the main results of the paper.

Theorem 3.1  Let α and p be positive constants such that α > 1/2 and αp > 1. Let $\{Y_n, -\infty < n < +\infty\}$ be a doubly infinite sequence of WOD random variables stochastically dominated by a random variable Y, and EY_n = 0 if α ≤ 1. Let l(n) be a slowly varying function. Suppose that $g(n) = O(n^{\alpha t})$ for some t ≥ 0, and assume further that t < 1 − 1/(αp) when 0 < p ≤ 1 and t < (2 − p)(1 − 1/(αp)) when 1 < p < 2. Let $\{a_n, -\infty < n < +\infty\}$ be an absolutely summable sequence of real numbers and $\{X_n, n \ge 1\}$ be a moving average process defined as (1.1). Denote $S_m = \sum_{i=1}^{m} X_i$. If

$$\begin{cases}
E|Y|^{p+t}\, l(|Y|^{1/\alpha}) < \infty, & \text{if } p > 1,\\
E|Y|^{1+t}\, l(|Y|^{1/\alpha}) \log |Y| < \infty, & \text{if } p = 1,\\
E|Y|^{1+t}\, l(|Y|^{1/\alpha}) < \infty, & \text{if } 0 < p < 1,
\end{cases} \tag{3.1}$$

then for all ε > 0,

$$\sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\max_{1\le m\le n}|S_m| > \varepsilon n^{\alpha}\right) < \infty. \tag{3.2}$$
Proof  By (3.1) and $\sum_{i=-\infty}^{\infty}|a_i| < \infty$, we obtain that

$$\sum_{i=-\infty}^{\infty} E|a_i Y_{i+n}| \le \left(\sup_{i\in\mathbb{Z}} E|Y_{i+n}|\right)\sum_{i=-\infty}^{\infty}|a_i| \le C\, E|Y| \sum_{i=-\infty}^{\infty}|a_i| < \infty, \quad \forall\, n \ge 1.$$

Hence $X_n$ (n ≥ 1) is a.s. meaningful. Take 1/(αp) < q < 1, and further let 1/(αp) < q < min{1 − t, 1 − t/(2 − p)} if 0 < p < 2. For each n ≥ 1, denote for −∞ < i < +∞:

$$\begin{aligned}
Y_i^{(n,1)} &= -n^{\alpha q} I(Y_i < -n^{\alpha q}) + Y_i I(|Y_i| \le n^{\alpha q}) + n^{\alpha q} I(Y_i > n^{\alpha q}),\\
Y_i^{(n,2)} &= (Y_i - n^{\alpha q})\, I(n^{\alpha q} < Y_i \le n^{\alpha} + n^{\alpha q}) + n^{\alpha} I(Y_i > n^{\alpha} + n^{\alpha q}),\\
Y_i^{(n,3)} &= (Y_i - n^{\alpha q} - n^{\alpha})\, I(Y_i > n^{\alpha} + n^{\alpha q}),\\
Y_i^{(n,4)} &= (Y_i + n^{\alpha q})\, I(-n^{\alpha} - n^{\alpha q} \le Y_i < -n^{\alpha q}) - n^{\alpha} I(Y_i < -n^{\alpha} - n^{\alpha q}),\\
Y_i^{(n,5)} &= (Y_i + n^{\alpha q} + n^{\alpha})\, I(Y_i < -n^{\alpha} - n^{\alpha q}).
\end{aligned}$$

Therefore, for each positive integer n, we have

$$S_n = \sum_{k=1}^{n} X_k = \sum_{k=1}^{n}\sum_{i=-\infty}^{\infty} a_i Y_{i+k} = \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n} Y_j = \sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+n}\sum_{k=1}^{5} Y_j^{(n,k)}.$$

Thus,

$$\begin{aligned}
&\sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\max_{1\le m\le n}|S_m| > \varepsilon n^{\alpha}\right)\\
&\quad = \sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m}\sum_{k=1}^{5} Y_j^{(n,k)}\right| > \varepsilon n^{\alpha}\right)\\
&\quad \le \sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m} Y_j^{(n,1)}\right| > \varepsilon n^{\alpha}/5\right)\\
&\qquad + \sum_{k=2}^{3}\sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n} Y_j^{(n,k)} > \varepsilon n^{\alpha}/5\right)\\
&\qquad + \sum_{k=4}^{5}\sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n}\bigl(-Y_j^{(n,k)}\bigr) > \varepsilon n^{\alpha}/5\right)\\
&\quad =: I_1 + I_2 + I_3 + I_4 + I_5.
\end{aligned} \tag{3.3}$$
X. Tao et al.

To prove (3.2), we only need to show that $I_j < \infty$ for j = 1, 2, 3, 4, 5. To prove $I_1 < \infty$, we shall first prove that

$$n^{-\alpha}\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m} E Y_j^{(n,1)}\right| \to 0 \quad \text{as } n \to \infty. \tag{3.4}$$

For the case α ≤ 1, noting that αpq > 1 and $EY_n = 0$ for −∞ < n < +∞, we have by the assumption $\sum_{i=-\infty}^{\infty}|a_i| < \infty$, Lemma 2.1 and (3.1) that

$$\begin{aligned}
n^{-\alpha}\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m} E Y_j^{(n,1)}\right|
&\le n^{-\alpha}\sum_{i=-\infty}^{\infty}|a_i|\max_{1\le m\le n}\left|\sum_{j=i+1}^{i+m} E Y_j^{(n,1)}\right|\\
&\le n^{-\alpha}\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n}\bigl[E|Y_j|\, I(|Y_j| > n^{\alpha q}) + n^{\alpha q} P(|Y_j| > n^{\alpha q})\bigr]\\
&\le 2 n^{-\alpha}\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n} E|Y_j|\, I(|Y_j| > n^{\alpha q})
\le C n^{1-\alpha} E|Y|\, I(|Y| > n^{\alpha q})\\
&\le C n^{1-\alpha p q-\alpha(1-q)} E|Y|^p \to 0 \quad \text{as } n \to \infty.
\end{aligned} \tag{3.5}$$
∞
For the case α > 1, we have by $\sum_{i=-\infty}^{\infty}|a_i| < \infty$, Lemma 2.1 and (3.1) again that

$$\begin{aligned}
n^{-\alpha}\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m} E Y_j^{(n,1)}\right|
&\le n^{-\alpha}\sum_{i=-\infty}^{\infty}|a_i|\max_{1\le m\le n}\left|\sum_{j=i+1}^{i+m} E Y_j^{(n,1)}\right|\\
&\le n^{-\alpha}\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n}\bigl[E|Y_j|\, I(|Y_j| \le n^{\alpha q}) + n^{\alpha q} P(|Y_j| > n^{\alpha q})\bigr]\\
&\le C n^{1-\alpha} E|Y| \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

Hence (3.4) holds. To prove $I_1 < \infty$, we only need to show that

$$I_1^* := \sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m}\bigl(Y_j^{(n,1)} - EY_j^{(n,1)}\bigr)\right| > \varepsilon n^{\alpha}/10\right) < \infty.$$

For fixed n ≥ 1, it follows from Lemma 2.2 that $\{Y_j^{(n,1)} - EY_j^{(n,1)}, -\infty < j < \infty\}$ is still a sequence of mean-zero WOD random variables. Taking v ≥ 2 and v > p (which will be specified later), we obtain by Markov's inequality, Hölder's inequality, the $C_r$ inequality, Lemmas 2.1 and 2.4, and $\sum_{i=-\infty}^{\infty}|a_i| < \infty$ that

$$\begin{aligned}
I_1^* &\le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\, E\left\{\max_{1\le m\le n}\left|\sum_{i=-\infty}^{\infty} a_i \sum_{j=i+1}^{i+m}\bigl(Y_j^{(n,1)} - EY_j^{(n,1)}\bigr)\right|\right\}^v\\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\, E\left\{\sum_{i=-\infty}^{\infty} |a_i| \max_{1\le m\le n}\left|\sum_{j=i+1}^{i+m}\bigl(Y_j^{(n,1)} - EY_j^{(n,1)}\bigr)\right|\right\}^v\\
&= C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\, E\left\{\sum_{i=-\infty}^{\infty} |a_i|^{1-1/v}\left(|a_i|^{1/v}\max_{1\le m\le n}\left|\sum_{j=i+1}^{i+m}\bigl(Y_j^{(n,1)} - EY_j^{(n,1)}\bigr)\right|\right)\right\}^v\\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\left(\sum_{i=-\infty}^{\infty}|a_i|\right)^{v-1}\sum_{i=-\infty}^{\infty}|a_i|\, E\max_{1\le m\le n}\left|\sum_{j=i+1}^{i+m}\bigl(Y_j^{(n,1)} - EY_j^{(n,1)}\bigr)\right|^v\\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\sum_{i=-\infty}^{\infty}|a_i|(\log n)^v\left\{C_1(v)\sum_{j=i+1}^{i+n} E\bigl|Y_j^{(n,1)}\bigr|^v + C_2(v)\,g(n)\left(\sum_{j=i+1}^{i+n} E\bigl|Y_j^{(n,1)}\bigr|^2\right)^{v/2}\right\}\\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\sum_{i=-\infty}^{\infty}|a_i|(\log n)^v\sum_{j=i+1}^{i+n}\bigl[E|Y_j|^v I(|Y_j|\le n^{\alpha q}) + n^{v\alpha q} P(|Y_j| > n^{\alpha q})\bigr]\\
&\quad + C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\,g(n)\sum_{i=-\infty}^{\infty}|a_i|(\log n)^v\left\{\sum_{j=i+1}^{i+n}\bigl[EY_j^2 I(|Y_j|\le n^{\alpha q}) + n^{2\alpha q} P(|Y_j| > n^{\alpha q})\bigr]\right\}^{v/2}\\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 1}(\log n)^v l(n)\bigl[E|Y|^v I(|Y|\le n^{\alpha q}) + n^{v\alpha q} P(|Y| > n^{\alpha q})\bigr]\\
&\quad + C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2}(\log n)^v l(n)\,g(n)\bigl[EY^2 I(|Y|\le n^{\alpha q}) + n^{2\alpha q} P(|Y| > n^{\alpha q})\bigr]^{v/2}\\
&=: I_{11}^* + I_{12}^*.
\end{aligned} \tag{3.6}$$

Noting that q < 1, when 0 < p < 1 we have p − v + qv − q ≤ p − v + v − 1 = p − 1 < 0, and thus

$$I_{11}^* \le \begin{cases}
C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 1}(\log n)^v l(n)\, E\{|Y|^p n^{(v-p)\alpha q}\}, & \text{if } p \ge 1,\\
C \sum_{n=1}^{\infty} n^{\alpha(p-v)-1}(\log n)^v l(n)\, E\{|Y|\, n^{(v-1)\alpha q}\}, & \text{if } 0 < p < 1,
\end{cases}$$
$$\le \begin{cases}
C \sum_{n=1}^{\infty} n^{\alpha(p-v)(1-q)-1}(\log n)^v l(n), & \text{if } p \ge 1,\\
C \sum_{n=1}^{\infty} n^{-1+\alpha(p-v+qv-q)}(\log n)^v l(n), & \text{if } 0 < p < 1,
\end{cases} < \infty. \tag{3.7}$$
Now we prove $I_{12}^* < \infty$. For the case 0 < p < 2, take v = 2. Noting that q < 1 − t if 0 < p ≤ 1 and q < 1 − t/(2 − p) if 1 < p < 2, we have

$$I_{12}^* \le \begin{cases}
C \sum_{n=1}^{\infty} n^{\alpha p - 2\alpha - 1 + \alpha t}(\log n)^v l(n)\, E\{|Y|^p n^{(2-p)\alpha q}\}, & \text{if } 1 < p < 2,\\
C \sum_{n=1}^{\infty} n^{-\alpha - 1 + \alpha t}(\log n)^v l(n)\, E\{|Y|\, n^{\alpha q}\}, & \text{if } 0 < p \le 1,
\end{cases}$$
$$\le \begin{cases}
C \sum_{n=1}^{\infty} n^{\alpha(t-(2-p)(1-q))-1}(\log n)^v l(n), & \text{if } 1 < p < 2,\\
C \sum_{n=1}^{\infty} n^{\alpha(q-1+t)-1}(\log n)^v l(n), & \text{if } 0 < p \le 1,
\end{cases} < \infty.$$

For the case p ≥ 2, take v > max{2, (αp − 1 + αt)/(α − 1/2)}. Noting that $EY^2 < \infty$, we have

$$I_{12}^* \le C \sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2 + \alpha t}(\log n)^v l(n) = C \sum_{n=1}^{\infty} n^{\alpha p - v(\alpha - 1/2) - 2 + \alpha t}(\log n)^v l(n) < \infty.$$

Therefore, we can conclude that $I_1 < \infty$ from the above statements. Next, we will prove $I_2 < \infty$. It follows from (3.1), Lemma 2.1 and (3.5) that

$$0 \le n^{-\alpha} E\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n} Y_j^{(n,2)} \le n^{-\alpha}\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n} E Y_j\, I(Y_j > n^{\alpha q}) \le C n^{1-\alpha} E|Y|\, I(|Y| > n^{\alpha q}) \to 0 \ \text{ as } n \to \infty.$$

To prove $I_2 < \infty$, we only need to show that

$$I_2^* := \sum_{n=1}^{\infty} n^{\alpha p-2} l(n)\, P\left(\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n}\bigl(Y_j^{(n,2)} - EY_j^{(n,2)}\bigr) > \varepsilon n^{\alpha}/10\right) < \infty.$$

For fixed n ≥ 1, it follows from Lemma 2.2 that $\{Y_j^{(n,2)} - EY_j^{(n,2)}, -\infty < j < \infty\}$ is still a sequence of mean-zero WOD random variables. Similar to the proof of (3.6), we have by Lemma 2.3 that

$$\begin{aligned}
I_2^* &\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\, E\left\{\sum_{i=-\infty}^{\infty}|a_i|\left|\sum_{j=i+1}^{i+n}\bigl(Y_j^{(n,2)} - EY_j^{(n,2)}\bigr)\right|\right\}^v\\
&\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\sum_{i=-\infty}^{\infty}|a_i|\left\{C_1(v)\sum_{j=i+1}^{i+n} E\bigl|Y_j^{(n,2)}\bigr|^v + C_2(v)\,g(n)\left(\sum_{j=i+1}^{i+n} E\bigl|Y_j^{(n,2)}\bigr|^2\right)^{v/2}\right\}\\
&\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n}\bigl\{E|Y_j|^v I(|Y_j|\le 2n^{\alpha}) + n^{v\alpha} P(|Y_j| > n^{\alpha})\bigr\}\\
&\quad + C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 2} l(n)\,g(n)\sum_{i=-\infty}^{\infty}|a_i|\left\{\sum_{j=i+1}^{i+n}\bigl[EY_j^2 I(|Y_j|\le 2n^{\alpha}) + n^{2\alpha} P(|Y_j| > n^{\alpha})\bigr]\right\}^{v/2}\\
&\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 1} l(n)\bigl\{E|Y|^v I(|Y|\le 2n^{\alpha}) + n^{v\alpha} P(|Y| > n^{\alpha})\bigr\}\\
&\quad + C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2} l(n)\,g(n)\bigl[EY^2 I(|Y|\le 2n^{\alpha}) + n^{2\alpha} P(|Y| > n^{\alpha})\bigr]^{v/2}\\
&=: I_{21}^* + I_{22}^*.
\end{aligned} \tag{3.8}$$

By (3.1) and Lemma 2.7, we get $I_{21}^* < \infty$. For $I_{22}^*$, if 0 < p < 2, let v = 2. We obtain by Lemma 2.7 again that

$$I_{22}^* \le \begin{cases}
C\sum_{n=1}^{\infty}\bigl[n^{\alpha(p-2+t)-1} l(n)\, EY^2 I(|Y|\le 2n^{\alpha}) + n^{\alpha(p+t)-1} l(n)\, P(|Y| > n^{\alpha})\bigr], & \text{if } 1 < p < 2,\\
C\sum_{n=1}^{\infty}\bigl[n^{\alpha(-1+t)-1} l(n)\, EY^2 I(|Y|\le 2n^{\alpha}) + n^{\alpha(1+t)-1} l(n)\, P(|Y| > n^{\alpha})\bigr], & \text{if } 0 < p \le 1,
\end{cases}$$
$$\le \begin{cases}
C\, E|Y|^{p+t} l(|Y|^{1/\alpha}), & \text{if } 1 < p < 2,\\
C\, E|Y|^{1+t} l(|Y|^{1/\alpha}), & \text{if } 0 < p \le 1,
\end{cases} < \infty.$$
If p ≥ 2, let v > max{2, (αp + αt − 1)/(α − 1/2)}. Noting that $EY^2 < \infty$, we have

$$I_{22}^* \le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2 + \alpha t} l(n)\,(E|Y|^2)^{v/2} \le C\sum_{n=1}^{\infty} n^{\alpha p - v(\alpha - 1/2) - 2 + \alpha t} l(n) < \infty.$$

Therefore, we can conclude that $I_2 < \infty$ from the above statements. Next, we will prove $I_3 < \infty$. By Markov's inequality, $\sum_{i=-\infty}^{\infty}|a_i| < \infty$, Lemmas 2.1 and 2.7, and (3.1), we obtain that

$$\begin{aligned}
I_3 &\le C\sum_{n=1}^{\infty} n^{\alpha p - \alpha - 2} l(n)\, E\left(\sum_{i=-\infty}^{\infty}|a_i|\sum_{j=i+1}^{i+n} Y_j\, I(Y_j > n^{\alpha})\right)
\le C\sum_{n=1}^{\infty} n^{\alpha p - \alpha - 1} l(n)\, E|Y|\, I(|Y| > n^{\alpha})\\
&\le \begin{cases}
C\, E|Y|^p l(|Y|^{1/\alpha}), & \text{if } p > 1,\\
C\, E|Y|\, l(|Y|^{1/\alpha}) \log |Y|, & \text{if } p = 1,\\
C\sum_{n=1}^{\infty} n^{\alpha p - \alpha - 1} l(n), & \text{if } 0 < p < 1,
\end{cases}\\
&< \infty.
\end{aligned}$$

We can get $I_4 < \infty$ and $I_5 < \infty$ similarly to the proofs of $I_2 < \infty$ and $I_3 < \infty$, respectively. This completes the proof of the theorem.
Remark 3.1  Comparing Theorem 3.1 with Theorems A and B, we have the following generalizations and improvements:
1. The sequence is generalized from END to WOD;
2. The slowly varying function l(n) is considered here, and it can be taken as 1 or log n; Theorems A and B are special cases of Theorem 3.1 if we take l(n) ≡ 1;
3. The result for the case 0 < p < 1 is also obtained in Theorem 3.1, which is not considered in Theorems A and B.

Removing the restriction on g(n), we can obtain the following result.


Theorem 3.2  Let α and p be positive constants such that α > 1/2 and αp > 1. Let $\{Y_n, -\infty < n < +\infty\}$ be a doubly infinite sequence of WOD random variables stochastically dominated by a random variable Y, and EY_n = 0 if α ≤ 1. Let l(n) be a slowly varying function. Let $\{a_n, -\infty < n < +\infty\}$ be an absolutely summable sequence of real numbers and $\{X_n, n \ge 1\}$ be a moving average process defined as (1.1). Denote $S_m = \sum_{i=1}^{m} X_i$. If

$$\begin{cases}
E|Y|^{p}\, l(|Y|^{1/\alpha}) < \infty, & \text{if } p > 1,\\
E|Y|\, l(|Y|^{1/\alpha}) \log |Y| < \infty, & \text{if } p = 1,\\
E|Y|\, l(|Y|^{1/\alpha}) < \infty, & \text{if } 0 < p < 1,
\end{cases} \tag{3.9}$$

then for all ε > 0 and any v > max{2, (αp − 1)/(α − 1/2)},

$$\sum_{n=1}^{\infty} n^{\alpha p - 2} l(n)\, P\left(\max_{1\le m\le n} |S_m| > \varepsilon\, g^{1/v}(n)\, n^{\alpha}\right) < \infty. \tag{3.10}$$
Proof  Noting that g(n) ≥ 1, similar to the proof of Theorem 3.1 we have

$$\begin{aligned}
I_1^* &\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 1}(\log n)^v \frac{l(n)}{g(n)}\bigl[E|Y|^v I(|Y|\le n^{\alpha q}) + n^{v\alpha q} P(|Y| > n^{\alpha q})\bigr]\\
&\quad + C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2}(\log n)^v l(n)\bigl[EY^2 I(|Y|\le n^{\alpha q}) + n^{2\alpha q} P(|Y| > n^{\alpha q})\bigr]^{v/2}\\
&\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 1}(\log n)^v l(n)\bigl[E|Y|^v I(|Y|\le n^{\alpha q}) + n^{v\alpha q} P(|Y| > n^{\alpha q})\bigr]\\
&\quad + C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2}(\log n)^v l(n)\bigl[EY^2 I(|Y|\le n^{\alpha q}) + n^{2\alpha q} P(|Y| > n^{\alpha q})\bigr]^{v/2}\\
&=: I_{11}^* + I_{12}^*.
\end{aligned}$$
It is easily seen that $I_{11}^* < \infty$, which was proved in (3.7). If p < 2, noting that 0 < q < 1, αp > 1 and v > 2, we have

$$I_{12}^* \le C\sum_{n=1}^{\infty} n^{-1-(v/2-1)(\alpha p-1)}(\log n)^v l(n)\,(E|Y|^p)^{v/2} < \infty.$$

If p ≥ 2, noting that $EY^2 < \infty$ and v > (αp − 1)/(α − 1/2), we have

$$I_{12}^* \le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2}(\log n)^v l(n) = C\sum_{n=1}^{\infty} n^{\alpha p - v(\alpha - 1/2) - 2}(\log n)^v l(n) < \infty.$$
Now we have proved $I_1 < \infty$. Similarly, for $I_2$ we also have

$$\begin{aligned}
I_2^* &\le C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha - 1}\frac{l(n)}{g(n)}\bigl\{E|Y|^v I(|Y|\le 2n^{\alpha}) + n^{v\alpha} P(|Y| > n^{\alpha})\bigr\}\\
&\quad + C\sum_{n=1}^{\infty} n^{\alpha p - v\alpha + v/2 - 2} l(n)\bigl[EY^2 I(|Y|\le 2n^{\alpha}) + n^{2\alpha} P(|Y| > n^{\alpha})\bigr]^{v/2}\\
&=: I_{21}^* + I_{22}^*.
\end{aligned}$$

It is easy to obtain $I_{21}^* < \infty$ by (3.9) and Lemma 2.7. Similar to the proof of $I_{12}^*$, we have $I_{22}^* < \infty$. Completely analogous to the rest of the proof of Theorem 3.1, we can also get $I_3 < \infty$, $I_4 < \infty$ and $I_5 < \infty$. The proof of the theorem is completed.

Acknowledgments The authors are most grateful to the Editor-in-Chief Manuel Lopez–Pellicer and anony-
mous referee for careful reading of the manuscript and valuable suggestions which helped in improving an
earlier version of this paper.

References
1. Ibragimov, I.A.: Some limit theorems for stationary processes. Theory Probab. Appl. 7(4), 349–382 (1962)
2. Burton, R.M., Dehling, H.: Large deviations for some weakly dependent random processes. Stat. Prob.
Lett. 9(5), 397–401 (1990)
3. Li, D.L., Rao, M.B., Wang, X.C.: Complete convergence of moving average processes. Stat. Prob. Lett.
14(2), 111–114 (1992)
4. Zhang, L.X.: Complete convergence of moving average processes under dependence assumptions. Stat.
Prob. Lett. 30(2), 165–170 (1996)
5. Li, Y.X., Zhang, L.X.: Complete moment convergence of moving-average processes under dependence
assumptions. Stat. Prob. Lett. 70(3), 191–197 (2004)
6. Chen, P.Y., Hu, T.C., Volodin, A.: Limiting behaviour of moving average processes under ϕ-mixing
assumption. Stat. Prob. Lett. 79(1), 105–111 (2009)
7. Zhou, X.C.: Complete moment convergence of moving average processes under ϕ-mixing assumptions.
Stat. Prob. Lett. 80(5), 285–292 (2010)
8. Yang, W.Z., Wang, X.J., Ling, N.X., Hu, S.H.: On complete convergence of moving average process for
AANA sequence. Discr. Dyn. Nat. Soc. 2012, Article ID 863931, p. 24 (2012)
9. Qiu, D.H., Chen, P.Y.: Convergence for moving average processes under END set-up. Acta Math. Sci.
35A(4), 756–768 (2015)
10. Wang, K.Y., Wang, Y.B., Gao, Q.W.: Uniform asymptotics for the finite-time ruin probability of a new
dependent risk model with a constant interest rate. Methodol. Comput. Appl. Probab. 15, 109–124 (2013)
11. Wang, Y.B., Cheng, D.Y.: Basic renewal theorems for random walks with widely dependent increments.
J. Math. Anal. Appl. 384, 597–606 (2011)
12. Chen, Y., Wang, L., Wang, Y.B.: Uniform asymptotics for the finite-time ruin probabilities of two kinds
of nonstandard bidimensional risk models. J. Math. Anal. Appl. 401, 114–129 (2013)
13. Shen, A.T.: Bernstein-type inequality for widely dependent sequence and its application to nonparametric
regression models. Abstr. Appl. Anal. 2013, Article ID 862602, p. 9 (2013)
14. Shen, A.T.: On asymptotic approximation of inverse moments for a class of nonnegative random variables.
Stat. J. Theor. Appl. Stat. 48(6), 1371–1379 (2014)
15. Shen, A.T., Yao, M., Wang, W.J., Volodin, A.: Exponential probability inequalities for WNOD random
variables and their applications. RACSAM 110(1), 251–268 (2016)
16. Yang, W.Z., Liu, T.T., Wang, X.J., Hu, S.H.: On the Bahadur representation of sample quantiles for widely
orthant dependent sequences. Filomat 28, 1333–1343 (2014)
17. Wang, X.J., Xu, C., Hu, T.C., Volodin, A., Hu, S.H.: On complete convergence for widely orthant-
dependent random variables and its applications in nonparametric regression models. Test 23(3), 607–629
(2014)
18. Wang, X.J., Hu, S.H.: The consistency of the nearest neighbor estimator of the density function based on
WOD samples. J. Math. Anal. Appl. 429, 497–512 (2015)
19. Wu, Q.Y.: A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent
random variables. J. Inequal. Appl. 2012, Article ID 50, p. 10 (2012)
20. Shen, A.T., Zhang, Y., Volodin, A.: Applications of the Rosenthal-type inequality for negatively super-
additive dependent random variables. Metrika 78, 295–311 (2015)
21. Stout, W.F.: Almost Sure Convergence. Academic Press, New York (1974)
22. Bai, Z.D., Su, C.: The complete convergence for partial sums of i.i.d. random variables. Sci. Chin. Ser. A 28(12), 1261–1277 (1985)
