
On the strong convergence for weighted sums of negatively superadditive dependent random variables.

Bing Meng, Dingcheng Wang, Qunying Wu

Abstract

In this article, some strong convergence results for weighted sums of negatively superadditive dependent random variables are studied without the assumption of identical distribution. The results not only generalize the corresponding results of Cai (Metrika 68:323-331, 2008) and Sung (Stat. Pap. 52:447-454, 2011), but also extend and improve the corresponding result of Chen and Sung (Stat. Probab. Lett. 92:45-52, 2014).

Keywords:  negatively superadditive dependent random variables; strong convergence; weighted sums

Year:  2017        PMID: 29104407      PMCID: PMC5660194          DOI: 10.1186/s13660-017-1530-9

Source DB:  PubMed          Journal:  J Inequal Appl        ISSN: 1025-5834            Impact factor:   2.491


Introduction

Let $\{X_n, n \ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. We first review the definitions of negatively associated (NA) random variables and negatively superadditive dependent (NSD) random variables.

Definition 1.1

A finite family $\{X_i, 1 \le i \le n\}$ of random variables is said to be negatively associated (NA) if, for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1, 2, \ldots, n\}$,
$$\operatorname{Cov}\bigl(f(X_i, i \in A_1),\, g(X_j, j \in A_2)\bigr) \le 0$$
whenever $f$ and $g$ are coordinatewise nondecreasing functions such that this covariance exists. An infinite family of random variables is said to be NA if every finite subfamily is NA.
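As a concrete illustration of Definition 1.1 (our example, not from the paper): the cell counts of a multinomial vector form a classical NA family, since the counts compete for a fixed total and any two of them are negatively correlated, with $\operatorname{Cov}(X_i, X_j) = -N p_i p_j$. A minimal Monte Carlo sketch:

```python
import numpy as np

# Components of a multinomial vector are a classical NA family:
# sampling N items into k cells forces the counts to compete, so any
# two counts are negatively correlated (exact value: -N * p_i * p_j).
rng = np.random.default_rng(0)
N, probs = 20, [0.3, 0.3, 0.4]
counts = rng.multinomial(N, probs, size=200_000)  # shape (200000, 3)

# Monte Carlo estimate of Cov(X_1, X_2); theory gives -N*p1*p2 = -1.8.
est = np.cov(counts[:, 0], counts[:, 1])[0, 1]
print(est)
assert est < 0  # negative association in the pairwise-covariance sense
```

The identity function is coordinatewise nondecreasing, so a negative pairwise covariance is the simplest instance of the defining inequality.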

Definition 1.2

(Kemperman [4]) A function $\phi \colon \mathbb{R}^n \to \mathbb{R}$ is called superadditive if
$$\phi(x \vee y) + \phi(x \wedge y) \ge \phi(x) + \phi(y)$$
for all $x, y \in \mathbb{R}^n$, where ∨ denotes the componentwise maximum and ∧ the componentwise minimum.
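A standard example of a superadditive function (ours, not from the paper) is $\phi(x) = (x_1 + \cdots + x_n)^2$: taking componentwise maxima and minima preserves the total sum, and the convexity of $t \mapsto t^2$ then yields the defining inequality. A quick numerical check:

```python
import numpy as np

# phi(x) = (x_1 + ... + x_n)^2 is superadditive: sum(x v y) + sum(x ^ y)
# equals sum(x) + sum(y), and t -> t^2 is convex, so spreading the two
# totals apart (max/min) can only increase the sum of squares.
def phi(x):
    return x.sum() ** 2

rng = np.random.default_rng(1)
for _ in range(10_000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lhs = phi(np.maximum(x, y)) + phi(np.minimum(x, y))
    rhs = phi(x) + phi(y)
    assert lhs >= rhs - 1e-9  # superadditivity, up to float round-off
print("superadditivity verified on 10000 random pairs")
```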

Definition 1.3

(Hu [5]) A random vector $(X_1, X_2, \ldots, X_n)$ is said to be NSD if
$$E\phi(X_1, X_2, \ldots, X_n) \le E\phi(X_1^*, X_2^*, \ldots, X_n^*), \qquad (1.2)$$
where $X_1^*, X_2^*, \ldots, X_n^*$ are independent such that $X_i^*$ and $X_i$ have the same distribution for each $i$, and $\phi$ is a superadditive function such that the expectations in (1.2) exist. A sequence $\{X_n, n \ge 1\}$ of random variables is said to be NSD if, for every $n \ge 1$, $(X_1, X_2, \ldots, X_n)$ is NSD.

The concept of NA was given by Joag-Dev and Proschan [6], and the concept of NSD was introduced by Hu [5], based on the class of superadditive functions. Hu [5] gave an example illustrating that NSD random variables are not necessarily NA, and left open the problem whether NA implies NSD. Christofides and Vaggelatou [7] solved this open problem and showed that NA does imply NSD. Thus, the NSD assumption is strictly weaker than NA. Because of the wide applicability of NSD random variables, many authors have studied this concept and obtained interesting results and applications; see, for example, [8-13]. Hence, it is of significance to extend the limit properties of NA random variables to the NSD case.

The concept of complete convergence was introduced by Hsu and Robbins [14] as follows. A sequence $\{U_n, n \ge 1\}$ of random variables is said to converge completely to a constant $\lambda$ if, for all $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} P(|U_n - \lambda| > \varepsilon) < \infty.$$
In view of the Borel-Cantelli lemma, complete convergence of $U_n$ to $\lambda$ implies $U_n \to \lambda$ almost surely (a.s.). Therefore, complete convergence is a very important tool in establishing almost sure convergence. The first results concerning complete convergence for normed sums of random variables are due to Hsu and Robbins (1947) [14] and Erdős (1949) [15], and these results have been extended in several directions by many authors; see [16-20], among others. Recently, Cai [1] obtained the following complete convergence result for weighted sums of NA random variables with identical distribution.

Theorem 1.1

Let $\{X, X_n, n \ge 1\}$ be a sequence of NA random variables with identical distribution, and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be a triangular array of constants satisfying $\sum_{i=1}^{n} |a_{ni}|^{\alpha} = O(n)$ for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Furthermore, assume that $EX = 0$ when $1 < \alpha \le 2$. If $E\exp(h|X|^{\gamma}) < \infty$ for some $h > 0$, then, for all $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} \frac{1}{n} P\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} a_{ni}X_i\Bigr| > \varepsilon b_n\Bigr) < \infty. \qquad (1.5)$$

Sung [2] extended the result of Cai [1] under a much weaker moment condition and obtained the following strong convergence results.

Theorem 1.2

Let $\{X, X_n, n \ge 1\}$ be a sequence of NA random variables with identical distribution, and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants such that $\sum_{i=1}^{n} |a_{ni}|^{\alpha} = O(n)$ for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Furthermore, suppose that $EX = 0$ when $1 < \alpha \le 2$. Then:

(i) If $\alpha > \gamma$, then $E|X|^{\alpha} < \infty$ implies (1.5).
(ii) If $\alpha = \gamma$, then $E|X|^{\alpha}\log(1+|X|) < \infty$ implies (1.5).
(iii) If $\alpha < \gamma$, then $E|X|^{\gamma} < \infty$ implies (1.5).

In the case $\alpha = \gamma$, Chen and Sung [3] studied the complete convergence for weighted sums of NA random variables under the moment condition $E|X|^{\alpha} < \infty$, which is weaker than that of Theorem 1.2. Li et al. [21] extended and improved the result of Chen and Sung [3] to $\rho^*$-mixing random variables. Motivated by the above results of Cai [1], Sung [2], and Chen and Sung [3], in this paper we further study the complete convergence for weighted sums of NSD random variables. Some complete convergence results for the maximum weighted sums of NSD random variables are obtained without the assumption of identical distribution. As an application, the Marcinkiewicz-Zygmund type strong law of large numbers for weighted sums of NSD random variables is obtained. Our results not only generalize the corresponding ones of Cai [1] and Sung [2], but they also extend and improve the corresponding one of Chen and Sung [3].

Preliminaries

Throughout this paper, $C$ represents a generic positive constant whose value may change from one appearance to the next, and $a_n = O(b_n)$ means $a_n \le C b_n$. Let $I(A)$ be the indicator function of the set $A$.

Definition 2.1

A sequence $\{X_n, n \ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that
$$P(|X_n| > x) \le C P(|X| > x)$$
for all $x \ge 0$ and $n \ge 1$. In order to prove our main results, we introduce the following lemmas.
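A deterministic sketch of Definition 2.1 (an assumed setup of ours, not from the paper): with $X \sim \mathrm{Exp}(1)$, the scaled variables $X_n = X/(1 + 1/n)$ have tails $P(X_n > x) = e^{-(1+1/n)x} \le e^{-x} = P(X > x)$, so the sequence is stochastically dominated by $X$ with constant $C = 1$.

```python
import math

# Tail functions of X ~ Exp(1) and of the scaled variables X_n = X/(1+1/n).
def tail_X(x):
    return math.exp(-x)

def tail_Xn(x, n):
    return math.exp(-(1 + 1 / n) * x)

# Check P(|X_n| > x) <= C * P(|X| > x) with C = 1 on a grid of x and n.
for n in range(1, 50):
    for x in [0.0, 0.1, 0.5, 1.0, 2.0, 5.0]:
        assert tail_Xn(x, n) <= 1.0 * tail_X(x)
print("tail domination verified")
```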

Lemma 2.1

(Hu [5]) If $(X_1, X_2, \ldots, X_n)$ is NSD and $f_1, f_2, \ldots, f_n$ are nondecreasing functions, then $(f_1(X_1), f_2(X_2), \ldots, f_n(X_n))$ is NSD.

Lemma 2.2

(Wang et al. [11]) Let $p > 1$ and let $\{X_n, n \ge 1\}$ be a sequence of NSD random variables with $EX_n = 0$ and $E|X_n|^p < \infty$ for every $n \ge 1$. Then there exists a positive constant $C_p$ depending only on $p$ such that, for every $n \ge 1$,
$$E\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr|^p\Bigr) \le C_p \sum_{i=1}^{n} E|X_i|^p \quad \text{for } 1 < p \le 2,$$
and, for $p > 2$,
$$E\Bigl(\max_{1 \le j \le n}\Bigl|\sum_{i=1}^{j} X_i\Bigr|^p\Bigr) \le C_p\Bigl\{\sum_{i=1}^{n} E|X_i|^p + \Bigl(\sum_{i=1}^{n} EX_i^2\Bigr)^{p/2}\Bigr\}.$$
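A Monte Carlo sketch of the $p > 2$ bound (our illustration: independent variables are NA, hence NSD, so the lemma applies to them). With $p = 4$ and i.i.d. Rademacher $X_i$, we have $E|X_i|^4 = EX_i^2 = 1$, so the right-hand bracket is $n + n^{p/2} = n + n^2$, and the ratio of the two sides should stay bounded by a constant $C_p$.

```python
import numpy as np

# Estimate E max_j |S_j|^p for i.i.d. Rademacher steps and compare with
# the Rosenthal-type bracket sum E|X_i|^p + (sum E X_i^2)^{p/2}.
rng = np.random.default_rng(2)
n, p, reps = 50, 4, 50_000
X = rng.choice([-1.0, 1.0], size=(reps, n))
S = np.cumsum(X, axis=1)                       # partial sums S_1..S_n
lhs = np.mean(np.max(np.abs(S), axis=1) ** p)  # E max_j |S_j|^p
bracket = n + n ** (p / 2)                     # here: n + n^2
ratio = lhs / bracket
print(ratio)
assert 0 < ratio < 15  # consistent with a moderate constant C_p
```

The simulation only illustrates that a finite $C_p$ is plausible here; it is not a proof, and the true constant in the lemma is not computed by the paper.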

Lemma 2.3

(Sung [22]) Let $X$ be a random variable and let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying $\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le Cn$ for some $0 < \alpha \le 2$. Let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$. Then
$$\sum_{n=1}^{\infty} \frac{1}{n} \sum_{i=1}^{n} P(|a_{ni}X| > b_n) \le
\begin{cases}
C E|X|^{\alpha}, & \alpha > \gamma, \\
C E|X|^{\alpha}\log(1+|X|), & \alpha = \gamma, \\
C E|X|^{\gamma}, & \alpha < \gamma.
\end{cases}$$

Lemma 2.4

(Sung [22]) Let X be a random variable and be an array of constants satisfying or , and for some . Let . If , then

Lemma 2.5

(Wu [23]) Let $\{X_n, n \ge 1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$. For any $\alpha > 0$ and $b > 0$, the following two statements hold:
$$E|X_n|^{\alpha} I(|X_n| \le b) \le C_1\bigl[E|X|^{\alpha} I(|X| \le b) + b^{\alpha} P(|X| > b)\bigr],$$
$$E|X_n|^{\alpha} I(|X_n| > b) \le C_2 E|X|^{\alpha} I(|X| > b),$$
where $C_1$ and $C_2$ are positive constants.
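A deterministic check of the first bound in Lemma 2.5 (assumed setup of ours, not from the paper): take $X \sim \mathrm{Exp}(1)$ and $X_n = X/2$, which is dominated by $X$ with $C = 1$. Closed-form exponential moments give $E X^2 I(X \le b) = 2 - e^{-b}(b^2 + 2b + 2)$, so both sides of the inequality can be evaluated exactly on a grid of truncation levels $b$.

```python
import math

# E X^2 I(X <= b) for X ~ Exp(1): integral_0^b x^2 e^{-x} dx in closed form.
def trunc_second_moment_exp(b):
    return 2 - math.exp(-b) * (b * b + 2 * b + 2)

# Check  E X_n^2 I(X_n <= b)  <=  E X^2 I(X <= b) + b^2 P(X > b)
# for X_n = X/2 (note E (X/2)^2 I(X/2 <= b) = 0.25 * E X^2 I(X <= 2b)).
for b in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
    lhs = 0.25 * trunc_second_moment_exp(2 * b)
    rhs = trunc_second_moment_exp(b) + b * b * math.exp(-b)
    assert lhs <= rhs + 1e-12  # here the constant C_1 = 1 already suffices
print("truncated-moment bound verified")
```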

Main results and proofs

Now we state and prove our main results.

Theorem 3.1

Let $\{X_n, n \ge 1\}$ be a sequence of NSD random variables which is stochastically dominated by a random variable $X$, and let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$ and $0 < \alpha \le 2$. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying $\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le Cn$. Assume further that $EX_n = 0$ when $1 < \alpha \le 2$. Then:

(i) If $\alpha > \gamma$, then $E|X|^{\alpha} < \infty$ implies (1.5).
(ii) If $\alpha < \gamma$, then $E|X|^{\gamma} < \infty$ implies (1.5).

Theorem 3.2

Let $\{X_n, n \ge 1\}$ be a sequence of NSD random variables which is stochastically dominated by a random variable $X$, and let $b_n = n^{1/\alpha}(\log n)^{1/\gamma}$ for some $\gamma > 0$ and $0 < \alpha \le 2$. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of constants satisfying $\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le Cn$. Assume further that $EX_n = 0$ when $1 < \alpha \le 2$. If $\alpha = \gamma$, then $E|X|^{\alpha} < \infty$ implies (1.5).

Remark 3.1

In Theorem 3.1 and Theorem 3.2, we use methods different from those of Sung [2] and of Chen and Sung [3], and we obtain strong convergence results for weighted sums of NSD random variables without the assumption of identical distribution. The theorems not only extend the corresponding results of Cai [1], Sung [2], and Chen and Sung [3] to the case of NSD random variables, but they also improve them.
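A simulation sketch of the normalized maximum weighted sum controlled by (1.5) (illustrative setup of ours, not from the paper): independent $N(0,1)$ variables are NA, hence NSD, and the weights $a_{ni} = 1$ satisfy $\sum_{i=1}^{n} |a_{ni}|^{\alpha} \le Cn$. With $\alpha = 2$ and $\gamma = 1$, $b_n = n^{1/2}\log n$, and $\max_{1 \le j \le n}|\sum_{i \le j} a_{ni}X_i|/b_n$ should be small for large $n$, consistent with the Marcinkiewicz-Zygmund-type strong law.

```python
import numpy as np

# Normalized maximum partial sum max_j |S_j| / b_n with b_n = n^{1/alpha} (log n)^{1/gamma}.
rng = np.random.default_rng(3)
alpha, gamma = 2.0, 1.0

def normalized_max(n):
    X = rng.normal(size=n)                       # independent => NA => NSD
    b_n = n ** (1 / alpha) * np.log(n) ** (1 / gamma)
    return np.max(np.abs(np.cumsum(X))) / b_n    # weights a_ni = 1

small, large = normalized_max(100), normalized_max(100_000)
print(small, large)
assert large < 1.0  # far below the threshold scale at large n
```

One run of a single trajectory cannot verify the summability in (1.5); it only illustrates the scale $b_n$ against which the maximum weighted sum is measured.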

Proof of Theorem 3.1

Without loss of generality, we suppose that . For , define It is easy to check that, for all , which implies that

Firstly, we prove that If , then by , Lemma 2.5, Definition 2.1, the inequality, the Markov inequality and the Hölder inequality, we get If , then by Lemma 2.5, Definition 2.1, the inequality and the Markov inequality, we get again It immediately follows from (3.4) and (3.5) that (3.3) holds. Hence, for n large enough,

Then, to prove (1.5), it suffices to prove that and By Lemma 2.3, we can easily obtain

For fixed , it is easily seen that is still a sequence of NSD random variables by Lemma 2.1. Hence, for , it follows from Lemma 2.2, the Markov inequality and the Jensen inequality that

Firstly, we prove . By Lemma 2.3, we obtain Actually, by Lemma 2.3, we can directly obtain . Hence, we only need to prove in the following two cases.

(i) If , take ; then by and it follows that

(ii) If , we need to divide into three subsets: , and , where . Then we obtain Obviously, by Lemma 2.4, we directly obtain . It follows from and that It follows from , and for , that Therefore, by (3.11)-(3.15), we can see that .

Finally, we prove . Actually, take ; then by Lemma 2.5, the Markov inequality and , we get

Thus, the proof of Theorem 3.1 is completed. □

Proof of Theorem 3.2

Without loss of generality, we suppose that . For , define It is easy to check that, for all , which implies that

To prove (1.5), it suffices to show that and

We first prove (3.18). Note that By the Markov inequality, we get for any and It is easy to show that and Then (3.18) holds by (3.20)-(3.24).

Now we prove (3.19) in the following two cases.

(i) If , similar to the proof of (3.18), we have Note that for any and By the Markov inequality, the inequality and (3.23)-(3.27), we obtain

(ii) If , we first prove that By , we have Since and we have and Thus, (3.29) holds by (3.30)-(3.34). Therefore, we only need to prove that Actually, by the Markov inequality, Lemma 2.2, Lemma 2.5, (3.18) and (3.28), we get

Thus, the proof of Theorem 3.2 is completed. □

Conclusions

In this paper, we use methods different from those of Sung [2] and of Chen and Sung [3], and we obtain strong convergence results for weighted sums of NSD random variables without the assumption of identical distribution. Our results extend and improve the corresponding results of Cai [1], Sung [2], and Chen and Sung [3] to the case of NSD random variables.
Reference (resolved in this record):

1. Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25-31 (1947)
