Sums of random vectors
- Research Article
8
- 10.1023/a:1013851720494
- Jan 1, 2002
- Journal of Theoretical Probability
Let $(X_n,\ n\ge 1)$ be a sequence of independent centered random vectors in $\mathbf{R}^d$. We study the law of the iterated logarithm $\limsup_{n\to\infty}(2\log\log\|B_n\|)^{-1/2}\,\|B_n^{-1/2}S_n\|=1$ a.s., where $B_n$ is the covariance matrix of $S_n=\sum_{i=1}^{n}X_i$, $n\ge 1$. Application to matrix-normalized sums of independent random vectors is given.
- Research Article
13
- 10.1023/a:1014533500973
- May 1, 2002
- Journal of Mathematical Sciences
We establish a sufficient condition for an almost sure limit theorem for sums of independent random vectors under minimal moment conditions and assumptions on normalizing sequences. We provide an example showing that our condition is close to the optimal one, as well as a related sufficient condition due to Berkes and Dehling. Bibliography: 5 titles.
- Research Article
- 10.1016/s0167-7152(02)00121-9
- May 28, 2002
- Statistics and Probability Letters
Asymptotic expansions of densities of sums of random vectors without third moment
- Research Article
1
- 10.1137/s0040585x97978993
- Jan 1, 2002
- Theory of Probability & Its Applications
This paper proves the law of the iterated logarithm for sums of random vectors in $\mathbf{R}^k$ normed by linear operators of general type. We suppose that the sequence of the random vectors satisfies Strassen's invariance principle.
- Research Article
31
- 10.1137/s0040585x97978026
- Jan 1, 2001
- Theory of Probability & Its Applications
This paper is a continuation of [A. A. Borovkov and A. A. Mogulskii, Theory Probab. Appl., 43 (1998), pp. 1--12] and [A. A. Borovkov and A. A. Mogulskii, Siberian Math. J., 37 (1996), pp. 647--682]. Let $S(n)=\xi(1)+\cdots +\xi(n)$ be the sum of independent nondegenerate random vectors in $\mathbf{R}^d$ having the same distribution as a random vector~$\xi$. It is assumed that $\varphi(\lambda)= \mathbf{E} \,e^{\langle\lambda,\xi\rangle}$ is finite in a vicinity of a point ${\lambda \in \mathbf{R}^d}$. We obtain asymptotic representations for the probability $\mathbf{P}\{S(n)\in \Delta (x)\}$ and the renewal function $H(\Delta (x))= \sum_{n=1}^{\infty}\mathbf{P}\{S(n)\in \Delta (x)\}$, where $\Delta(x)$ is a cube in $\mathbf{R}^d$ with a vertex at point x and the edge length~$\Delta$. In contrast to the above-mentioned papers, the obtained results are valid, in essence, either without any additional assumptions or under very weak restrictions.
- Research Article
3
- 10.1137/1128077
- Jan 1, 1984
- Theory of Probability & Its Applications
Sums of Random Vectors with Values in a Hilbert Space, E. R. Vvedenskaya, pp. 797–800.
- Research Article
5
- 10.1137/1128029
- Jan 1, 1984
- Theory of Probability & Its Applications
On Sums of Random Vectors with Values in Hilbert Space, Yu. V. Prokhorov, pp. 375–379.
- Book Chapter
3
- 10.1017/cbo9780511608711.064
- Aug 19, 1993
A summary is not available for this content.
- Book Chapter
- 10.1017/cbo9781107049949.082
- Jan 28, 1988
Sums of random vectors
- Research Article
4
- 10.19086/da.3108
- Jan 1, 2018
- Discrete Analysis
Balancing sums of random vectors, Discrete Analysis 2018:4, 16 pp. Imagine that you are working at a small firm, and it is your job to assign incoming tasks, which are sporadic and unpredictable, in a fair way to the firm's employees. Fairness can be measured in several ways: different tasks take different amounts of time, require different amounts of effort, are not equally safe, and offer different rewards. Your aim is to be as fair as possible in all respects. One can model this problem as follows. Each task is a random vector, which represents its various attributes. The random vectors are organized into a sequence, and your task is to partition the vectors into cells, one for each employee. You must choose the cell for each vector as it arrives (and in particular before seeing later vectors), and you wish to do so in such a way that the sums of the vectors in the different cells remain as close to each other as possible. This model applies to many other task-scheduling problems: an important one is directing incoming computational tasks when one has a number of servers. This abstract question is (when suitably formulated) the focus of this paper. The authors contrast it with various other problems that have been looked at. There are three features of the problem that can be changed independently. One possible change is to make the problem deterministic: that is, the sequence of vectors is no longer random, and one is looking for a worst-case result rather than an average-case result. Another is to make the strategy prescient, meaning that one gets to see the entire sequence before deciding how to partition the vectors. A third is to restrict to the one-dimensional case. Some of the resulting variants have long and interesting histories. The authors show that each of these features makes a significant difference. 
In particular, if the random vectors are unit vectors, then there is an easy strategy in the one-dimensional case to keep the sums roughly equal, which is to add any negative vector to the cell with the largest sum and any positive vector to the cell with the smallest sum. However, as soon as the dimension is higher than 1, the level of imbalance must tend to infinity: one of the results of the paper is, roughly speaking, that it must grow at a rate of at least $(\log t/\log\log t)^{1/2}$. (The authors measure the imbalance using the Euclidean norm. That is, they try to minimize the maximum up to time $t$ of the maximum Euclidean distance between the sums of any two cells.) This is then matched by upper bounds that show that the lower bound is sharp or close to sharp under various circumstances. One of the strategies the authors consider is a direct generalization of the simple one-dimensional strategy just mentioned. When a vector $w$ comes in, one looks at the totals $v_1,\dots,v_k$ and assigns $w$ to the cell $i$ for which $\langle v_i,w\rangle$ is minimal. This strategy performs very well. Indeed, for probability distributions that are sufficiently well-behaved, it gives an upper bound that matches the lower bound to within a constant, and for all distributions it achieves a growth rate of around $(\log t)^{1/2}$. Somewhat surprisingly, there are distributions that show that this bound is almost tight as well: for every increasing function $\omega$ that tends to infinity, one can find a distribution that gives a growth rate of $(\log t)^{1/2}/\omega(t)$. The paper ends with a number of interesting open problems.
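The greedy strategy described above (assign the incoming vector $w$ to the cell whose running total has minimal inner product with $w$) is simple to sketch in code. The following is an illustrative implementation; the function names and tie-breaking rule are our own choices, not the authors':

```python
import numpy as np

def balance_online(vectors, k):
    """Greedy online balancing: assign each incoming vector w to the
    cell i whose running sum v_i minimizes the inner product <v_i, w>.
    Ties are broken by lowest cell index (our choice)."""
    d = len(vectors[0])
    sums = np.zeros((k, d))
    assignment = []
    for w in vectors:
        i = int(np.argmin(sums @ w))  # cell with minimal <v_i, w>
        sums[i] += w
        assignment.append(i)
    return sums, assignment

def imbalance(sums):
    """Maximum Euclidean distance between the sums of any two cells."""
    k = len(sums)
    return max(np.linalg.norm(sums[i] - sums[j])
               for i in range(k) for j in range(k))
```

On, say, a thousand random unit vectors in the plane split into two cells, the resulting imbalance typically stays small, in line with the $(\log t)^{1/2}$-type growth discussed above.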
- Research Article
1
- 10.1023/a:1003226611953
- Dec 1, 1997
- Annals of the Institute of Statistical Mathematics
Existence and nonexistence for moments of limiting random vectors of normalized, lightly trimmed sums of random vectors in the generalized domain of normal attraction of non-Gaussian operator-stable laws are studied. The idea of representing the limiting random vectors by infinite series is essentially used in the proofs.
- Research Article
19
- 10.1137/s0040585x97985182
- Jan 1, 2012
- Theory of Probability & Its Applications
We obtain analogues of the well-known Chebyshev's exponential inequality ${\bf P}(\xi \ge x)\le e^{-\Lambda^{(\xi)}(x)}$, $x>{\bf E}\,\xi,$ for the distribution of a random variable $\xi$, where $\Lambda^{(\xi)}(x):=\sup_\lambda\{\lambda x- \log {\bf E}\,e^{\lambda \xi}\}$ is the large deviation rate function for $\xi$. Generalizations of this relation are established for multivariate random vectors $\xi$, for sums of the vectors, and for trajectories of random processes associated with such sums.
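For a standard normal $\xi$ one has $\log {\bf E}\,e^{\lambda\xi}=\lambda^2/2$ and hence $\Lambda^{(\xi)}(x)=x^2/2$, so the inequality can be checked concretely. A minimal numerical sketch, with a grid search over $\lambda$ and function names of our own choosing:

```python
import math
import random

def rate_function(x, log_mgf, lam_max=10.0, steps=2000):
    """Evaluate the large deviation rate function
    Lambda(x) = sup_lambda { lambda * x - log E exp(lambda * xi) }
    by a grid search over lambda in [0, lam_max]."""
    return max((lam_max * k / steps) * x - log_mgf(lam_max * k / steps)
               for k in range(steps + 1))

# For a standard normal xi, log E exp(lambda * xi) = lambda^2 / 2,
# so Lambda(x) = x^2 / 2 and the bound reads P(xi >= x) <= exp(-x^2 / 2).
log_mgf_normal = lambda lam: lam * lam / 2.0
```

A Monte Carlo estimate of $\mathbf{P}(\xi\ge 2)\approx 0.023$ then sits comfortably below the bound $e^{-\Lambda^{(\xi)}(2)}=e^{-2}\approx 0.135$.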
- Research Article
310
- 10.1214/13-aos1161
- Dec 1, 2013
- The Annals of Statistics
We derive a Gaussian approximation result for the maximum of a sum of high-dimensional random vectors. Specifically, we establish conditions under which the distribution of the maximum is approximated by that of the maximum of a sum of the Gaussian random vectors with the same covariance matrices as the original vectors. This result applies when the dimension of random vectors ($p$) is large compared to the sample size ($n$); in fact, $p$ can be much larger than $n$, without restricting correlations of the coordinates of these vectors. We also show that the distribution of the maximum of a sum of the random vectors with unknown covariance matrices can be consistently estimated by the distribution of the maximum of a sum of the conditional Gaussian random vectors obtained by multiplying the original vectors with i.i.d. Gaussian multipliers. This is the Gaussian multiplier (or wild) bootstrap procedure. Here too, $p$ can be large or even much larger than $n$. These distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation to the distribution of the original maximum, often with approximation error decreasing polynomially in the sample size, and hence are of interest in many applications. We demonstrate how our Gaussian approximations and the multiplier bootstrap can be used for modern high-dimensional estimation, multiple hypothesis testing, and adaptive specification testing. All these results contain nonasymptotic bounds on approximation errors.
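The multiplier-bootstrap step described in the abstract (multiply each observation by an i.i.d. standard normal, sum, and take the coordinate-wise maximum, conditional on the data) can be sketched as follows; the interface and names are illustrative, not the authors' code:

```python
import numpy as np

def multiplier_bootstrap_max(X, n_boot=1000, rng=None):
    """Gaussian multiplier (wild) bootstrap draws of the statistic
    max_j n^{-1/2} * sum_i e_i * X[i, j], with e_i i.i.d. N(0, 1),
    conditional on the data X (an (n, p) array of centered rows)."""
    rng = np.random.default_rng() if rng is None else rng
    n, _ = X.shape
    E = rng.standard_normal((n_boot, n))  # one row of multipliers per draw
    draws = (E @ X) / np.sqrt(n)          # (n_boot, p) multiplier sums
    return draws.max(axis=1)              # coordinate-wise maximum
```

Empirical quantiles of the returned draws, e.g. `np.quantile(draws, 0.95)`, then serve as data-driven critical values for the maximum of the original normalized sum, which is how the procedure is used in the testing applications the abstract mentions.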
- Single Report
465
- 10.1920/wp.cem.2013.7613
- Dec 30, 2013
We derive a Gaussian approximation result for the maximum of a sum of high-dimensional random vectors. Specifically, we establish conditions under which the distribution of the maximum is approximated by that of the maximum of a sum of the Gaussian random vectors with the same covariance matrices as the original vectors. This result applies when the dimension of random vectors ($p$) is large compared to the sample size ($n$); in fact, $p$ can be much larger than $n$, without restricting correlations of the coordinates of these vectors. We also show that the distribution of the maximum of a sum of the random vectors with unknown covariance matrices can be consistently estimated by the distribution of the maximum of a sum of the conditional Gaussian random vectors obtained by multiplying the original vectors with i.i.d. Gaussian multipliers. This is the Gaussian multiplier (or wild) bootstrap procedure. Here too, $p$ can be large or even much larger than $n$. These distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation to the distribution of the original maximum, often with approximation error decreasing polynomially in the sample size, and hence are of interest in many applications. We demonstrate how our Gaussian approximations and the multiplier bootstrap can be used for modern high-dimensional estimation, multiple hypothesis testing, and adaptive specification testing. All these results contain nonasymptotic bounds on approximation errors.
- Single Report
4
- 10.1920/wp.cem.2012.4512
- Dec 18, 2012
We derive a central limit theorem for the maximum of a sum of high dimensional random vectors. More precisely, we establish conditions under which the distribution of the maximum is approximated by the maximum of a sum of the Gaussian random vectors with the same covariance matrices as the original vectors. The key innovation of our result is that it applies even if the dimension of random vectors (p) is much larger than the sample size (n). In fact, the growth of p could be exponential in some fractional power of n. We also show that the distribution of the maximum of a sum of the Gaussian random vectors with unknown covariance matrices can be estimated by the distribution of the maximum of the (conditional) Gaussian process obtained by multiplying the original vectors with i.i.d. Gaussian multipliers. We call this procedure the “multiplier bootstrap”. Here too, the growth of p could be exponential in some fractional power of n. We prove that our distributional approximations, either Gaussian or conditional Gaussian, yield a high-quality approximation for the distribution of the original maximum, often with at most a polynomial approximation error. These results are of interest in numerous econometric and statistical applications. In particular, we demonstrate how our central limit theorem and the multiplier bootstrap can be used for high dimensional estimation, multiple hypothesis testing, and adaptive specification testing. All of our results contain non-asymptotic bounds on approximation errors.