Distribution of the Estimators for Autoregressive Time Series with a Unit Root
Abstract Let $n$ observations $Y_1, Y_2, \ldots, Y_n$ be generated by the model $Y_t = \rho Y_{t-1} + e_t$, where $Y_0$ is a fixed constant and $\{e_t\}_{t=1}^n$ is a sequence of independent normal random variables with mean 0 and variance $\sigma^2$. Properties of the regression estimator of $\rho$ are obtained under the assumption that $\rho = \pm 1$. Representations for the limit distributions of the estimator of $\rho$ and of the regression $t$ test are derived. The estimator of $\rho$ and the regression $t$ test furnish methods of testing the hypothesis that $\rho = 1$.
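As a concrete illustration (not from the paper), the following sketch simulates the unit-root case $\rho = 1$ and computes the regression estimator $\hat\rho$ and the regression $t$ statistic for $H_0\colon \rho = 1$; the sample size, seed, and $\sigma$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 1.0

# Generate Y_t = rho * Y_{t-1} + e_t with rho = 1 (unit root), Y_0 = 0.
e = rng.normal(0.0, sigma, size=n)
Y = np.zeros(n + 1)
for t in range(1, n + 1):
    Y[t] = Y[t - 1] + e[t - 1]

# Regression estimator of rho: least squares of Y_t on Y_{t-1} (no intercept).
y_lag, y_cur = Y[:-1], Y[1:]
rho_hat = np.sum(y_lag * y_cur) / np.sum(y_lag ** 2)

# Regression t statistic for H0: rho = 1.
resid = y_cur - rho_hat * y_lag
s2 = np.sum(resid ** 2) / (n - 1)
se = np.sqrt(s2 / np.sum(y_lag ** 2))
t_stat = (rho_hat - 1.0) / se

print(f"rho_hat = {rho_hat:.4f}, t = {t_stat:.3f}")
```

Under the unit root, $\hat\rho$ clusters just below 1 and the $t$ statistic follows the nonstandard limit law studied in the paper, not a Student $t$ distribution.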
- Research Article
16
- 10.1214/aop/1023481105
- Oct 1, 1997
- The Annals of Probability
Let $\{X_n; n\geq 0\}$ be a sequence of random variables. We consider its geometrically weighted series $\xi(\beta)=\sum_{n=0}^\infty \beta^n X_n$ for $0<\beta < 1$. This paper proves that $\xi (\beta)$ can be approximated by $\sum_{n=0}^\infty \beta^n Y_n$ under some suitable conditions, where $\{Y_n; n \geq 0\}$ is a sequence of independent normal random variables. Applications to the law of the iterated logarithm for $\xi(\beta)$ are also discussed.
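A minimal numerical sketch of the object studied, assuming iid standard normal $X_n$ (one simple case); the series is truncated once $\beta^n$ is negligible.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.9

# Truncate the series sum_{n>=0} beta^n X_n once beta^n is negligible.
N = int(np.ceil(np.log(1e-12) / np.log(beta)))
X = rng.normal(size=N)
xi = np.sum(beta ** np.arange(N) * X)
print(f"truncated xi(beta) with {N} terms: {xi:.4f}")
```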
- Research Article
9
- 10.1090/s0002-9939-98-04254-3
- Jan 1, 1998
- Proceedings of the American Mathematical Society
Let $X_1, X_2, \ldots, X_{2n}$ be a finite exchangeable sequence of Banach space valued random variables, i.e., a sequence such that all joint distributions are invariant under permutations of the variables. We prove that there is an absolute constant $c$ such that if $S_j = \sum_{i=1}^j X_i$, then $P(\sup_{j \leq n} \|S_j\| > \lambda) \leq c\, P(\|S_n\| > \lambda/c)$ for all $\lambda \geq 0$. This generalizes an inequality of Montgomery-Smith and Latala for independent and identically distributed random variables. Our maximal inequality is apparently new even if $X_1, X_2, \ldots$ is an infinite exchangeable sequence of random variables. As a corollary of our result, we obtain a comparison inequality for tail probabilities of sums of arbitrary random variables over random subsets of the indices. Montgomery-Smith [8] and Latala [7] have independently proved that if $X_1, \ldots, X_n$ are independent and identically distributed Banach space valued random variables, then (1) $P(\sup_{1 \leq k \leq n} \|\sum_{i=1}^k X_i\| > \lambda) \leq c\, P(\|\sum_{i=1}^n X_i\| > \lambda/c)$ for all $\lambda \geq 0$ and $1 \leq k \leq n$, where $c$ is an absolute constant. It is obvious that this cannot hold for arbitrary independent random variables; as Montgomery-Smith [8] notes, we need only let $k = n = 2$, $X_1 \equiv 1$ and $X_2 \equiv -1$ to see this. Lévy's inequality says that (1) also holds for arbitrary independent symmetric random variables $X_i$ (not necessarily identically distributed). For positive random variables, (1) is trivial, of course. A natural and much-studied extension of the concept of independent and identically distributed random variables is that of exchangeable random variables. We say that a finite sequence $X_1, \ldots, X_n$ of (not necessarily independent) random variables is exchangeable if the $n$-tuples $(X_1, \ldots, X_n)$ and $(X_{\pi(1)}, \ldots, X_{\pi(n)})$ have the same distribution whenever $\pi$ is a permutation of $[n] = \{1, \ldots, n\}$. Evidently an exchangeable sequence of independent random variables is precisely a sequence of independent and identically distributed random variables.
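A quick Monte Carlo sanity check of inequality (1) in the iid real-valued case (a sketch, not the paper's argument); the trial constant $c$, sample size, and threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, lam, c = 50, 20000, 10.0, 3.0   # trial constant c is arbitrary

X = rng.normal(size=(reps, n))           # iid: a special exchangeable case
S = np.cumsum(X, axis=1)                 # partial sums S_1, ..., S_n
p_max = np.mean(np.abs(S).max(axis=1) > lam)
p_end = np.mean(np.abs(S[:, -1]) > lam / c)
print(f"P(max|S_j|>lam) = {p_max:.4f} <= c*P(|S_n|>lam/c) = {c * p_end:.4f}")
```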
- Book Chapter
1
- 10.1016/b978-0-12-095710-1.50010-3
- Jan 1, 1986
- Random Polynomials
CHAPTER 4 - The Number and Expected Number of Real Zeros of Random Algebraic Polynomials
- Research Article
17
- 10.1214/aoms/1177698131
- Oct 1, 1968
- The Annals of Mathematical Statistics
In [3], we began a study of convergence of quadratic forms in independent random variables. Simultaneously, Fan Dyk Tin and G. E. Silov [2] initiated their study of this problem, but restricted to the case of quadratic mean convergence and normal variables. Our aim in this paper is to consider carefully the problem of almost sure convergence (convergence with probability one). Several of our results will generalize well known theorems for series of independent random variables. We shall assume throughout that $X_1, X_2, \ldots$ is a sequence of independent real random variables with $E(X_k) = 0$ and $E(X_k^2) = 1$, $k = 1, 2, \ldots$. Note that we do not assume that the $X_k$'s are identically distributed or place conditions on the higher moments. Let $(a_{jk})$, $j, k = 1, 2, \ldots$, be a real (not necessarily symmetric) matrix and let …
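An illustrative sketch of almost sure convergence of the partial sums of such a quadratic form; the rapidly decaying matrix $a_{jk} = 2^{-(j+k)}$ and normal $X_k$ are assumptions chosen for demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30

# Independent X_k with E(X_k) = 0, E(X_k^2) = 1 (here: standard normal).
X = rng.normal(size=N)
j = np.arange(1, N + 1)
A = 2.0 ** -(j[:, None] + j[None, :])   # summable matrix, chosen for the demo

# Partial sums Q_m = sum_{j,k <= m} a_{jk} X_j X_k settle down quickly.
for m in (5, 10, 20, 30):
    Q = X[:m] @ A[:m, :m] @ X[:m]
    print(f"Q_{m} = {Q:.6f}")
```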
- Research Article
8
- 10.2307/2288939
- Mar 1, 1988
- Journal of the American Statistical Association
A test of homogeneity is derived for a general sampling model. The alternative hypothesis is a mixture over a parameter of the sampling model, centered at the null hypothesis. The test statistic is derived through a score test. Its functional form is independent of the particular mixing distribution. Suppose $Y_i$ ($i = 1, \ldots, n$) are independent random variables with respective probability density (or mass) functions $f_i(y_i \mid \lambda_i)$. Under the null hypothesis, suppose all the $\lambda_i$ are homogeneous and equal to the common value $\lambda_0$. Under the alternative hypothesis, suppose the $\lambda_i$ behave as a random sample from a distribution with mean $\lambda_0$ and finite third moment. The score statistic $S$ for testing these hypotheses rejects the null for large values of $S$, evaluated at the maximum likelihood estimate of $\lambda_0$ under the null hypothesis. When the sample size $n$ is large, $S$ is normally distributed under both the null hypothesis and a sequence of alternative hypotheses in which the variance of the mixing distribution tends to 0 at rate $n^{-1/2}$. For example, the null hypothesis of Poisson observations $y_i$ is rejected in favor of a mixture of Poissons for large values of the corresponding statistic. In addition, suppose the $Y_i$ are independent normal random variables with respective means $\alpha + \beta x_i$ and variance $\sigma^2$. The null hypothesis of a constant slope $\beta$ is rejected in favor of a mixture of slopes for large values of the statistic computed from the usual maximum likelihood estimates of $\alpha$, $\beta$, and $\sigma$.
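Since the abstract's displayed formulas did not survive extraction, the sketch below uses the standard score-type overdispersion statistic for Poisson against mixed-Poisson alternatives; treating this as the paper's exact statistic is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Simulate under the null: iid Poisson observations.
lam0 = 3.0
y = rng.poisson(lam0, size=n)

# Score-type overdispersion statistic for Poisson vs. mixed Poisson
# (a standard form; the paper's exact display was lost in extraction).
lam_hat = y.mean()                       # MLE of lambda under the null
S = np.sum((y - lam_hat) ** 2 - y) / (lam_hat * np.sqrt(2.0 * n))
print(f"S = {S:.3f} (approximately N(0,1) under the null for large n)")
```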
- Research Article
10
- 10.1080/03610929008830255
- Jan 1, 1990
- Communications in Statistics - Theory and Methods
For the first-order bilinear time series model, where $\{e_t\}$ is a sequence of independent normal random variables with mean 0 and variance $\sigma^2$, the asymptotic distribution of the sample autocorrelation function is obtained and shown to follow a normal distribution. The variance of the asymptotic distribution is of a complicated form, and hence a bootstrap estimate of the variance is proposed for large sample inference. This result can be used to distinguish between different bilinear models. Finally, we obtain moment estimators of the parameters and derive their asymptotic distribution.
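A sketch of the bootstrap idea for the lag-1 sample autocorrelation. The specific bilinear form $X_t = aX_{t-1} + bX_{t-1}e_{t-1} + e_t$ and the parameter values are assumptions (the model display above was lost in extraction), and a parametric-style bootstrap stands in for whatever resampling scheme the paper uses.

```python
import numpy as np

rng = np.random.default_rng(5)
n, a, b, sigma = 500, 0.4, 0.3, 1.0   # illustrative parameter values

def simulate_bilinear(n, a, b, sigma, rng):
    # Assumed first-order bilinear form: X_t = a*X_{t-1} + b*X_{t-1}*e_{t-1} + e_t
    e = rng.normal(0.0, sigma, size=n + 1)
    X = np.zeros(n + 1)
    for t in range(1, n + 1):
        X[t] = a * X[t - 1] + b * X[t - 1] * e[t - 1] + e[t]
    return X[1:]

def acf1(x):
    # Lag-1 sample autocorrelation.
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)

x = simulate_bilinear(n, a, b, sigma, rng)
r1 = acf1(x)

# Bootstrap the sampling variability of r1 by re-simulating the model.
boot = [acf1(simulate_bilinear(n, a, b, sigma, rng)) for _ in range(500)]
print(f"r1 = {r1:.3f}, bootstrap s.e. ~ {np.std(boot):.3f}")
```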
- Research Article
63
- 10.2307/2346383
- Jan 1, 1981
- Applied Statistics
SUMMARY This paper analyses a sequence of independent normal random variables in which the precision (inverse of the variance) may have been subjected to one change at an unknown point in time. Posterior distributions are found both for the unknown point in time at which the change occurred and for the magnitude of the change. Two examples are given.
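A minimal grid sketch of the Bayesian changepoint idea, under simplifying assumptions not made in the paper: a flat prior over the change position and known precisions before and after the change (the paper instead treats the magnitude of the change as unknown).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n, r_true = 60, 35

# Data: precision changes from 1 to 4 (sd 1 -> 0.5) after index r_true.
x = np.concatenate([rng.normal(0, 1.0, r_true), rng.normal(0, 0.5, n - r_true)])

# Discrete posterior over the changepoint r: flat prior, known precisions.
logpost = np.array([
    norm.logpdf(x[:r], scale=1.0).sum() + norm.logpdf(x[r:], scale=0.5).sum()
    for r in range(1, n)
])
post = np.exp(logpost - logpost.max())
post /= post.sum()
print(f"posterior mode at r = {1 + np.argmax(post)} (true {r_true})")
```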
- Research Article
30
- 10.1016/0167-7152(94)00227-y
- Nov 1, 1995
- Statistics & Probability Letters
Estimating the number of change points in a sequence of independent normal random variables
- Research Article
86
- 10.1090/s0002-9939-1960-0112190-3
- Jan 1, 1960
- Proceedings of the American Mathematical Society
The original Kolmogorov inequality [6] has been extended to a martingale inequality by Lévy [8] and Ville [12], and later to a semimartingale inequality by Doob [3]. In this note we will extend (1) to a semimartingale inequality which contains Doob's inequality as a special case. As Kolmogorov's inequality is the key to the proof of the law of large numbers for a sequence of independent random variables, we will use our inequality to prove a law of large numbers for a martingale, which will be shown to include the extensions of Kolmogorov's law of large numbers for independent random variables [7] made by Brunk [1], Chung [2], Kawata and Udagawa [5], and Prohorov [11], and for dependent random variables made by Lévy [8] and Loève [9]. In the following, $(\Omega, F, P)$ will be a probability space, $c_1, c_2, \ldots$ a nonincreasing sequence of positive numbers, $x_1, x_2, \ldots$ a sequence of random variables, $y_k = x_1 + x_2 + \cdots + x_k$, and $F_k$ the Borel field generated by $x_1, x_2, \ldots, x_k$ for each $k$; for a random variable $z$ we put $z^+ = \max(z, 0)$.
- Book Chapter
9
- 10.1214/lnms/1215461946
- Jan 1, 1992
Let $X_1, \ldots, X_n$ be a sequence of dependent random variables from a continuous density function. Denote by $X_{(1)} = \min(X_1, \ldots, X_n)$ and $X_{(n)} = \max(X_1, \ldots, X_n)$ the extreme order statistics. In this article, Bonferroni-type inequalities and product-type approximations of order $k \geq 1$ are derived for the distribution and the moments of extreme order statistics for a sequence of stationary random variables. These results are particularized to $m$-spacings from a uniform distribution and to moving sums of size $m$ for independent normal random variables. These inequalities and approximations are compared with approximations and asymptotic results that have been previously derived. From the numerical results it is evident that there is merit in studying higher order Bonferroni-type inequalities and product-type approximations. The product-type approximations appear to be the most accurate approximations for the distribution and the moments of extreme order statistics.
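A quick numerical comparison, under simple assumptions (iid standard normals, window $m = 5$), of the first-order Bonferroni upper bound with the Monte Carlo tail of the maximum moving sum; the paper's higher-order bounds and product-type approximations are sharper than this first-order version.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n, m, x0, reps = 100, 5, 8.0, 20000

# Moving sums of size m over iid N(0,1): T_j = X_j + ... + X_{j+m-1}.
X = rng.normal(size=(reps, n))
T = np.lib.stride_tricks.sliding_window_view(X, m, axis=1).sum(axis=2)
p_mc = np.mean(T.max(axis=1) > x0)

# First-order Bonferroni bound: (n - m + 1) * P(N(0, m) > x0).
p_bonf = (n - m + 1) * norm.sf(x0 / np.sqrt(m))
print(f"Monte Carlo P = {p_mc:.4f}, Bonferroni bound = {p_bonf:.4f}")
```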
- Research Article
67
- 10.1080/00401706.1977.10489592
- Nov 1, 1977
- Technometrics
In this article, a study is made of a shift in the mean of a set of independent normal random variables with unknown common variance. The marginal and joint posterior distributions of the unknown time point and the amount of shift are derived. Small and large sample results are presented.
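A sketch of the changepoint posterior under flat reference priors on the two means and on $\log\sigma$ (a standard choice, though the paper's priors may differ): integrating out the means and the common variance gives $p(k \mid x) \propto (k(n-k))^{-1/2} W_k^{-(n-2)/2}$, where $W_k$ is the pooled within-segment sum of squares.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k_true, delta = 50, 30, 1.5

x = np.concatenate([rng.normal(0, 1, k_true), rng.normal(delta, 1, n - k_true)])

# Marginal posterior of the changepoint k after integrating out the two
# means and the common variance under flat reference priors:
#   p(k|x) ∝ (k(n-k))^{-1/2} * W_k^{-(n-2)/2}
logpost = []
for k in range(2, n - 1):
    w = np.sum((x[:k] - x[:k].mean()) ** 2) + np.sum((x[k:] - x[k:].mean()) ** 2)
    logpost.append(-0.5 * np.log(k * (n - k)) - 0.5 * (n - 2) * np.log(w))
logpost = np.array(logpost)
post = np.exp(logpost - logpost.max())
post /= post.sum()
print(f"posterior mode at k = {2 + np.argmax(post)} (true {k_true})")
```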
- Research Article
40
- 10.1016/j.tpb.2005.02.004
- May 23, 2005
- Theoretical Population Biology
Ewens’ sampling formula and related formulae: combinatorial proofs, extensions to variable population size and applications to ages of alleles
- Research Article
8
- 10.1090/s0002-9939-1990-0990432-9
- Jan 1, 1990
- Proceedings of the American Mathematical Society
Let $\left( {x_i} \right)$ be a sequence of random variables. Let $\left( {w_i} \right)$ be a sequence of independent random variables such that for each $i$, $w_i$ has the same distribution as $x_i$. If $S_n = x_1 + x_2 + \cdots + x_n$ is a martingale and $\Psi$ is a convex increasing function such that $\Psi\left( \sqrt{x} \right)$ is concave on $[0,\infty)$ and $\Psi(0) = 0$, then \[ E\Psi\left( \max_{j \leq n} \left| \sum_{i=1}^j x_i \right| \right) < C\, E\Psi\left( \left| \sum_{i=1}^n w_i \right| \right) \] for a universal constant $C$ $(0 < C < \infty)$ independent of $\Psi$, $n$, and $\left( {x_i} \right)$. The same inequality holds if $\left( {x_i} \right)$ is a sequence of nonnegative random variables and $\Psi$ is now any nondecreasing concave function on $[0,\infty)$ with $\Psi(0) = 0$. Interestingly, if $\Psi\left( \sqrt{x} \right)$ is convex and $\Psi$ grows at most polynomially fast, the above inequality reverses. By comparing martingales to sums of independent random variables, this paper presents a one-sided approximation to the order of magnitude of expectations of functions of martingales. This approximation is best possible among all approximations depending only on the one-dimensional distribution of the martingale differences.
- Research Article
412
- 10.1093/biomet/58.3.509
- Jan 1, 1971
- Biometrika
SUMMARY The point of change in mean in a sequence of normal random variables can be estimated from a cumulative sum test scheme. The asymptotic distribution of this estimate and associated test statistics are derived and numerical results given. The relation to likelihood inference is emphasized. Asymptotic results are compared with empirical sequential results, and some practical implications are discussed. The cumulative sum scheme for detecting distributional change in a sequence of random variables is a well-known technique in quality control, dating from the paper of Page (1954) to the recent expository account by van Dobben de Bruyn (1968). Throughout the literature on cumulative sum schemes the emphasis is placed on tests of departure from initial conditions. The purpose of this paper is to examine a secondary aspect: estimation of the index $\tau$ in a sequence $\{x_t\}$ at which the departure from initial conditions has taken place. The work is closely related to an earlier paper by Hinkley (1970), in which maximum likelihood estimation and inference were discussed. We consider specifically sequences of normal random variables $x_1, \ldots, x_T$, say, where initially the mean $\theta_0$ and the variance $\sigma^2$ are known. A cumulative sum (cusum) scheme is used to detect possible change in mean from $\theta_0$, and for simplicity suppose that it is a one-sided scheme for detecting decrease in mean. Then the procedure is to compute the cumulative sums …
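A sketch of the estimation idea (assuming the known initial mean $\theta_0$ and a decrease in mean, as in the summary): the cusum $S_t = \sum_{i \le t}(x_i - \theta_0)$ has zero drift before the change and negative drift after it, so its argmax estimates the change index.

```python
import numpy as np

rng = np.random.default_rng(9)
T, tau_true, theta0 = 100, 60, 0.0

# Mean decreases from theta0 to theta0 - 1 after time tau_true.
x = np.concatenate([rng.normal(theta0, 1, tau_true),
                    rng.normal(theta0 - 1.0, 1, T - tau_true)])

# Cusum S_t = sum_{i<=t} (x_i - theta0): zero drift before the change,
# negative drift after, so argmax S_t estimates the change index tau.
S = np.cumsum(x - theta0)
tau_hat = int(np.argmax(S)) + 1
print(f"tau_hat = {tau_hat} (true {tau_true})")
```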
- Research Article
- 10.12988/ams.2014.410812
- Jan 1, 2014
- Applied Mathematical Sciences
Let $\{X_{n,i},\ i = 1, \ldots, n;\ n \in \mathbb{N}\}$ be a sequence of independent integer-valued random variables with success probabilities $P(X_{n,i} = 1) = p_{n,i}$ and $P(X_{n,i} = 0) = 1 - p_{n,i} - q_{n,i}$ for every $i \in \{1, \ldots, n\}$ and $n \in \mathbb{N}$, where $p_{n,i}, q_{n,i} \in (0, 1)$ and $p_{n,i} + q_{n,i} \in (0, 1)$. Suppose that $N_n$ for $n \in \mathbb{N}$ are positive integer-valued random variables independent of the $X_{n,i}$'s. Let $S_{N_n} = \sum_{i=1}^{N_n} X_{n,i}$ be the random sum of the sequence, and let $\lambda_{N_n} = \sum_{i=1}^{N_n} p_{n,i}$. In this case, Hung and Giang [1] gave a pointwise bound for approximating the probability function of the random sum $S_{N_n}$ by the Poisson probability function with mean $\lambda_{N_n}$, as follows:
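A simulation sketch of the approximation being bounded. Assumptions for illustration only: $N_n$ uniform on $\{10, \ldots, 30\}$, equal $p_{n,i} = p$ and $q_{n,i} = q$, and the remaining mass $q$ placed at the value 2 (the truncated abstract does not say where it sits).

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(10)
reps, K = 50_000, 15
p, q = 0.05, 0.01               # small probabilities favour the approximation
N_support = np.arange(10, 31)   # N_n uniform on {10,...,30} (arbitrary choice)

# Assumption for illustration only: the remaining mass q sits at the value 2.
vals, probs = np.array([1, 0, 2]), np.array([p, 1.0 - p - q, q])

counts = np.zeros(K)
for _ in range(reps):
    N = rng.choice(N_support)                        # draw the random index N_n
    s = int(rng.choice(vals, size=N, p=probs).sum()) # random sum S_{N_n}
    if s < K:
        counts[s] += 1
counts /= reps

# Mixture over N_n of Poisson(lambda_{N_n}) pmfs, with lambda_{N_n} = N * p.
k = np.arange(K)
pois_mix = np.mean([poisson.pmf(k, N * p) for N in N_support], axis=0)
print("max pointwise gap:", np.abs(counts - pois_mix).max())
```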