A Comprehensive Framework for Statistical Inference in Measurement System Assessment Studies

Abstract

Measurement system analysis aims to quantify the variability in data attributable to the measurement system and evaluate its contribution to overall data variability. This paper conducts a rigorous theoretical investigation of the statistical methods used in such analyses, focusing on variance components and other critical parameters. While established techniques exist for single‐variable cases, a systematic theoretical exploration of their properties has been largely overlooked. This study addresses this gap by examining estimators for variance components and other key parameters in measurement system assessment, analyzing their statistical properties, and providing new insights into their reliability, performance, and applicability.
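The decomposition described above can be illustrated with a minimal one-way sketch (hypothetical data and names, not from the paper): repeated measurements on several parts are split into part-to-part variance and measurement-system (repeatability) variance via ANOVA mean squares, assuming a balanced random-effects model.

```python
import numpy as np

def one_way_vc(data):
    """ANOVA estimators of variance components for a balanced one-way
    random-effects model y_ij = mu + a_i + e_ij.
    data: (p, r) array -- p parts, r repeat measurements per part."""
    p, r = data.shape
    part_means = data.mean(axis=1)
    grand_mean = data.mean()
    msa = r * np.sum((part_means - grand_mean) ** 2) / (p - 1)       # between-part MS
    mse = np.sum((data - part_means[:, None]) ** 2) / (p * (r - 1))  # within-part MS
    var_repeat = mse                       # measurement-system (repeatability) variance
    var_part = max((msa - mse) / r, 0.0)   # part-to-part variance, truncated at zero
    return var_part, var_repeat

rng = np.random.default_rng(0)
parts = rng.normal(0.0, 2.0, size=(20, 1))               # true part effects, sd = 2
y = 10.0 + parts + rng.normal(0.0, 0.5, size=(20, 5))    # repeatability sd = 0.5
vp, vr = one_way_vc(y)
```

With these simulated settings the repeatability estimate should land near 0.25 and the part variance near 4; the negative-estimate truncation in `var_part` is exactly the kind of ad hoc fix that the non-negative estimation papers listed below try to avoid.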

Similar Papers
  • Research Article
  • 10.1088/2631-8695/ad7d69
Non-negative least-squares variance and covariance component estimation using the positive-valued function for errors-in-variables models
  • Oct 3, 2024
  • Engineering Research Express
  • Lv Zhipeng

Although (co)variance component estimation has been widely applied in the errors-in-variables (EIV) model, the occurrence of negative variance components is still a major issue in the estimated variance components. This problem may be due to the following unfavorable factors: 1) unreasonable selection of initial variance values; 2) low redundancy in the EIV functional model; 3) improper design in the EIV stochastic model; and 4) other data quality problems. Many attempts have been made to prevent the appearance of negative variance components. In this contribution, a novel and efficient non-negative least-squares variance component estimation using the PVF (PVF-NLS-VCE) is introduced, which can simultaneously estimate the unknown (co)variance components in the EIV stochastic model and the parameters in the EIV functional model. Its principle is to implicitly impose a non-negative constraint by replacing the variance component with a positive-valued function (PVF) whose range is the set of positive real numbers. Two numerical examples using real and simulated data are presented. The numerical results of linear regression are identical to those obtained from least-squares variance component estimation (LS-VCE) with positive initial values of variance components. The numerical results of the two-dimensional affine transformation are shown to avoid negative variance components and to outperform those obtained by LS-VCE with a negative initial value of a variance component. Both numerical examples verify the effectiveness of the PVF-NLS-VCE method whether the initial values of the variance components are positive or negative. The proposed PVF-NLS-VCE is a simple, convenient and flexible method for achieving non-negative estimates of variance components, which reduces sensitivity to initial value selection and automatically guarantees a non-negative definite covariance matrix.
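The core PVF trick, estimating an unconstrained parameter and recovering the variance as a positive-valued function of it, can be sketched in a deliberately stripped-down setting. This is only an illustration of the principle, with exp(.) as the PVF and a single Gaussian variance; it is not the paper's PVF-NLS-VCE algorithm:

```python
import numpy as np

def nll(theta, resid):
    """Negative Gaussian log-likelihood of residuals with variance exp(theta).
    exp(.) plays the role of the PVF: for every real theta, exp(theta) is a
    strictly positive variance, so negativity is impossible by construction."""
    var = np.exp(theta)
    return 0.5 * (len(resid) * np.log(var) + np.sum(resid ** 2) / var)

rng = np.random.default_rng(1)
resid = rng.normal(0.0, 3.0, size=500)   # true variance = 9

# crude grid search over theta; no candidate ever corresponds to a
# negative variance, regardless of how the grid (or an initial value)
# is chosen
thetas = np.linspace(-5.0, 5.0, 2001)
vals = [nll(t, resid) for t in thetas]
var_hat = np.exp(thetas[int(np.argmin(vals))])
```

A real implementation would optimize over several (co)variance components of a structured stochastic model rather than grid-search a single variance, but the reparameterization step is the same.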

  • Research Article
  • Citations: 28
  • 10.1007/s00190-016-0902-0
The effect of errors-in-variables on variance component estimation
  • Apr 12, 2016
  • Journal of Geodesy
  • Peiliang Xu

Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were error-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from the normal matrix. As a result, we can obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of the problems and the sizes of the EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise ratio is small, higher-order terms may be necessary.
Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk of obtaining negative values for the variance components. VC estimation in EIV models remains difficult and challenging; and (iii) both the bias-corrected weighted LS estimate and the N-calibrated weighted LS estimate obviously outperform the weighted LS estimate. The intuitive N-calibrated weighted LS estimate is computationally less expensive and is shown to perform statistically even better than the bias-corrected weighted LS estimate in producing an almost unbiased estimate of the parameters.

  • Research Article
  • Citations: 27
  • 10.1093/biomet/56.2.313
Quadratic unbiased estimation of variance components for the one-way classification
  • Jan 1, 1969
  • Biometrika
  • David A Harville

The paper deals with the quadratic estimation of the components of variance associated with the one-way random classification where the effects are taken to be independently and normally distributed and where the class numbers are unequal. An estimator is said to be inquadmissible or quadmissible depending on whether or not there exists a second quadratic estimator having the same expectation and smaller variance. A quadratic estimator is shown to be quadmissible only if it is a function of the minimal sufficient statistic of a certain prescribed form. Certain invariance criteria are introduced. Equations are given for determining locally best quadratic unbiased estimators. Conditions are provided which aid in ascertaining the quadmissibility or inquadmissibility of any given invariant quadratic unbiased estimator of the upper variance component.

  • Research Article
  • Citations: 2
  • 10.1002/csc2.20518
Triple full‐sibs: A method for estimating components of genetic variance and progeny selection in plants
  • Aug 17, 2021
  • Crop Science
  • Lázaro José Chaves

Quantifying the genetic variability present in plant populations is crucial for the success of selection plans. The partitioning of genetic variance into its components allows inferences about the inheritance of quantitative traits and prediction of the gain from selection. The present study aimed to present an alternative method to estimate components of genetic variance with applications in recurrent selection. The mating scheme is based on biparental cyclic crossing involving three parents in each chain, here called the triple full‐sibs (TFS) family, each of which is composed of three biparental progenies in which individuals are full‐sibs within each progeny and half‐sibs among progenies. The progenies are evaluated in experimental trials, and the total effect of progenies is hierarchically partitioned into the effects of TFS families and progenies within families. From the components of variance, additive and dominance variance, as well as the associated errors, can be estimated. Simulated data are used to illustrate the method of analysis and parameter estimation. The method combines the advantages of North Carolina Design I regarding estimation of variance components with the practicality of conventional full‐sib selection. The TFS method allows different selection strategies according to the selection unit and provides expected genetic gain equal to or greater than unrelated full‐sib selection. There is no further advantage to using more than three parents in each chain‐cross.

  • Research Article
  • Citations: 12
  • 10.1061/(asce)su.1943-5428.0000050
MINQUE of Variance-Covariance Components in Linear Gauss-Markov Models
  • Dec 9, 2010
  • Journal of Surveying Engineering
  • Peng Junhuan + 3 more

For heterogeneous and correlated observations, the variance components and the covariance components sometimes must be estimated. The forms of the best invariant quadratic unbiased estimate (BIQUE) and Helmert-type estimation of variance and covariance components have already been derived by Koch and Grafarend, respectively. After obtaining the minimum norm quadratic unbiased estimate (MINQUE) of variance components, Rao derived only the MINQUE of the variance and covariance components for a special case in which the error vector is composed of a linear combination of independent random effect vectors of zero mean and the same variance-covariance matrix whose variance and covariance components were to be determined. However, an explicit expression of the MINQUE suitable for more general situations has not been derived. This paper defines the natural estimation of covariance components from errors and derives the MINQUE of variance and covariance components. The BIQUE and MINQUE of variance components without covariance components have the same iteration solution; the Helmert solution is only a special case of the MINQUE. However, the three estimates of variance and covariance components are different. The two MINQUE methods obtained in this paper have the advantage of being independent of the error distribution, offer a reasonable alternative for estimating variance and covariance components, and can be used in the most general case. Numerical results show that the two MINQUE methods obtained in this paper are feasible.

  • Research Article
  • Citations: 11
  • 10.1061/(asce)0733-9453(2009)135:1(1)
Jointly Robust Estimation of Unknown Parameters and Variance Components Based on Expectation-Maximization Algorithm
  • Feb 1, 2009
  • Journal of Surveying Engineering
  • Junhuan Peng

Robust estimation of unknown parameters in linear models with only a single error component has been widely investigated. However, only a small portion of the literature treats robust estimation of variance components in heteroscedastically mixed models. The correction-based pseudo-observation method, the x-function-based robust maximum likelihood estimation (MLE) and restricted maximum likelihood estimation methods, as well as the robust Helmert method, are the three typical kinds of robust methods for estimating variance components of linear mixed models. However, they are generally affected by different types of scoring functions and various tuning factors based on the M-estimate, defined by Huber from the maximum likelihood type of estimation. In addition, the pseudo-observation method will encounter risks of incorrect corrections due to the misidentification of gross errors. In this paper, gross errors and random or normal errors are assumed to be occasionally additive, independent, normally distributed with different scales, and all are regarded as missing and/or unobservable data. Together with the observations, they form a complete data problem where the unknown parameters and variance components need to be estimated. The expectation-maximization (EM) algorithm for finding the MLEs is robustified by estimating the variances of gross errors by defining weights, and a joint solution of the robust estimation of the unknown parameters and variance components is proposed. A numerical example of a global positioning system baseline network shows that the robustified EM algorithm can find a reliable estimate of the unknown parameters and variance components, and efficiently separate the gross errors and subrandom effects or errors by computing their respective Bayesian estimates.

  • Research Article
  • Citations: 12
  • 10.1007/bf00251098
Effects of data imbalance on estimation of heritability
  • Mar 1, 1985
  • Theoretical and Applied Genetics
  • R F Caro + 2 more

Effects of data imbalance on bias, sampling variance and mean square error of heritability estimated with variance components were examined using a random two-way nested classification. Four designs, ranging from zero imbalance (balanced data) to "low", "medium" and "high" imbalance, were considered for each of four combinations of heritability (h(2)=0.2 and 0.4) and sample size (N=120 and 600). Observations were simulated for each design by drawing independent pseudo-random deviates from normal distributions with zero means, and variances determined by heritability. There were 100 replicates of each simulation; the same design matrix was used in all replications. Variance components were estimated by analysis of variance (Henderson's Method 1) and by maximum likelihood (ML). For the design and model used in this study, bias in heritability based on Method 1 and ML estimates of variance components was negligible. Effect of imbalance on variance of heritability was smaller for ML than for Method 1 estimation, and was smaller for heritability based on estimates of sire-plus-dam variance components than for heritability based on estimates of sire or dam variance components. Mean square error for heritability based on estimates of sire-plus-dam variance components appears to be less sensitive to data imbalance than heritability based on estimates of sire or dam variance components, especially when using Method 1 estimation. Estimation of heritability from sire-plus-dam components was insensitive to differences in data imbalance, especially for the larger sample size.

  • Research Article
  • Citations: 8
  • 10.1214/aoms/1177697705
Variances of Variance-Component Estimators for the Unbalanced 2-Way Cross Classification with Application to Balanced Incomplete Block Designs
  • Apr 1, 1969
  • The Annals of Mathematical Statistics
  • David A Harville

"Best" estimators of variance components for the unbalanced cases of random-effects models are not known. In fact, even for the very simplest of the unbalanced "designs", the balanced incomplete block designs, the question of the existence of minimum variance unbiased estimators remains open (Kapadia and Weeks [5]). The traditional approach to the derivation of variance-component estimators for unbalanced cases has been to pick several quadratic functions of the data, set these functions equal to their expectations, and then solve the resulting system of equations for the variance components. Two of the estimators derived in this fashion for the variance components associated with the unbalanced two-way cross classification are those referred to as the Methods-1 and -3 estimators of Henderson [4]. Method-1 utilizes quadratics analogous to the sums of squares in a balanced analysis of variance. The quadratics employed in Method-3 represent differences between reductions in sums of squares due to fitting different models. Since in Method-3 more differences between reductions are available than one has variance components to estimate, the method is not uniquely defined. Here, the Method-3 estimators of the components associated with the two-way classification are taken to be those in Harville [3], which are the ones most commonly used. Searle [9] obtained algebraic expressions for the sampling variances of the Method-1 estimators of the "two-way" components. Low [6] gave similar expressions for the Method-3 estimators for the zero-interaction case. Their results were obtained by applying well-known formulas for the variances and covariances of quadratic functions of multivariate-normal random variables. 
These formulas state that if $\mathbf{y}$ is a random vector having the multivariate normal distribution with mean $\mathbf{u}$ and variance-covariance matrix $\mathbf{V}$, and if $\mathbf{A}$ and $\mathbf{B}$ are square symmetric matrices of appropriate dimension having fixed elements, then
$$\operatorname{var}\lbrack\mathbf{y}'\mathbf{A}\mathbf{y}\rbrack = 4\mathbf{u}'\mathbf{A}\mathbf{V}\mathbf{A}\mathbf{u} + 2\operatorname{tr}\lbrack(\mathbf{V}\mathbf{A})^2\rbrack \tag{1}$$
and
$$\operatorname{cov}\lbrack\mathbf{y}'\mathbf{A}\mathbf{y},\,\mathbf{y}'\mathbf{B}\mathbf{y}\rbrack = 4\mathbf{u}'\mathbf{A}\mathbf{V}\mathbf{B}\mathbf{u} + 2\operatorname{tr}(\mathbf{V}\mathbf{A}\mathbf{V}\mathbf{B}). \tag{2}$$
Searle [8], [10] and Mahamunulu [7] have also used these formulas to obtain algebraic expressions for the variances of commonly used estimators of the components of variance associated with other unbalanced classifications. In the present paper, results (supplementary to those of Searle) are given which lead to expressions for the sampling variances of Method-3 estimators of the variance components associated with the unbalanced two-way cross classification with interaction. By using these results in combination with those of Searle, the variances of Method-1 and Method-3 estimators can be directly compared for a given set of subclass numbers. The results are shown to simplify when the "unbalancedness" is of the type associated with a balanced incomplete block design. Neither estimator of any component is uniformly better than the other for any such design (except for the estimators of the residual component, which are identically equal).
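Formulas (1) and (2) lend themselves to a quick numerical sanity check. The sketch below (hypothetical matrices, numpy only) compares the closed form for the variance of the quadratic form $\mathbf{y}'\mathbf{A}\mathbf{y}$ with a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.normal(size=(n, n)); A = (A + A.T) / 2            # symmetric A
L = rng.normal(size=(n, n)); V = L @ L.T + n * np.eye(n)  # positive definite V
u = rng.normal(size=n)

# formula (1): var[y'Ay] = 4 u'AVAu + 2 tr((VA)^2) for y ~ N(u, V)
VA = V @ A
theory = 4 * u @ A @ V @ A @ u + 2 * np.trace(VA @ VA)

# Monte Carlo estimate of the same variance
samples = rng.multivariate_normal(u, V, size=200_000)
q = np.einsum('ij,jk,ik->i', samples, A, samples)  # y'Ay for each draw
mc = q.var()
```

With 200,000 draws the Monte Carlo value should agree with the closed form to within a few percent; the covariance formula (2) can be checked the same way with a second symmetric matrix.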

  • Research Article
  • Citations: 1
  • 10.3724/sp.j.1041.2013.00114
Using Adjusted Bootstrap to Improve the Estimation of Variance Components and Their Variability for Generalizability Theory
  • Nov 27, 2013
  • Acta Psychologica Sinica
  • Guangming Li + 1 more

Bootstrap is a resampling-with-replacement method used to estimate variance components and their variability. The adjusted bootstrap method was used by Wiley in the p×i design for normal data in 2001. However, Wiley did not compare the difference between the adjusted and unadjusted methods when estimating the variability. To expand Wiley's 2001 study, our study applied the Monte Carlo method to simulate data from four distributions. The aim of the simulation is to explore the effects of four different estimation methods when estimating the variability of estimated variance components for generalizability theory. The four distributions are normal, dichotomous, polytomous and skewed. It is common for researchers to focus on normally distributed data and neglect non-normal data, yet non-normal data frequently occur in tests such as the TOEFL and GRE. There are several methods to estimate the variability of variance components, including the traditional, bootstrap, jackknife and Markov chain Monte Carlo (MCMC) methods. Earlier research by Li and Zhang (2009) shows that the bootstrap method is significantly better than the traditional, jackknife, and MCMC methods in estimating the variability for the four distributions. The bootstrap method has superior cross-distribution quality when estimating the variability of estimated variance components. Li and Zhang (2009) also suggest that the bootstrap method should be adopted with a divide-and-conquer strategy to obtain a good estimated standard error and estimated confidence interval, and the criteria of such a strategy should be set to: boot-p for person, boot-i for item, and boot-pi for person and item. However, it is unclear which of the bootstrap methods (adjusted or unadjusted) is better for boot-p, boot-pi, and boot-i. Therefore, our study intends to probe into this comparison as well.
The aim of this study is to explore whether the adjusted bootstrap method is superior to the unadjusted method in improving the estimation of variance components and their variability for generalizability theory. The simulation is implemented in the R statistical programming environment. To simulate skewed data, the HyperbolicDist package is used. Several criteria are set to compare the four methods. Bias is considered when variance components and their standard errors are estimated: the smaller the absolute bias, the more reliable the result. The criterion for confidence intervals is 80% interval coverage; the closer the 80% interval coverage is to 0.80, the more reliable the confidence interval. The results indicate that, for all four distributions, the adjusted bootstrap method is superior to the unadjusted bootstrap method, both in point estimation of variance components and in estimation of their variability. Given its improvement of the estimation of variance components and their variability for generalizability theory, the adjusted bootstrap should be adopted.
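The mechanics of boot-p resampling can be sketched as follows (hypothetical simulated data; only the unadjusted variant is shown, since the adjusted variant additionally rescales the resampled scores and is omitted here):

```python
import numpy as np

def person_vc(y):
    """ANOVA estimate of the person variance component in a p x i design
    y_pi = mu + person_p + item_i + e_pi (one observation per cell)."""
    p, i = y.shape
    gm = y.mean()
    ms_p = i * np.sum((y.mean(axis=1) - gm) ** 2) / (p - 1)       # person MS
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + gm
    ms_res = np.sum(resid ** 2) / ((p - 1) * (i - 1))             # residual MS
    return (ms_p - ms_res) / i

rng = np.random.default_rng(3)
p, i = 100, 20
y = (rng.normal(0.0, 1.0, (p, 1))      # person effects, variance 1
     + rng.normal(0.0, 0.5, (1, i))    # item effects
     + rng.normal(0.0, 0.8, (p, i)))   # residual

# boot-p: resample persons (rows) with replacement, re-estimate each time;
# the spread of the replicates estimates the variability of the estimator
boots = [person_vc(y[rng.integers(0, p, p)]) for _ in range(500)]
se_boot = np.std(boots)
```

The divide-and-conquer idea mentioned above amounts to choosing which facet to resample (persons, items, or both) depending on which variance component's variability is being estimated.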

  • Research Article
  • Citations: 1
  • 10.1080/09712119.2004.9706512
Estimation of (Co)variance Components and Genetic Parameters of Growth Traits in Beef Cattle
  • Dec 1, 2004
  • Journal of Applied Animal Research
  • Hailu Dadi + 2 more

Dadi, H., Schoeman, S.J. and Jordaan, G.F. 2004. Estimation of (co)variance components and genetic parameters of growth traits in beef cattle. J. Appl. Anim. Res., 26: 77–82. Variance components and genetic parameters of birth weight (BW), weaning weight (WW) and average daily gain (ADG) in a multibreed beef cattle population were estimated by restricted maximum likelihood (REML) procedures. Four different unitrait animal models were fitted, ranging from a simple model with the animal direct effects as the only random effect to a model allowing for both genetic and permanent maternal environmental effects. The simple model excluding maternal effects most likely inflated direct heritability estimates. The model that included direct genetic and permanent maternal environmental effects generally best described the data analysed. Estimates of permanent maternal environmental effects were 0.15 (BW), 0.24 (WW) and 0.24 (ADG); these effects were the important factor determining WW and ADG. Direct and maternal genetic correlations were 0.61, −0.53 and −0.79 for BW, WW and ADG, respectively, under the model that accounted for both maternal genetic and permanent maternal environmental effects.

  • Research Article
  • Citations: 11
  • 10.1137/0916013
Large-Scale Estimation of Variance and Covariance Components
  • Jan 1, 1995
  • SIAM Journal on Scientific Computing
  • Chris Fraley + 1 more

This paper concerns matrix computations within algorithms for variance and covariance component estimation. Hemmerle and Hartley [Technometrics, 15 (1973), pp. 819–831] showed how to compute the objective function and its derivatives for maximum likelihood estimation of variance components using matrices with dimensions of the order of the number of coefficients rather than that of the number of observations. Their approach was extended by Corbeil and Searle [Technometrics, 18 (1976), pp. 31–38] for restricted maximum likelihood estimation. A similar reduction in dimension is possible using expectation-maximization (EM) algorithms. In most cases, variance components are assumed to be strictly positive. We advocate the use of a modification that is numerically stable even if variance component estimates are small in magnitude. For problems in which the number of coefficients is large, Fellner [Proc. Statistical Computing Section, American Statistical Association, 1984, pp. 150–154], [Comm. Statist. Simulat...

  • Research Article
  • Citations: 38
  • 10.1046/j.1439-0388.2000.00248.x
Bayesian inference for genetic parameter estimation on growth traits for Nelore cattle in Brazil, using the Gibbs sampler
  • Jun 1, 2000
  • Journal of Animal Breeding and Genetics
  • C De U Magnabosco + 2 more

This data set consisted of over 29 245 field records from 24 herds of registered Nelore cattle born between 1980 and 1993, with calves sired by 657 sires and out of 12 151 dams. The records were collected in south‐eastern and midwestern Brazil, and animals were raised on pasture in a tropical climate. Three growth traits were included in these analyses: 205‐ (W205), 365‐ (W365) and 550‐day (W550) weight. The linear model included fixed effects for contemporary groups (herd‐year‐season‐sex) and age of dam at calving. The model also included random effects for direct genetic, maternal genetic and maternal permanent environmental (MPE) contributions to observations. The analyses were conducted using single‐trait and multiple‐trait animal models. Variance and covariance components were estimated by restricted maximum likelihood (REML) using a derivative‐free algorithm (DFREML) for multiple traits (MTDFREML). Bayesian inference was obtained by a multiple‐trait Gibbs sampling algorithm (GS) for (co)variance component inference in animal models (MTGSAM). Three different sets of prior distributions for the (co)variance components were used: flat, symmetric, and sharp, with shape parameters (ν) of 0, 5 and 9, respectively. The results suggested that the shape of the prior distributions did not affect the estimates of (co)variance components. From the REML analyses, for all traits, direct heritabilities obtained from single‐trait analyses were smaller than those obtained from bivariate analyses and by the GS method. Estimates of genetic correlations between direct and maternal effects obtained using REML were positive but very low, indicating that genetic selection programs should consider both components jointly. GS produced similar but slightly higher estimates of genetic parameters than REML; however, the greater robustness of GS makes it the method of choice for many applications.

  • Research Article
  • Citations: 1144
  • 10.2307/3001853
Estimation of Variance and Covariance Components
  • Jun 1, 1953
  • Biometrics
  • C R Henderson

The theory of variance component analysis has been discussed recently by Crump (1946, 1951) and by Eisenhart (1947). These papers and, indeed, most of the published works on estimating variance components deal with the one-way classification, with nested classifications, and with factorial classifications having equal subclass numbers. Also most papers on this subject are concerned with what Eisenhart (1947) has called Model II; that is, all elements of the linear model save gi are regarded as random variables. In the above cases, estimation of variance components is usually accomplished by computing the mean squares in the standard analysis of variance, equating these mean squares to their expectations, and solving for the unknown variances. These techniques are described in many statistical textbooks. Unfortunately, research workers in some of those fields in which much use is made of variance component estimates are unable to obtain data which have the above described characteristics. This is particularly true in those fields in which survey data must be used or where, even in a well-planned experiment, the subclasses are of quite unequal size due, for example, to differences in litter numbers. Also,
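The equate-quadratics-to-their-expectations recipe Henderson describes can be sketched for the simplest unbalanced case, the one-way random classification (illustrative code with simulated data, not from the paper):

```python
import numpy as np

def henderson1_one_way(groups):
    """Analysis-of-variance (Henderson Method 1 style) estimates for the
    unbalanced one-way random model y_ij = mu + a_i + e_ij: compute two
    quadratics, equate them to their expectations, and solve."""
    n = np.array([len(g) for g in groups], dtype=float)
    N, k = n.sum(), len(groups)
    all_y = np.concatenate(groups)
    grand = all_y.mean()
    means = np.array([g.mean() for g in groups])
    ss_within = sum(np.sum((g - g.mean()) ** 2) for g in groups)
    ss_between = np.sum(n * (means - grand) ** 2)
    var_e = ss_within / (N - k)   # E[ss_within] = (N - k) var_e
    # E[ss_between] = (k - 1) var_e + (N - sum(n_i^2)/N) var_a
    var_a = (ss_between - (k - 1) * var_e) / (N - np.sum(n ** 2) / N)
    return var_a, var_e

rng = np.random.default_rng(4)
sizes = rng.integers(2, 16, size=40)   # unequal class numbers, as in survey data
groups = [rng.normal(0.0, 2.0) + rng.normal(0.0, 1.0, s) for s in sizes]
var_a, var_e = henderson1_one_way(groups)
```

Nothing in the solved equations forces `var_a` to be non-negative, which is the recurring difficulty several of the papers above address.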

  • Single Book
  • Citations: 2
  • 10.1007/978-94-011-1004-4
Proceedings of the International Conference on Linear Statistical Inference LINSTAT ’93
  • Jan 1, 1994
  • T Caliński + 1 more

Estimation, Prediction and Testing in Linear Models. Increments for (co)kriging with trend and pseudo-covariance estimation, L.C.A. Corsten. On the presentation of the minimax linear estimator in the convex linear model, H. Drygas, H. Lauter. Estimation of parameters in a special type of random effects model, J. Volaufova. Recent results in multiple testing - several treatments vs. a specified treatment, C.W. Dunnett. Multiple-multivariate-sequential T2-comparisons, C.P. Kitsos. On diagnosing collinearity-influential points in linear regression, H. Nyquist. Using nonnegative minimum biased quadratic estimation for variable selection in the linear regression model, S. Gnot, H. Knautz, G. Trenkler. Partial least squares and a linear model, D. von Rosen. Robustness. One-way analysis of variance under Tukey contamination - a small sample case simulation study, R. Zielinski. A note on robust estimation of parameters in mixed unbalanced models, T. Bednarski, S. Zontek. Optimal bias bounds for robust estimation in linear models, C.H. Muller. Estimation of Variance Components. Geometrical relations among variance component estimators, L.R. LaMotte. Asymptotic efficiencies of MINQUE and ANOVA variance component estimates in the nonnormal random model, P.H. Westfall. On asymptotic normality of admissible invariant quadratic estimators of variance components, S. Zontek. Admissible nonnegative invariant quadratic estimation in linear models with two variance components, S. Gnot, G. Trenkler, D. Stemann. About the multimodality of the likelihood function when estimating the variance components in a one-way classification by means of the ML or REML method, V. Guiard. Nonlinear Generalizations. Prediction domain in nonlinear models, S. Audrain, R. Tomassone. The geometry of nonlinear inference - accounting of prior and boundaries, A. Pazman. Design and Analysis of Experiments. General balance - artificial theory or practical relevance?, R.A. Bailey. 
Optimality of generally balanced experimental block designs, B. Bogacka, S. Mejza. Optimality of the orthogonal block design for robust estimation under mixed models, R. Zmyslony, S. Zontek. On generalized binary proper efficiency-balanced block designs, A. Das, S. Kageyama. Design of experiments and neighbour methods, J.-M. Azais. A new look into composite designs, S. Ghosh, W.S. Al-Sabah. Using the complex linear model to search for an optimal juxtaposition of regular fractions, H. Monod, A. Kobilinsky. Some directions in comparison of linear experiments - a review, C. Stepniak, Z. Otachel. Properties of comparison criteria of normal experiments, J. Hauke, A. Markiewicz. Miscellanea. Characterizations of oblique and orthogonal projectors, G. Trenkler. Asymptotic properties of least squares parameter estimators in a dynamic errors-in-variables model, J. ten Vregelaar. A generic look at factor analysis, M. Lejeune. On Q-covariance and its applications, A. Krajka, D. Szynal.

  • Research Article
  • Citations: 26
  • 10.1007/s00190-014-0693-0
An alternative method for non-negative estimation of variance components
  • Jan 21, 2014
  • Journal of Geodesy
  • Khosro Moghtased-Azar + 2 more

A typical problem with estimation principles for variance and covariance components is that they do not, in general, produce positive variances. This problem is due, in particular, to a variety of reasons: (1) a badly chosen set of initial variance components, namely the initial value problem (IVP), (2) low redundancy in the functional model, (3) an improper stochastic model, and (4) the data possibly containing outliers. Accordingly, a lot of effort has been made to design non-negative estimates of variance components. However, the desire for non-negative and unbiased estimation can seldom be met simultaneously; in order to obtain a practical non-negative estimator, one has to give up the condition of unbiasedness, which implies that the estimator will be biased. On the other hand, unlike variance components, covariance components can be negative, so the methods for obtaining non-negative estimates of variance components are not applicable to them. This study presents an alternative method for non-negative estimation of variance components such that non-negativity of the variance components is automatically supported. The idea is based upon the use of functions whose range is the set of all positive real numbers, namely positive-valued functions (PVFs), for the unknown variance components in the stochastic model instead of using the variance components themselves. Using the PVF eliminates the effect of the IVP on the estimation process. This concept is reparameterized within restricted maximum likelihood with no effect on the unbiasedness of the scheme. The numerical results show successful non-negative estimation of variance components (as positive values) as well as of covariance components (as negative or positive values).
