This work studies the effects of sampling variability in Monte Carlo-based methods for estimating very high-dimensional systems. Recent focus in the geosciences has been on representing the atmospheric state by a probability density function, and, for extremely high-dimensional systems, various sample-based Kalman filter techniques have been developed to address the problem of real-time assimilation of system information and observations. As the employed sample sizes are typically several orders of magnitude smaller than the system dimension, such sampling techniques inevitably induce considerable variability into the state estimate, primarily through the prior and posterior sample covariance matrices. In this article, we quantify this variability with mean squared error measures for two Monte Carlo-based Kalman filter variants: the ensemble Kalman filter and the ensemble square-root Kalman filter. Expressions for the error measures are derived under weak assumptions and show that the sample size must grow proportionally to the square of the system dimension to keep the error growth bounded. To reduce the required ensemble sizes and to address rank-deficient sample covariances, covariance shrinking (tapering), based on the Schur product of the prior sample covariance and a positive definite function, is demonstrated to be a simple, computationally feasible, and very effective technique. Rules for obtaining optimal taper functions for both stationary and non-stationary covariances are given, and optimal taper lengths are expressed in terms of the ensemble size and the practical range of the forecast covariance. Results are also presented for optimal covariance inflation. The theory is verified and illustrated with extensive simulations.
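The tapering operation described above can be sketched in a few lines: the prior sample covariance of a small ensemble is multiplied elementwise (Schur product) with a compactly supported positive definite correlation matrix, which zeros out spurious long-range sample correlations. The sketch below uses the Gaspari-Cohn fifth-order piecewise-rational function as the taper; the grid, covariance model, ensemble size, and taper scale are illustrative assumptions, not values from the article.

```python
import numpy as np

def gaspari_cohn(r):
    # Gaspari-Cohn compactly supported correlation function, evaluated at the
    # normalized distance r = d / c; it is exactly zero for r >= 2.
    r = np.abs(np.asarray(r, dtype=float))
    t = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    t[m1] = 1 - (5/3)*x**2 + (5/8)*x**3 + (1/2)*x**4 - (1/4)*x**5
    x = r[m2]
    t[m2] = 4 - 5*x + (5/3)*x**2 + (5/8)*x**3 - (1/2)*x**4 + (1/12)*x**5 - (2/3)/x
    return t

def tapered_sample_cov(ensemble, dist, taper_scale):
    # ensemble: (n_state, n_members) array of forecast states (columns = members);
    # dist: (n_state, n_state) pairwise distances between grid points.
    P = np.cov(ensemble)                  # prior sample covariance
    T = gaspari_cohn(dist / taper_scale)  # positive definite taper matrix
    return P * T                          # Schur (elementwise) product

# Toy illustration (assumed setup): a 1-D grid with an exponential covariance,
# and an ensemble far smaller than the state dimension.
rng = np.random.default_rng(0)
n, m = 200, 20                            # state dimension >> ensemble size
grid = np.arange(n, dtype=float)
dist = np.abs(grid[:, None] - grid[None, :])
true_cov = np.exp(-dist / 10.0)
ensemble = rng.multivariate_normal(np.zeros(n), true_cov, size=m).T
P_tap = tapered_sample_cov(ensemble, dist, taper_scale=15.0)
```

Because the taper vanishes beyond twice its scale, the tapered covariance is sparse with exactly bounded support, which is what makes the approach computationally feasible in high dimensions; the diagonal (the sample variances) is left unchanged.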