Abstract
Linear estimation of signals is often based on covariance matrices estimated from training data; such estimators can perform poorly when the training data are limited and the estimated covariance matrices are ill-conditioned. Shrinking the covariance matrix toward a scaled identity matrix can improve robustness against model uncertainty, provided that the shrinkage factor is appropriately chosen. This paper introduces several cross-validation schemes for choosing the shrinkage factors in applications where the covariance matrices are replaced with sample covariance matrices or constructed from least squares estimates of the linear model parameters. For cases where the training and out-of-training data are identically distributed, we derive leave-one-out cross-validation (LOOCV) schemes that repeatedly split the training data with respect to time to determine the optimal shrinkage factors for either model fitting or signal estimation. For cases where they are distributed differently, we develop alternative LOOCV schemes that repeatedly split the out-of-training observations with respect to space. We derive computationally efficient implementations of those schemes and provide examples to demonstrate their performance.
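To make the idea concrete, the sketch below illustrates the generic form of shrinkage toward a scaled identity, R(ρ) = (1 − ρ)·S + ρ·(tr(S)/p)·I, together with a plain leave-one-out selection of ρ that scores each candidate by the Gaussian negative log-likelihood of the held-out training snapshot. This is only a minimal illustration of the general technique under an assumed zero-mean signal model; it is not the paper's specific LOOCV schemes or their computationally efficient implementations, and the function names (shrunk_cov, loocv_shrinkage) are hypothetical.

```python
import numpy as np

def shrunk_cov(S, rho):
    """Shrink sample covariance S toward a scaled identity matrix."""
    p = S.shape[0]
    return (1.0 - rho) * S + rho * (np.trace(S) / p) * np.eye(p)

def loocv_shrinkage(X, rhos):
    """Pick a shrinkage factor by leave-one-out cross-validation.

    X    : (n, p) array of n zero-mean training snapshots (rows).
    rhos : candidate shrinkage factors in [0, 1].
    Each rho is scored by the Gaussian negative log-likelihood of the
    held-out snapshot under the covariance fitted without it.
    """
    n, p = X.shape
    scores = np.zeros(len(rhos))
    for i in range(n):
        x_out = X[i]
        X_in = np.delete(X, i, axis=0)
        S = X_in.T @ X_in / (n - 1)      # sample covariance without snapshot i
        for k, rho in enumerate(rhos):
            C = shrunk_cov(S, rho)
            _, logdet = np.linalg.slogdet(C)
            scores[k] += logdet + x_out @ np.linalg.solve(C, x_out)
    return rhos[int(np.argmin(scores))]

# Example: 20 snapshots of a 10-dimensional zero-mean signal
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))
rho_hat = loocv_shrinkage(X, np.linspace(0.0, 1.0, 21))
print("selected shrinkage factor:", rho_hat)
```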