Abstract

We consider Tikhonov regularization of linear inverse problems with discrete noisy data containing correlated errors. Generalized cross-validation (GCV) is a prominent parameter choice method, but it is known to perform poorly if the sample size n is small or if the errors are correlated, sometimes yielding the extreme value 0 for the regularization parameter. We explain why this can occur and show that the robust GCV methods perform better. In particular, it is shown that, for any data set, there is a value of the robustness parameter below which the strong robust GCV method (R1GCV) will not choose the value 0. We also show that, if the errors are correlated with a certain covariance model, then, for a range of values of the unknown correlation parameter, the expected R1GCV estimate has a near optimal rate as n -> infinity. Numerical results for the problem of second derivative estimation are consistent with the theoretical results and show that R1GCV gives reliable and accurate estimates.
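As a rough illustration of the criteria involved, the following minimal Python sketch computes the standard GCV score V(lambda) for Tikhonov regularization via the SVD, together with a robust GCV-type score of the form (gamma + (1 - gamma) * mu1(lambda)) * V(lambda) with mu1(lambda) = tr(A(lambda)^2)/n, where A(lambda) is the influence matrix and gamma is the robustness parameter. This robustness factor is the RGCV form and is used here only as a stand-in; the exact R1GCV criterion studied in the paper is not reproduced in the abstract, so the factor, the function name gcv_scores, and the lambda grid are illustrative assumptions.

```python
import numpy as np

def gcv_scores(K, y, lams, gamma=0.9):
    """For each candidate lambda, return (lambda, V, V_robust) for
    Tikhonov regularization of K x ~ y, where V is the GCV score and
    V_robust = (gamma + (1 - gamma) * mu1) * V is an RGCV-type score
    (illustrative stand-in; the paper's R1GCV factor may differ)."""
    U, s, _ = np.linalg.svd(K, full_matrices=False)
    n = y.size
    b = U.T @ y                    # data in the left singular basis
    tail2 = y @ y - b @ b          # part of ||y||^2 outside range(K)
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam)                     # Tikhonov filter factors
        rss = np.sum(((1.0 - f) * b)**2) + tail2    # ||(I - A(lam)) y||^2
        V = (rss / n) / (1.0 - f.sum() / n)**2      # GCV score
        mu1 = np.sum(f**2) / n                      # tr(A(lam)^2)/n, in (0, 1]
        scores.append((lam, V, (gamma + (1.0 - gamma) * mu1) * V))
    return scores

# Usage sketch: pick lambda minimizing the robust score on a log grid.
# lams = np.logspace(-8, 0, 81)
# best_lam, V, V_robust = min(gcv_scores(K, y, lams), key=lambda t: t[2])
```

Since mu1(lambda) -> 1 as lambda -> 0 while the factor tends to gamma < 1 for large lambda, the robust score penalizes small lambda relative to plain GCV, which is the mechanism by which the robust criteria avoid the degenerate choice lambda = 0 described above.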
