Abstract

In many statistical linear inverse problems, one needs to recover classes of similar objects from their noisy images under an operator that does not have a bounded inverse. Problems of this kind appear in many areas of application. Routinely, in such problems, clustering is carried out as a pre-processing step and the inverse problem is then solved for each of the cluster averages separately. As a result, the errors of the procedures are usually examined for the estimation step only. The objective of this paper is to examine, both theoretically and via simulations, the effect of clustering on the accuracy of the solutions of general ill-posed linear inverse problems. In particular, we assume that one observes $X_{m} = A f_{m} + \delta \epsilon _{m}$, $m=1, \cdots, M$, where the functions $f_{m}$ can be grouped into $K$ classes, and one needs to recover the vector function $\mathbf {f}= (f_{1},\cdots, f_{M})^{T}$. We construct an estimator for $\mathbf {f}$ as the solution of a penalized optimization problem that corresponds to the clustering-before-estimation setting. We derive an oracle inequality for its precision and confirm that the estimator is minimax optimal, or nearly minimax optimal up to a logarithmic factor of the number of observations. One advantage of our approach is that we do not assume that the number of clusters is known in advance. Subsequently, we compare the accuracy of the above procedure with the precision of estimation without clustering, and of clustering following the recovery of each of the unknown functions separately. We conclude that clustering at the pre-processing step is beneficial when the problem is moderately ill-posed, but it should be applied with extreme care when the problem is severely ill-posed.
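The observation model and the clustering-before-estimation idea can be sketched numerically. The sketch below is illustrative only: the Gaussian blur operator, the Tikhonov (ridge) penalty, and the assumption of known cluster labels are all simplifying choices and not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: a mildly ill-posed convolution operator A
n, M, K, delta = 50, 12, 3, 0.01
t = np.linspace(0, 1, n)
A = np.exp(-30 * (t[:, None] - t[None, :]) ** 2)  # Gaussian blur matrix
A /= A.sum(axis=1, keepdims=True)

# K class prototypes; each f_m is a noisy copy of its class prototype
prototypes = np.stack([np.sin((k + 1) * np.pi * t) for k in range(K)])
labels = rng.integers(0, K, size=M)
F = prototypes[labels] + 0.02 * rng.standard_normal((M, n))

# Observations X_m = A f_m + delta * eps_m
X = F @ A.T + delta * rng.standard_normal((M, n))

# Clustering before estimation: average the observations within each
# (here, assumed known) cluster, then invert once per cluster average
# using a Tikhonov-regularized least-squares solution of A f = x_bar
lam = 1e-3
f_hat = np.empty_like(F)
for k in range(K):
    idx = labels == k
    x_bar = X[idx].mean(axis=0)
    f_k = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ x_bar)
    f_hat[idx] = f_k

# Root-mean-squared recovery error over all M functions
err = np.sqrt(np.mean((f_hat - F) ** 2))
```

Averaging within a cluster before inversion reduces the noise level roughly by the square root of the cluster size, which is why the pre-processing step can help when the operator is only moderately ill-posed.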
