Abstract

An interesting property of digital curve length estimators is their convergence toward the continuous length, together with the associated convergence speed, as the digitization step h tends to 0. On the one hand, local estimators have been proved not to satisfy this convergence. On the other hand, DSS-based (digital straight segment) and MLP-based (minimum-length polygon) estimators have been proved to converge, but only under convexity and smoothness, or polygonality, assumptions. In this context, a new estimator class, the so-called semi-local estimators, was introduced by Daurat et al. in [4]. For this class, the pattern size depends on the resolution but not on the digitized function. The convergence of semi-local estimators has been proved for functions of class \(\mathcal{C}^2\), without any convexity assumption, with an optimal convergence speed of \(\mathcal{O}(h^{\frac{1}{2}})\) (here, optimal means with the best estimation parameter setting). A subclass of semi-local estimators, which we call sparse estimators, is exhibited here. Sparse estimators are proved to achieve the same convergence speed as semi-local estimators under weaker assumptions. Moreover, if the digitized continuous function is concave, sparse estimators are proved to reach an optimal convergence speed that is \(\mathcal{O}(h)\). Furthermore, assuming a sequence of functions \(G_h\colon h\mathbb{Z} \to h\mathbb{Z}\) discretizing a given Euclidean function as h tends to 0, the computational complexity of sparse length estimation in the optimal setting is \(\mathcal{O}(h^{-\frac{1}{2}})\).
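For intuition only, the following minimal Python sketch illustrates the sampling idea behind semi-local estimation in the optimal setting; it is not the pattern-based construction of [4]. The graph of the digitized function is sampled with an abscissa spacing proportional to \(h^{\frac{1}{2}}\) and the Euclidean lengths of the resulting chords are summed, so a fixed interval is covered by \(\mathcal{O}(h^{-\frac{1}{2}})\) chords, which is where the announced complexity comes from. The name sparse_length and the callable G standing in for a function \(G_h\colon h\mathbb{Z} \to h\mathbb{Z}\) are our assumptions, not notation from the paper.

import math

def sparse_length(G, a, b, h):
    # Illustrative sketch only: sum chord lengths of the digitized graph,
    # sampled every m grid steps with m ~ h**(-1/2) (the optimal setting).
    # G is assumed callable on the grid h*Z with values in h*Z.
    m = max(1, round(h ** -0.5))        # pattern width in grid steps
    step = m * h                        # abscissa spacing between samples
    length, x = 0.0, a
    while x + step <= b:
        dy = G(x + step) - G(x)         # ordinate jump over one chord
        length += math.hypot(step, dy)  # Euclidean length of the chord
        x += step
    if x < b:                           # last, possibly shorter, chord
        length += math.hypot(b - x, G(b) - G(x))
    return length

# Example: digitize f(x) = x^2 on [0, 1] with step h and estimate its length.
h = 1e-4
f = lambda x: x * x
G = lambda x: h * math.floor(f(x) / h)  # values rounded down onto h*Z
print(sparse_length(G, 0.0, 1.0, h))    # close to the true length, about 1.4789

With m of order \(h^{-\frac{1}{2}}\), the loop above reads only about \((b-a)\,h^{-\frac{1}{2}}\) samples of G, matching the stated computational complexity.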
