Abstract

We introduce recent smoothing methods that focus on preserving boundaries, spikes, and canyons in the presence of noise. We point out basic principles they have in common; the most important is robustness. It is reflected in the use of 'cup functions' in place of squares in the statistical loss functions; such cup functions were introduced early in robust statistics to down-weight outliers, and are basically variants of truncated squares. We discuss all the methods in the common framework of 'energy functions', i.e., we associate with (most of) the algorithms a 'loss function' such that the output of the algorithm, the 'estimate', is a global or local minimum of this loss function. The third aspect we pursue is the correspondence between loss functions, their local minima, and nonlinear filters. We argue that the nonlinear filters can be interpreted as variants of gradient descent on the loss functions. In this way we show that some (robust) M-estimators and some nonlinear filters produce almost the same results.
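The abstract gives no formulas, so the following is only an illustrative sketch of the kind of energy function and gradient-descent/filter correspondence described above. Everything in it is an assumption for the sake of the example: a 1-D signal, a truncated-quadratic cup function rho(r) = min(r^2, c^2), a first-difference smoothness term, and the parameter names lam, c, and step are not taken from the paper.

```python
import numpy as np

def cup(r, c=1.0):
    # Truncated quadratic 'cup function': quadratic near zero, constant beyond c,
    # so large residuals (outliers, edges) stop influencing the fit.
    return np.minimum(r ** 2, c ** 2)

def cup_grad(r, c=1.0):
    # Derivative of the truncated quadratic; vanishes for |r| > c.
    return np.where(np.abs(r) <= c, 2.0 * r, 0.0)

def energy(u, y, lam=1.0, c=1.0):
    # Energy (loss) function: data-fidelity term plus a smoothness term
    # on neighbouring differences, both measured with the cup function.
    return cup(u - y, c).sum() + lam * cup(np.diff(u), c).sum()

def filter_step(u, y, lam=1.0, c=1.0, step=0.1):
    # One gradient-descent step on the energy, read as one pass of a nonlinear filter.
    g = cup_grad(u - y, c)           # gradient of the data-fidelity term
    d = cup_grad(np.diff(u), c)      # gradient of cup at each forward difference
    g[:-1] -= lam * d                # contribution of the difference to the right of u[i]
    g[1:] += lam * d                 # contribution of the difference to the left of u[i]
    return u - step * g

# Usage: a noisy step signal with a spike; the jump and the spike survive smoothing
# because their large differences fall on the flat part of the cup function.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
y[25] = 3.0
u = y.copy()
for _ in range(200):
    u = filter_step(u, y, lam=2.0, c=0.5, step=0.05)
```

Iterating such a step is one way to see the correspondence claimed in the abstract: the fixed points of the filter are (local) minima of the energy, while the truncation in the cup function keeps edges and spikes from being averaged away.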
