Abstract

Many problems in image processing are solved by minimizing a cost functional. The most widely used optimization technique is gradient descent, chosen for its simplicity and for its applicability where other optimization techniques, e.g., those from discrete optimization, cannot be used. Yet gradient descent converges slowly, and often only to a local minimum, with behavior that depends strongly on the condition number of the functional's Hessian. Newton-type methods, on the other hand, are known for their rapid (quadratic) convergence. In its classical form, the Newton method relies on an L2-type norm to define the descent direction. In this paper, we generalize and reformulate this important optimization method by introducing a novel Newton method based on general norms. This generalization opens up new possibilities in the extraction of the Newton step, including benefits such as mathematical stability and smoothness constraints. We first derive the modified Newton step in the calculus-of-variations framework. We then demonstrate the method on two common objective functionals: variational image deblurring and geodesic active contours. We show that, in addition to fast convergence, different norm selections yield different, and superior, results.
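To make the generalization concrete, here is a minimal sketch in standard calculus-of-variations notation (the notation and the choice of Sobolev product are illustrative assumptions, not taken from the paper). Fixing an inner product \langle\cdot,\cdot\rangle_{\mathcal{H}}, the gradient and Hessian of a functional E(u) are represented relative to that product, and the Newton step is extracted accordingly:

% Hedged sketch: Newton step relative to a chosen inner product <.,.>_H.
\delta E(u;\varphi) = \langle \nabla_{\mathcal{H}} E(u),\, \varphi \rangle_{\mathcal{H}} \quad \forall \varphi   % gradient representative
\delta^2 E(u;\, d,\, \varphi) = \langle H_{\mathcal{H}}(u)\, d,\, \varphi \rangle_{\mathcal{H}} \quad \forall \varphi   % Hessian operator
d = -\, H_{\mathcal{H}}(u)^{-1}\, \nabla_{\mathcal{H}} E(u)   % generalized Newton step

Choosing \mathcal{H} = L^2 recovers the classical step d = -(\nabla^2 E)^{-1} \nabla E, while a Sobolev product such as \langle f, g \rangle_{H^1} = \int (f g + \nabla f \cdot \nabla g)\, dx makes the extracted step implicitly smoother, which is one way smoothness constraints can enter the extraction of the step.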
