Abstract

Computational vision often needs to deal with derivatives of digital images. Such derivatives are not intrinsic properties of digital data; a paradigm is required to make them well-defined. Normally, linear filtering is applied. This can be formulated in terms of scale-space, functional minimization, or edge detection filters. The main emphasis of this paper is to connect these theories in order to gain insight into their similarities and differences. Our aim is not to argue how edge detection should be performed, but only to link some of the current theories. We take regularization (or functional minimization) as a starting point, and show that it boils down to Gaussian scale-space if we require scale invariance and a semi-group constraint to be satisfied. This regularization implies the minimization of a functional containing terms up to infinite order of differentiation. If the functional is truncated at second order, the Canny-Deriche filter arises. It is also shown that higher dimensional regularization reduces to a rotated version of the one dimensional case when Cartesian invariance is imposed and the image vanishes at the borders. This means that the results from 1D regularization can be easily generalized to higher dimensions. Finally we show how an efficient implementation of regularization of order n can be made by recursive filtering, using 2n multiplications and additions per output element, without introducing any approximation.
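The efficiency claim in the final sentence can be illustrated for the first-order case. A minimal sketch, assuming a simple discretization: smoothing by one causal and one anticausal first-order recursive pass (the `a` pole parameterization and boundary handling below are illustrative assumptions, not the paper's exact filter). Each pass costs a constant, small number of multiply–adds per sample, independent of the smoothing scale.

```python
import numpy as np

def smooth_first_order(f, a):
    """Symmetric exponential smoothing via two first-order recursive passes.

    Sketch only: `a` in [0, 1) is the filter pole (larger `a` means more
    smoothing); the kernel is a discrete two-sided exponential, the
    first-order regularization filter up to discretization choices.
    """
    f = np.asarray(f, dtype=float)
    n = len(f)
    # Causal (left-to-right) pass: constant work per output sample.
    y = np.empty(n)
    y[0] = f[0]  # assumed boundary condition: repeat the edge sample
    for i in range(1, n):
        y[i] = (1.0 - a) * f[i] + a * y[i - 1]
    # Anticausal (right-to-left) pass over the causal result.
    g = np.empty(n)
    g[-1] = y[-1]
    for i in range(n - 2, -1, -1):
        g[i] = (1.0 - a) * y[i] + a * g[i + 1]
    return g
```

Because each pass is a fixed-length recursion, the total cost per output element does not grow with the effective kernel width; constant signals pass through unchanged (unit DC gain), while high-frequency content is attenuated.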
