Abstract

The main objective of this work is to develop an analytical method for designing translation invariant operators via neural network training. A new neural network architecture, called the modular morphological neural network (MMNN), is defined using a fundamental result on minimal representations of translation invariant set mappings via mathematical morphology, proved by Banon and Barrera (1991). The general MMNN architecture is capable of learning both binary and gray-scale translation invariant operators. Its training combines ideas from the backpropagation (BP) algorithm with the methodology proposed by Pessoa and Maragos (Ph.D. thesis, Georgia Institute of Technology, 1997) for overcoming the non-differentiability of rank functions. An alternative MMNN training method via genetic algorithms (GA) is also developed, and a comparative analysis of BP vs. GA training on problems of image restoration and pattern recognition is provided. The MMNN structure can be viewed as a special case of the morphological/rank/linear neural network (MRL-NN) proposed by Pessoa and Maragos (1997), but with a specific architecture and training rules. The proposed BP and GA training algorithms for MMNNs show encouraging effectiveness, offering alternative design tools for the important class of translation invariant operators.
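For context, one standard statement of the Banon-Barrera minimal representation underlying the MMNN (a sketch using the usual morphological notation, which is assumed here rather than taken from this paper): any translation invariant set operator psi can be written as a union of sup-generating (interval) operators indexed by the maximal intervals of its kernel,

    \psi(X) \;=\; \bigcup_{[A,B] \in \mathbf{B}(\psi)} \lambda_{[A,B]}(X),
    \qquad
    \lambda_{[A,B]}(X) \;=\; \{\, h : A_h \subseteq X \subseteq B_h \,\},

where \mathbf{B}(\psi) is the basis of \psi and A_h denotes the translate of A by h. For increasing operators this reduces to a union of erosions, \psi(X) = \bigcup_{A \in \mathbf{B}(\psi)} (X \ominus A), which is the form a modular network of morphological units can realize directly.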
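Since the abstract highlights overcoming the non-differentiability of rank functions for BP training, the following is a minimal sketch of one common smoothing strategy: replacing the hard r-th order statistic with a smoothly weighted average concentrated near it. The function name, the Gaussian weighting, and the parameter sigma are illustrative assumptions, not necessarily the exact scheme of Pessoa and Maragos.

    import numpy as np

    def soft_rank(values, r, sigma=0.1):
        """Smooth surrogate for the r-th order statistic (rank function).

        The hard rank function sorts `values` and picks the r-th smallest,
        which has zero gradient almost everywhere. Here each element is
        weighted by its closeness to the hard r-th order statistic,
        yielding a differentiable approximation (illustrative sketch only).
        """
        v = np.asarray(values, dtype=float)
        target = np.sort(v)[r - 1]            # hard r-th order statistic (1-indexed)
        w = np.exp(-((v - target) ** 2) / (2.0 * sigma ** 2))
        w /= w.sum()                          # normalized smooth rank-indicator vector
        return float(np.dot(w, v))            # weighted average near the target

For example, soft_rank([3.0, 1.0, 2.0], r=2) returns a value close to the median 2.0, and the approximation tightens to the hard rank as sigma approaches 0; erosion and dilation correspond to the extreme ranks r = 1 and r = N.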
