Abstract

The aim of this article is to develop efficient methods for expressing multilevel structured information from multiple modalities (images, speech, and text) so as to naturally reproduce the structure as it occurs in the human brain. To achieve this goal, a number of theoretical and practical issues must be resolved, including the creation of a mathematical model with a stability point, an algorithm and software implementation for processing offline information, the representation of neural networks, and long-term synchronization of the various modalities. An artificial neural network (ANN) of the Cohen–Grossberg type was used to accomplish these objectives. The research techniques reported herein are based on the theory of pattern recognition, as well as speech, text, and image processing algorithms.
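The abstract's "mathematical model with a stability point" refers to the equilibrium behavior of Cohen–Grossberg-type networks. As a hedged illustration (not the authors' implementation), the sketch below integrates a two-neuron network of this family, using the well-known Hopfield special case of the Cohen–Grossberg dynamics; the weights, inputs, and network size are illustrative assumptions. With symmetric coupling, the trajectory settles at a stable fixed point where the right-hand side of the dynamics vanishes.

```python
import math

# Hopfield special case of Cohen–Grossberg dynamics:
#   dx_i/dt = -x_i + sum_j w_ij * tanh(x_j) + I_i
# Symmetric weights admit a Lyapunov function, so trajectories
# converge to a stable equilibrium (the model's "stability point").
W = [[0.0, 0.3],
     [0.3, 0.0]]   # symmetric coupling (illustrative values)
I = [0.5, -0.2]    # constant external input (illustrative)
x = [1.0, -1.0]    # arbitrary initial state
dt = 0.01          # forward-Euler step size

def rhs(x):
    """Right-hand side of the dynamics; zero at an equilibrium."""
    return [-x[i] + sum(W[i][j] * math.tanh(x[j]) for j in range(2)) + I[i]
            for i in range(2)]

# Integrate long enough for the state to settle.
for _ in range(20000):
    dx = rhs(x)
    x = [x[i] + dt * dx[i] for i in range(2)]

# At the stability point the dynamics residual is (numerically) zero.
residual = max(abs(v) for v in rhs(x))
print(f"residual at equilibrium: {residual:.2e}")
```

In this special case the amplification function a_i and the self-decay b_i of the general Cohen–Grossberg form are taken as identity terms; the full model allows neuron-dependent choices of both while preserving the same convergence guarantee under symmetric weights.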
