Abstract

Some results from our quantitative analysis of artificial neural networks are presented. Although the results are drawn from only a few neural models, they reflect the common advantages and limitations of well-known neural networks. The common limitations of neural networks are that learning complexity is generally high and that the quality of network performance after learning cannot be guaranteed. These drawbacks stem from the lack of use of prior knowledge and from uncontrollable learning processes. Based on the analysis, several approaches are proposed that use prior knowledge and optimization techniques to control the learning process. The characteristics of these new approaches are discussed, and experimental results are given to demonstrate the efficiency of the proposed methods. The results show that prior knowledge can be used not only in architecture design but also in the learning process of neural networks, so that their learning capacity and performance can be greatly improved. Although much has been done in neural network research, little is known so far about the global properties of neural networks, especially their quantitative properties. The research therefore also shows that quantitative analysis is of great value to the understanding and improvement of different neural network models.
