Abstract
The distributed-representation three-layered perceptron with backpropagation suffers from problems such as local minima, long learning times, and ambiguity in the internal representation. To cope with these problems, this paper proposes a four-layered perceptron, together with its learning algorithm, in which a hidden layer is added so that each discrete sample point can be represented perfectly by the corresponding output of the upper hidden layer.

First, the perceptron learning algorithm is applied successively to the sample points, and learning proceeds so that the input sample points are separated perfectly by piecewise sets of hyperplanes. Under this mechanism, the output matrix of the lower hidden layer is nonsingular. Consequently, a four-layered perceptron can be constructed in which the output matrix of the upper hidden layer is an identity matrix, and any discrete values can be produced at the output layer by adjusting the network coefficients. Computational experiments are carried out on the realization of a three-valued logic function, which is a learning problem on the two-dimensional plane, as well as on a pattern recognition problem using representative sample points. The results show that learning converges in less than 1/100 of the computation time required by the three-layered perceptron with backpropagation.
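The construction described above can be sketched in code. The following is a minimal illustration, not the paper's actual algorithm: instead of the successive perceptron-learning procedure used to separate the sample points, a random lower hidden layer is drawn until its output matrix H on the samples is well-conditioned. The function name `build_four_layer_net` and all parameter choices are hypothetical. The key step is shared with the paper's idea: with one lower hidden unit per sample, setting the upper-hidden-layer weights to H⁻¹ makes the upper hidden outputs an identity matrix on the sample points, so the output layer can read off any discrete targets exactly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_four_layer_net(X, T, seed=0):
    """Hypothetical sketch: build a four-layered perceptron that reproduces
    the discrete targets T exactly at the sample points X.

    The lower hidden layer here is drawn at random (the paper instead trains
    it by successive perceptron learning); we only require that its output
    matrix H on the samples is nonsingular, as in the paper."""
    n = X.shape[0]                      # one lower hidden unit per sample
    rng = np.random.default_rng(seed)
    while True:
        W1 = rng.normal(size=(X.shape[1], n))
        b1 = rng.normal(size=n)
        H = sigmoid(X @ W1 + b1)        # lower hidden layer output matrix
        if np.linalg.cond(H) < 1e6:     # well-conditioned => invertible
            break
    W2 = np.linalg.inv(H)               # upper hidden outputs: H @ W2 = I
    W3 = T                              # output layer stores the targets
    return lambda Z: sigmoid(Z @ W1 + b1) @ W2 @ W3

# Usage: XOR as a set of discrete sample points with discrete targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
net = build_four_layer_net(X, T)
print(np.round(net(X), 4))
```

Because the upper hidden outputs form an identity matrix on the samples, `net(X)` matches `T` up to numerical inversion error; off the sample points the network interpolates through the sigmoid responses.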