Abstract

This paper presents learning multilayer Potts perceptrons (MLPotts) for data-driven function approximation. A Potts perceptron is composed of a receptive field and a K-state transfer function that generalizes the sigmoid-like transfer functions of traditional perceptrons. An MLPotts network is organized to translate a high-dimensional input to the sum of multiple postnonlinear projections, each with its own postnonlinearity realized by a weighted K-state transfer function. MLPotts networks span a function space that theoretically covers the network functions of multilayer perceptrons. Compared with traditional perceptrons, weighted Potts perceptrons realize more flexible postnonlinear functions for nonlinear mappings. Numerical simulations show that MLPotts learning by the Levenberg-Marquardt (LM) method significantly improves on traditional supervised learning of multilayer perceptrons for data-driven function approximation.
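As a rough illustration of the idea in the abstract, the following sketch implements one plausible form of a weighted K-state transfer function: a softmax over K per-state linear responses, mixed by adjustable output weights. The parameter names (`thetas`, `weights`) and this specific functional form are assumptions for illustration, not the paper's exact formulation; note that with K = 2 and suitable parameters it reduces to the ordinary logistic sigmoid, consistent with the claim that Potts perceptrons generalize sigmoid-like units.

```python
import numpy as np

def potts_transfer(x, thetas, weights):
    """Hypothetical weighted K-state (Potts) transfer function.

    Computes a softmax over K state activations (one linear response
    per state), then returns the weighted mixture of the K state
    probabilities. This is an illustrative guess at the form described
    in the abstract, not the paper's exact definition.
    """
    z = np.outer(x, thetas)               # state activations, shape (n, K)
    z -= z.max(axis=1, keepdims=True)     # subtract max for numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)     # softmax over the K states
    return p @ weights                    # weighted mixture -> scalar output per input

x = np.linspace(-5.0, 5.0, 11)
# With K=2, thetas=(0,1), weights=(0,1) this collapses to the logistic sigmoid.
y = potts_transfer(x, thetas=np.array([0.0, 1.0]), weights=np.array([0.0, 1.0]))
```

Larger K with distinct `thetas` and `weights` yields a richer family of monotone or non-monotone postnonlinearities than a single sigmoid, which is the flexibility the abstract attributes to weighted Potts perceptrons.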
