Abstract

In this paper, we study the generalization ability of a simple perceptron that learns an unrealizable Boolean function represented by a perceptron with a non-monotonic transfer function of reversed-wedge type. This type of non-monotonic perceptron can be regarded as a variant of a multilayer perceptron and is parametrized by a single 'wedge' parameter a. Reflecting the non-monotonic nature of the target function, a discontinuous transition from a poor-generalization phase to a good-generalization phase is observed in the learning curve for intermediate values of a. We also find that the asymptotic learning curves fall into two categories depending on a: for large a, the learning curve obeys a power law with exponent 1, whereas for small a it obeys a power law with a different exponent. Although both exponents are obtained from unstable replica-symmetric solutions of the replica method, they are consistent with results derived without the replica method for a low-dimensional version of this learning problem. This suggests that our results are good approximations even if they are not exact.
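
As a concrete illustration of the setting (not taken from the paper itself, which treats the problem analytically with the replica method), the sketch below simulates a teacher using the standard reversed-wedge rule T_a(u) = sign(u(u - a)(u + a)) and a simple perceptron student trained with a mistake-driven update. The dimension N, the value of a, and the training rule are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def reversed_wedge(u, a):
    """Reversed-wedge output sign(u(u - a)(u + a)):
    +1 for u > a or -a < u < 0, and -1 otherwise."""
    return np.where(u * (u - a) * (u + a) > 0, 1.0, -1.0)

N = 500      # input dimension (illustrative choice)
a = 1.0      # wedge parameter (illustrative choice)

# Teacher and student weight vectors, normalized to |w|^2 = N.
teacher = rng.standard_normal(N)
teacher *= np.sqrt(N) / np.linalg.norm(teacher)
student = rng.standard_normal(N)
student *= np.sqrt(N) / np.linalg.norm(student)

def generalization_error(n_test=20000):
    """Monte Carlo estimate of the probability that the simple
    (monotonic) student disagrees with the non-monotonic teacher."""
    X = rng.standard_normal((n_test, N))
    y_teacher = reversed_wedge(X @ teacher / np.sqrt(N), a)
    y_student = np.where(X @ student / np.sqrt(N) > 0, 1.0, -1.0)
    return np.mean(y_teacher != y_student)

# Mistake-driven perceptron training on teacher-labelled examples.
for step in range(1, 20 * N + 1):
    x = rng.standard_normal(N)
    y = reversed_wedge(x @ teacher / np.sqrt(N), a)
    if y * (x @ student) <= 0:           # student misclassifies
        student += y * x / np.sqrt(N)    # perceptron update
    if step % (5 * N) == 0:
        alpha = step / N                 # examples per weight
        print(f"alpha = {alpha:4.1f}, eps_g ~ {generalization_error():.3f}")
```

Because the reversed-wedge rule is unrealizable by a monotonic student, the measured error should plateau at a nonzero residual value rather than decay to zero; the power laws discussed in the abstract describe how the error approaches that residual, a regime this naive finite-N simulation can only illustrate qualitatively.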
