Abstract

We exploit the potential of parsimonious higher-order neural classifiers to reduce hardware cost, speed up learning, and improve the robustness of generalization. Specifically, our neuron model allows the computation of input products of potentially unlimited order. Structural adaptation of the topology is achieved by two alternative algorithms that allocate resources only to the relevant nonlinear interactions, thereby keeping the combinatorial explosion of higher-order terms in check. The first algorithm, a deterministic pruning variant, starts with the maximal higher-order neuron (containing all candidate product terms) and performs an iterated process of weight elimination. The second algorithm implements a stochastic search through the space of sparse topologies: it starts with a randomly allocated set of higher-order terms and modifies the resource allocation while keeping the size of the architecture fixed. Two challenging classification benchmarks were chosen to demonstrate the performance of the presented approach: the two-spirals separation problem and the left-/right-shift classification problem for binary strings. Our simulation results show that the proposed model may be a powerful tool for a variety of hard classification problems.
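To make the model concrete, the following is a minimal sketch, in Python, of a sparse higher-order neuron and of a fixed-size stochastic topology search in the spirit of the second algorithm described above. It is not the authors' implementation: the choice of term subsets, the logistic output, the learning rate, the retraining budget, and the error-based acceptance rule are all illustrative assumptions.

import math
import random

def random_subset(n_inputs, max_order):
    # Pick a random product term: a subset of input indices of random order.
    order = random.randint(1, min(max_order, n_inputs))
    return tuple(sorted(random.sample(range(n_inputs), order)))

def predict(x, terms, weights, bias):
    # Higher-order neuron: logistic(bias + sum_k w_k * prod_{i in S_k} x_i).
    s = bias
    for subset, w in zip(terms, weights):
        prod = 1.0
        for i in subset:
            prod *= x[i]
        s += w * prod
    s = max(-60.0, min(60.0, s))  # guard against overflow in exp
    return 1.0 / (1.0 + math.exp(-s))

def train_epoch(data, terms, weights, bias, lr=0.1):
    # One epoch of online gradient descent on the cross-entropy loss.
    for x, y in data:
        err = y - predict(x, terms, weights, bias)
        for k, subset in enumerate(terms):
            prod = 1.0
            for i in subset:
                prod *= x[i]
            weights[k] += lr * err * prod
        bias += lr * err
    return weights, bias

def squared_error(data, terms, weights, bias):
    return sum((y - predict(x, terms, weights, bias)) ** 2 for x, y in data)

def stochastic_search(data, n_inputs, n_terms=10, max_order=3, steps=100):
    # Fixed-size stochastic topology search: repeatedly replace one product
    # term by a random new one, retrain briefly, and keep the reallocation
    # only if the training error improves.
    terms = [random_subset(n_inputs, max_order) for _ in range(n_terms)]
    weights, bias = [0.0] * n_terms, 0.0
    for _ in range(30):
        weights, bias = train_epoch(data, terms, weights, bias)
    best_err = squared_error(data, terms, weights, bias)
    for _ in range(steps):
        k = random.randrange(n_terms)
        cand_terms = list(terms)
        cand_terms[k] = random_subset(n_inputs, max_order)
        w, b = list(weights), bias
        for _ in range(30):
            w, b = train_epoch(data, cand_terms, w, b)
        err = squared_error(data, cand_terms, w, b)
        if err < best_err:
            terms, weights, bias, best_err = cand_terms, w, b, err
    return terms, weights, bias

For the shift-classification benchmark, for example, x would hold the bits of the binary string and y the shift direction; the paper's actual weight-elimination penalty and acceptance criterion are not reproduced here.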
