Abstract

This paper introduces a novel approach to pattern classification across multiple dimensions. Trans-dimensional learning is concerned with automatically determining network architectures that generalize not only within fixed-dimensional problems but across an unrestricted number of problems that differ in the dimensionality N of their input space. Relaxing the classification of network units as inputs, hidden units, and outputs leads to the notion that all units of a network are features. Learning is then perceived as the process of creating features and integrating them with previously constructed features to form a self-adjusting network that builds on prior learned knowledge. This is accomplished by applying the simple perceptron rule for local feature training and by incorporating evolutionary processes to determine suitable partitions on which to train individual features. The basic algorithm introduced here (TDL) is augmented with single-feature pruning to further reduce network complexity. An array of experiments is presented to demonstrate the learning capabilities of TDL.
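The abstract names the simple perceptron rule as the procedure for local feature training. A minimal sketch of that rule on a toy linearly separable task (the function name `perceptron_train` and the AND example are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def perceptron_train(X, y, epochs=100, lr=1.0):
    """Train a single threshold unit with the classic perceptron rule:
    on a misclassified example, nudge the weights toward the target."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != ti:
                w += lr * (ti - pred) * xi  # perceptron update
                b += lr * (ti - pred)
                errors += 1
        if errors == 0:
            break  # converged on linearly separable data
    return w, b

# Toy task: logical AND, which is linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

In the trans-dimensional setting described above, such a unit would presumably be trained only on a locally chosen partition of the data, with the evolutionary search selecting which partitions each feature is trained on.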

Full Text
Published version (Free)
