Abstract

The authors investigate the dimension-expansion property of three-layer feedforward neural networks and offer a helpful insight into how such networks define complex decision boundaries. First, they note that adding a hidden neuron is equivalent to adding a dimension to the space defined by the outputs of the hidden neurons. Thus, if there are more hidden neurons than inputs, the input data is warped into a higher-dimensional space. Second, they show that the weights between the hidden neurons and the output neurons always define linear boundaries (hyperplanes) in the hidden-neuron space. Consequently, the input data is first mapped nonlinearly into a higher-dimensional space, where it is divided by hyperplanes; these linear decision boundaries in the hidden-neuron space correspond to complex, curved decision boundaries in the input space.
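To make the geometry concrete, here is a minimal NumPy sketch of the idea (not from the paper; the network size and all weight values are hypothetical): a 2-D input is mapped into a 3-D hidden space, where the output weights define a plane, and the preimage of that plane is a curved decision boundary back in the input space.

```python
import numpy as np

# Hypothetical 2-input -> 3-hidden -> 1-output sigmoid network.
# All weights are illustrative, chosen only to demonstrate the geometry.
W1 = np.array([[ 2.0, -1.0],
               [-1.5,  2.5],
               [ 1.0,  1.0]])    # hidden weights: 3 hidden units x 2 inputs
b1 = np.array([0.5, -0.5, 0.0])
w2 = np.array([1.5, -2.0, 1.0])  # output weights: a hyperplane in hidden space
b2 = -0.25

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden(x):
    # Nonlinear map from the 2-D input space into the 3-D hidden space:
    # more hidden neurons than inputs means dimension expansion.
    return sigmoid(W1 @ x + b1)

def output(x):
    # The output is a linear function of the hidden activations, squashed
    # by a sigmoid. The decision boundary output(x) = 0.5 is exactly
    # w2 . h + b2 = 0, i.e. a plane in the 3-D hidden space.
    return sigmoid(w2 @ hidden(x) + b2)

# Sample the input plane; the linear boundary in hidden space pulls back
# to a curved boundary in the 2-D input space.
xs = np.linspace(-2, 2, 200)
grid = np.array([[output(np.array([x0, x1])) for x0 in xs] for x1 in xs])
print("fraction of input plane classified positive:", (grid > 0.5).mean())
```

Plotting `grid > 0.5` over the input plane would show the curved region whose image in hidden space lies on one side of the plane defined by `w2` and `b2`.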
