Abstract

We extend backpropagation (BP) learning from ordinary unidirectional training to bidirectional training of deep multilayer neural networks. This gives a form of backward chaining or inverse inference from an observed network output to a candidate input that produced the output. The trained network learns a bidirectional mapping and so can be applied to some inverse problems. A bidirectional multilayer neural network can exactly represent some invertible functions. We prove that a fixed three-layer network can always exactly represent any finite permutation function and its inverse: the forward pass computes the permutation function value, and the backward pass computes the inverse permutation with the same weights and hidden neurons. A joint forward–backward error function allows BP learning in both directions without either direction overwriting the learning in the other. The learning applies to both classification and regression. The algorithms do not require that the underlying sampled function have an inverse. A trained regression network tends to map an output back to the centroid of its preimage set.
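
The abstract gives no code, but the joint-error idea is easy to sketch. The snippet below is a minimal illustration, not the paper's construction: it assumes a two-layer network with tanh hidden units whose backward pass reuses the transposed forward weight matrices (with separate backward biases), squared error in both directions, Adam updates, and a toy permutation target. The names BidirectionalNet, forward_pass, and backward_pass are hypothetical.

```python
import torch
import torch.nn as nn

# Minimal sketch of joint forward-backward training. All concrete
# choices here are assumptions: a two-layer net, tanh hidden units,
# the same weight matrices reused (transposed) on the backward pass,
# separate backward biases, and squared error in both directions.

class BidirectionalNet(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.W1 = nn.Parameter(0.1 * torch.randn(n_hidden, n_in))
        self.W2 = nn.Parameter(0.1 * torch.randn(n_out, n_hidden))
        self.b_h = nn.Parameter(torch.zeros(n_hidden))  # forward hidden bias
        self.b_y = nn.Parameter(torch.zeros(n_out))     # forward output bias
        self.c_h = nn.Parameter(torch.zeros(n_hidden))  # backward hidden bias
        self.c_x = nn.Parameter(torch.zeros(n_in))      # backward output bias

    def forward_pass(self, x):   # x -> y
        h = torch.tanh(x @ self.W1.T + self.b_h)
        return h @ self.W2.T + self.b_y

    def backward_pass(self, y):  # y -> candidate x
        # Run the net in reverse through the transposed weights, so both
        # directions share the same weights and the same hidden layer.
        h = torch.tanh(y @ self.W2 + self.c_h)
        return h @ self.W1 + self.c_x

# Toy invertible target: a fixed permutation of the 4 input coordinates.
perm = torch.tensor([2, 0, 3, 1])
x = torch.randn(256, 4)
t = x[:, perm]                   # forward targets

net = BidirectionalNet(n_in=4, n_hidden=16, n_out=4)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    e_fwd = ((net.forward_pass(x) - t) ** 2).mean()   # forward error
    e_bwd = ((net.backward_pass(t) - x) ** 2).mean()  # backward error
    (e_fwd + e_bwd).backward()   # one gradient step on the joint error
    opt.step()
```

Minimizing e_fwd + e_bwd in a single gradient step is what lets the two directions train on the same weights without one pass overwriting the other. After training, backward_pass gives the inverse inference the abstract describes: a candidate input for an observed output. For a non-invertible sampled function, this backward estimate tends toward the centroid of the output's preimage set.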
