Abstract

When multilayer neural networks are implemented in digital hardware, which allows full exploitation of well-developed digital VLSI technologies, the multiplications between the weights and the inputs in each neuron can become a bottleneck in the system, because digital multipliers are very demanding in terms of time or chip area. For this reason, the use of weights constrained to powers of two has been proposed to reduce the computational requirements of such networks: since one of the two multiplier operands is a power of two, the multiplication can be performed as a much simpler shift operation on the neuron input. While this approach greatly reduces the computational burden of the forward phase of the network, the learning phase, when performed with the traditional backpropagation procedure, still requires many regular multiplications. In this paper, a new learning procedure based on the power-of-two approach is proposed that uses only shift and add operations, so that both the forward and learning phases of the network can be implemented easily in digital hardware.
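
As a concrete illustration of the forward-phase idea (not the paper's exact procedure), the Python sketch below computes a neuron's weighted sum using only shifts and adds once the weights are rounded to signed powers of two. The quantization helper, the exponent range, and the fixed-point input scaling are assumptions of this sketch, not details taken from the paper.

    import math

    def quantize_pow2(w, min_exp=-8, max_exp=8):
        """Round a real weight to the nearest signed power of two.

        Returns (sign, exp) so the quantized weight is sign * 2**exp.
        A zero weight is encoded as sign = 0. The exponent range is an
        assumption of this sketch.
        """
        if w == 0.0:
            return 0, 0
        sign = 1 if w > 0 else -1
        exp = int(round(math.log2(abs(w))))
        return sign, max(min_exp, min(max_exp, exp))

    def pow2_neuron_sum(x_fixed, weights_pow2, frac_bits=8):
        """Weighted sum of fixed-point integer inputs via shifts and adds.

        x_fixed      : inputs scaled to integers (value * 2**frac_bits)
        weights_pow2 : list of (sign, exp) pairs from quantize_pow2
        A negative exponent becomes a right shift; Python's >> on negative
        integers floors toward -inf, an acceptable rounding here.
        """
        acc = 0
        for x, (sign, exp) in zip(x_fixed, weights_pow2):
            shifted = (x << exp) if exp >= 0 else (x >> -exp)
            acc += sign * shifted        # add or subtract the shifted input
        return acc / 2**frac_bits        # back to real units, for checking only

    # Quick check against an ordinary multiply-accumulate:
    w = [0.3, -1.7, 0.06]
    x = [1.0, 0.5, -2.0]
    qw = [quantize_pow2(wi) for wi in w]
    xf = [int(round(xi * 2**8)) for xi in x]
    print(pow2_neuron_sum(xf, qw))                              # shift-and-add
    print(sum(s * 2.0**e * xi for (s, e), xi in zip(qw, x)))    # reference

Both prints give -0.875 for this input, showing that the shift-and-add path reproduces the multiply-accumulate result once the weights are constrained to powers of two.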
