Abstract
A series of models is developed to predict the silicon area consumed by a neural network. These models predict the area consumed by the different parts of a neural network and the effect of using different signalling types, so that the relative sizes of neural network implementations using these signalling types may be assessed. The silicon area consumed by neural networks implemented with local weights and single-line inputs is shown to be orders of magnitude smaller than that of other possible neural network implementations. The use of single-line transmission is shown to be the next most effective method, while differential and parallel digital data transmission techniques are shown to be the least satisfactory options with respect to silicon area consumption. In addition, the use of rectangular synapse cells is shown to reduce the interconnect area consumed, and asymmetrical signalling techniques are shown to be advantageous.