Abstract

There has been continuously growing research interest in Artificial Neural Networks (ANNs) owing to their wide acceptance in many real-world applications, which has led to a variety of realization methods for multilayer perceptron (MLP) inference models. In this paper, we propose a generalized three-stage, configurable, multiplier-less, massively parallel architecture for realizing MLP inference models. We use offset binary coded distributed arithmetic (OBC-DA), which replaces multipliers with look-up tables (LUTs), to realize the internal computing units of the MLPs. The design effort to reduce resource requirements is two-fold: (i) a reduction in the size of each LUT, achieved by exploiting the symmetry induced by OBC-DA, which halves the LUT size; and (ii) a reduction in the total number of LUTs, attained by sharing LUTs. The resource efficiency of the proposed architecture is further improved by decomposing the LUTs into smaller parallel LUTs, and enhanced further by exploiting the symmetry among the inputs of an MLP layer. The ASIC synthesis results for the underlying MLPs establish that the proposed realization method is considerably more efficient than earlier reported methods and agrees with the derived generalized hardware-timing complexities.
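
To illustrate the bit-serial computation the abstract describes, the sketch below shows how an OBC-DA inner product can be evaluated with a half-size LUT. This is a minimal behavioral model, not the paper's implementation: the function names (`obc_da_dot`, `build_half_lut`) and the choice of Python are ours. It relies on the standard OBC property that a LUT entry T(c) = 0.5 * Σ_k w_k(2c_k − 1) satisfies T(~c) = −T(c), so only 2^(K−1) of the 2^K entries need to be stored, giving the 50% LUT-size reduction mentioned above.

```python
def build_half_lut(weights):
    """Precompute the 2^(K-1) stored entries of the OBC-DA LUT.

    The full OBC LUT entry for a bit slice c in {0,1}^K is
        T(c) = 0.5 * sum_k w_k * (2*c_k - 1),
    which satisfies the mirror symmetry T(~c) = -T(c).
    Storing only the half whose top slice bit is 0 halves the LUT.
    """
    K = len(weights)
    lut = []
    for addr in range(1 << (K - 1)):
        bits = [(addr >> k) & 1 for k in range(K - 1)] + [0]
        lut.append(0.5 * sum(w * (2 * b - 1) for w, b in zip(weights, bits)))
    return lut

def lut_read(lut, slice_bits, K):
    """Fetch T(slice) from the half-size LUT via the OBC symmetry."""
    mask = (1 << (K - 1)) - 1
    if (slice_bits >> (K - 1)) & 1:      # slice lies in the mirrored half
        return -lut[(slice_bits & mask) ^ mask]
    return lut[slice_bits & mask]

def obc_da_dot(weights, xs, B):
    """Multiplier-less inner product sum_k w_k * x_k for B-bit
    two's-complement inputs xs, processed one bit slice at a time."""
    K = len(weights)
    lut = build_half_lut(weights)
    acc = -0.5 * sum(weights)            # constant OBC offset term
    for b in range(B):
        # Gather bit b of every input into one K-bit LUT address.
        slice_bits = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
        scale = -(1 << b) if b == B - 1 else (1 << b)   # sign bit weighs negatively
        acc += scale * lut_read(lut, slice_bits, K)
    return acc

# Sanity check against a direct multiply-accumulate.
w, x = [3, -5, 7], [2, -3, 1]
assert obc_da_dot(w, x, B=4) == sum(wi * xi for wi, xi in zip(w, x))  # 28
```

In hardware, each LUT read would feed a shift-accumulator over B clock cycles; the Python loop simply mirrors that behavior in software.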
