Abstract

In many real computations, such as the arithmetic operations in the hidden layers of a neural network, a certain amount of inaccuracy can be tolerated without degrading the final results (e.g., maintaining the same level of accuracy for image classification). This paper presents the design of kNN-CAM, a k-Nearest Neighbors (kNN)-based Configurable Approximate floating-point Multiplier. kNN-CAM exploits opportunities for approximate computing to deliver significant area and energy savings. A kNN engine is trained on a sufficiently large set of input data to learn how many bits can be truncated from each floating-point input with the goal of minimizing energy and area. This trained engine is then used to predict the level of approximation for unseen data. Experimental results show that kNN-CAM provides about 67% area savings and a 19% speedup while losing only 4.86% accuracy compared to a fully accurate multiplier. Furthermore, applying kNN-CAM to a handwritten digit recognition implementation provides 47.2% area savings while accuracy drops by only 0.3%.
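
The sketch below illustrates the core idea as we understand it from the abstract: low-order mantissa bits of each floating-point operand are truncated before multiplication, and a kNN model predicts a tolerable truncation level per input. The feature choice (operand magnitude), the labels, and the helper names `truncate_mantissa` and `approx_multiply` are our own illustrative assumptions, not the paper's actual design.

```python
import struct

import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def truncate_mantissa(x: float, bits: int) -> float:
    """Zero out the `bits` least-significant mantissa bits of a float32."""
    raw = struct.unpack("<I", struct.pack("<f", x))[0]
    mask = (0xFFFFFFFF << bits) & 0xFFFFFFFF  # float32 has a 23-bit mantissa
    return struct.unpack("<f", struct.pack("<I", raw & mask))[0]


# Assumed training data: one feature per operand (its magnitude) and, as the
# label, the largest truncation level that kept the product error acceptable.
X_train = np.array([[0.001], [0.5], [3.7], [120.0]])
y_train = np.array([4, 8, 12, 16])  # tolerable bits to truncate (assumed labels)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)


def approx_multiply(a: float, b: float) -> float:
    """Truncate each operand by its predicted level, then multiply."""
    bits_a = int(knn.predict([[abs(a)]])[0])
    bits_b = int(knn.predict([[abs(b)]])[0])
    return truncate_mantissa(a, bits_a) * truncate_mantissa(b, bits_b)


print(approx_multiply(3.14159, 2.71828))  # close to 8.5397, with a small error
```

In hardware, truncating mantissa bits shrinks the partial-product array of the multiplier, which is the source of the reported area and energy savings; the software model above only emulates the resulting numerical behavior.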
