Abstract

Convolutional Neural Networks (CNNs) are ubiquitous in today's rapidly growing technology landscape and form a thriving segment of machine learning and Artificial Intelligence (AI). CNNs demand a large amount of computing capability and memory operating over a wide frequency range. In this investigation, a Pre-Accumulator and Post-Multiplier (PAPM) architecture is proposed to accelerate the processor. A 4-bit multiplier based on the sutras of Vedic mathematics is constructed using a Carry Save Adder (CSA) built from 6-transistor (6T) adder cells. The accumulator of the multiply-and-accumulate unit is designed with Two-Level Edge-Triggered Flip-Flops (TLET-FFs) to increase memory bandwidth. The proposed Multiply-Accumulate (MAC) architecture consumes considerably less power than existing high-speed MACs. The performance of the accumulator is compared across three two-level edge-triggered flip-flop designs, namely the 16TLET-FF, 14TLET-FF, and 12TLET-FF. Owing to its low power consumption together with its high operating frequency, the proposed MAC can replace existing multipliers.
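The paper describes the design at the circuit level; purely as a behavioural illustration, the Python sketch below mimics the vertical-crosswise (Urdhva Tiryagbhyam) Vedic sutra on which a 4-bit multiplier of this kind is typically built, with a single deferred carry pass standing in for the carry-save adder tree that resolves column sums in hardware. The function names and the 16-bit accumulator width are illustrative assumptions, not taken from the paper.

```python
def urdhva_4x4(a: int, b: int) -> int:
    """4x4-bit Urdhva Tiryagbhyam (vertical-crosswise) multiply.

    Column sums are formed in parallel from the cross products; carries
    are resolved in one final pass, mirroring how a carry-save adder
    tree defers carry propagation until a last fast-adder stage.
    (Behavioural sketch only -- not the paper's transistor-level design.)
    """
    assert 0 <= a < 16 and 0 <= b < 16
    A = [(a >> i) & 1 for i in range(4)]  # bit-decompose the operands
    B = [(b >> i) & 1 for i in range(4)]
    # Column i collects every cross product A[j] * B[i-j] (vertical-crosswise).
    cols = [sum(A[j] * B[i - j] for j in range(4) if 0 <= i - j < 4)
            for i in range(7)]
    # Single carry-propagate pass over the column sums.
    result, carry = 0, 0
    for i, c in enumerate(cols):
        total = c + carry
        result |= (total & 1) << i
        carry = total >> 1
    return result | (carry << 7)


def mac(acc: int, a: int, b: int, width: int = 16) -> int:
    """One multiply-accumulate step: acc <- acc + a*b, modulo 2**width.

    The width parameter is an assumed accumulator register size.
    """
    return (acc + urdhva_4x4(a, b)) & ((1 << width) - 1)


if __name__ == "__main__":
    acc = 0
    for a, b in [(7, 9), (15, 15), (3, 12)]:
        assert urdhva_4x4(a, b) == a * b  # sanity-check the sutra
        acc = mac(acc, a, b)
    print(acc)  # 63 + 225 + 36 = 324
```

In the PAPM arrangement named in the abstract, accumulation logic is placed ahead of the final multiplication stage; the sketch above keeps the conventional multiply-then-accumulate order only because it is the simplest way to show the arithmetic the unit performs.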
