Abstract

A unique word-serial inner-product processor architecture is proposed to capitalize on the high-speed serial-link bus. To eliminate input buffers and deserializers, partial products are generated immediately from the serial input data and accumulated by an array of small binary counters operating in parallel, directly forming a reduced partial product matrix. The height of the resulting partial product matrix is reduced logarithmically, so the carry-save-adder tree needed to complete the inner-product computation is smaller and faster. The small binary counters act as active on-chip buffers that lighten the workload of the partial product accumulator. Because they accumulate partial product bits faster than a combinational full adder, a simple two-stage architecture with high throughput and low latency results. The architecture consumes 46% less silicon area, 24% less energy per inner-product computation, and 70% less total interconnect length than its merged-arithmetic counterpart in a 65 nm CMOS process. In addition, it requires only 4 of the 7 available metal layers for signal and power routing. By emulating the on-chip serial-link bus architecture on both designs, the proposed design is shown to be well suited to high-speed on-chip serial-link bus architectures.
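The counter-based reduction described above can be modeled in software. The sketch below is an illustrative behavioral model only (all names are hypothetical, and it assumes unsigned operands of a fixed word width): each partial product bit of weight 2^k generated from the serial input increments a small per-column counter, and the counter values are then combined by a weighted sum, standing in for the reduced carry-save-adder tree.

```python
# Behavioral sketch (not the paper's implementation) of accumulating
# bit-serial partial products into per-weight binary counters.

def serial_inner_product(a_words, b_words, width=8):
    """Compute sum(a_i * b_i) for unsigned words by counting
    partial-product bits per column weight, emulating the counter array."""
    counters = {}  # column weight k -> number of 1-bits of weight 2^k
    for a, b in zip(a_words, b_words):
        # Word-serial input: consume b one bit at a time (as from a
        # serial link), generating partial products immediately,
        # with no deserializer or input buffer.
        for j in range(width):
            if (b >> j) & 1:
                for i in range(width):
                    if (a >> i) & 1:
                        k = i + j
                        counters[k] = counters.get(k, 0) + 1
    # In hardware, a (logarithmically shorter) carry-save-adder tree
    # would sum the counter outputs; here we form the weighted sum.
    return sum(count << k for k, count in counters.items())
```

Because each column counter only tallies incoming 1-bits, the matrix fed to the final adder tree has height on the order of the counter bit-width rather than the number of accumulated partial products, which is the logarithmic height reduction the abstract refers to.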
