Abstract
First-in–first-out (FIFO) line buffers occupy a considerable number of logic gates and consume significant power in features from accelerated segment test (FAST) image corner detection engines. In this study, a FAST-Chip (FAST-C) engine is proposed to eliminate these drawbacks. It comprises three core techniques: arch pixel estimation (APE), hardware-efficient line buffer (HLB), and interleaving-based parallel access (IPA). APE saves three line buffers by using gradient information for pixel prediction, reducing the number of line buffers by 33%. HLB organizes static random access memories (SRAMs), rather than FIFOs, as line buffers to store reference pixels, demanding less power and fewer logic gates. IPA gives the FAST-C engine regular data paths and smooth data scheduling, maintaining hardware throughput. Experimental results show that the accuracy difference between FAST and FAST-C is only 2.29%. The proposed FAST-C engine is implemented using the Taiwan Semiconductor Manufacturing Company (TSMC) 0.18-$\mu \text{m}$ CMOS process with a logic gate count of 223k, including the line buffers. It operates at 613 MHz with a power consumption of 1.18 W. Its maximum throughput can sufficiently support $3840\times 2160$ (4K) @ 60 frames per second (fps). Compared with state-of-the-art FAST engines, the FAST-C engine presents outstanding energy efficiency (EE) and area efficiency (AE).
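For readers unfamiliar with the baseline algorithm that the FAST-C engine accelerates, the following is a minimal software sketch of the standard FAST segment test (not the FAST-C hardware techniques themselves, which the abstract only summarizes). It assumes an 8-bit grayscale image in row-major layout; the function name, parameters, and threshold handling are illustrative choices, not from the paper.

```c
#include <stdint.h>

/* Offsets of the 16 pixels on the Bresenham circle of radius 3
 * around a candidate pixel, as used by the standard FAST detector,
 * listed in contiguous order around the circle. */
static const int CIRCLE[16][2] = {
    { 0,-3},{ 1,-3},{ 2,-2},{ 3,-1},{ 3, 0},{ 3, 1},{ 2, 2},{ 1, 3},
    { 0, 3},{-1, 3},{-2, 2},{-3, 1},{-3, 0},{-3,-1},{-2,-2},{-1,-3}
};

/* FAST-N segment test: the candidate at (x, y) is a corner if at
 * least n contiguous circle pixels are all brighter than center + t
 * or all darker than center - t. FAST-9 (n = 9) is a common choice.
 * The caller must keep (x, y) at least 3 pixels away from the image
 * border. */
int fast_segment_test(const uint8_t *img, int stride,
                      int x, int y, int t, int n)
{
    int center = img[y * stride + x];
    int run_bright = 0, run_dark = 0;

    /* Walk the circle twice so a qualifying contiguous run may wrap
     * around the starting position. */
    for (int i = 0; i < 32; i++) {
        const int *o = CIRCLE[i & 15];
        int p = img[(y + o[1]) * stride + (x + o[0])];

        run_bright = (p > center + t) ? run_bright + 1 : 0;
        run_dark   = (p < center - t) ? run_dark   + 1 : 0;
        if (run_bright >= n || run_dark >= n)
            return 1; /* corner found */
    }
    return 0;
}
```

In a hardware realization, the 7×7 neighborhood these offsets touch is what the line buffers must hold for every pixel of the current row, which is why the abstract's line-buffer savings (APE) and SRAM-based buffering (HLB) bear directly on gate count and power.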