Abstract

Computer vision is one of the most active research fields in technology today. Giving machines the ability to see and comprehend the world at the speed of sight creates endless applications and opportunities. Feature detection and description algorithms serve as the retina of machine vision. However, most of these algorithms are computationally intensive, which prevents them from achieving real-time performance. As such, embedded vision accelerators (FPGAs, ASICs, etc.) can be targeted due to the inherent parallelizability of these algorithms. This chapter provides a comprehensive study of recent feature detection and description algorithms and their hardware implementations. Specifically, it begins with a synopsis of basic concepts, followed by a comparative study from which the maximally stable extremal regions (MSER) and scale-invariant feature transform (SIFT) algorithms are selected for further analysis due to their robust performance. The chapter then reports some of their recent algorithmic derivatives and highlights their recent hardware designs and architectures.
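To make the computational character of such algorithms concrete, the following is a minimal NumPy-only sketch of the difference-of-Gaussians (DoG) step at the core of SIFT's keypoint detection. All function names and parameter values here are illustrative assumptions, not the chapter's implementation; a real SIFT pipeline adds octaves, sub-pixel refinement, edge rejection, and descriptor computation.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum to 1 (radius = 3*sigma).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: convolve rows, then columns.
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56), thresh=0.02):
    # Build a small scale space, difference adjacent blur levels,
    # and flag pixels that are 3x3 local maxima of |DoG| above a threshold.
    blurred = [blur(img.astype(float), s) for s in sigmas]
    dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
    keypoints = []
    for d in dogs:
        mag = np.abs(d)
        for y in range(1, d.shape[0] - 1):
            for x in range(1, d.shape[1] - 1):
                patch = mag[y - 1:y + 2, x - 1:x + 2]
                if mag[y, x] >= thresh and mag[y, x] == patch.max():
                    keypoints.append((y, x))
    return keypoints
```

The nested per-pixel, per-scale loops illustrate why such detectors strain general-purpose processors, and also why they map well onto FPGA/ASIC fabrics: every pixel's blur, difference, and neighborhood comparison is independent and can be computed in parallel.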

