Abstract

This paper presents a feature descriptor well suited for resource-limited applications such as unmanned aerial vehicle (UAV) embedded systems, small microprocessors, and small low-power field programmable gate array (FPGA) fabric. The basis sparse-coding inspired similarity (BASIS) descriptor uses sparse coding to create dictionary images that model regions of the human visual cortex. Because BASIS descriptors require little computation, are small, and can be computed without floating-point arithmetic, the approach is an excellent candidate for FPGA hardware implementation. The bit-level-accurate BASIS descriptor was tested on a dataset of real aerial images with the task of calculating a frame-to-frame homography, and was compared against software versions of the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Experimental results show that the BASIS descriptor outperforms SIFT and performs comparably to SURF on frame-to-frame aerial feature point matching. BASIS descriptors require less memory than other descriptors and can be computed entirely in hardware, allowing the descriptor to operate at real-time frame rates on a low-power embedded platform such as an FPGA.
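To make the fixed-point claim concrete, the following is a minimal, hypothetical sketch of a BASIS-style descriptor: a patch is correlated against a bank of sparse-coded dictionary images using only integer arithmetic, and the similarity scores are binarized into a compact bit-string. The function name, patch size, dictionary size, and thresholding scheme are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def basis_like_descriptor(patch, dictionary, threshold=0):
    """Illustrative sketch (not the published BASIS algorithm):
    integer-only similarity of a patch to each dictionary image.

    patch:      (H, W) uint8 image patch around a feature point
    dictionary: (K, H, W) int8 basis images, assumed learned via sparse coding
    Returns a K-bit binary descriptor packed into uint8 bytes.
    """
    # Center pixel values around zero using integer subtraction (no floats).
    p = patch.astype(np.int32) - 128
    # Integer dot product of the patch with each dictionary image.
    scores = np.einsum('khw,hw->k', dictionary.astype(np.int32), p)
    # Binarize the similarity scores and pack them into bytes.
    bits = (scores > threshold).astype(np.uint8)
    return np.packbits(bits)

# Example with synthetic data: 16 dictionary images over an 8x8 patch
# yields a 16-bit (2-byte) descriptor.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (8, 8), dtype=np.uint8)
dictionary = rng.integers(-4, 5, (16, 8, 8)).astype(np.int8)
desc = basis_like_descriptor(patch, dictionary)
```

Because every operation is an integer multiply-accumulate followed by a comparison, this kind of pipeline maps naturally onto FPGA fabric, which is the property the abstract highlights.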
