Abstract
Establishing local visual correspondence between video frames is an important and challenging problem in many vision-based applications. Pixel-level matching based on local keypoint detection and description is a typical approach to visual correspondence estimation. Unlike traditional local keypoint descriptors, this paper proposes a comprehensive yet low-dimensional local feature descriptor based on superpixels generated by over-segmentation. The proposed descriptor extracts shape, texture, and color features from each superpixel using the oriented center-boundary distance (OCBD), the gray-level co-occurrence matrix (GLCM), and the saturation histogram (SHIST), respectively. This combination of features is more comprehensive than that of existing descriptors, which capture only one specific kind of feature. Experimental results on the widely used Middlebury optical flow dataset show that the proposed superpixel descriptor achieves three times the accuracy of the state-of-the-art ORB descriptor, which has the same feature dimension as the proposed one. In addition, because the proposed superpixel descriptor is low-dimensional, it is convenient for matching and memory-efficient for hardware implementation.
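The abstract outlines the descriptor pipeline (over-segmentation, then shape, texture, and color features per superpixel) without implementation details. The sketch below is a hypothetical illustration of that pipeline, not the authors' code: superpixels are obtained with SLIC, the OCBD shape feature is approximated as the maximum center-to-boundary distance per angular bin, texture comes from standard GLCM statistics, and color from a saturation histogram. All function names, parameter values, and feature dimensions (n_angles, n_bins, the chosen GLCM properties) are assumptions, and scikit-image >= 0.19 is assumed for the GLCM API.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import graycomatrix, graycoprops


def superpixel_descriptors(image, n_segments=200, n_angles=8, n_bins=8):
    """Hypothetical shape/texture/color descriptor per superpixel (illustrative only)."""
    labels = slic(image, n_segments=n_segments, start_label=0)   # over-segmentation
    gray = (rgb2gray(image) * 255).astype(np.uint8)              # for GLCM texture
    sat = rgb2hsv(image)[..., 1]                                 # for saturation histogram

    descriptors = []
    for lab in np.unique(labels):
        mask = labels == lab
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()

        # Shape (OCBD-like, assumption): farthest pixel from the centroid in each angular bin,
        # normalized by the largest distance for scale invariance.
        angles = np.arctan2(ys - cy, xs - cx)
        radii = np.hypot(ys - cy, xs - cx)
        bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
        shape = np.zeros(n_angles)
        for b, r in zip(bins, radii):
            shape[b] = max(shape[b], r)
        shape /= shape.max() + 1e-9

        # Texture (assumption): GLCM statistics on the superpixel's bounding box.
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        glcm = graycomatrix(gray[y0:y1, x0:x1], distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        texture = np.array([graycoprops(glcm, p).mean()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

        # Color (assumption): normalized histogram of saturation inside the superpixel.
        color, _ = np.histogram(sat[mask], bins=n_bins, range=(0.0, 1.0), density=True)

        descriptors.append(np.concatenate([shape, texture, color]))
    return labels, np.vstack(descriptors)
```

Under these assumptions each superpixel yields a 20-dimensional vector (8 shape + 4 texture + 8 color), which could then be matched across frames with a nearest-neighbor search; the actual dimensions and matching strategy used in the paper may differ.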