Abstract
This paper presents a line matching method based on multiple intensity ordering with uniformly spaced sampling. Line segments are extracted from an image pyramid so that the method adapts to scale changes and mitigates the fragmentation problem. The neighborhood of each line segment is divided into sub-regions adaptively according to intensity order, which overcomes the difficulty caused by varying line lengths. An intensity-based local feature descriptor is then constructed from multiple concentric ring-shaped structures. The dimension of the descriptor is reduced significantly, while its discriminability is improved, by sampling at uniform spacing and dividing the sample points into several point sets. The performance of the proposed method was evaluated on public datasets covering various scenarios and compared with two other well-known line matching algorithms. The experimental results show that our method achieves superior performance under various image deformations, especially scale changes and large illumination changes, and provides many more reliable correspondences.
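The sketch below illustrates two of the ideas named in the abstract, uniformly spaced sampling along a line segment's neighborhood and partitioning of sample points by intensity order. It is a minimal, hypothetical illustration under our own assumptions, not the authors' implementation; all function names, parameters, and defaults (e.g. `n_samples`, `half_width`, `n_sets`) are chosen only for this example.

```python
# Illustrative sketch (NOT the paper's implementation): sample intensities at
# uniformly spaced points around a line segment, then group samples into point
# sets by their rank in the intensity ordering. Names/parameters are assumptions.
import numpy as np

def sample_line_neighborhood(image, p0, p1, n_samples=32, half_width=5):
    """Sample a grayscale image on a grid of uniformly spaced points around segment p0->p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0
    length = np.linalg.norm(direction)
    direction /= length
    normal = np.array([-direction[1], direction[0]])    # unit vector perpendicular to the segment

    ts = np.linspace(0.0, 1.0, n_samples)               # uniform spacing along the segment
    offsets = np.arange(-half_width, half_width + 1)    # parallel rows on both sides of the segment
    samples = np.empty((len(offsets), n_samples))
    for i, d in enumerate(offsets):
        pts = p0 + ts[:, None] * (length * direction) + d * normal
        xs = np.clip(pts[:, 0].round().astype(int), 0, image.shape[1] - 1)
        ys = np.clip(pts[:, 1].round().astype(int), 0, image.shape[0] - 1)
        samples[i] = image[ys, xs]
    return samples

def partition_by_intensity_order(values, n_sets=4):
    """Assign each sample point to one of n_sets groups by its rank in the intensity ordering."""
    ranks = np.argsort(np.argsort(values.ravel()))       # 0 = darkest sample, size-1 = brightest
    return (ranks * n_sets // ranks.size).reshape(values.shape)
```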
Highlights
Feature matching has remained an essential engineering task in image processing and has been widely applied in computer vision, including image registration [1], image-based 3D modelling [2], object recognition [3] and pose estimation [4]. Typical feature matching algorithms usually consist of three steps: feature extraction, feature description and feature correspondence.
We present detailed line matching experiments that evaluate the proposed method against the mean–standard deviation line descriptor (MSLD) and line–point invariants (LPI)
We propose a line matching method based on multiple intensity ordering with uniformly spaced sampling, which demonstrates good performance across a variety of scenarios
Summary
Feature matching has remained an essential engineering task in image processing and has been widely applied in computer vision, including image registration [1], image-based 3D modelling [2], object recognition [3] and pose estimation [4]. Typical feature matching algorithms usually consist of three steps: feature extraction, feature description and feature correspondence. First, salient and stable features are extracted efficiently. Then, descriptors are constructed to encode the appearance of each feature's neighborhood. Finally, the similarity between descriptors is measured to evaluate the correspondence. Among the various features used in computer vision, point features have been widely studied [5,6,7,8]
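The schematic sketch below illustrates the generic three-step pipeline described above (extraction, description, correspondence). It is not the proposed descriptor: as a stand-in for MSLD-style line descriptors it uses a simple normalized intensity histogram, and the nearest-neighbour matching with a ratio test is likewise an assumed, commonly used choice.

```python
# Schematic sketch of the generic three-step matching pipeline; the descriptor
# and matcher here are placeholders chosen for illustration only.
import numpy as np

def describe(patches):
    """Step 2: encode each neighborhood patch (from step 1, feature extraction)
    as a normalized descriptor vector; here a 16-bin intensity histogram."""
    descs = []
    for patch in patches:
        hist, _ = np.histogram(patch, bins=16, range=(0, 256))
        descs.append(hist / (np.linalg.norm(hist) + 1e-12))
    return np.vstack(descs)

def match(desc_a, desc_b, ratio=0.8):
    """Step 3: nearest-neighbour correspondence with a Lowe-style ratio test on L2 distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]          # closest and second-closest candidates
        if dists[j] < ratio * dists[k]:       # accept only clearly unambiguous matches
            matches.append((i, j))
    return matches
```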