Abstract

Hypergraph matching has attracted considerable attention in computer vision applications in recent years. Interference from external factors, such as squeezing, pulling, occlusion, and noise, causes the same target to display different image characteristics under different conditions. After extracting feature point descriptors from an image, traditional methods compare the descriptors directly with distance measures such as the Euclidean, cosine, or Manhattan distance; these fixed measures lack sufficient generalization ability and degrade the accuracy and effectiveness of matching. This paper proposes a metric-learning-based hypergraph matching (MLGM) approach that employs metric learning to express the similarity relationship between high-order image descriptors and learns a new metric function adapted to scene requirements and target characteristics. Experimental results show that the proposed method outperforms state-of-the-art algorithms on both synthetic and natural images.
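To illustrate the contrast the abstract draws between fixed distance measures and a learned metric, the following is a minimal conceptual sketch, not the paper's MLGM algorithm: it compares a plain Euclidean distance with a Mahalanobis-style distance computed after a learned linear transform L. The transform L here is a hypothetical placeholder; the actual method learns its metric from scene requirements and target characteristics, which is not reproduced here.

import numpy as np

def euclidean_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Fixed metric: every descriptor dimension is weighted identically."""
    return float(np.linalg.norm(x - y))

def learned_distance(x: np.ndarray, y: np.ndarray, L: np.ndarray) -> float:
    """Learned metric: distance measured after applying a learned transform L,
    i.e. d_L(x, y) = ||L x - L y||_2 (Mahalanobis-style)."""
    return float(np.linalg.norm(L @ x - L @ y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8                                    # descriptor dimensionality (illustrative only)
    x, y = rng.normal(size=d), rng.normal(size=d)
    L = rng.normal(size=(d, d))              # stand-in for a transform produced by metric learning
    print("Euclidean distance:", euclidean_distance(x, y))
    print("Learned distance  :", learned_distance(x, y, L))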
