Abstract

Deep learning-based line detectors and descriptors have gained significant attention in computer vision. However, while most existing methods prioritize detecting repeatable line features, they neglect the rich contextual information needed for effective feature detection and description, which leads to poor feature matching and 3D reconstruction. To address this issue and obtain more informative descriptors from line detection regions with strong matching characteristics, we propose an enhanced Multi-scale Line detector and descriptor Network (MLNet). MLNet combines an enhanced Hierarchical Feature Aggregation (HFA) module with an Adaptive Attention Selection (AAS) mechanism to achieve a comprehensive information representation for feature matching and 3D reconstruction. By integrating multi-branch and attention features, HFA captures information from both local features and global representations. The AAS mechanism then adaptively selects attention from different feature levels to enhance the line descriptors. Experimental results across diverse tasks, including line feature detection, feature matching, and 3D reconstruction, demonstrate the superior performance and generalization ability of our network compared with state-of-the-art techniques in computer vision.
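The abstract does not spell out how attention is selected across feature levels, so the following is only a minimal sketch of the general idea of adaptively weighting multi-level attention features, written in PyTorch. The module name, tensor shapes, and the softmax gating scheme are illustrative assumptions, not MLNet's published implementation.

```python
# Illustrative sketch only: a hypothetical adaptive attention-selection module,
# not the authors' MLNet code. It weights attention-refined features from
# several levels and fuses them into one descriptor feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveAttentionSelection(nn.Module):
    """Adaptively fuses attention features from multiple feature levels (assumed design)."""

    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        # One lightweight channel-attention branch per feature level.
        self.level_attn = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            for _ in range(num_levels)
        )
        # Gate that scores each level so fusion can favour informative scales.
        self.level_gate = nn.Linear(channels * num_levels, num_levels)

    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors, already resized to a common resolution.
        attended = [attn(f) * f for attn, f in zip(self.level_attn, feats)]
        # Per-level global descriptors drive a softmax gate over levels.
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in attended], dim=1)  # (B, C*L)
        weights = F.softmax(self.level_gate(pooled), dim=1)                # (B, L)
        fused = sum(w.view(-1, 1, 1, 1) * f
                    for w, f in zip(weights.unbind(dim=1), attended))
        return fused  # (B, C, H, W) fused descriptor feature map


if __name__ == "__main__":
    levels = [torch.randn(2, 64, 32, 32) for _ in range(3)]
    out = AdaptiveAttentionSelection(channels=64, num_levels=3)(levels)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The sketch only demonstrates the pattern of level-wise attention followed by learned selection weights; the actual MLNet modules may differ in structure and scope.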
