Abstract
The 3D reconstruction of forests provides a strong basis for the scientific regulation of tree growth and the fine-grained survey of forest resources. Depth estimation is the key to the 3D reconstruction of inter-forest scenes and directly determines the quality of digital stereo reproduction. Existing stereo matching methods lack the ability to exploit environmental information to establish consistency in ill-posed regions, which leads to poor matching in weakly textured, occluded, and other feature-sparse areas. To address this problem, LANet, a stereo matching network based on a Linear-Attention mechanism, is proposed; it improves stereo matching accuracy by effectively exploiting global and local environmental information, thereby improving depth estimation. An AM attention module comprising a spatial attention module (SAM) and a channel attention module (CAM) is designed to model the semantic relevance of inter-forest scenes along the spatial and channel dimensions. The linear-attention mechanism proposed in SAM reduces the overall complexity of self-attention from O(n²) to O(n) and selectively aggregates the features of each position by a weighted sum over all positions, learning rich contextual relations that capture long-range dependencies. The self-attention mechanism used in CAM selectively emphasizes interdependent channel maps by learning the associated features between different channels. A 3D CNN module is optimized to refine the matching cost volume by combining multiple stacked hourglass networks with intermediate supervision, which further improves model speed while reducing inference cost. The proposed LANet achieves an EPE of 0.82 and a three-pixel error of 2.31% on the SceneFlow dataset, and an EPE of 0.68 and a D1-all of 2.15% on the Forest dataset, outperforming several state-of-the-art methods with highly competitive overall performance.
LANet obtains high-precision disparity values for inter-forest scenes, which can be converted into depth information, thus providing key data for high-quality 3D reconstruction of forests.
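The O(n²)-to-O(n) reduction mentioned above is the defining property of linear attention: instead of materializing the full n×n attention matrix softmax(QKᵀ) and multiplying it by V, a nonnegative kernel feature map φ is applied to Q and K so the product can be reassociated as φ(Q)(φ(K)ᵀV), which costs O(n) in the sequence length. The abstract does not specify LANet's exact formulation, so the sketch below is a generic illustration of the technique, not the paper's implementation; the feature map φ(x) = elu(x) + 1 is a common choice assumed here for concreteness.

```python
import numpy as np

def feature_map(x):
    # Nonnegative kernel feature map phi(x) = elu(x) + 1 (an assumed,
    # commonly used choice; the paper's phi may differ).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """O(n) attention via the kernel trick: phi(Q) @ (phi(K).T @ V).

    Q, K: (n, d) query/key matrices; V: (n, dv) value matrix.
    The (n, n) attention matrix is never formed explicitly.
    """
    Qp, Kp = feature_map(Q), feature_map(K)
    KV = Kp.T @ V                    # (d, dv): aggregated once, O(n*d*dv)
    Z = Qp @ Kp.sum(axis=0)          # (n,): per-position normalizer
    return (Qp @ KV) / (Z[:, None] + eps)

# Each output position is still a normalized weighted sum over all
# positions' values, so long-range context is preserved at linear cost.
rng = np.random.default_rng(0)
Q = rng.normal(size=(16, 8))
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 4))
out = linear_attention(Q, K, V)      # shape (16, 4)
```

Because φ is strictly positive, the weights are a proper convex combination of the value rows, mirroring softmax attention's normalization while avoiding its quadratic memory footprint.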