Abstract
The automation of orchard production increasingly relies on robotics, driven by advances in artificial intelligence. However, accurately interpreting semantic information and precisely locating targets in orchard environments remain challenging. Existing research often depends on expensive multi-sensor fusion or on vision-only approaches whose segmentation of orchard surroundings is inadequate. To address these issues, this article proposes a novel approach for target ranging in complex orchard scenes that leverages semantic segmentation results. The article introduces the MsFF-Segformer model, which employs multi-scale feature fusion to generate high-precision semantic segmentation maps. The model combines the MiT-B0 encoder, built on a pure attention mechanism, with the MsFF decoder, designed specifically for multi-scale feature fusion. The MsFF decoder includes the AFAM module to effectively align features of adjacent scales. In addition, a channel attention module and a depthwise separable convolution module are introduced to reduce the model's parameter count and to obtain semantically rich feature vectors, enhancing the segmentation of multi-scale targets in orchards. Building on the accurate semantic segmentation of orchard environments, this study introduces TPDMR, a method that integrates binocular vision to estimate the distances of objects in the orchard. First, the semantic category matrix is matched with the depth information matrix. Next, the depth information array for the target category is extracted and invalid depth values are filtered out. Finally, the average depth of the target is computed. Evaluated on a self-built orchard dataset, the MsFF-Segformer model outperforms U-Net and other models, achieving a Mean Intersection over Union (MIoU) of 86.52% and a Mean Pixel Accuracy (MPA) of 94.05%. Its parameter count and single-frame prediction time are 15.1 M and 0.019 s, significantly lower than those of U-Net, DeepLabv3+, and HRNet: the parameters are reduced by 84.1%, 32.5%, and 5.9%, and the prediction time by 69.4%, 59.7%, and 64.2%, respectively. The TPDMR method achieves accurate and stable target ranging, with a ranging error below 6% for all targets, and the overall algorithm runs in approximately 0.8 s.
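The abstract credits depthwise separable convolution with much of the decoder's parameter savings. Since the exact layer configuration is not given here, the following is a minimal PyTorch sketch of the general technique under assumed channel counts and kernel size: a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution stands in for one standard convolution.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) conv followed by a 1x1 pointwise conv.
    For in_ch = out_ch = 256 and a 3x3 kernel this costs
    256*3*3 + 256*256 = 67,840 parameters versus 256*256*3*3 = 589,824
    for a standard convolution, roughly an 8.7x reduction."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Toy fused feature map (batch, channels, height, width).
x = torch.randn(1, 256, 64, 64)
print(DepthwiseSeparableConv(256, 256)(x).shape)  # torch.Size([1, 256, 64, 64])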
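The three TPDMR steps described above (mask the depth map with the target's semantic class, discard invalid depths, average the rest) can be summarized in a short NumPy sketch. This is an illustration, not the paper's implementation: the function name average_target_depth, the class IDs, and the validity thresholds min_depth and max_depth are all assumptions.

import numpy as np

def average_target_depth(seg_map: np.ndarray, depth_map: np.ndarray,
                         target_class: int,
                         min_depth: float = 0.1,
                         max_depth: float = 20.0):
    """Estimate a target's distance from a semantic label map and an
    aligned stereo depth map (both H x W, depths in meters)."""
    # Step 1: match the semantic category matrix with the depth matrix
    # by selecting the depth pixels labeled with the target class.
    target_depths = depth_map[seg_map == target_class]
    # Step 2: filter out invalid depth information -- NaN/inf values from
    # failed stereo matching and readings outside a plausible working
    # range (the range limits here are illustrative assumptions).
    valid = np.isfinite(target_depths)
    valid &= (target_depths > min_depth) & (target_depths < max_depth)
    target_depths = target_depths[valid]
    if target_depths.size == 0:
        return None  # no reliable depth for this class in the frame
    # Step 3: report the mean of the remaining depths as the target range.
    return float(target_depths.mean())

# Toy example with random data; real inputs would come from the
# MsFF-Segformer prediction and the binocular depth map.
seg = np.random.randint(0, 4, size=(480, 640))
depth = np.random.uniform(0.0, 25.0, size=(480, 640)).astype(np.float32)
print(average_target_depth(seg, depth, target_class=1))

Averaging over all valid in-class pixels, rather than reading a single pixel, damps isolated stereo-matching outliers.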