Abstract

The accuracy of fractional vegetation cover (FVC) estimation is of great significance for high-precision agriculture and ecological environment assessment. However, shadows under natural light conditions cause problems such as poor vegetation distinguishability and blurred boundaries during vegetation segmentation, leading to missed or over-segmented vegetation in shaded areas and thus reducing the accuracy of FVC. Polarization information reflects texture structure, edge characteristics and surface state, providing an important supplement to light intensity information. To address shadow interference in vegetation segmentation, this study proposes a Siamese coupling Swin Transformer (SiamC Transformer). The network uses a dual-stream structure to simultaneously extract features from vegetation degree of linear polarization (DoLP) images and light intensity images, obtaining multi-dimensional global semantic information of multi-scale vegetation. In the feature fusion stage, two fusion modules are proposed: an adaptive fusion module (AFM) and an adaptive fusion plus module (AFPM), which fuse shallow spatial information, texture information and deep semantic information. The AFM and AFPM enable the network to better achieve object localization, region activation and edge sharpening, while improving the segmentation accuracy of small objects. Experimental results show that the network achieves a mean intersection over union (mIoU) of 97.46% on vegetation datasets consisting of light intensity and polarization images, outperforming the other algorithms compared. The network offers higher accuracy and adaptability in shadow scenes and improves the accuracy of FVC calculation under shadow conditions.
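The abstract does not describe how the DoLP images are produced; as a reference point, the sketch below shows the standard four-angle Stokes-parameter computation of DoLP from intensity images captured behind linear polarizers at 0, 45, 90 and 135 degrees. The function name and setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP from four polarizer-angle intensity images (standard Stokes formulation).

    Assumed setup: i0, i45, i90, i135 are co-registered intensity images
    captured behind linear polarizers at 0, 45, 90 and 135 degrees.
    """
    i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaged for robustness)
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal component
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # avoid divide-by-zero
```

Similarly, the reported 97.46% figure uses the standard mIoU metric: the per-class intersection over union averaged across classes. A minimal sketch, assuming integer label maps of equal shape (not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union over classes present in either map."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```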
