Segmentation of safe navigable areas is a crucial technology for scene parsing in autopilot systems. However, existing segmentation methods often fail to exploit the complementary relationships between multiscale features in complex wild environments, resulting in insufficient information fusion. To address this problem, a Progressive Segmentation Network (PSNet) is proposed for navigable area segmentation; it builds a semantic–spatial information flow branch that dynamically exploits the complementary relationships between multiscale features for progressively guided learning. Specifically, PSNet consists of four essential modules: the Local Capturer and Global dependence Builder (LCGB), the Multi-Directional Pooling Module (MDPM), the Fusion-wise Module (FWM), and the Spatial Weight Aggregation Module (SWAM). To facilitate efficient information dissemination, the LCGB captures dense spatial information and the MDPM extracts the global geometric information of obstacles; both serve as prior knowledge to guide learning. In addition, the FWM, built on an attention fusion unit (AFU) and a contribution weights unit (CWU), constructs the complementary relationships between multiscale features and yields rich multiscale fusion information, while the SWAM enhances the salient spatial features from the FWM to produce the final segmentation. Extensive experiments on wild datasets demonstrate that PSNet outperforms state-of-the-art methods in recognizing safe navigable areas. The code for PSNet will be available at https://github.com/lv881314/PSNet.
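The abstract does not specify how the MDPM's multi-directional pooling is implemented; the following is a purely illustrative NumPy sketch, under the assumption that "multi-directional" means pooling a 2-D feature map along horizontal, vertical, and both diagonal directions and averaging the broadcast results. The function name and all details are hypothetical, not the authors' method.

```python
import numpy as np

def multi_directional_pool(feat):
    """Hypothetical multi-directional pooling over a 2-D feature map.

    Pools along four directions (rows, columns, main and anti diagonals),
    broadcasts each pooled profile back to the input shape, and averages
    the four resulting maps. Illustrative only; the paper's MDPM is not
    described at this level of detail in the abstract.
    """
    h, w = feat.shape
    # Horizontal strip pooling: one mean per row, broadcast across columns.
    horiz = np.broadcast_to(feat.mean(axis=1, keepdims=True), (h, w))
    # Vertical strip pooling: one mean per column, broadcast across rows.
    vert = np.broadcast_to(feat.mean(axis=0, keepdims=True), (h, w))
    # Anti-diagonal pooling: pixels with equal i + j share one mean.
    anti_idx = np.add.outer(np.arange(h), np.arange(w))
    anti_mean = (np.bincount(anti_idx.ravel(), weights=feat.ravel())
                 / np.bincount(anti_idx.ravel()))
    anti = anti_mean[anti_idx]
    # Main-diagonal pooling: pixels with equal i - j share one mean
    # (shifted by w - 1 so indices are non-negative for bincount).
    main_idx = np.subtract.outer(np.arange(h), np.arange(w)) + (w - 1)
    main_mean = (np.bincount(main_idx.ravel(), weights=feat.ravel())
                 / np.bincount(main_idx.ravel()))
    main = main_mean[main_idx]
    # Fuse the four directional maps by simple averaging.
    return (horiz + vert + anti + main) / 4.0
```

Because each directional map preserves the global mean of the input, the fused output does as well, which makes such pooled maps usable as coarse geometric priors alongside the original features.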