Abstract

Many real-time semantic segmentation networks pursue higher accuracy at the cost of increased computational complexity and reduced inference speed, so striking a balance between accuracy and speed has emerged as a crucial concern in this domain. To address this challenge, this study proposes MAFNet, a dual-branch fusion network that uses multiscale atrous pyramid pooling to aggregate contextual features for real-time semantic segmentation. The first key component, the semantics-guided spatial-details module (SGSDM), not only facilitates precise boundary extraction and fine-grained classification but also provides semantics-based feature representation, thereby strengthening support for spatial analysis and decision boundaries. The second component, the multiscale atrous pyramid pooling module (MSAPPM), combines dilated convolutions at various dilation rates with feature pyramid pooling. This design not only expands the receptive field but also aggregates rich contextual information more effectively. To better fuse the feature information produced by the two branches, a bilateral fusion module (BFM) is introduced. This module performs cross-fusion, using weights computed from each branch to balance the contributions of the two branches and thereby achieve effective feature fusion. To validate the proposed network, experiments are conducted on a single A100 GPU. MAFNet achieves a mean intersection over union (mIoU) of 77.4% at 70.9 FPS on the Cityscapes test dataset and 77.6% mIoU at 192.5 FPS on the CamVid test dataset. The experimental results demonstrate that MAFNet effectively balances accuracy and speed.
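To make the MSAPPM idea concrete, the following is a minimal sketch of an atrous (dilated) pyramid: parallel 3x3 dilated convolutions at several rates plus an image-level average-pooling branch, fused by simple averaging. The dilation rates, the fixed averaging kernel, the single-channel input, and the averaging fusion are all illustrative assumptions; the paper's actual module uses learned convolutional weights and its own rates and fusion.

```python
def dilated_conv2d(x, kernel, rate):
    """Naive 3x3 dilated (atrous) convolution with zero padding.

    x: 2-D list of floats (single channel); kernel: 3x3 list of weights;
    rate: dilation rate, i.e. the spacing between kernel taps.
    """
    h, w = len(x), len(x[0])

    def px(i, j):  # zero padding outside the image
        return x[i][j] if 0 <= i < h and 0 <= j < w else 0.0

    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for ki in range(3):
                for kj in range(3):
                    # Taps sit at offsets -rate, 0, +rate from the center,
                    # so a larger rate enlarges the receptive field.
                    s += kernel[ki][kj] * px(i + (ki - 1) * rate,
                                             j + (kj - 1) * rate)
            out[i][j] = s
    return out


def msappm_sketch(x, rates=(1, 2, 3)):
    """Illustrative multiscale atrous pyramid pooling: parallel dilated
    branches plus a global average-pooling branch, fused by averaging.
    (Rates, kernel, and fusion are assumptions, not the paper's design.)
    """
    h, w = len(x), len(x[0])
    avg_kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
    branches = [dilated_conv2d(x, avg_kernel, r) for r in rates]
    mean = sum(map(sum, x)) / (h * w)
    branches.append([[mean] * w for _ in range(h)])  # image-level context
    n = len(branches)
    return [[sum(b[i][j] for b in branches) / n for j in range(w)]
            for i in range(h)]
```

On a constant input the interior response of every branch equals the input value, so the fused output stays constant there, while larger rates let each output pixel see context farther away without extra parameters.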
