Numerous high-accuracy models have been proposed for semantic segmentation, but they typically carry large parameter counts and suffer slow inference, whereas real-time semantic segmentation of urban scenes requires a balance among accuracy, inference speed, and model size. In this paper, we present an efficient solution to this challenge: the efficient asymmetric attention module network (EAAMNet) for urban-scene semantic segmentation, which adopts an asymmetric encoder–decoder structure. The encoder builds the network backbone from efficient asymmetric attention modules. For the decoder, we propose a lightweight multi-feature fusion design that maintains good segmentation accuracy with few parameters. Extensive evaluations demonstrate that EAAMNet strikes a favorable balance among segmentation speed, model size, and accuracy, making it well suited to real-time semantic segmentation of urban scenes. Without any pre-training, EAAMNet attains 73.31% mIoU at 128 fps on Cityscapes and 69.32% mIoU at 141 fps on CamVid. Compared with state-of-the-art models, our approach matches their parameter counts while improving both accuracy and speed.
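The abstract does not specify the internals of the efficient asymmetric attention module, so the following is only a minimal PyTorch sketch of one plausible reading: a 3x3 convolution factorized into asymmetric 3x1 and 1x3 convolutions (the usual source of the parameter savings in asymmetric blocks), gated by a lightweight squeeze-and-excitation-style channel attention. The module name, the factorization, and the attention variant are all illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn


class EfficientAsymmetricAttention(nn.Module):
    """Hypothetical sketch of an asymmetric-convolution block with
    channel attention; not the paper's verified implementation."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Asymmetric (factorized) convolutions: a 3x3 conv split into
        # 3x1 + 1x3, reducing per-layer weights from 9*C^2 to 6*C^2.
        self.conv3x1 = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv1x3 = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)
        # Lightweight channel attention (squeeze-and-excitation style),
        # an assumption standing in for the paper's attention mechanism.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn(self.conv1x3(self.conv3x1(x))))
        out = out * self.attn(out)  # re-weight channels by learned attention
        return out + x              # residual connection preserves resolution


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 256)  # N, C, H, W
    y = EfficientAsymmetricAttention(64)(x)
    print(y.shape)  # torch.Size([1, 64, 128, 256])
```

Stacking such residual, shape-preserving blocks at several downsampling stages is one way an encoder backbone of this kind is commonly assembled; the fusion decoder would then combine the multi-stage features.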