Abstract

Semantic segmentation is a pivotal task in computer vision, with diverse applications and continuous development. Despite the growing dominance of deep learning methods in this field, many existing network models suffer from a trade-off between accuracy and computational cost, or between speed and accuracy. In essence, semantic segmentation aims to extract semantic information from deep features and refine it before upsampling to the output resolution. However, shallow features tend to contain more detailed information but also more noise, while deep features carry strong semantic information but lose some spatial information. To address this issue, we propose a novel mutual optimization strategy based on shallow spatial information and deep semantic information, and construct a Details and Semantic Mutual Optimization Network (DSMONet). It effectively reduces the noise in shallow features and guides deep features to reconstruct the lost spatial information, avoiding cumbersome auxiliary branches or complex decoders. The Mutual Optimization Module (MOM) comprises a Semantic Adjustment Details Module (SADM) and a Detail Guided Semantic Module (DGSM), which together enable mutual optimization of shallow spatial information and deep semantic information. Comparative evaluations against other methods demonstrate that DSMONet achieves a favorable balance between accuracy and speed. On the Cityscapes dataset, DSMONet achieves 79.3% mean class-wise intersection-over-union (mIoU) at 44.6 frames per second (FPS) and 78.0% mIoU at 102 FPS. The code is available at https://github.com/m828/DSMONet.
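The abstract only names the modules, so the PyTorch sketch below is a loose illustration of the mutual-optimization idea rather than DSMONet's actual design: deep semantics gate the noisy shallow details (an SADM-like path) while shallow details guide the upsampled semantics (a DGSM-like path). All layer choices here (1x1 projections, sigmoid gates, bilinear upsampling, the module and argument names) are assumptions for illustration; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualOptimizationSketch(nn.Module):
    """Hypothetical two-way fusion of shallow detail and deep semantic
    features, loosely following the MOM description in the abstract."""

    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        # Project both streams to a common channel width.
        self.shallow_proj = nn.Conv2d(shallow_ch, out_ch, 1, bias=False)
        self.deep_proj = nn.Conv2d(deep_ch, out_ch, 1, bias=False)
        # SADM-like path: a semantic gate suppresses noise in shallow details.
        self.sem_gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid())
        # DGSM-like path: a detail gate restores spatial cues in deep features.
        self.det_gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(out_ch * 2, out_ch, 3, padding=1, bias=False)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        s = self.shallow_proj(shallow)
        # Upsample deep features to the shallow (high-resolution) grid.
        d = F.interpolate(self.deep_proj(deep), size=s.shape[2:],
                          mode="bilinear", align_corners=False)
        s_refined = s * self.sem_gate(d)  # semantics adjust details
        d_refined = d * self.det_gate(s)  # details guide semantics
        return self.fuse(torch.cat([s_refined, d_refined], dim=1))

# Usage: fuse a high-res shallow map with a low-res deep map.
mom = MutualOptimizationSketch(shallow_ch=64, deep_ch=512, out_ch=128)
out = mom(torch.randn(1, 64, 128, 256), torch.randn(1, 512, 16, 32))
print(out.shape)  # torch.Size([1, 128, 128, 256])
```

The gating structure reflects the abstract's stated goal: each stream is refined by the other before fusion, so neither a heavy decoder nor an auxiliary side branch is required.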
