Abstract

Complex scene semantic segmentation classifies and labels a scene image pixel by pixel. The complex image content of autonomous driving scenes, with its many target categories and highly variable conditions, makes the segmentation task difficult, and various FCN-based networks fail to recover image information well. In contrast, encoder–decoder architectures such as SegNet and U-Net restore image information through skip connections and related techniques, but their extraction of shallow details remains simple and unfocused. In this paper, we propose a U-shaped convolutional neural network with a jump (skip) attention mechanism: an improved encoder–decoder structure that performs semantic segmentation through four convolutional downsampling stages and four transposed-convolution upsampling stages, adding a jump attention module during upsampling. The module selectively extracts contextual information from high-dimensional features to guide low-dimensional features, improving the fusion of deep and shallow features and ensuring consistent predictions for pixels of the same class. Experiments on the CamVid and Cityscapes datasets show that the model reaches 66.3% and 69.1% mIoU, respectively. Compared with other mainstream semantic segmentation algorithms, the method is competitive in both segmentation performance and model size.
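The abstract does not specify the internal design of the jump attention module. The sketch below shows one plausible reading in PyTorch: an attention gate over the skip connection, in which upsampled decoder features weight the encoder features before fusion. The names `JumpAttention` and `UpBlock` and all channel parameters are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class JumpAttention(nn.Module):
    """A sketch of a skip-attention gate (assumed design, not the paper's code).
    High-dimensional decoder features g gate low-dimensional encoder features x,
    so that context from deep layers guides which shallow details are kept."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, kernel_size=1)  # project decoder features
        self.wx = nn.Conv2d(x_ch, inter_ch, kernel_size=1)  # project encoder features
        self.psi = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(inter_ch, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel attention weights in [0, 1]
        )

    def forward(self, g, x):
        # g: upsampled decoder features; x: encoder features at the same resolution
        a = self.psi(self.wg(g) + self.wx(x))  # attention map
        return x * a                           # suppress irrelevant shallow detail

class UpBlock(nn.Module):
    """One decoder stage: transposed-convolution upsampling, attention-gated
    skip fusion, then two 3x3 convolutions (four such stages mirror the four
    downsampling stages described in the abstract)."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.att = JumpAttention(out_ch, skip_ch, out_ch // 2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        g = self.up(x)                 # transposed-convolution upsampling
        skip = self.att(g, skip)       # gate the encoder skip features
        return self.conv(torch.cat([g, skip], dim=1))  # fuse deep and shallow features
```

In this reading, the gate is applied before concatenation so that the attention weights decide how much of each shallow feature reaches the fusion convolutions; other placements (e.g., gating after concatenation) are equally consistent with the abstract.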
