Abstract

Over the past two years, deep convolutional neural networks have substantially advanced the performance of computer vision systems on semantic segmentation. In this study, the authors present a novel semantic segmentation method based on a deep fully convolutional neural network that achieves segmentation results with more precise boundary localisation. The segmentation engine is trainable and consists of an encoder network with widening residual skip connections and a decoder network with a pixel-wise classification layer. The widening residual skip connections allow the encoder to combine shallow-layer features with deep-layer semantic features, while the decoder maps the low-resolution encoder features back to full image resolution for pixel-wise classification. Experimental results on the PASCAL VOC 2012 semantic segmentation dataset and the Cityscapes dataset show that the proposed method is effective and competitive.
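
The abstract describes an encoder that retains shallow-layer features and fuses them, through skip connections, with deep semantic features in a decoder that ends in a pixel-wise classification layer. The sketch below illustrates that general pattern in PyTorch; the layer widths, block names (`EncoderBlock`, `DecoderBlock`, `EncoderDecoderSeg`) and the concrete form of the widening residual skip connections are illustrative assumptions, not the authors' published architecture.

```python
# Minimal encoder-decoder sketch with skip connections (illustrative only;
# not the paper's exact design). PyTorch assumed.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feat = self.conv(x)          # shallow feature kept for the skip path
        return self.pool(feat), feat

class DecoderBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        # Skip connection: concatenate shallow encoder features with the
        # upsampled deep semantic features before further convolution.
        x = torch.cat([x, skip], dim=1)
        return self.conv(x)

class EncoderDecoderSeg(nn.Module):
    def __init__(self, num_classes=21):  # 21 classes for PASCAL VOC 2012
        super().__init__()
        self.enc1 = EncoderBlock(3, 64)
        self.enc2 = EncoderBlock(64, 128)
        self.bottleneck = nn.Conv2d(128, 256, 3, padding=1)
        self.dec2 = DecoderBlock(256, 128, 128)
        self.dec1 = DecoderBlock(128, 64, 64)
        # Pixel-wise classification layer at full resolution.
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        x, s1 = self.enc1(x)
        x, s2 = self.enc2(x)
        x = self.bottleneck(x)
        x = self.dec2(x, s2)
        x = self.dec1(x, s1)
        return self.classifier(x)    # per-pixel class logits

# Usage: logits = EncoderDecoderSeg()(torch.randn(1, 3, 256, 256))
#        logits.shape -> (1, 21, 256, 256)
```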
