Abstract

With the development of deep learning theory and the decreasing cost of acquiring massive data, image semantic segmentation algorithms based on Convolutional Neural Networks (CNNs) are gradually replacing conventional segmentation algorithms thanks to their high segmentation accuracy. By increasing the amount of training data and stacking more convolutional layers to form Deep Convolutional Neural Networks (DCNNs), models with higher segmentation accuracy can be obtained, but such models suffer from heavy memory consumption and long latency. In some application scenarios, such as augmented reality and mobile interaction, they cannot run in real time. To improve the speed of semantic segmentation while preserving accuracy as far as possible, this paper proposes a semantic segmentation algorithm based on a lightweight convolutional neural network. Balancing computational complexity against segmentation accuracy, the algorithm starts from the extraction of high-level semantic features and introduces a position-attention mechanism that captures richer contextual information to model the relationships between pixels, compensating for the small local receptive field of convolution. To recover sharper object boundaries, a channel attention mechanism is introduced in the decoder to mine more informative feature channels and improve the fusion of low-level and high-level features. The model is validated on a publicly available dataset and compared with popular semantic segmentation methods; it achieves higher segmentation accuracy and shows clear advantages in objective evaluation.
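The abstract does not give implementation details for the two attention mechanisms it names. The following is a minimal PyTorch sketch, assuming a DANet-style position attention (pairwise pixel affinities over the whole feature map) and an SE-style channel attention for decoder feature reweighting; the class names, the reduction ratios, and the residual weighting are illustrative assumptions, not the paper's confirmed design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Position attention (assumed DANet-style): each spatial position
    aggregates context from every other position, compensating for the
    limited local receptive field of ordinary convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                     # B x C' x HW
        attn = F.softmax(torch.bmm(q, k), dim=-1)              # B x HW x HW affinities
        v = self.value(x).view(b, -1, h * w)                   # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Channel attention (assumed SE-style): reweights feature channels
    before low-level and high-level features are fused in the decoder."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        # Global average pooling -> per-channel gating weights in (0, 1).
        w = self.fc(F.adaptive_avg_pool2d(x, 1).view(b, c)).view(b, c, 1, 1)
        return x * w
```

The zero-initialized residual weight in the position-attention branch lets the network start from the plain convolutional features and learn how much global context to mix in, a common choice for attention modules added to pretrained backbones.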
