Abstract

Image semantic segmentation is a key technology that enables intelligent systems to understand natural scenes. As an important research direction in visual intelligence, it has a wide range of applications in mobile robots, drones, and intelligent driving. In practical applications, however, segmentation models can suffer from inaccurate prediction of semantic labels and loss of edge information for objects and background. This paper proposes an improved semantic segmentation network that combines a self-attention module with neural architecture search (NAS). The method first uses NAS to search for a semantic segmentation network with multiple resolution branches; during the search, the candidate architectures are adjusted by incorporating the self-attention module. The networks found on the different branches are then assembled into two segmentation models of different complexity, which are finally combined under the now-standard teacher–student framework: an input image first passes through the high-complexity (teacher) model, whose more accurate predictions guide the training weights of the student network, and the image is then passed through the low-complexity (student) model to produce the final prediction. Experimental results on the Cityscapes dataset show that the algorithm achieves 69.8% accuracy with an inference speed of 166.4 FPS and an actual image segmentation speed of 48 images per second. The method improves edge segmentation in complex scenes and achieves a good balance between real-time performance and accuracy in practical applications.
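To make the teacher–student training described above concrete, the following is a minimal PyTorch sketch of one distillation step, assuming a frozen high-complexity teacher and a trainable low-complexity student that both output per-pixel logits. The function name distillation_step, the temperature and alpha hyperparameters, and the use of a KL-divergence soft-label loss are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, images, labels,
                      temperature=4.0, alpha=0.5):
    """One training step of a teacher-student scheme (illustrative sketch):
    the high-complexity teacher's soft predictions guide the low-complexity
    student alongside the ground-truth segmentation labels."""
    teacher.eval()
    with torch.no_grad():                       # teacher weights stay frozen
        t_logits = teacher(images)              # (N, C, H, W) per-pixel logits

    s_logits = student(images)

    # Standard per-pixel cross-entropy against the ground-truth labels
    # (255 is the conventional "ignore" label on Cityscapes).
    ce_loss = F.cross_entropy(s_logits, labels, ignore_index=255)

    # Soft-label distillation loss: KL divergence between the teacher's and
    # student's temperature-softened per-pixel class distributions.
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At inference time only the student is run, which is what makes the low-complexity branch responsible for the reported real-time speed while the teacher influences accuracy only during training.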
