Abstract

In recent years, the importance of semantic segmentation has been increasingly emphasized because autonomous vehicles and artificial intelligence (AI)-based robots are being researched extensively, and these systems require methods for accurately recognizing objects. Previous state-of-the-art segmentation methods have proven effective on databases captured during the daytime. However, in extremely low light or nighttime environments, the shape and color information of objects is largely lost due to an insufficient amount of external light, which makes the segmentation network difficult to train and significantly degrades performance. In our previous work, segmentation performance in low light environments was improved using an enhancement-based segmentation method. However, low light images could not be restored precisely, and the improvement in segmentation performance was limited, because only per-pixel loss functions were used when training the enhancement network. To overcome these drawbacks, we propose a low light image segmentation method based on a modified perceptual cycle generative adversarial network (CycleGAN). Perceptual image enhancement is performed by our network, which significantly improves segmentation performance. Unlike the existing perceptual loss, ours uses the Euclidean distance between feature maps extracted from a pretrained segmentation network. In our experiments, we used low light databases generated from two well-known road scene open databases, the Cambridge-driving Labeled Video Database (CamVid) and the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) database, and confirmed that the proposed method shows better segmentation performance in extremely low light environments than existing state-of-the-art methods.
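
The modified perceptual loss described in the abstract can be illustrated with a minimal sketch. The following PyTorch example is a hedged illustration, not the authors' implementation: it assumes a frozen, pretrained segmentation encoder (here called `seg_features`, a hypothetical name) and computes the mean squared Euclidean distance between the feature maps of the enhanced image and a reference normal light image.

```python
import torch
import torch.nn as nn

class SegPerceptualLoss(nn.Module):
    """Perceptual loss in the feature space of a frozen segmentation encoder."""

    def __init__(self, seg_features: nn.Module):
        super().__init__()
        # Freeze the pretrained segmentation encoder so that only the
        # enhancement network (the CycleGAN generator) receives gradients.
        self.seg_features = seg_features.eval()
        for p in self.seg_features.parameters():
            p.requires_grad_(False)

    def forward(self, enhanced: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance between feature maps, averaged per element.
        f_enh = self.seg_features(enhanced)
        f_ref = self.seg_features(reference)
        return torch.mean((f_enh - f_ref) ** 2)
```

In a setup like the one the abstract describes, this term would be combined with the per-pixel and adversarial losses of the CycleGAN rather than replacing them.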

Highlights

  • In the field of autonomous vehicle and artificial intelligence (AI)-based robot technologies, methods for accurately detecting and recognizing objects are required, and the consequent importance of semantic segmentation has greatly increased

  • Our final model is a combination of a modified perceptual CycleGAN and a pyramid scene parsing network (PSPNet) [15], and through comparative experiments, we proved that segmentation performance can be significantly improved in extremely low light environments compared to our previous work (a minimal inference sketch follows this list)

  • To measure how much the proposed method improves segmentation performance in a low light environment, five state-of-the-art segmentation networks were used: fully convolutional networks (FCN) [27], SegNet [28], PSPNet [15], the image cascade network (ICNet) [29], and DeepLabv3+ [30]
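
As referenced above, here is a minimal sketch of the two-stage inference pipeline the highlights describe: a trained generator enhances the low light image, then a segmentation network predicts per-pixel labels. Both module names (`generator` for the modified perceptual CycleGAN generator, `segmenter` for any of the listed networks such as PSPNet) are hypothetical placeholders, not the authors' released code.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def segment_low_light(image: torch.Tensor,
                      generator: nn.Module,
                      segmenter: nn.Module) -> torch.Tensor:
    """Enhance a low light image, then run semantic segmentation on the result."""
    enhanced = generator(image)    # low light image -> perceptually enhanced image
    logits = segmenter(enhanced)   # (N, num_classes, H, W) class scores
    return logits.argmax(dim=1)    # (N, H, W) per-pixel class labels
```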


Introduction

In the field of autonomous vehicle and artificial intelligence (AI)-based robot technologies, methods for accurately detecting and recognizing objects are required, and the consequent importance of semantic segmentation has greatly increased. With the development of deep learning technologies and computer hardware, various semantic segmentation models based on convolutional neural networks (CNNs) have been actively studied [48], [49]. The brightness of images is very low at nighttime due to an insufficient amount of external light, and the noise caused by the camera sensor increases. Motion and optical blur are also introduced because of the camera's long exposure time. Due to these problems, semantic segmentation is extremely difficult in low light environments, and improving its performance is a challenging problem.

