Abstract

Semantic segmentation of nighttime images has recently become an active research topic. In this work, we focus on semantic object recognition for nighttime driving scenes. The paper proposes a method to adapt semantic segmentation models trained on daytime scenes to nighttime scenes, using twilight images as an intermediate domain. Within this process, the Pyramid Scene Parsing Network (PSPNet) is adopted as the framework for pixel-level prediction. The goal of the method is to reduce the cost of human annotation for nighttime scenes by transferring knowledge from typical daytime illumination conditions. Our model is trained and tested on the Cityscapes dataset, which is recorded in street scenes and intended for assessing the performance of vision algorithms on the major tasks of semantic urban scene understanding. The proposed PSPNet model achieves a mIoU of 44.9% on nighttime driving scenes. Our experiments show that the proposed method effectively transfers knowledge from daytime scenes to nighttime scenes without additional human annotation. Further analysis of the proposed method is presented in this study.
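The core idea of PSPNet is its pyramid pooling module, which aggregates context at several spatial scales before prediction. Below is a minimal, illustrative NumPy sketch of that pooling scheme, assuming the standard bin sizes (1, 2, 3, 6) used by PSPNet; it omits the per-branch 1x1 convolutions and all learned weights of the real network, so it only demonstrates the multi-scale pool-and-upsample structure:

```python
import numpy as np

def avg_pool_to(x, out):
    """Adaptive average pooling of a (C, H, W) feature map to (C, out, out)."""
    C, H, W = x.shape
    res = np.zeros((C, out, out))
    for i in range(out):
        for j in range(out):
            h0, h1 = i * H // out, (i + 1) * H // out
            w0, w1 = j * W // out, (j + 1) * W // out
            res[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return res

def upsample_nearest(x, H, W):
    """Nearest-neighbour upsampling of a (C, h, w) map back to (C, H, W)."""
    _, h, w = x.shape
    rows = (np.arange(H) * h) // H
    cols = (np.arange(W) * w) // W
    return x[:, rows][:, :, cols]

def pyramid_pooling(x, bins=(1, 2, 3, 6)):
    """Concatenate the input map with pooled-and-upsampled context branches."""
    C, H, W = x.shape
    branches = [x]
    for b in bins:
        branches.append(upsample_nearest(avg_pool_to(x, b), H, W))
    return np.concatenate(branches, axis=0)  # (C * (1 + len(bins)), H, W)

features = np.ones((4, 12, 12))           # toy feature map, not from the paper
pooled = pyramid_pooling(features)
print(pooled.shape)                       # (20, 12, 12)
```

In the full network, each pooled branch is reduced by a 1x1 convolution before concatenation, and a final convolutional head maps the fused features to per-pixel class scores.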
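Mean Intersection-over-Union (mIoU), the metric behind the 44.9% figure, averages per-class IoU over the label set, skipping classes absent from both prediction and ground truth. A plain-Python sketch (the class count and example labels below are illustrative, not from the paper):

```python
def mean_iou(pred, target, num_classes):
    """mIoU over flat per-pixel label sequences of equal length."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes missing from both pred and target
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example: 4 pixels, 3 classes; one pixel of class 2 mislabelled as 1.
print(round(mean_iou([0, 1, 1, 2], [0, 1, 2, 2], 3), 4))  # 0.6667
```

In practice the intersections and unions are accumulated over the entire test set (e.g. all nighttime images) before the per-class ratios are taken, rather than per image.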
