Abstract

Semantic segmentation models are often affected by illumination changes and can fail to predict correct labels. Although indoor semantic segmentation has been studied extensively, it has received little attention in low-light environments. In this paper, we propose a new framework, LISU, for Low-light Indoor Scene Understanding. We first decompose the low-light images into reflectance and illumination components, and then jointly learn reflectance restoration and semantic segmentation. To train and evaluate the proposed framework, we introduce a new data set, LLRGBD, which consists of a large synthetic low-light indoor data set (LLRGBD-synthetic) and a small real data set (LLRGBD-real). The experimental results show that the illumination-invariant features effectively improve semantic segmentation performance. Compared with the baseline model, the proposed LISU framework increases mIoU by 11.5%. In addition, pre-training on our synthetic data set increases mIoU by 7.2%. Our data sets and models are available on our project website.
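For illustration, a minimal PyTorch-style sketch of such a two-stage design, a Retinex-style decomposition followed by a joint restoration-and-segmentation head, is given below. The module layouts, layer sizes, and class count are assumptions made for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Illustrative Retinex-style decomposition: predicts a 3-channel
    reflectance map and a 1-channel illumination map from a low-light image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),  # 3 reflectance + 1 illumination channels
        )

    def forward(self, x):
        out = self.body(x)
        reflectance = torch.sigmoid(out[:, :3])
        illumination = torch.sigmoid(out[:, 3:])
        return reflectance, illumination

class JointRestorationSegmentation(nn.Module):
    """Illustrative joint head: shares features between reflectance
    restoration and per-pixel semantic label prediction."""
    def __init__(self, num_classes=14):  # class count is an assumption
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.restore_head = nn.Conv2d(64, 3, 3, padding=1)
        self.seg_head = nn.Conv2d(64, num_classes, 1)

    def forward(self, reflectance):
        feats = self.backbone(reflectance)
        return torch.sigmoid(self.restore_head(feats)), self.seg_head(feats)

# Usage on a dummy low-light image:
low_light = torch.rand(1, 3, 240, 320)
reflectance, illumination = DecompositionNet()(low_light)
restored, seg_logits = JointRestorationSegmentation()(reflectance)
```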

Highlights

  • Indoor semantic segmentation is a fundamental computer vision task that assigns a semantic label to each pixel in an indoor scene image

  • We propose a novel framework that exploits illumination-invariant features for robust low-light indoor semantic segmentation

  • Unlike Wei et al. (2018), who used reflectance maps to weight the loss function, we look for clues in the original low-light images to weight the loss function (see the sketch below)
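As a hedged illustration of the last point, the following sketch weights a per-pixel cross-entropy loss by the darkness of the original low-light input, so poorly lit regions contribute more to the segmentation loss. The specific weighting function (and its `gamma` parameter) is an assumption made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def brightness_weighted_ce(logits, target, low_light_rgb, gamma=1.0):
    """Hypothetical pixel-weighted cross-entropy: darker regions of the
    original low-light input receive larger weights.
    logits: (N, C, H, W), target: (N, H, W), low_light_rgb: (N, 3, H, W) in [0, 1]."""
    brightness = low_light_rgb.mean(dim=1)                  # (N, H, W) per-pixel brightness
    weights = (1.0 - brightness).clamp(min=0.0) ** gamma    # darker pixel -> larger weight
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    return (weights * per_pixel).mean()
```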


Summary

Introduction

Indoor semantic segmentation is a fundamental computer vision task that assigns a semantic label to each pixel in an indoor scene image. To overcome the negative influence of illumination changes on semantic segmentation, some research in the field of autonomous driving pre-processes RGB images and transforms them into illumination-invariant images based on the camera's spectral response (Alshammari et al., 2018; Maddern et al., 2014; Upcroft et al., 2014). These transformation methods are sensitive to image saturation, so they are not always effective (Upcroft et al., 2014).
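As a concrete example of such a transform, the commonly used log-chromaticity form from Maddern et al. (2014) combines the three colour channels into a single illumination-invariant channel. The sketch below assumes an illustrative value for the camera-dependent parameter alpha, and the epsilon guard hints at why very dark or saturated pixels make the transform unreliable.

```python
import numpy as np

def illumination_invariant(rgb: np.ndarray, alpha: float = 0.48) -> np.ndarray:
    """Per-pixel illumination-invariant image (Maddern et al., 2014):
    I = 0.5 + log(G) - alpha * log(B) - (1 - alpha) * log(R).
    `rgb` is an HxWx3 float array in [0, 1]; `alpha` depends on the camera's
    spectral response (0.48 is only an illustrative value)."""
    eps = 1e-6  # avoid log(0) in dark or fully saturated pixels
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g + eps) - alpha * np.log(b + eps) - (1.0 - alpha) * np.log(r + eps)
```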

