Abstract

Semantic segmentation, which assigns a pixel-wise class label to every pixel of an input image, is expected to significantly improve the movement performance of autonomous robots. However, it is difficult to train a good classifier for a target application, because public large-scale datasets are often unsuitable. In fact, a classifier trained on Cityscapes is not sufficiently accurate for the Tsukuba Challenge. To generate an appropriate dataset for the target environment, we first attempt a semi-automatic method that uses a colored point cloud obtained with a 3D scanner. Although it achieves some accuracy, it is not practical. We therefore propose a novel method that creates images with shadows by rendering them in 3D space, improving classification accuracy on real images with shadows, for which existing methods do not produce appropriate results. Experimental results on datasets captured around the Tsukuba City Hall demonstrate that the proposed method was superior when appropriate constraints were applied to shadow generation; the mIoU improved from 0.358 to 0.491 when the test images were obtained at different locations.
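The mIoU figures reported above are means of per-class Intersection-over-Union scores. As a reference, a minimal sketch of the metric (not the paper's evaluation code; the toy labels are illustrative assumptions):

```python
def miou(pred, truth, num_classes):
    """Mean IoU over flat lists of per-pixel class labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example: 2 classes over 4 pixels.
pred  = [0, 0, 1, 1]
truth = [0, 1, 1, 1]
# class 0: inter 1 / union 2 = 0.5; class 1: inter 2 / union 3 ~= 0.667
print(round(miou(pred, truth, 2), 4))  # 0.5833
```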
