Abstract

This study tackles the challenge of building mapping in multi-modal remote sensing data by proposing DeepQuantized-Net, a novel deep superpixel-wise convolutional neural network, together with a new red, green, blue (RGB)-depth data set named IND. DeepQuantized-Net incorporates two practical ideas in segmentation: first, improving object delineation by using superpixels, rather than pixels, as the imaging unit; second, reducing computational cost. The data set comprises 294 RGB-depth images (256 training images and 38 test images) of 1024 × 1024 pixels at a spatial resolution of 0.5 ft, collected from different cities in the state of Indiana in the U.S. Experimental results on the IND data set demonstrate that the mean F1 score and the average Intersection over Union score increase by approximately 7.0% and 7.2%, respectively, compared with other methods.
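The core of the superpixel-wise idea is that every pixel inside a superpixel shares one representation, so predictions are made per region rather than per pixel. A minimal sketch of that pooling step is below; the label map and feature values are synthetic, and the function name `superpixel_average` is illustrative (the abstract does not specify how DeepQuantized-Net aggregates within superpixels, nor which superpixel algorithm it uses):

```python
import numpy as np

def superpixel_average(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Replace each pixel's feature with the mean over its superpixel.

    features: per-pixel feature map, shape (H, W)
    labels:   superpixel label map, same shape, integer region ids
    """
    out = np.empty_like(features, dtype=float)
    for sp in np.unique(labels):
        mask = labels == sp          # all pixels belonging to superpixel `sp`
        out[mask] = features[mask].mean()  # one shared value per region
    return out

# Toy 2x4 "image" with two superpixels: left half (id 0), right half (id 1).
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
features = np.arange(8, dtype=float).reshape(2, 4)
pooled = superpixel_average(features, labels)
# Left-half pixels all become 2.5, right-half pixels all become 4.5.
```

In a real pipeline the label map would come from an oversegmentation algorithm such as SLIC, and the pooled regions, not individual pixels, would be classified by the network.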
