Abstract

Scene parsing has recently demonstrated remarkable performance, and the generation of multilevel feature representations has been shown to be one aspect relevant to that performance. However, most existing scene parsing methods obtain multilevel feature representations with weak distinctions and large spans between levels; consequently, even complex mechanisms have minimal effect on these representations. To address this, we leverage the inherently multilevel nature of cross-modal data, together with backpropagation, to develop a novel feature reconstruction network (FRNet) for RGB-D indoor scene parsing. Specifically, a feature construction encoder is proposed to obtain features layer by layer in a top-down manner, where feature nodes in a higher layer flow to the adjacent lower layer by dynamically changing their structure. In addition, we propose a cross-level enriching module in the encoder to selectively refine and weight the features of each layer in both the RGB and depth modalities, as well as a cross-modality awareness module to generate feature nodes that carry modality information. Finally, we integrate the multilevel feature representations simply via dilated convolutions at different rates. Extensive quantitative and qualitative experiments demonstrate that the proposed FRNet is comparable to state-of-the-art RGB-D indoor scene parsing methods on two public indoor datasets.
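
To make the abstract's final integration step concrete, the following is a minimal PyTorch sketch of fusing multilevel feature maps with parallel dilated convolutions at different rates. The module name, dilation rates, channel sizes, and the resize-then-sum strategy are illustrative assumptions, not FRNet's actual implementation.

```python
# Hypothetical sketch: fuse multilevel features via parallel dilated
# 3x3 convolutions at different rates, as described in the abstract.
# All names and hyperparameters here are assumptions, not the paper's code.
import torch
import torch.nn as nn


class MultiRateFusion(nn.Module):
    """Integrate multilevel feature maps with multi-rate dilated convolutions."""

    def __init__(self, in_channels: int, out_channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        # One branch per dilation rate; padding=rate keeps spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 projection back to out_channels after concatenating branches.
        self.project = nn.Conv2d(len(rates) * out_channels, out_channels, 1)

    def forward(self, feats):
        # feats: list of per-level maps; upsample all to the finest
        # resolution (assumed to be feats[0]) and sum before the branches.
        target = feats[0].shape[-2:]
        x = sum(
            nn.functional.interpolate(f, size=target, mode="bilinear",
                                      align_corners=False)
            for f in feats
        )
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


# Usage with three hypothetical encoder levels at different resolutions.
fusion = MultiRateFusion(in_channels=256, out_channels=128)
levels = [torch.randn(1, 256, 60, 60),
          torch.randn(1, 256, 30, 30),
          torch.randn(1, 256, 15, 15)]
out = fusion(levels)  # -> torch.Size([1, 128, 60, 60])
```

The varying dilation rates let each branch aggregate context at a different receptive field without reducing resolution, which is the usual motivation for fusing multilevel representations this way.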
