Abstract
Scene parsing methods have recently demonstrated remarkable performance, and the generation of multilevel feature representations has proven central to this success. However, most existing scene parsing methods produce multilevel feature representations with weak distinctions and large semantic spans; consequently, even complex mechanisms yield only minimal improvements to these representations. To address this, we leverage the inherently multilevel cross-modal data and backpropagation to develop a novel feature reconstruction network (FRNet) for RGB-D indoor scene parsing. Specifically, a feature construction encoder is proposed to obtain features layerwise in a top-down manner, where feature nodes in a higher layer flow to the adjacent lower layer by dynamically changing their structure. In addition, we propose a cross-level enriching module in the encoder to selectively refine and weight the features at each layer in the RGB and depth modalities, as well as a cross-modality awareness module to generate feature nodes containing the modality data. Finally, we integrate the multilevel feature representations via dilated convolutions at different rates. Extensive quantitative and qualitative experiments demonstrate that the proposed FRNet is comparable to state-of-the-art RGB-D indoor scene parsing methods on two public indoor datasets.
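The abstract does not specify how the final multi-rate dilated fusion is implemented, so the following is only a minimal illustrative sketch of the general technique (ASPP-style parallel dilated branches summed into one map), not the authors' architecture. For self-containment it uses a 1-D NumPy toy rather than a deep-learning framework; the function names, the rate set `(1, 2, 4)`, and the summation-based fusion are all assumptions for illustration.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D dilated convolution with 'valid' padding.

    The kernel taps are spaced `rate` samples apart, so a 3-tap kernel
    at rate 4 covers a span of 9 input samples without extra parameters.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1          # receptive field of the dilated kernel
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(out_len)
    ])

def multirate_fusion(x, kernel, rates=(1, 2, 4)):
    """Apply the same kernel at several dilation rates and sum the branches.

    Each branch is edge-padded so its output keeps the input length,
    letting branches with different receptive fields be fused elementwise.
    """
    fused = np.zeros(len(x), dtype=float)
    for r in rates:
        pad = (len(kernel) - 1) * r // 2
        xp = np.pad(np.asarray(x, dtype=float), pad, mode="edge")
        fused += dilated_conv1d(xp, kernel, r)
    return fused
```

On a constant input, a 3-tap all-ones kernel contributes 3.0 per branch, so three branches fuse to 9.0 everywhere; the sketch is only meant to show how differing dilation rates aggregate context at multiple scales into one representation.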
IEEE Journal of Selected Topics in Signal Processing