Abstract

Deep learning for 3D reconstruction has recently shown promising results: 3D shapes can be predicted from a single RGB image. However, such methods are often limited to a single feature cue, which does not capture the 3D shape of objects well. To address this problem, this paper proposes an end-to-end approach that predicts a 3D point cloud from dual-view RGB images. The approach consists of several components. A dual-view 3D reconstruction network predicts an object’s point cloud by exploiting two RGB images taken from different views, avoiding the limitation of a single feature cue. A structure feature learning network then extracts structure features with stronger representational ability from the point clouds. Finally, a gated control network for data fusion takes the two view-specific point cloud sets as input and fuses them into a single result. The proposed approach is thoroughly evaluated with extensive experiments on the widely used ShapeNet dataset. Both the qualitative results and the quantitative analysis demonstrate that the method not only captures the detailed geometric structures of 3D shapes across object categories with complex topologies, but also achieves state-of-the-art performance.
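The abstract does not give the fusion equations, but a gated control network of the kind described is commonly realized as a learned sigmoid gate that blends the two view-specific feature sets element-wise. The sketch below is a minimal NumPy illustration of that idea, assuming per-point features of shape (N, D) and hypothetical gate parameters `W` and `b`; it is not the authors' implementation.

```python
import numpy as np

def gated_fusion(p1, p2, W, b):
    """Fuse two per-point feature sets with a learned gate.

    p1, p2: (N, D) features from the two views (assumed shapes).
    W: (2D, D) gate weights, b: (D,) bias -- hypothetical parameters
    standing in for whatever the paper's gated control network learns.
    """
    z = np.concatenate([p1, p2], axis=1) @ W + b   # (N, D) pre-activation
    g = 1.0 / (1.0 + np.exp(-z))                   # sigmoid gate in (0, 1)
    return g * p1 + (1.0 - g) * p2                 # per-feature convex blend

# Toy usage: fuse two random 1024-point clouds with 3-D features.
rng = np.random.default_rng(0)
N, D = 1024, 3
p1 = rng.standard_normal((N, D))
p2 = rng.standard_normal((N, D))
W = rng.standard_normal((2 * D, D)) * 0.1
b = np.zeros(D)
fused = gated_fusion(p1, p2, W, b)
print(fused.shape)  # (1024, 3)
```

Because the gate output lies strictly in (0, 1), each fused value is a convex combination of the two inputs, so the fusion never extrapolates beyond either view's features.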
