Abstract
Deep learning has made great progress on multi-view 3D reconstruction tasks. Current mainstream solutions adopt different ways to fuse the features extracted from several views. Among them, attention-based aggregation performs relatively well and stably; however, it still has an obvious shortcoming: because the weight of each view is predicted independently of the others, the merging step lacks adaptation to the global state. In this paper, we propose a global-aware attention-based fusion approach that builds a correlation between each branch and a global feature, providing a comprehensive foundation for weight inference. On this basis, we design a complete reconstruction algorithm. Experiments on ShapeNet verify that our method outperforms existing SOTA methods. Furthermore, we propose a view-reduction method based on maximizing diversity and discuss the cost-performance tradeoff of our model, achieving better performance when facing a large number of input views under a limited computational budget.
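The core idea of global-aware fusion, as the abstract describes it, can be illustrated with a minimal sketch: instead of scoring each view's feature in isolation, each view is paired with a pooled global feature before its attention weight is computed. The shapes, the mean-pooling choice for the global feature, and the linear scoring vector `w` below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_aware_fusion(views, w):
    """Fuse per-view features using globally conditioned attention weights.

    views: (V, D) array, one feature vector per view.
    w:     (2*D,) scoring vector (hypothetical; stands in for a learned MLP).
    Returns the fused (D,) feature.
    """
    V, D = views.shape
    g = views.mean(axis=0)                          # global feature (assumed: mean pooling)
    g_tiled = np.tile(g, (V, 1))                    # broadcast global feature to each branch
    ctx = np.concatenate([views, g_tiled], axis=1)  # (V, 2D): each view paired with the global state
    scores = ctx @ w                                # one scalar score per view, now globally aware
    alpha = softmax(scores)                         # attention weights, sum to 1
    return (alpha[:, None] * views).sum(axis=0)     # weighted fusion of the view features
```

The contrast with plain attention-based aggregation is the `ctx` line: a per-view scorer would see only `views[i]`, whereas here every score is conditioned on the pooled global feature, so the weights can adapt to the overall state of the view set.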