Learning object shapes from a single image is challenging: variations in scene content, geometric structure, and environmental factors create significant disparities between 2D image features and their corresponding 3D representations, hindering the effective training of deep learning models. Existing learning-based approaches can be divided into two-stage and single-stage methods, each with limitations. Two-stage methods typically generate intermediate proposals by searching for similar structures across the entire dataset, a process that is computationally expensive due to the large search space and high-dimensional feature matching, and that further restricts these methods to predefined object categories. Single-stage methods, in contrast, reconstruct 3D shapes directly from images without intermediate steps, but they struggle to capture complex object geometries because substantial information is lost in mapping 2D image features to 3D shapes, which limits their ability to represent intricate detail. To address these challenges, this paper introduces SS3DNet-AF, a single-stage, single-view 3D reconstruction network with an attention-based fusion (AF) mechanism that sharpens focus on relevant image features, effectively capturing geometric detail and generalizing across diverse object categories. The proposed method is evaluated quantitatively on the ShapeNet dataset, demonstrating accurate 3D reconstruction while avoiding the computational cost of traditional two-stage approaches.
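As a rough illustration of the kind of pipeline the abstract describes, the sketch below shows a single-stage reconstructor in which an attention-based fusion step re-weights per-location image features before decoding a point cloud. The abstract does not specify the network's layers, dimensions, or output representation, so every name and size here (`AttentionFusion`, `SingleStageReconstructor`, `feat_dim`, `num_points`, the point-cloud output) is a hypothetical assumption, not the paper's actual architecture.

```python
# Minimal sketch of an attention-based fusion (AF) step for single-stage,
# single-view 3D reconstruction. All module names, layer sizes, and the
# point-cloud output are assumptions; the paper's design is not given here.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Scores each spatial location of an image feature map, then fuses
    the map into one global feature vector weighted by those scores."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Conv2d(feat_dim, 1, kernel_size=1)  # one attention score per location

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from a 2D image encoder
        attn = torch.softmax(self.score(feats).flatten(2), dim=-1)  # (B, 1, H*W)
        fused = (feats.flatten(2) * attn).sum(dim=-1)               # (B, C)
        return fused

class SingleStageReconstructor(nn.Module):
    """Single-stage pipeline: image encoder -> attention fusion -> point decoder."""
    def __init__(self, feat_dim: int = 256, num_points: int = 1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fusion = AttentionFusion(feat_dim)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),  # xyz coordinates for each point
        )
        self.num_points = num_points

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        fused = self.fusion(self.encoder(img))
        return self.decoder(fused).view(-1, self.num_points, 3)

# Usage: one RGB image in, one point cloud out.
points = SingleStageReconstructor()(torch.randn(1, 3, 128, 128))
print(points.shape)  # torch.Size([1, 1024, 3])
```

The fusion layer is what distinguishes this sketch from a plain global-pooling baseline: instead of averaging all spatial locations equally, it lets the network learn which image regions matter for the shape being reconstructed.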