Abstract

Increasing the depth of Convolutional Neural Networks (CNNs) has been recognized to improve generalization performance. However, in the case of 3D CNNs, stacking layers increases the number of learnable parameters linearly, making the network more prone to learning redundant features. In this paper, we propose a novel 3D CNN structure that learns shared 2D triplanar features viewed from the three orthogonal planes, which we term S3PNet. Due to the reduced dimension of the convolutions, the proposed S3PNet is able to learn 3D representations with substantially fewer learnable parameters. Experimental evaluations show that the combination of 2D representations on the different orthogonal views learned through the S3PNet is sufficient and effective for 3D representation, with the results outperforming current methods based on fully 3D CNNs. We support this with extensive evaluations on widely used 3D data sources in computer vision: CAD models, LiDAR point clouds, RGB-D images, and 3D Computed Tomography scans. Experiments further demonstrate that S3PNet has better generalization capability for smaller training sets, and learns kernels with less redundancy compared to kernels learned from 3D CNNs.
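To make the parameter-saving intuition concrete, the following is a minimal NumPy sketch of the triplanar idea described above: a single 2D kernel is shared across slices of the three orthogonal views of a volume, instead of learning a full 3D kernel. This is an illustrative toy, not the paper's actual S3PNet architecture; all function and variable names here are hypothetical.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution of one slice with one kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))        # toy 3D input volume (D, H, W)
kernel = rng.random((3, 3))        # ONE shared 2D kernel (9 parameters)

# Three orthogonal planar views of the same volume, obtained by
# permuting axes so each view is a stack of 2D slices.
views = [vol,
         vol.transpose(1, 2, 0),
         vol.transpose(2, 0, 1)]

# The same 2D kernel is applied slice-wise on every view
# (weight sharing across the three orthogonal planes).
feats = [np.stack([conv2d_valid(s, kernel) for s in v]) for v in views]

# Parameter comparison: a shared 3x3 2D kernel has 9 weights,
# while an equivalent 3x3x3 3D kernel would need 27.
print(kernel.size, 3 ** 3)
```

Under this toy setup, each of the three feature stacks has shape `(8, 6, 6)`, and the shared 2D kernel uses 9 weights where a dense 3D kernel of the same spatial extent would use 27, reflecting the reduced parameter count motivated in the abstract.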
