Abstract

Building an efficient model with a compact structure and fewer parameters while preserving competitive performance is an important goal in neural network research. Traditionally, a unique group of parameters is learned for each convolution layer. Inspired by the universal approximation theorem, in this study we explore a flexible way to configure the parameters of a convolutional neural network. First, we create a parameter pool that stores a fixed number of parameters, through which we can also control the total number of parameters in a model. Second, for each convolution layer, we randomly select a group of parameters occupying contiguous positions in the pool. Finally, we perform extensive experiments on the standard ResNet and DenseNet architectures over several benchmark datasets. On CIFAR-10, most of the models perform almost as well as the original ones, with a decline of at most 0.7%. On the more difficult CIFAR-100 and ImageNet tasks, most of the models perform slightly worse, with a decline of approximately 1.5%. In this way, we extend weight sharing from the interior of a single feature map to arbitrary layers, which lets us control the number of parameters of a neural model.
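The sketch below illustrates the parameter-pool idea described above, under the assumption of a PyTorch implementation: a single shared 1-D parameter tensor serves as the pool, and each convolution layer draws its kernel from a randomly chosen contiguous slice of that pool. The class name `PoolConv2d`, the pool size, and the offset-selection logic are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of a shared parameter pool for convolution layers (assumed PyTorch API).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoolConv2d(nn.Module):
    """Convolution whose weights are a contiguous slice of a shared parameter pool."""

    def __init__(self, pool, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.pool = pool  # shared 1-D nn.Parameter, reused by many layers
        self.shape = (out_channels, in_channels, kernel_size, kernel_size)
        n_weights = out_channels * in_channels * kernel_size * kernel_size
        assert n_weights <= pool.numel(), "parameter pool is too small for this layer"
        # Randomly choose where this layer's contiguous slice starts in the pool.
        self.offset = torch.randint(0, pool.numel() - n_weights + 1, (1,)).item()
        self.n_weights = n_weights
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # View the slice as a 4-D convolution kernel; gradients flow back into the pool.
        weight = self.pool[self.offset:self.offset + self.n_weights].view(self.shape)
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)


# Usage: one pool shared by two layers; the pool size fixes the total parameter count.
pool = nn.Parameter(torch.randn(20_000) * 0.01)
conv1 = PoolConv2d(pool, in_channels=3, out_channels=16, kernel_size=3, padding=1)
conv2 = PoolConv2d(pool, in_channels=16, out_channels=32, kernel_size=3, padding=1)
out = conv2(conv1(torch.randn(1, 3, 32, 32)))
print(out.shape)  # torch.Size([1, 32, 32, 32])
```

Because every layer's slice is a view into the same pool, the model's parameter count is bounded by the pool size regardless of how many layers draw from it, which is the control mechanism the abstract describes.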
