Abstract

In the image-processing domain of deep learning, the large size and complexity of visual data demand a large number of learnable parameters, so the training process consumes enormous computation and memory resources. Based on residual modules, the authors developed a new model architecture with a minimal number of parameters and layers, which makes it possible to classify tiny images at much lower computation and memory cost. In addition, the summation of correlations between pairs of feature maps was used as an additive penalty in the objective function; this technique encourages the kernels to be learned in a way that elicits uncorrelated representations from the input images. Employing fractional pooling allowed deeper networks, which in turn yielded more informative representations. Moreover, by employing cyclic learning-rate schedules, multiple models were trained at a lower total cost. In the training phase, random augmentation was applied to the input data to prevent the model from overfitting. On the MNIST and CIFAR-10 datasets, the proposed model achieved classification accuracies of 99.72% and 93.98%, respectively.
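As a rough illustration of the decorrelation penalty mentioned above, the following PyTorch-style sketch sums the off-diagonal Pearson correlations between all pairs of feature maps in a layer and returns a scalar that can be added to the classification loss. The abstract does not give the exact formulation, so the function name, the use of squared correlations, and the weighting factor `lam` in the usage comment are assumptions made for illustration only.

```python
import torch

def decorrelation_penalty(feature_maps: torch.Tensor) -> torch.Tensor:
    """Sum of pairwise correlations between a layer's feature maps.

    feature_maps: tensor of shape (batch, channels, height, width).
    Returns a scalar penalty averaged over the batch. The squared
    off-diagonal form is an assumption; the paper may use another norm.
    """
    b, c, h, w = feature_maps.shape
    flat = feature_maps.reshape(b, c, h * w)
    flat = flat - flat.mean(dim=2, keepdim=True)                   # center each map
    flat = flat / flat.norm(dim=2, keepdim=True).clamp_min(1e-8)   # unit length
    corr = torch.bmm(flat, flat.transpose(1, 2))                   # (b, c, c) correlations
    off_diag = corr - torch.eye(c, device=corr.device)             # drop self-correlations
    # Each (i, j) pair is counted twice here, which only rescales the penalty.
    return off_diag.pow(2).sum(dim=(1, 2)).mean()

# Hypothetical usage inside a training step:
# loss = cross_entropy(logits, targets) + lam * decorrelation_penalty(features)
```

In a penalty of this form, the regularizer is minimized when the correlation matrix of the feature maps approaches the identity, which is one concrete way to push kernels toward producing uncorrelated representations.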
