Abstract

Convolutional neural networks (CNNs) are becoming increasingly popular. CNNs have become a widely used feature extractor, applied to image processing, big data processing, fog computing, and other domains. CNNs usually consist of several basic units, such as convolutional units, pooling units, and activation units. In CNNs, conventional pooling methods refer to 2×2 max-pooling and average-pooling, which are applied after the convolutional or ReLU layers. In this paper, we propose a Multiactivation Pooling (MAP) method to make CNNs more accurate on classification tasks without increasing depth or trainable parameters. We add more convolutional layers before one pooling layer and expand the pooling region to 4×4, 8×8, 16×16, and even larger. When performing this large-scale subsampling, we pick the top-k activations, sum them, and constrain the sum with a hyperparameter σ. We pick VGG, ALL-CNN, and DenseNets as our baseline models and evaluate the proposed MAP method on the benchmark datasets CIFAR-10, CIFAR-100, SVHN, and ImageNet. The classification results are competitive.
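
To make the mechanism concrete, the following is a minimal PyTorch-style sketch of a MAP layer as described above: each non-overlapping window (e.g., 4×4) keeps its top-k activations, sums them, and constrains the sum with σ. The abstract does not specify the exact form of the σ constraint, so a simple scaling by σ is assumed here; the name MAPPool2d and all default values are illustrative, not the paper's configuration.

    import torch
    import torch.nn as nn

    class MAPPool2d(nn.Module):
        # Sketch of Multiactivation Pooling (MAP): within each large
        # non-overlapping window, sum the top-k activations and constrain
        # the sum by sigma (assumed here to be a simple scaling).

        def __init__(self, window: int = 4, k: int = 4, sigma: float = 0.25):
            super().__init__()
            self.window = window  # pooling region: 4, 8, 16, or larger
            self.k = k            # number of activations kept per window
            self.sigma = sigma    # hyperparameter constraining the sum

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n, c, h, w = x.shape
            s = self.window       # assumes h and w are divisible by s
            # Rearrange into non-overlapping s-by-s windows:
            # (N, C, H, W) -> (N, C, H/s, W/s, s*s)
            patches = (x.reshape(n, c, h // s, s, w // s, s)
                        .permute(0, 1, 2, 4, 3, 5)
                        .reshape(n, c, h // s, w // s, s * s))
            topk = patches.topk(self.k, dim=-1).values  # top-k per window
            return self.sigma * topk.sum(dim=-1)        # constrained sum

With window=4, a single MAP layer downsamples by the same factor as two stacked 2×2 pooling layers, which is what makes room to place additional convolutional layers before one pooling layer while keeping the overall subsampling schedule unchanged.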

Highlights

  • Convolutional neural networks (CNNs) have shown excellent performance on image classification and many other visual tasks [1,2,3,4,5,6,7,8,9,10] in recent years, since AlexNet [11] achieved great success in the ImageNet Challenge.

  • In networks with shortcut structures, such as DenseNets, we insert a plain structure (Figure 3) with the Multiactivation Pooling (MAP) method into the transition layers to extract features (see the sketch after this list).

  • We propose a new pooling method, the Multiactivation Pooling (MAP) method.
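
As a rough illustration of the DenseNet highlight above, the sketch below replaces the conventional 2×2 average pooling in a DenseNet-style transition layer with a larger MAP window. It reuses the hypothetical MAPPool2d class sketched after the abstract; the channel counts and all settings are placeholders, not the paper's exact Figure 3 structure.

    import torch.nn as nn

    class TransitionWithMAP(nn.Sequential):
        # Illustrative DenseNet-style transition block: the usual
        # nn.AvgPool2d(2) is swapped for a 4x4 MAP window.
        # MAPPool2d is the sketch given after the abstract; every
        # setting here is an assumption, not the paper's configuration.
        def __init__(self, in_channels: int, out_channels: int):
            super().__init__(
                nn.BatchNorm2d(in_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, out_channels,
                          kernel_size=1, bias=False),
                MAPPool2d(window=4, k=4, sigma=0.25),  # vs. nn.AvgPool2d(2)
            )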

Introduction

Convolutional neural networks (CNNs) have shown excellent performance on image classification and many other visual tasks [1,2,3,4,5,6,7,8,9,10] in recent years, since AlexNet [11] achieved great success in the ImageNet Challenge. The first proposed convolutional neural network, LeNet-5 [12], has 5 layers. VGG [13] networks are designed to be even deeper. Residual Networks (ResNets) [16, 17] and Dense Convolutional Networks (DenseNets) [15], both proposed in the last two years, use shortcut structures that allow networks to surpass the 100-layer, and even 1000-layer, barrier.
