Abstract

This paper proposes a new pooling method called line average pooling (LAP), which operates between the convolutional layers and the final output layer, replacing traditional mapping methods such as Flatten and global average pooling (GAP). LAP reduces the total number of model parameters, thereby preventing overfitting while retaining more information from the high-level feature maps; it also increases the model's fitting speed. We selected the ISIC skin cancer dataset and examined the performance of three pooling methods — LAP, GAP, and Flatten — on a customized CNN model, and analyzed the degree of fitting at epoch 100. The experimental results show that the degree of overfitting with LAP is greatly reduced compared with Flatten. Compared with GAP, LAP extracts features and fits the training data better and faster. Both GAP and LAP demonstrate good generalization ability, reaching 87.56% and 88.11%, respectively. With suitable additional regularization, LAP can even outperform GAP.
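The abstract contrasts the three mappings by how much of the feature map they pass to the classifier. The exact LAP definition is not given here, but a minimal NumPy sketch, assuming LAP averages each spatial row ("line") of a feature map while GAP averages the whole map per channel, illustrates the parameter-count trade-off (the 7×7×512 shape is a hypothetical example):

```python
import numpy as np

# Feature map of shape (H, W, C), e.g. the last conv output of a CNN.
H, W, C = 7, 7, 512
x = np.random.rand(H, W, C)

def flatten(x):
    # Flatten: every activation feeds the dense layer -> H*W*C outputs.
    return x.reshape(-1)

def gap(x):
    # Global average pooling: one value per channel -> C outputs.
    return x.mean(axis=(0, 1))

def lap(x):
    # Assumed line average pooling: average along one spatial axis,
    # keeping the other -> H*C outputs. More spatial detail than GAP,
    # far fewer dense-layer weights than Flatten.
    return x.mean(axis=1).reshape(-1)

print(flatten(x).shape)  # (25088,)
print(gap(x).shape)      # (512,)
print(lap(x).shape)      # (3584,)
```

Under this reading, LAP's output sits between GAP's and Flatten's in size, which is consistent with the abstract's claim that it retains more features than GAP while using far fewer parameters than Flatten.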
