Abstract

Convolutional neural networks (CNNs) suffer from exploding/vanishing gradients and increased training error, problems that cannot be overcome simply by adding layers, improving initialization methods, or using better optimizers. Residual Neural Networks (ResNets) organize the added layers into residual blocks. This design focuses mainly on the activations inside the residual blocks and outperforms plain CNNs in object classification, detection, and segmentation. Deep residual networks, however, suffer from diminishing feature reuse: the gradient flows through the network but not through every residual block, so some blocks may be skipped or contribute little to the final objective. Three simple ways to reduce depth and increase the representational power of residual blocks are adding more convolutional layers per block, widening the convolutional layers by adding feature planes, and increasing the filter sizes of the convolutional layers. Here we propose the In-Between Layers Modular (IBLM) ResNet, a modification of the Wide Residual Network. The proposed model widens the convolutional layers by adding feature planes, interpreted as an increase in the number of filters in each convolutional layer of our residual module, and introduces several topology changes. We demonstrate the classification performance of IBLM ResNet on imbalanced datasets with various optimizers and further improve the test results using ensemble methods.
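To make the widening idea concrete, the sketch below shows a pre-activation residual block in the Wide-ResNet style, where a widening factor k multiplies the number of feature planes (i.e., the filter count) of each convolutional layer. The class name, parameters, and layer arrangement are illustrative assumptions; the abstract does not specify the exact IBLM module topology or its additional topology changes.

```python
import torch
import torch.nn as nn

class WideResidualBlock(nn.Module):
    """Generic wide residual block (illustrative sketch, not the exact IBLM module).

    The widening factor k multiplies the number of feature planes,
    so every conv layer in the block gets k times more filters.
    """
    def __init__(self, in_planes, base_planes, k=2, stride=1):
        super().__init__()
        planes = base_planes * k  # widen by adding feature planes (more filters)
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut when the spatial size or channel count changes
        self.shortcut = (nn.Conv2d(in_planes, planes, kernel_size=1,
                                   stride=stride, bias=False)
                         if stride != 1 or in_planes != planes
                         else nn.Identity())

    def forward(self, x):
        # Pre-activation ordering: BN -> ReLU -> Conv, twice, then add the shortcut
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return out + self.shortcut(x)

# Example: widen a 16-plane block by k=4, giving 64 filters per conv layer
block = WideResidualBlock(in_planes=16, base_planes=16, k=4)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the widening happens purely through the channel dimension, which keeps the block shallow while increasing its representational capacity, consistent with the width-over-depth trade-off described above.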
