Abstract

The Recursive Convolutional Layer (RCL) is a module that wraps a recursive feedback loop around a convolutional layer (CL). The RCL has been proposed to address some of the shortcomings of Convolutional Neural Networks (CNNs), as its unfolding increases the depth of a network without increasing the number of weights. We investigate the “naïve” substitution of CLs with RCLs by comparing three base models (a 4-CL model, ResNet, and DenseNet) with their RCL-ized versions (the C-FRPN, R-ResNet, and R-DenseNet) on five image classification datasets. We find that this one-to-one replacement significantly improves the performance of the 4-CL model, but not that of ResNet or DenseNet. This leads us to investigate the implications of the RCL substitution for the 4-CL model, which reveals, among other properties, that RCLs are particularly efficient in shallow CNNs. We then revisit the first set of experiments by gradually transforming the 4-CL model and the C-FRPN into ResNet and R-ResNet, respectively, and find that the performance improvement is largely driven by the training regime, whereas any increase in depth negatively impacts the RCL-ized version. We conclude that replacing CLs with RCLs shows great potential for designing high-performance shallow CNNs.
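To make the weight-sharing idea concrete, the following is a minimal sketch of a recursive convolutional layer in PyTorch: a single convolution whose weights are reused across several unfolding steps, so effective depth grows while the parameter count stays that of one CL. The exact recurrence (here, re-injecting the layer input at every step) and the class and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a recursive convolutional layer (assumed form, not the
# paper's exact recurrence): one convolution applied repeatedly with shared
# weights, so unfolding adds depth without adding parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecursiveConvLayer(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, steps: int = 3):
        super().__init__()
        # A single convolution shared by every unfolding step.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Initial feed-forward pass through the wrapped convolution.
        h = F.relu(self.conv(x))
        # Recursive feedback: reuse the same weights at each unfolding step,
        # re-injecting the layer input (an assumed choice for illustration).
        for _ in range(self.steps):
            h = F.relu(self.conv(h) + x)
        return h


if __name__ == "__main__":
    layer = RecursiveConvLayer(channels=16, steps=3)
    out = layer(torch.randn(1, 16, 32, 32))
    print(out.shape)  # torch.Size([1, 16, 32, 32])
```

Unfolding `steps` more times deepens the computation graph, but the only learned weights remain those of the single wrapped convolution.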
