Abstract

Convolutional neural networks (CNNs) are increasingly used to achieve state-of-the-art performance on various plant phenotyping and agricultural tasks. A common problem when constructing such CNN models is over-parameterization, which can cause a model to overfit the training dataset. This problem is particularly relevant for plant datasets with limited variation and/or small sample sizes. Inspecting the loss and accuracy curves is a common way to detect overfitting in a CNN model, but it provides little insight into how the model could be improved. Several factors can contribute to the overfitting of a CNN model; in this paper, we aim to explain overfitting in a CNN classification model by analyzing the features learned at various depths of the model. We use three plant phenotyping datasets in our experimental studies. Our comparative analysis of the feature-map visualizations obtained from overfit and balanced models reveals that the image background often influences an overfit model’s behavior. Researchers with limited deep learning expertise often attempt to build deeper models in the hope of improving performance. Using Guided Backpropagation, we show how the pairwise similarity matrix between the visualizations of the features learned at different depths can be leveraged to pave a new way to potentially select a better CNN model by removing redundant layers.
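As a rough sketch of the layer-similarity idea described above (not the paper’s implementation): each layer yields one visualization array, in the paper produced by Guided Backpropagation on a trained CNN, and pairwise cosine similarity between the flattened arrays forms the similarity matrix used to spot near-redundant layers. The stand-in visualizations below are random arrays, purely for illustration.

```python
import numpy as np

def pairwise_similarity(visualizations):
    """Cosine similarity matrix between flattened per-layer visualizations.

    `visualizations` is a list of same-shaped arrays, one per layer.
    A pair of layers with similarity close to 1 has learned near-redundant
    features, making one of them a candidate for removal.
    """
    flat = [v.ravel().astype(float) for v in visualizations]
    n = len(flat)
    sim = np.eye(n)  # each layer is perfectly similar to itself
    for i in range(n):
        for j in range(i + 1, n):
            a, b = flat[i], flat[j]
            s = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            sim[i, j] = sim[j, i] = s  # matrix is symmetric
    return sim

# Stand-in visualizations for a hypothetical 4-layer model
# (random arrays; real ones would come from Guided Backpropagation).
rng = np.random.default_rng(0)
vis = [rng.standard_normal((8, 8)) for _ in range(4)]
S = pairwise_similarity(vis)
print(S.shape)  # (4, 4)
```

Off-diagonal entries near 1 would flag layer pairs whose learned features are visually redundant; how the paper thresholds or interprets these values is not specified in the abstract.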
