Abstract

Convolutional neural networks (CNN) are currently the most widely used tool for the classification of images, especially when such images have large within-group and small between-group variance. Thus, one of the main factors driving the development of CNN models is the creation of large, labelled computer vision datasets, some containing millions of images. Thanks to transfer learning, a technique that modifies a model trained on a primary task so that it can execute a secondary task, the adaptation of CNN models trained on such large datasets has rapidly gained popularity in many fields of science, geosciences included. However, the trade-off between two main components of the transfer-learning methodology for geoscience images is still unclear: the difference between the datasets used in the primary and secondary tasks, and the amount of data available for the primary task itself. We evaluate the performance of CNN models pretrained on different types of image datasets (specifically, dermatology, histology, and raw food) and fine-tuned to the task of petrographic thin-section image classification. Results show that CNN models pretrained on ImageNet achieve higher accuracy, owing to both the larger number of samples and the larger variability of samples in ImageNet compared with the other datasets evaluated.
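For concreteness, the sketch below illustrates the transfer-learning recipe described in the abstract: a CNN pretrained on a primary task (ImageNet classification) is adapted to a secondary task (thin-section classification) by replacing its classification head. The ResNet-50 backbone, the class count, and the optimizer settings are illustrative assumptions, not details taken from this study (PyTorch/torchvision).

    # Hedged sketch: adapt an ImageNet-pretrained CNN to a new classification task.
    # The backbone, class count, and learning rate are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_THIN_SECTION_CLASSES = 10  # hypothetical number of thin-section classes

    # Load a ResNet-50 with ImageNet weights (the "primary task").
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

    # Freeze the pretrained feature extractor so only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer for the "secondary task".
    model.fc = nn.Linear(model.fc.in_features, NUM_THIN_SECTION_CLASSES)

    # Train only the new head; unfreezing deeper layers would give full fine-tuning.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()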

Highlights

  • Although the roots of convolutional neural networks (CNN) emerged in the 1980s [1,2], they were only widely adopted in the 2010s, after a model by Krizhevsky et al. [3] won the 2012 ImageNet challenge [4]

  • We evaluate the results of using transfer learning to classify thin-section images using models primarily trained on ImageNet and on dermatology, histology, and raw-food image datasets

  • Appendix A (Tables A1 and A2) shows details of the hyperparameters used for training the CNN models on the primary task


Summary

Introduction

Although the roots of convolutional neural networks (CNN) emerged in the 1980s [1,2], they were only widely adopted in the 2010s, after a model by Krizhevsky et al. [3] won the 2012 ImageNet competition challenge [4] by a large margin [5] when competing against traditional machine-learning algorithms. ImageNet [6] is one of several computer vision datasets (e.g., [7,8,9]) that contributed to the development of CNN models, as well as to the standardization of model analysis. CNN models used for the classification of images are trained on datasets containing pairs of input data (images) and labels (classes), and they must learn a mapping from the input data to the desired labels. It is generally useful to have a large dataset for training CNNs for two main reasons: image data have a very high dimensionality, e.g., a red-green-blue (RGB) image of only 224 × 224 pixels already contains more than 150,000 values
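To make the image-label pairing concrete, the sketch below shows how a labelled image dataset could be loaded and a CNN trained to learn the mapping from images to class labels; the directory layout, image size, and batch size are assumptions for illustration only (PyTorch/torchvision), not the authors' setup.

    # Hedged sketch: train a CNN on (image, label) pairs.
    # Directory names, image size, and batch size are illustrative assumptions.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # RGB images are resized and converted to 3 x 224 x 224 tensors,
    # i.e. more than 150,000 values per image.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    # ImageFolder pairs each image with a label taken from its subdirectory name,
    # e.g. thin_sections/train/basalt/img001.png -> class "basalt".
    train_set = datasets.ImageFolder("thin_sections/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    def train_one_epoch(model, optimizer, criterion=nn.CrossEntropyLoss()):
        model.train()
        for images, labels in train_loader:      # (input, label) pairs
            optimizer.zero_grad()
            logits = model(images)               # learned mapping: image -> class scores
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()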

