Abstract

In this article, we consider a version of the challenging problem of learning from datasets whose size is too limited to allow generalisation beyond the training set. To address the challenge, we propose a transfer learning approach whereby the model is first trained on a synthetic dataset replicating features of the original objects. In this study, the objects were smartphone photographs of near-complete Roman terra sigillata pottery vessels from the collection of the Museum of London. Taking the replicated features from published profile drawings of pottery forms allowed the integration of expert knowledge into the process through our synthetic data generator. After this initial training, the model was fine-tuned with data from photographs of real vessels. We show, through exhaustive experiments across several popular deep learning architectures, different test priors, and considering the impact of the photograph viewpoint and excessive damage to the vessels, that the proposed hybrid approach enables the creation of classifiers with appropriate generalisation performance. This performance is significantly better than that of classifiers trained exclusively on the original data, which shows the promise of the approach for alleviating the fundamental issue of learning from small datasets.

Highlights

  • State-of-the-art deep learning models in artificial intelligence (AI), approaching or even surpassing humans’ object-classification capabilities, require vast training sets comprising millions of training data [3,4]

  • We show that pre-training on appropriately generated simulated datasets may lead to a drastic increase in accuracy of up to 20% relative to baseline models pre-trained on ImageNet [3] (Section 3)

  • We demonstrate the positive effect of pre-training with simulated photographs on the performance of all architectures

Introduction

State-of-the-art deep learning models in artificial intelligence (AI), approaching or even surpassing humans’ object-classification capabilities (such as [1,2]), require vast training sets comprising millions of training data [3,4]. The training-set sizes needed can be estimated from the classical Vapnik–Chervonenkis generalisation bound: with probability at least 1 − δ,

R ≤ R_train + √( ( h ( ln(2 N_train / h) + 1 ) − ln(δ/4) ) / N_train ),    (1)

where R_train is the empirical mean of the classifier’s performance on the training set, N_train is the size of the training set, 1 − δ is the probability that the expected behaviour of the classifier is within this bound, and h is a measure of the classifier’s complexity (the Vapnik–Chervonenkis, or VC, dimension). For a feed-forward network with Rectified Linear Unit (ReLU) neurons, the VC dimension of the whole network is larger than the VC dimension of a single neuron in the network. The latter, in turn, equals the number of its adjustable parameters. If (1) is employed to inform our data acquisition processes, ensuring that the model’s expected performance R does not exceed the value of R_train + 0.1 with probability at least 0.95 requires
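The kind of sample-size estimate described above can be sketched in a few lines of Python. This is an illustrative back-of-envelope calculation, not the paper's own computation: the VC dimension h = 1000 is a hypothetical value, and the excess-risk term follows the classical Vapnik–Chervonenkis bound quoted above.

```python
import math

def vc_bound_excess(n_train, h, delta):
    """Excess-risk term of the classical VC generalisation bound:
    with probability at least 1 - delta,
        R <= R_train + sqrt((h * (ln(2 n / h) + 1) - ln(delta / 4)) / n).
    """
    return math.sqrt(
        (h * (math.log(2 * n_train / h) + 1) - math.log(delta / 4)) / n_train
    )

# How large must the training set be so that the excess term stays
# below 0.1 with probability 0.95 (delta = 0.05), for a model with an
# illustrative VC dimension of h = 1000?  Double n until the bound holds.
h, delta, target = 1000, 0.05, 0.1
n = h
while vc_bound_excess(n, h, delta) > target:
    n *= 2
print(f"n_train = {n} suffices for an excess term below {target}")
# → n_train = 1024000 with these assumptions
```

With these (hypothetical) numbers the bound already demands a training set on the order of a million samples, which illustrates why datasets of the size considered in this study fall far short of classical generalisation guarantees.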
