Abstract

In many applications, one works with neural network models trained by someone else. For such pretrained models, one may not have access to training data or test data. Moreover, one may not know details about the model, e.g., the specifics of the training data, the loss function, the hyperparameter values, etc. Given one or many pretrained models, it is a challenge to say anything about the expected performance or quality of the models. Here, we address this challenge by providing a detailed meta-analysis of hundreds of publicly available pretrained models. We examine norm-based capacity control metrics as well as power-law-based metrics from the recently developed Theory of Heavy-Tailed Self-Regularization. We find that norm-based metrics correlate well with reported test accuracies for well-trained models, but that they often cannot distinguish well-trained from poorly trained models. We also find that power-law-based metrics can do much better: quantitatively better at discriminating among series of well-trained models with a given architecture, and qualitatively better at discriminating well-trained versus poorly trained models. These methods can be used to identify when a pretrained neural network has problems that cannot be detected simply by examining training/test accuracies.

Highlights

  • In many applications, one works with neural network models trained by someone else

  • After describing our overall approach, we study in detail three well-known computer vision (CV) architecture series

  • We look in detail at several variations of a popular natural language processing (NLP) architecture series, and we present results from a broader analysis of hundreds of pretrained deep neural network (DNN) models

Summary

Introduction

In many applications, one works with neural network models trained by someone else. For such pretrained models, one may not have access to training data or test data. We find that power-law-based metrics can do much better than norm-based metrics: quantitatively better at discriminating among series of well-trained models with a given architecture, and qualitatively better at discriminating well-trained versus poorly trained models. These methods can be used to identify when a pretrained neural network has problems that cannot be detected by examining training/test accuracies. Naïvely (but, in our experience, commonly among ML practitioners and ML theorists), one might assume that without access to training or testing data one can say absolutely nothing about the quality of an ML model. This may be true in worst-case theory, but models are used in practice, and there is a need for a practical theory to guide that practice. In particular, there is a need for a practical theory for pretrained models that can predict how, when, and why such models can be expected to perform well or poorly.
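
As a rough illustration of the layer-level metrics discussed above, the following sketch computes norm-based metrics (the log of the squared spectral norm and the log of the squared Frobenius norm of a layer weight matrix) together with a simple power-law exponent fit to the tail of the layer's eigenvalue spectrum. This is only a minimal sketch, not the analysis pipeline used in this work: the function name layer_metrics, the tail_frac parameter, and the Hill-type maximum-likelihood estimator for the exponent alpha are illustrative choices, and the full Heavy-Tailed Self-Regularization analysis involves a more careful choice of the fitting range.

    import numpy as np

    def layer_metrics(W, tail_frac=0.5):
        # Eigenvalues of the layer correlation matrix X = W^T W
        # (i.e., the squared singular values of W), sorted ascending.
        svals = np.linalg.svd(W, compute_uv=False)
        eigs = np.sort(svals ** 2)

        # Norm-based capacity metrics: log of the squared spectral norm
        # (largest eigenvalue of X) and log of the squared Frobenius norm
        # (sum of the eigenvalues of X).
        log_spectral_norm = np.log(eigs[-1])
        log_frobenius_norm = np.log(eigs.sum())

        # Power-law metric: fit the tail of the empirical spectral density,
        # rho(lambda) ~ lambda^(-alpha), with a simple continuous
        # maximum-likelihood (Hill-type) estimator over the top tail_frac
        # fraction of eigenvalues.  tail_frac is an illustrative choice,
        # not a value taken from the paper.
        tail = eigs[int(len(eigs) * (1.0 - tail_frac)):]
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / tail[0]))

        return {"log_spectral_norm": log_spectral_norm,
                "log_frobenius_norm": log_frobenius_norm,
                "alpha": alpha}

    # Example: a random matrix as a stand-in for one layer of a pretrained model.
    W = np.random.randn(512, 256) / np.sqrt(256)
    print(layer_metrics(W))

For a whole pretrained model, one would compute such per-layer values for every weight matrix and then aggregate them across layers; in the Heavy-Tailed Self-Regularization theory, smaller fitted exponents generally indicate better-trained layers.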
