Abstract

Machine Learning (ML) has made its way into a wide variety of advanced applications, where high accuracies can be achieved when these ML models are evaluated in the same context in which they were trained and validated. However, when these high-accuracy models are exposed to out-of-distribution points such as noisy inputs, their performance can degrade significantly. This degradation can be mitigated by recommending the ML model that retains the highest accuracy under such noisy inputs. For this, a mapping between the noise distribution at the input and the resulting accuracy needs to be obtained. However, evaluating this relationship directly is computationally expensive. To minimize this computational cost, we employ metalearning to predict this mapping; that is, the performance of different ML models is predicted given the distribution parameters of the input noise. Although metalearning is an established research field, performance predictions based on noise distribution parameters have not been accomplished before. Hence, this research focuses on predicting the per-class classification performance based on the distribution parameters of the input noise. Our approach is twofold. First, in order to gain insight into this noise-to-performance relationship, we analyse the per-class performance of well-established convolutional neural networks through our multi-level Monte Carlo simulation. Second, we employ metalearning to learn this relationship between the input noise distribution and the resulting per-class performance in a sample-efficient way by incorporating Latin Hypercube Sampling. The noise-performance analyses provide novel insights into how per-class performance degrades as increasing levels of noise are added to the input. Additionally, we show that metalearning is capable of accurately predicting the per-class performance based on the noise distribution parameters, and we quantify the relationship between the number of metasamples and the metaprediction accuracy. Consequently, this research enables future work to make accurate classifier recommendations in noisy environments.
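To make the described pipeline concrete, the sketch below shows one way such a noise-parameter-to-performance metadataset could be built and a metamodel fitted. It is a minimal illustration, not the paper's exact method: it assumes additive Gaussian noise parameterised by (mean, std), a generic classifier with a predict method, a held-out set X_val/y_val, and SciPy/scikit-learn utilities for Latin Hypercube Sampling and regression.

# Hypothetical sketch: map Gaussian-noise parameters (mean, std) to per-class
# accuracy via Monte Carlo evaluation, then fit a metamodel on the result.
# `classifier`, `X_val`, `y_val`, and the parameter ranges are illustrative
# assumptions, not the paper's exact experimental setup.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor

def per_class_accuracy(classifier, X, y, n_classes):
    # Fraction of correctly classified samples, computed separately per class.
    preds = classifier.predict(X)
    return np.array([
        np.mean(preds[y == c] == c) if np.any(y == c) else 0.0
        for c in range(n_classes)
    ])

def build_metadataset(classifier, X_val, y_val, n_classes,
                      n_meta_samples=50, mc_repeats=10, seed=0):
    # Latin Hypercube sample of noise parameters, each evaluated with several
    # Monte Carlo repetitions of the noise draw to average out randomness.
    rng = np.random.default_rng(seed)
    sampler = qmc.LatinHypercube(d=2, seed=seed)
    # Scale unit-cube samples to assumed ranges: mean in [-0.5, 0.5], std in [0, 1].
    params = qmc.scale(sampler.random(n_meta_samples), [-0.5, 0.0], [0.5, 1.0])
    accuracies = []
    for mean, std in params:
        runs = []
        for _ in range(mc_repeats):
            X_noisy = X_val + rng.normal(mean, std, size=X_val.shape)
            runs.append(per_class_accuracy(classifier, X_noisy, y_val, n_classes))
        accuracies.append(np.mean(runs, axis=0))
    return params, np.array(accuracies)

def fit_metamodel(params, accuracies):
    # Metamodel: predict per-class accuracy directly from the noise parameters,
    # avoiding further costly Monte Carlo evaluation at prediction time.
    return RandomForestRegressor(n_estimators=200).fit(params, accuracies)

In use, the fitted metamodel answers queries such as "what per-class accuracy would this classifier reach under noise with mean 0.1 and std 0.4" without re-running the simulation, which is the cost saving the abstract refers to.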
