Abstract

Most supervised audio recognition systems developed to date use a testing set drawn from the same categories as the training database. Such systems perform closed-set recognition (CSR). However, audio recognition in real applications can be more complicated: datasets can be dynamic, and novel categories can continually appear. In practice, conventional methods will therefore assign labels to these novel classes, and those labels are often incorrect. This work investigates audio open-set recognition (OSR) suitable for multi-class classification, with a rejection option for classes never seen by the system. A probabilistic calibration of a support vector machine classifier is formulated under the open-set scenario. To this end, we propose applying a threshold technique called the peak side ratio (PSR) to the audio recognition task. A candidate label is first examined by a Platt-calibrated support vector machine (SVM) to produce posterior probabilities. The PSR is then used to characterize the distribution of posterior probability values. This process determines a threshold for rejecting or accepting a particular class. Our proposed method is evaluated on different variations of open sets, using well-known metrics. Experimental results reveal that our proposed method outperforms previous OSR approaches over a wide range of openness values.
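The accept/reject pipeline described above (calibrated SVM posteriors, then a threshold decision) can be sketched in scikit-learn. This is a minimal illustration, not the paper's method: a plain maximum-posterior cutoff stands in for the PSR statistic, and the synthetic blob data, the threshold values, and the `predict_open_set` helper are all assumptions introduced here for demonstration.

```python
# Sketch of open-set rejection with a Platt-calibrated SVM.
# NOTE: a simple max-posterior threshold is used as a stand-in for the
# paper's peak side ratio (PSR); data and thresholds are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_blobs

# Two known classes for training (synthetic stand-in for audio features).
X, y = make_blobs(n_samples=200, centers=[[0, 0], [4, 4]], random_state=0)

# Platt scaling ("sigmoid") calibrates SVM scores into posteriors.
clf = CalibratedClassifierCV(SVC(kernel="rbf"), method="sigmoid", cv=3)
clf.fit(X, y)

def predict_open_set(model, X_test, threshold=0.9):
    """Return predicted labels, or -1 ("unknown") when the peak
    posterior probability does not clear the threshold."""
    proba = model.predict_proba(X_test)
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1
    return labels
```

Samples near a training cluster keep their class label, while a sufficiently strict threshold forces ambiguous samples into the rejected "unknown" bucket.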



Introduction

Closed-set recognition (CSR) systems are often governed by misleading assumptions: all testing and training data are taken from the same database, often with the same distribution. Under these assumptions, many machine learning algorithms have achieved significant success, performing empirical risk minimization very well thanks to their ability to handle large feature spaces and to identify outliers. These assumptions, however, do not reflect practical applications in which out-of-set samples may be encountered. Even if such a sample lies far from all training samples, it may be assigned to a known class with high probability; that is, the algorithm will be wrong, yet very confident.
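This overconfidence on out-of-set inputs is easy to reproduce. The following minimal sketch uses synthetic two-class data and a logistic regression (standing in for any probabilistic closed-set classifier; the data and the outlier coordinates are illustrative assumptions, not from the paper):

```python
# Sketch: a closed-set classifier reports a near-certain posterior for a
# sample that resembles neither training class. Synthetic, illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

# Two known classes clustered around (0, 0) and (4, 4).
X, y = make_blobs(n_samples=200, centers=[[0, 0], [4, 4]], random_state=0)
clf = LogisticRegression().fit(X, y)

# A sample far from both training clusters ("unknown" class).
outlier = np.array([[40.0, 40.0]])
proba = clf.predict_proba(outlier)
# With no rejection option, the model must pick a known class, and the
# posterior it reports for that class is extreme despite the mismatch.
```

The classifier has no mechanism to say "none of the above", which is precisely the gap that open-set recognition with a rejection option addresses.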

