Abstract

An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
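As an illustration of the underlying idea (not taken from the paper), the following Python sketch applies a sliding-window relabeling driven by a proximity matrix: each sample is reassigned to the class whose summed proximity to the labels observed in its window is largest, a mode-like filter generalized to nominal classes. The function name proximity_filter, the scoring rule, and the example matrix P are illustrative assumptions; the authors' exact nonlinear operation may differ.

```python
import numpy as np

def proximity_filter(labels, proximity, window=5):
    """Sliding-window relabeling of nominal class labels.

    For each position, the surrounding window of labels is collected and the
    output class is the one with the largest summed proximity to the labels
    observed in the window. Illustrative sketch only; not the paper's exact
    nonlinear operation.

    labels    : 1-D array of integer class indices
    proximity : (K, K) matrix, proximity[i, j] = proximity of class i to class j
    window    : odd window length
    """
    labels = np.asarray(labels)
    half = window // 2
    padded = np.pad(labels, half, mode="edge")   # replicate edge labels
    out = np.empty_like(labels)
    for i in range(labels.size):
        neighborhood = padded[i:i + window]
        # Score every candidate class by its total proximity to the window.
        scores = proximity[:, neighborhood].sum(axis=1)
        out[i] = np.argmax(scores)
    return out

# Hypothetical 3-class example with a hand-chosen proximity matrix.
P = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.5],
              [0.1, 0.5, 1.0]])
noisy = np.array([0, 0, 2, 0, 0, 1, 1, 0, 1, 1])
print(proximity_filter(noisy, P, window=3))
```

Because the class relationships enter only through the proximity matrix, no ordering of the classes is required, which mirrors the purely nominal setting described above.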



Introduction

Automatic classification of data is a standard problem in signal and image processing. In this context, the overall objective of classification is to categorize all data samples into different classes as accurately as possible. Powerful supervised classification methods based on neural networks, genetic algorithms, Bayesian methods, and Markov random fields have been developed (see, e.g., [1, 2, 3]). Even the most advanced methods of automatic classification are typically unable to provide a classification without misclassifications. The main reason for this is the inherent presence of noise in the data as well as the structure of the signals and images themselves. The higher the noise level (variance), the greater the probability of misclassification.
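As a toy illustration of this last point (not part of the original paper), the Monte Carlo sketch below estimates the misclassification rate of a simple midpoint-threshold classifier for two equally likely Gaussian classes; the estimated error grows with the noise standard deviation, consistent with the closed-form error Q(d/(2σ)) for this setting. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_rate(sigma, n=100_000, separation=2.0):
    """Monte Carlo misclassification rate of a midpoint-threshold classifier
    for two equally likely Gaussian classes with means 0 and `separation`."""
    true = rng.integers(0, 2, n)                      # true class labels
    x = true * separation + rng.normal(0.0, sigma, n) # noisy observations
    predicted = (x > separation / 2).astype(int)      # threshold at midpoint
    return np.mean(predicted != true)

for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: error ~ {error_rate(sigma):.3f}")
```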

