Abstract

In this paper, we propose a method that generates audio from spectroscopy data in order to discriminate between two classes of spectra. We first perform spectral pre-processing and feature extraction, followed by machine-learning-based dimensionality reduction. The extracted features are then mapped to the parameters of a sound synthesiser to generate audio samples, from which we compute statistical results and identify the descriptors most important for classifying the dataset. To optimise the process, we compare Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis on two real-life datasets, evaluating the performance of sonification as a method for discriminating data. FM synthesis provides higher subjective classification accuracy than AM synthesis. We then compare two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), to further optimise our sonification algorithm. Using FM synthesis as the sound synthesiser and PCA as the dimensionality reduction method, we obtain mean classification accuracies of 93.81% and 88.57% for the coffee dataset and the fruit puree dataset respectively. These results indicate that this spectroscopic analysis model provides relevant information on the spectral data and, most importantly, discriminates accurately between the two spectra, offering a complementary tool to supplement current methods.
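The pipeline described above (dimensionality reduction of the spectra with PCA, then mapping the reduced features to the parameters of an FM synthesiser) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter ranges (carrier 220–880 Hz, modulation index 1–10) and the fixed 2:1 carrier-to-modulator frequency ratio are assumptions chosen for the example.

```python
# Sketch of the sonification pipeline: spectra -> PCA -> FM parameters -> audio.
# Parameter ranges and the 2:1 carrier:modulator ratio are illustrative
# assumptions, not values taken from the paper.
import numpy as np

def pca_reduce(spectra, n_components=2):
    """Project mean-centred spectra onto their top principal components."""
    X = spectra - spectra.mean(axis=0)
    # SVD of the centred data matrix gives the principal axes in Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def fm_synthesise(features, duration=1.0, sr=8000):
    """Map a 2-D feature vector in [0, 1]^2 to an FM tone:
    y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    f1, f2 = features
    fc = 220.0 + 660.0 * f1        # carrier frequency, 220-880 Hz (assumed range)
    index = 1.0 + 9.0 * f2         # modulation index, 1-10 (assumed range)
    fm = fc / 2.0                  # fixed 2:1 carrier:modulator ratio (assumption)
    return np.sin(2.0 * np.pi * fc * t + index * np.sin(2.0 * np.pi * fm * t))

# Toy data standing in for two classes of spectra.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 3.0, 100)
class_a = rng.normal(0.0, 0.1, (10, 100)) + np.sin(grid)
class_b = rng.normal(0.0, 0.1, (10, 100)) + np.cos(grid)

scores = pca_reduce(np.vstack([class_a, class_b]))
# Rescale each component to [0, 1] so it maps cleanly onto synth parameters.
scores = (scores - scores.min(axis=0)) / np.ptp(scores, axis=0)
audio = fm_synthesise(scores[0])  # one audio clip per spectrum
```

Each spectrum thus becomes a short tone whose timbre reflects its position in the reduced feature space, which is what allows a listener to discriminate the two classes by ear.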

Highlights

  • Sonification of spectral data has been explored in a number of research projects

  • The Amplitude Modulation (AM) method is tested with ten stimuli and the Frequency Modulation (FM) method is tested with another ten stimuli

  • The results show that the mean accuracy obtained using the AM method of synthesis is 50.48% whereas the mean accuracy obtained using the FM method of synthesis is 93.81%



Introduction

Sonification of spectral data has been explored in a number of research projects. The human ear has the capability to detect audio patterns and to recognise timbres. This offers an opportunity to introduce variables into the sound so that a listener can discriminate samples by listening to audio clips that represent the underlying information and data. Cassidy et al. used formant-based vowel synthesis in their approach to sonifying hyperspectral colon tissue (Cassidy et al., 2004a; Cassidy et al., 2004b). They suggested there is potential for using vocal-like sounds in sonification, since humans are adept at identifying such sounds. The same FM synthesis method was used in sonifying optical coherence tomography data and images of human tissue, for the purpose of discriminating between human

