Abstract

We explore the properties of sound and human sound recognition as a means to enhance and accelerate visual-only data analysis methods. The aim of this work is to enable and improve the analysis of large data sets, data requiring rapid analysis, multi-dimensional data, and signal detection in data with a low signal-to-noise ratio. We present a prototype tool, StarSound, to sonify data such as astronomical transient light curves, spectra, and power spectra. Stereophonic sound is used to ‘visualise’ and localise the data under examination, and 3-D sound is discussed in conjunction with virtual reality technology as a means to enhance analysis efficiency and efficacy, including rapid data assessment and the training of machine learning software. In addition, we explore the use of higher-order harmonics as a means to examine multi-dimensional data sets simultaneously. Such an approach allows the data to be interpreted in a holistic manner and facilitates the discovery of previously unseen connections and relationships. Furthermore, we exploit the human brain's capability for selective or focused hearing, which enables the identification of desired signals in noisy data, or amidst similar or more significant signals. Finally, we provide research examples that benefit directly from data sonification. The work presented here aims to help tackle the challenges of the upcoming era of Big Data and to optimise, speed up, and expand aspects of data analysis that require human interaction.
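To make the mappings described above concrete, the following is a minimal sketch of light-curve sonification, not the StarSound tool itself: flux is mapped to pitch, stereo panning sweeps across the time series, and an optional second data dimension (labelled `colour` here purely for illustration) controls the strength of a second-harmonic overtone. All function names, parameter choices, and frequency ranges are assumptions for this example.

```python
import numpy as np
import wave

SAMPLE_RATE = 44100          # audio samples per second
NOTE_DURATION = 0.05         # seconds of audio per data point


def sonify_light_curve(flux, colour=None, f_min=220.0, f_max=880.0):
    """Map a light curve to a stereo waveform (illustrative sketch).

    Flux controls pitch; the optional 'colour' dimension controls the
    strength of a second-harmonic overtone; stereo pan sweeps from left
    to right across the time series.
    """
    flux = np.asarray(flux, dtype=float)
    # Normalise flux to [0, 1] and map it linearly onto the pitch range.
    norm = (flux - flux.min()) / (flux.ptp() or 1.0)
    freqs = f_min + norm * (f_max - f_min)

    # Harmonic weight from the extra dimension (defaults to no overtone).
    if colour is None:
        harm = np.zeros_like(norm)
    else:
        colour = np.asarray(colour, dtype=float)
        harm = (colour - colour.min()) / (colour.ptp() or 1.0)

    # Stereo pan position for each data point, fully left to fully right.
    pan = np.linspace(0.0, 1.0, len(flux))

    n = int(SAMPLE_RATE * NOTE_DURATION)
    t = np.arange(n) / SAMPLE_RATE
    left, right = [], []
    for f, h, p in zip(freqs, harm, pan):
        # Fundamental tone plus a second harmonic scaled by the extra dimension.
        tone = np.sin(2 * np.pi * f * t) + 0.5 * h * np.sin(2 * np.pi * 2 * f * t)
        tone *= np.hanning(n)                 # soften note onsets/offsets
        left.append(tone * (1.0 - p))
        right.append(tone * p)
    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    stereo /= np.abs(stereo).max() or 1.0     # avoid clipping
    return stereo


def write_wav(path, stereo):
    """Write a float stereo array in [-1, 1] as a 16-bit WAV file."""
    pcm = (stereo * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm.tobytes())


if __name__ == "__main__":
    # Toy 'transient': a Gaussian flare on a noisy baseline.
    time = np.linspace(0.0, 10.0, 200)
    flux = 1.0 + 5.0 * np.exp(-0.5 * ((time - 5.0) / 0.5) ** 2)
    flux += np.random.normal(0.0, 0.2, time.size)
    write_wav("transient.wav", sonify_light_curve(flux))
```

In this sketch the flare is heard as a rising and falling pitch sweeping across the stereo field, while noise produces small pitch fluctuations around the baseline; the actual StarSound mappings and parameters may differ.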
