Abstract
In the past decade, multimodal data analysis has gained importance, especially for including people with visual impairments in education and science dissemination. However, its application in scientific research is still limited by a lack of conclusive evidence on its robustness and performance. Several sonification tools have been developed, including xSonify, StarSound, STRAUSS, and sonoUno, which aim to enhance accessibility for both sighted and visually impaired users. This contribution presents sonoUno (a data visualization and sonification tool) applied to astronomical data from established databases such as SDSS, ASAS-SN, and Project CLEA, comparing the sonified output with the corresponding visual displays. We show that sonoUno reproduces the visual data displays and provides consistent auditory representations. Key features include the marking of absorption and emission lines (in both the visual and the sonified output) and multicolumn sonification, which facilitates spectral comparisons through sound. This consistency between visual and auditory representations makes multimodal displays more viable for use in research, enabling greater inclusion in astronomical investigation. The study suggests that sonoUno could be broadly adopted in scientific research and used to develop multimodal training courses and improve data analysis methods.
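To make the core idea concrete, the sketch below illustrates the kind of parameter mapping that sonification tools of this family perform: each flux sample of a spectrum is mapped to the pitch of a short tone, so an emission line is heard as a rising-and-falling sweep. This is a minimal illustration using only NumPy and the standard library, not sonoUno's actual API; the function name, parameters, and linear pitch mapping are assumptions for illustration.

```python
import numpy as np
import wave

def sonify_flux(flux, out_path="spectrum.wav",
                f_min=220.0, f_max=880.0, note_dur=0.05, rate=44100):
    """Map each flux sample to a short tone: higher flux -> higher pitch.

    Illustrative sketch only; real tools expose configurable mappings
    (scale, instrument, tempo) rather than this fixed linear one.
    """
    flux = np.asarray(flux, dtype=float)
    # Normalize flux to [0, 1], guarding against a flat spectrum.
    span = flux.max() - flux.min()
    norm = (flux - flux.min()) / span if span > 0 else np.zeros_like(flux)
    # Linear mapping from normalized flux to frequency in Hz.
    freqs = f_min + norm * (f_max - f_min)
    # Synthesize one short sine tone per data point and concatenate them.
    t = np.linspace(0.0, note_dur, int(rate * note_dur), endpoint=False)
    signal = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    # Scale to 16-bit PCM and write a mono WAV file.
    pcm = np.int16(signal / np.abs(signal).max() * 32767)
    with wave.open(out_path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

# Example: a toy spectrum with a single emission line near 6563 Angstrom.
wl = np.linspace(4000, 7000, 300)
fl = 1.0 + 5.0 * np.exp(-((wl - 6563) / 20.0) ** 2)
sonify_flux(fl)
```

Multicolumn sonification, as described in the abstract, extends this idea by rendering several data columns at once (for example, two spectra on different timbres or stereo channels) so that they can be compared by ear.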