Abstract
Auditory signals can be described quantitatively by a set of measurable acoustic features (e.g., zero crossing rate, attack slope) or qualitatively with adjectives such as whooping, thunderous, and melodic, or in comparative terms such as different and louder. Listeners can rate the similarity of signals and assign qualitative descriptions relatively easily; however, most lack the ability to articulate the quantitative basis of these judgments. Because qualitative differences between signals typically correspond to measurable differences in acoustic features, signal similarity ratings can be used to recover the acoustic features that define signal similarity. In the present study, subjects were given pairs of signals consisting of either two different small unmanned aircraft systems (SUASs) or an SUAS and a non-SUAS source, and were asked to rate similarity on a scale from non-similar to highly similar. Using these similarity ratings together with acoustic difference features, machine learning algorithms were trained to predict human responses. These algorithms predict the position of a withheld SUAS signal within the similarity feature space. The acoustic difference features most important to prediction are extracted from the algorithms via feature importance and sensitivity analysis techniques. These extracted features may be interpreted as prominent information impacting the human perception of signal similarity.
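As a minimal sketch of the pipeline the abstract describes (not the authors' code), the example below computes a few per-signal acoustic features, forms pairwise absolute difference features, fits a regressor on similarity ratings, and reads off feature importances. The specific features (zero crossing rate, spectral centroid, RMS), the random-forest model, the synthetic test signals, and the placeholder ratings are all illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def acoustic_features(y, sr):
    """Per-signal features; these three stand in for the study's
    larger feature set (assumption)."""
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    rms = librosa.feature.rms(y=y).mean()
    return np.array([zcr, centroid, rms])

sr = 22050
rng = np.random.default_rng(0)
# Synthetic 1 s tones stand in for SUAS/non-SUAS recordings (assumption:
# the real study uses recorded audio, not sinusoids).
signals = [np.sin(2 * np.pi * f * np.arange(sr) / sr)
           + 0.05 * rng.standard_normal(sr)
           for f in (120, 150, 400, 410)]
feats = np.stack([acoustic_features(y, sr) for y in signals])

# Pairwise absolute feature differences, paired with placeholder listener
# similarity ratings on a 0-1 (non-similar to highly similar) scale.
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
X = np.stack([np.abs(feats[i] - feats[j]) for i, j in pairs])
ratings = np.array([0.8, 0.2, 0.2, 0.3, 0.3, 0.9])  # placeholder values

# Fit a regressor on difference features, then inspect which differences
# the model leans on, analogous to the feature-importance step described.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, ratings)
for name, imp in zip(["d_zcr", "d_centroid", "d_rms"],
                     model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```

Predicting a withheld signal's position in the similarity feature space would amount to holding out all pairs involving one signal during training and applying the fitted model to that signal's difference features.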