Abstract
Explainable artificial intelligence is gaining traction within the machine learning community and its application domains at large, motivated by the need to explain and interpret large-scale machine analysis of experimental datasets. We will present cognitive sampling as a new way to implement explainable AI for acoustical signal processing. In particular, methods based on geometric signal processing and sparse sensing will be harnessed with machine cognition to interpret, classify, and predict information autonomously from large-scale acoustic datasets spanning a wide variety of applications. We will compare the performance of traditional supervised and semi-supervised learning architectures, such as deep learning and ensemble approaches, with that of unsupervised learning networks. We will also present preliminary research on implementing cognitive sampling in machine-directed inverse problem-solving techniques such as autoencoders. The end goal is to discover efficient data encodings that enable hitherto unforeseen feature spaces using optimal or close-to-optimal sampling strategies. Specific applications will include acoustical environmental sensing tasks involving spectral feature generation and interpretation, such as sonar signal processing and undersea multipath channel sensing, as well as feature extraction from complex melodic structures in Indian classical music. [Work funded partially by ONR under Grants N00014-19-1-2436, N00014-19-1-2609, N00174-20-1-0016 and N00014-20-1-2626.]
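To make the autoencoder idea concrete, the sketch below shows a minimal dense autoencoder that learns a compact encoding of spectral feature vectors. This is an illustrative example only, not the authors' implementation: the layer sizes, latent dimension, and the synthetic "spectrum" data are placeholder assumptions.

```python
# Minimal sketch: a dense autoencoder that compresses spectral frames
# (e.g., magnitude-spectrogram columns) into a low-dimensional encoding.
# All sizes and the random training data are illustrative assumptions.
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    def __init__(self, n_bins: int = 256, n_latent: int = 16):
        super().__init__()
        # Encoder compresses an n_bins spectral frame into n_latent codes.
        self.encoder = nn.Sequential(
            nn.Linear(n_bins, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        # Decoder reconstructs the frame from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_bins),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training loop on random stand-in spectra to show the mechanics.
model = SpectralAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
frames = torch.rand(512, 256)  # placeholder for spectral feature frames
for epoch in range(5):
    recon, _ = model(frames)
    loss = loss_fn(recon, frames)   # reconstruction error drives encoding
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The learned latent codes (`z`) serve as the "efficient data encodings" referred to above; in practice these would be inspected or constrained (e.g., via sparsity) to support the explainability goals described in the abstract.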