Abstract
Edge computing integrated into the built environment will enable the development of smart acoustic spaces, in which the flexural vibrations of the walls and screens that define a space may be employed to generate and monitor sound and to serve as unobtrusive and convenient audio interfaces. Integrating data processing and neural decision making on compact, low-power embedded hardware-software systems will make it possible to mediate real-time, personalized acoustic interactions between a user and their environment without requiring a connection to the cloud, thereby preserving privacy and security. In this presentation, we show how embedded machine learning (ML) models combined with vibroacoustic control and monitoring of elastic panels can perform tasks such as speech recognition, sound source localization, and the detection of acoustic signatures of specific events such as a fall or other health emergency. The selection of vibroacoustic features, the associated signal processing requirements, and the computational resources utilized by the ML models for various acoustic tasks will be discussed. We also highlight how the distributed modal response of extended flat panels can simplify the sensing and signal processing requirements in such systems.
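To make the pipeline described above concrete, the following is a minimal sketch (not the authors' implementation) of how vibroacoustic features might be extracted from a panel vibration signal and scored by a lightweight classifier of the kind that could run on a low-power embedded target. The sample rate, band layout, event classes, and network sizes are illustrative assumptions only.

```python
# Sketch: log band-energy features from a panel vibration frame, scored by a
# tiny fully-connected network. All parameters and class labels are assumed.
import numpy as np
from scipy.signal import spectrogram

FS = 16_000          # assumed accelerometer/piezo sample rate (Hz)
N_BANDS = 16         # number of coarse spectral bands used as features
CLASSES = ["speech", "knock", "fall", "background"]  # illustrative event set

def band_energy_features(x, fs=FS, n_bands=N_BANDS):
    """Compute log band-energy features from a short vibration frame."""
    f, t, S = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    edges = np.linspace(0, len(f), n_bands + 1, dtype=int)
    bands = np.array([S[lo:hi].sum(axis=0)
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return np.log(bands.mean(axis=1) + 1e-12)  # one n_bands-long feature vector

class TinyClassifier:
    """Single hidden-layer network; weights would come from offline training."""
    def __init__(self, n_in=N_BANDS, n_hidden=8, n_out=len(CLASSES), seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.standard_normal((n_out, n_hidden)) * 0.1
        self.b2 = np.zeros(n_out)

    def predict(self, feats):
        h = np.maximum(self.W1 @ feats + self.b1, 0.0)  # ReLU hidden layer
        logits = self.W2 @ h + self.b2
        return CLASSES[int(np.argmax(logits))]

if __name__ == "__main__":
    frame = np.random.randn(FS)                   # stand-in for 1 s of panel vibration
    feats = band_energy_features(frame)
    print(TinyClassifier().predict(feats))        # untrained weights: output is arbitrary
```

In a deployed system the feature extraction and inference would typically be quantized and ported to a microcontroller framework, with the classifier weights trained offline on recorded panel responses; the sketch only illustrates the structure of the signal path.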