Abstract
Current developments in self-driving cars have increased interest in autonomous shared taxicabs. While most self-driving technologies focus on the outside environment, there is also a need to provide in-vehicle intelligence (e.g., to detect health and safety issues related to the car occupants). Set within an R&D project focused on in-vehicle cockpit intelligence, the research presented in this paper addresses an unsupervised Acoustic Anomaly Detection (AAD) task. Since data are nonexistent in this domain, we first design an in-vehicle sound event data simulator that realistically mixes background audio (recorded during car driving trips) with normal (e.g., people talking, radio on) and abnormal (e.g., people arguing, coughing) event sounds, allowing the generation of three synthetic in-vehicle sound datasets. Then, we explore two main sound feature extraction methods (based on a combination of three audio features and on mel frequency energy coefficients) and propose a novel Long Short-Term Memory Autoencoder (LSTM-AE) deep learning architecture for in-vehicle sound anomaly detection. The proposed LSTM-AE achieved competitive results when compared with two state-of-the-art methods, namely a dense Autoencoder (AE) and a two-stage clustering approach.
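The abstract names mel frequency energy coefficients as one of the two feature extraction routes. As a rough illustration only, the Python sketch below computes log mel-band energies with librosa; the sample rate, frame length, hop size, and number of mel bands are assumed values for illustration, not the settings used in the paper.

```python
# Hedged sketch: log mel-band energy features for an audio clip.
# All parameter values below are illustrative assumptions.
import numpy as np
import librosa

def mel_energy_features(path, sr=16000, n_mels=64, frame_len=1024, hop=512):
    """Return a (frames, n_mels) matrix of log mel-band energies."""
    y, _ = librosa.load(path, sr=sr, mono=True)        # resample to a common rate
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=frame_len, hop_length=hop, n_mels=n_mels, power=2.0)
    log_mel = librosa.power_to_db(mel, ref=np.max)     # compress dynamic range
    return log_mel.T                                   # time-major, ready for an LSTM
```

Framing the features time-major (frames as the sequence axis) is what lets a recurrent model such as an LSTM-AE consume them directly.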
Highlights
The recent revolution in smart mobility and autonomous cars is producing new business and research opportunities, namely in terms of self-driving shared taxicabs.
It is highly relevant to monitor what happens inside the vehicle cockpit by developing an automatic computational system capable of processing several in-vehicle sensors, replacing the driver's role in monitoring and controlling health, safety, and comfort concerns.
This paper focuses on Autoencoders (AE), a deep learning neural architecture that has become popular for Acoustic Anomaly Detection (AAD); a minimal sketch of such a model follows these highlights.
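As referenced in the highlight above, here is a minimal LSTM Autoencoder sketch in Keras. The layer sizes, latent dimension, and mean-squared-error loss are illustrative assumptions of the general technique, not the specific architecture reported in the paper.

```python
# Minimal LSTM Autoencoder sketch; sizes are assumptions, not the
# paper's reported architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lstm_ae(timesteps, n_features, latent_dim=32):
    inputs = layers.Input(shape=(timesteps, n_features))
    # Encoder: compress the input sequence into a fixed-size latent vector.
    encoded = layers.LSTM(latent_dim)(inputs)
    # Decoder: repeat the latent vector and reconstruct the sequence.
    repeated = layers.RepeatVector(timesteps)(encoded)
    decoded = layers.LSTM(latent_dim, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```

In the usual AE-based anomaly detection setup, the model is trained on normal sounds only; at test time, a clip whose reconstruction error exceeds a threshold is flagged as a candidate acoustic anomaly.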
Summary
The recent revolution in smart mobility and autonomous cars is producing new business and research opportunities, namely in terms of self-driving shared taxicabs. A key change of a self-driving shared taxicab is the absence of a designated company driver. Under this context, it is highly relevant to monitor what happens inside the vehicle cockpit by developing an automatic computational system capable of processing several in-vehicle sensors (e.g., sound, image, air particles) in order to replace the driver's role in monitoring and controlling health, safety, and comfort concerns. Such a system should be able to detect abnormal events occurring inside the vehicle (e.g., heart attack of a single occupant, fight between two passengers). The proposed sound event data simulator is configured through several parameters; for instance, the fifth parameter (common_sr) defines the final sample rate of the generated sound mixture (in Hz), as sketched below.
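To make the simulator idea concrete, the sketch below overlays one event sound on a background driving recording after resampling both to a common sample rate. Only common_sr is described in the text; the offset_s and snr_db parameters, and the function name mix_event, are hypothetical additions for illustration.

```python
# Hedged sketch of the mixing idea behind the simulator: overlay an
# event sound on an in-vehicle background recording. Only common_sr is
# named in the text; offset_s and snr_db are illustrative assumptions.
import numpy as np
import librosa

def mix_event(background_path, event_path, common_sr=16000,
              offset_s=2.0, snr_db=5.0):
    """Return a mono mixture of background + event, resampled to common_sr."""
    bg, _ = librosa.load(background_path, sr=common_sr, mono=True)
    ev, _ = librosa.load(event_path, sr=common_sr, mono=True)
    start = int(offset_s * common_sr)
    ev = ev[: max(0, len(bg) - start)]             # clip event to fit background
    # Scale the event to the requested event-to-background level ratio.
    bg_rms = np.sqrt(np.mean(bg ** 2)) + 1e-12
    ev_rms = np.sqrt(np.mean(ev ** 2)) + 1e-12
    gain = (bg_rms / ev_rms) * 10 ** (snr_db / 20.0)
    mix = bg.copy()
    mix[start:start + len(ev)] += gain * ev
    return mix / max(1.0, np.max(np.abs(mix)))     # normalize to avoid clipping
```

Loading both signals at sr=common_sr lets librosa handle the resampling, so the generated mixture always comes out at the configured final sample rate.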