Abstract

In this paper, we develop a technique that integrates acoustic detection with artificial intelligence, giving robotic arms in production environments the capacity to monitor, diagnose, and predict anomalies. We combine deep learning with audio signals collected by a microphone array, and use this sound data to predict the health status of equipment and to prevent failures. Our approach centers on converting and analyzing environmental acoustic data. The distinctive contribution of this research lies in its learning and inference strategy: by harnessing large-scale audio data, we build AI models that automatically parse input data to recognize audio features, and we employ device scheduling operations for comparative detection, endowing the equipment with anomaly diagnosis and prediction capabilities. We use commercially available microphones for sound collection, and apply voice activity detection and spectrogram reduction for signal preprocessing before constructing a dataset. Finally, we use a Gaussian mixture model (GMM) for unsupervised learning, estimating the probability density of the data and computing GMM scores for individual data points, which allows us to judge whether the sound signals collected by the microphone are abnormal.
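As an illustration of the GMM scoring step described above, the following is a minimal sketch, assuming scikit-learn's GaussianMixture and synthetic feature vectors standing in for preprocessed spectrogram frames; the component count, threshold, and variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for feature vectors extracted from "normal" machine
# sound (in practice, frames of a spectrogram captured by the microphone
# array after voice activity detection and spectrogram reduction).
rng = np.random.default_rng(0)
normal_features = rng.normal(loc=0.0, scale=1.0, size=(2000, 20))

# Fit a GMM to estimate the probability density of normal operating sound.
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(normal_features)

# Choose an anomaly threshold from the GMM scores (per-frame log-likelihoods)
# of the training data; the 1st percentile here is an arbitrary example.
threshold = np.percentile(gmm.score_samples(normal_features), 1)

# Score new frames: a low GMM score suggests an abnormal sound.
test_normal = rng.normal(loc=0.0, scale=1.0, size=(5, 20))
test_anomalous = rng.normal(loc=4.0, scale=1.0, size=(5, 20))
for name, frames in [("normal", test_normal), ("anomalous", test_anomalous)]:
    scores = gmm.score_samples(frames)
    print(name, scores.round(1), "flagged:", scores < threshold)
```

In this sketch, frames whose log-likelihood under the fitted density falls below the threshold are flagged as potential anomalies, mirroring the paper's idea of judging abnormality from GMM scores.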
