Abstract

We present in this paper a wireless acoustic sensor network (WASN) that recognizes a set of sound events, or classes, from urban environments. The nodes of the WASN are Raspberry Pi devices that not only record the ambient sound but also process and recognize sound events by means of a deep convolutional neural network (CNN). To our knowledge, this is the first WASN to run a CNN classifier on low-cost devices. Moreover, the network has been designed according to the open FIWARE standard, so the whole system can be replicated without proprietary software or specific hardware. Although our low-cost WASN achieves accuracy similar to that of other WASNs that perform the classification through cloud or edge computing, it faces the high computational load required by deep learning algorithms, even at inference time. Moreover, WASNs are designed to monitor the environment constantly, which in our case means constantly classifying the "background sound". We propose to introduce a pre-detection stage prior to the CNN classification in order to reduce power consumption. In our case, the WASN is placed on a large avenue where the "background sound" event is the usual traffic noise, and we want to detect other sound events such as horns, sirens, or very loud sounds. We have designed a pre-detection stage that activates the classifier only when an event different from traffic is likely occurring. For this purpose, two parameters based on the sound pressure level are computed and compared with two corresponding thresholds. Experiments were carried out with the proposed WASN in the city of Valencia, achieving a six-fold reduction in the Raspberry Pi's CPU usage thanks to the pre-detection stage.
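To make the pre-detection idea concrete, the following is a minimal sketch of such a gate. The specific SPL-based parameters, the thresholds, and the rule for combining them are assumptions for illustration only (here: the frame's RMS level and its peak level, firing when either exceeds its threshold); the abstract does not specify which two parameters the paper actually uses.

```python
import math

def frame_rms_db(samples, ref=1.0):
    """RMS level of one audio frame, in dB relative to `ref`."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12) / ref)

def frame_peak_db(samples, ref=1.0):
    """Peak absolute level of one audio frame, in dB relative to `ref`."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(max(peak, 1e-12) / ref)

def pre_detect(samples, rms_thr_db, peak_thr_db):
    """Gate the CNN: return True (wake the classifier) only when a
    level-based parameter suggests the frame is not ordinary traffic
    noise. The combination rule (OR) is an assumption."""
    return (frame_rms_db(samples) > rms_thr_db
            or frame_peak_db(samples) > peak_thr_db)
```

A node would run `pre_detect` on every incoming frame and invoke the CNN only on frames that pass, which is what keeps the average CPU load low.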
