Abstract
The issues presented in this work concern the possibility of detecting the location of a sound source, and the effectiveness of doing so under practical conditions, using recordings made with an artificial head and processed with artificial neural networks. The use of an artificial head and artificial neural networks was motivated by an attempt to model human auditory perception with available computer technology. The work attempts to capture, through digital signal processing and machine learning, the features that humans use to localize sound sources. Artificial neural networks are commonly used, among other applications, in image recognition, where an algorithm classifies objects captured by a camera. Machine learning algorithms also make it possible to implement self-learning speech or text recognition models. As part of the work, it was decided to localize sound for a set of fifteen different source positions relative to the head. To avoid duplicating symmetric measurements, the sound source was placed to the left of the artificial head. Python was used for signal processing and for preparing the artificial neural network model, together with the NumPy numerical library, the SciPy signal processing library, TensorFlow, and Keras.
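The abstract names the toolchain (Python, NumPy, SciPy, TensorFlow, Keras) and the task (classifying fifteen source positions) but not the network architecture or input features. The following is a minimal, hypothetical sketch of how such a Keras classifier could be set up; the feature dimension and layer sizes are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch: a dense Keras classifier mapping a binaural feature
# vector extracted from an artificial-head recording to one of fifteen
# source positions. Feature dimension and layer sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow import keras

NUM_POSITIONS = 15   # fifteen source positions to the left of the artificial head
FEATURE_DIM = 128    # placeholder size of the per-recording feature vector

model = keras.Sequential([
    keras.layers.Input(shape=(FEATURE_DIM,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(NUM_POSITIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for features computed with NumPy/SciPy from recordings.
x_train = np.random.rand(600, FEATURE_DIM).astype("float32")
y_train = np.random.randint(0, NUM_POSITIONS, size=600)
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
```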