Abstract

Auditory deviancy detection is an important component of human acoustic perception: it allows humans to perceive novel stimuli regardless of the processes engaged in the focal task. An artificial system imitating this human auditory awareness mechanism would greatly improve the efficiency of machine perception in complex environments. In this paper, we propose a computational model that mimics this human auditory perception mechanism as a step toward that goal. The proposed model consists of two modules: temporal deviancy detection and frequency saliency detection. Combining the information issued by these two modules, the model generates an image indicator that identifies the deviant salient sound eliciting an auditory attention shift. Sounds recorded from real environments are used to verify the advantages of the proposed model. The results show that the model can point out the deviant salient sound in a mixed sound clip with acceptable robustness and accuracy. Furthermore, a more comprehensive experiment illustrates that the proposed model can effectively simulate the human auditory attention mechanism.
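As a rough illustration of the two-module idea described above, the sketch below fuses a temporal deviancy cue with a frequency saliency cue on a spectrogram. All design choices here (deviation from a running mean over recent frames for temporal deviancy, excess over the per-frame spectral mean for frequency saliency, multiplicative fusion) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def deviancy_saliency_map(spectrogram, history=8):
    """Fuse temporal deviancy and frequency saliency cues.

    This is a hypothetical sketch of the abstract's two modules;
    the concrete operators are assumptions, not the paper's model.
    `spectrogram` is a (n_freq, n_time) magnitude array.
    """
    n_freq, n_time = spectrogram.shape

    # Temporal deviancy module: deviation of each frame from the
    # mean of the preceding `history` frames.
    deviancy = np.zeros_like(spectrogram, dtype=float)
    for t in range(1, n_time):
        start = max(0, t - history)
        baseline = spectrogram[:, start:t].mean(axis=1)
        deviancy[:, t] = np.abs(spectrogram[:, t] - baseline)

    # Frequency saliency module: how much each bin exceeds its
    # frame's mean energy (a crude contrast along frequency).
    saliency = np.clip(
        spectrogram - spectrogram.mean(axis=0, keepdims=True), 0, None
    )

    # Fusion: multiply the cues so only regions that are both
    # temporally deviant and spectrally salient survive, then
    # normalize to [0, 1] as an image-like indicator.
    fused = deviancy * saliency
    if fused.max() > 0:
        fused /= fused.max()
    return fused

# Toy usage: uniform background with a single deviant tone burst.
spec = np.ones((16, 32))
spec[5, 20] = 10.0
indicator = deviancy_saliency_map(spec)
peak = np.unravel_index(np.argmax(indicator), indicator.shape)
# The indicator peaks at the burst's frequency bin and time frame.
```

Multiplicative fusion is just one plausible choice; additive or learned weighting of the two cues would fit the same two-module structure.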

