Abstract

Studies show that the brain's processing of incoming speech sounds is grounded at a lower level in acoustic similarity. Previous theoretical models of speech sound processing posit that higher-level cognitive processes play little role in the perception and successful processing of speech sounds. The present study investigates whether such models can be effectively extended to incorporate influences from higher-level cognitive cues, such as voluntary attention to particular acoustic dimensions of the speech stimuli. We qualitatively investigate the relationship between the efficiency of language processing and higher-level perceptual mechanisms through computational simulation of speech perception, together with accuracy and reaction-time measurements. The experimental results refine the predictions of existing statistical signal-processing and perception models. Our findings reveal that acoustic similarity among speech sounds alone does not accurately predict acquisition outcomes, and that natural language learning can be enhanced by effectively exploiting auxiliary cognitive cues during speech processing.
