Abstract
Pulling a face to emphasize a spoken point is not just a human prerogative. The perception of human speech can be enhanced by a combination of auditory and visual signals [1,2]. Animals sometimes accompany their vocalizations with distinctive body postures and facial expressions [3], although it is not known whether they perceive these auditory and visual signals as a unified whole. Here we use a 'preferential looking' paradigm to show that rhesus monkeys (Macaca mulatta), a species that communicates by means of elaborate facial and vocal expressions [4,5,6,7], can recognize the correspondence between the auditory and visual components of their calls. This crossmodal identification of vocal signals by a primate might represent an evolutionary precursor to humans' ability to match spoken words with facial articulation.