Abstract
Responses of Atlantic bottle-nosed dolphins (Tursiops truncatus) and of humans were collected and analyzed in order to determine the features required for recognition and discrimination of signs (hand signals) in an artificial gestural communication system. Subjects responded to systematically modified signs in which sign components were contrasted for competitive feature salience. One dolphin, with 6 years of training in the language, was shown these modified signs intermixed with normal signs in a linguistic, sentence-comprehension context. A second dolphin, familiar with action signs only and with no sentence-comprehension training, served as a nonlingual control. Human subjects were tested in two parallel tasks. The dolphin with sign-language experience attended to (in order of importance) location, completed temporal pattern, gross motor motion, and direction of motion as salient features. Fine motor motion, hand shape, and hand orientation were less salient. The non-sign-language dolphin attended to all sign features equally and was unaffected by temporal pattern changes. Humans tested in a linguistic context attended to (in order) gross motor motion, location, and an interaction of fine motor motion, hand shape, and hand orientation. Direction of motion and temporal pattern were not salient. Nonlinguistic-context humans attended to all sign features equally and were unaffected by temporal pattern changes. Results indicate that language experience and/or testing context affect feature salience for sign recognition. Results also support the notion that there exists a higher order (general purpose) temporal pattern processor in dolphins in which visual as well as acoustic input is processed.

Two Atlantic bottle-nosed dolphins at the Kewalo Basin Marine Mammal Laboratory have been the subjects of ongoing research in cognition and sentence comprehension in two arbitrary and artificial languages (Herman, Richards, & Wolz, 1984).
These two dolphins came from the west coast of Florida (they were caught within one mile of each other), were approximately the same age when caught (2 years, as judged from weight and length), and were subjected to the same procedures and treatments during the 8 years that they have been at the research laboratory. The most distinguishing difference in treatment between the two animals has been the nature of the artificial language in which each has been trained. One language is acoustic and is based on computer-generated sounds. The second language is visual and is based on hand