Since 1985, a community of wild Atlantic spotted dolphins (Stenella frontalis) has been observed underwater in the Bahamas. A human-worn, acoustic underwater two-way communication interface was developed and deployed from 2013 to 2016. Dolphins were exposed to an acoustic, referentially based wearable underwater computer/interface. A model/rival system was used with dolphins and human participants during in-water sessions. Artificial and natural objects were labeled with computer-generated sounds. Female juvenile spotted dolphins dominated the activity. Group size averaged seven dolphins, with an average session duration of 37 minutes over 58 sessions. Of 243 video audio imitations and 56 Cetacean Hearing Augmentation Telemetry (CHAT) audio imitations, six potential response types were documented and measured. Stand-alone vocal contour mimics and Frequency Modulated Contours were the most common imitations. Of the 191 non-stand-alone vocal responses produced within 5 sec of a computer-generated sound playing, 114 (59.7%) were judged as partial accurate matches, 3 (1.57%) were judged as non-matching partial imitations of a computer-generated sound, 67 (35.08%) were signature whistles, and 7 (3.67%) were either non-signature whistle vocalizations or mimics of the start or end tones. Thus, the majority of vocalizations produced by the dolphins within five seconds of a computer-generated sound were partial accurate imitations of the computer-generated sound played. Dolphins demonstrated both immediate and delayed vocal imitation and flexible attempts at imitation, but did not show signs of a functional understanding of object labels. Atlantic spotted dolphins showed vocal flexibility in reaction to humans broadcasting computer-generated sounds.