Abstract
In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretized spaces from which discrete units and patterns can emerge. They need to be dissimilar from—but comparable with—the vocal tract, in order to minimize interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach that uses the Leap Motion, an infrared controller that can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyze. The experimental interface was built using free, and mostly open-source, libraries in Python. We provide our source code for other researchers as open source.
Highlights
In evolutionary linguistics, artificial language learning (ALL) experiments are becoming increasingly commonplace (Scott-Phillips & Kirby, 2010)
These experiments have focused on the emergence of structure on a morphosyntactic level using artificial minilanguages composed from small discrete building blocks (e.g., Kirby et al 2008)
It does not make sense to initially construct artificial signals from discrete building blocks, as it is the emergence of discrete building blocks that is of interest
Summary
Artificial language learning (ALL) experiments are becoming increasingly commonplace (Scott-Phillips & Kirby, 2010). Experiments investigating the emergence of combinatorial structure are difficult to design with graphical paradigms, as participants are very familiar with presenting content graphically, both through written language and through creating iconic representations via drawing. The use of hand placement to generate precise auditory feedback is not something that occurs in natural language. Both visual and acoustic signaling may help contribute to the ecological validity of experiments using the framework. Different experiments need to be structured in different ways, but for the most part, individual learning, iterated learning, and communication experiments have a finite number of possible parts: they usually need a window to create or reproduce signals, one to recognize signals, and one to provide feedback. The application also allows the user to modify the playback rate (see Appendix A.4).
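To illustrate how an interface of this kind can map continuous manual movement onto sound, the sketch below converts a hand's vertical position (as a Leap Motion-style millimeter coordinate) into a pitch. This is a hypothetical minimal example, not the authors' implementation: the function name, the coordinate range, and the logarithmic pitch mapping are all assumptions chosen for illustration.

```python
# Hypothetical sketch: mapping vertical hand height above the sensor
# (in mm, an assumed 100-500 mm interaction range) to a pitch in Hz.
# The log-frequency mapping is an illustrative choice, so that equal
# hand movements correspond to equal musical intervals.

def position_to_frequency(y_mm, y_min=100.0, y_max=500.0,
                          f_min=110.0, f_max=880.0):
    """Map hand height (mm) to a frequency between f_min and f_max Hz."""
    # Clamp the height to the assumed interaction range.
    y = max(y_min, min(y_max, y_mm))
    # Normalize to [0, 1].
    t = (y - y_min) / (y_max - y_min)
    # Interpolate log-linearly between the two frequency endpoints.
    return f_min * (f_max / f_min) ** t

# A hand at the bottom of the range produces the lowest pitch,
# and at the top, the highest.
print(position_to_frequency(100.0))  # 110.0
print(position_to_frequency(500.0))  # 880.0
```

Because the mapping is continuous, participants receive smoothly varying auditory feedback from smooth hand movement, and any discretization of the signal space has to emerge in the participants' behavior rather than being built into the apparatus.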