Abstract

A neural network model of object semantic representation is used to simulate the learning of new words from a foreign language. The network consists of feature areas, devoted to the description of object properties, and a lexical area, devoted to word representation. Neurons in the feature areas are implemented as Wilson-Cowan oscillators, allowing segmentation of different simultaneous objects via gamma-band synchronization. Excitatory synapses among neurons in the feature and lexical areas are learned, during a training phase, via a Hebbian rule. In this work, we first assume that some words in the first language (L1) and the corresponding object representations are learned during a preliminary training phase. Subsequently, second-language (L2) words are learned by presenting each new word together with its L1 counterpart. A competitive mechanism between the two words is also implemented through inhibitory interneurons. Simulations show that, after weak training, the L2 word allows retrieval of the object properties but requires engagement of the first language. Conversely, after prolonged training, the L2 word becomes able to retrieve the object per se. In this case, a conflict between words can occur, requiring a higher-level decision mechanism.
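The abstract gives no equations or code, but it names two core ingredients: Wilson-Cowan oscillatory units in the feature areas and Hebbian learning of the excitatory feature-to-lexical synapses. The sketch below is a minimal illustration of those two mechanisms only, not the authors' implementation; all parameter values, the sigmoid constants, and the toy area sizes are assumptions introduced here for illustration.

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    """Wilson-Cowan-style sigmoidal activation (constants are illustrative)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan_step(E, I, ext_input, dt=0.1,
                      c_ee=16.0, c_ei=12.0, c_ie=15.0, c_ii=3.0,
                      tau_e=1.0, tau_i=1.0):
    """One Euler step of a coupled excitatory (E) / inhibitory (I) pair,
    standing in for a single oscillating feature neuron."""
    dE = (-E + sigmoid(c_ee * E - c_ei * I + ext_input)) / tau_e
    dI = (-I + sigmoid(c_ie * E - c_ii * I)) / tau_i
    return E + dt * dE, I + dt * dI

def hebbian_update(W, pre_activity, post_activity, gamma=0.01, w_max=1.0):
    """Hebbian strengthening of excitatory synapses between two areas:
    weights grow with correlated pre/post activity and saturate at w_max."""
    W = W + gamma * np.outer(post_activity, pre_activity)
    return np.clip(W, 0.0, w_max)

# Toy usage: simulate one feature unit, then strengthen the synapses from a
# hypothetical 4-unit feature area to a 3-word lexical area during co-presentation.
E, I = 0.1, 0.05
trace = []
for _ in range(500):
    E, I = wilson_cowan_step(E, I, ext_input=1.5)
    trace.append(E)

W = np.zeros((3, 4))                           # lexical x feature weights
feature_act = np.array([1.0, 0.8, 0.0, 0.0])   # active object features
word_act = np.array([1.0, 0.0, 0.0])           # co-presented L1 word
W = hebbian_update(W, feature_act, word_act)
print("peak E activity:", max(trace))
print("learned weights:\n", W)
```

Under this reading, repeating the Hebbian step with the L2 word co-activated alongside the L1 word would gradually give the L2 word its own direct synapses to the feature representation, which is the qualitative transition the abstract describes between weak and prolonged training.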
