Abstract

In this article we present a neural network model of sentence generation. The network has both technical and conceptual innovations. Its main technical novelty is in its semantic representations: the messages which form the input to the network are structured as sequences, so that message elements are delivered to the network one at a time. Rather than learning to linearise a static semantic representation as a sequence of words, our network rehearses a sequence of semantic signals, and learns to generate words from selected signals. Conceptually, the network’s use of rehearsed sequences of semantic signals is motivated by work in embodied cognition, which posits that the structure of semantic representations has its origin in the serial structure of sensorimotor processing. The rich sequential structure of the network’s semantic inputs also allows it to incorporate certain Chomskyan ideas about innate syntactic knowledge and parameter-setting, as well as a more empiricist account of the acquisition of idiomatic syntactic constructions.
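
The abstract does not specify the architecture in detail, so the following is only a minimal illustrative sketch of the general idea: a recurrent network that receives the message as a sequence of semantic signal vectors, one per timestep, and learns to emit words from the signals it selects, rather than linearising a single static message vector. All names here (SequentialMessageGenerator, signal_dim, the gating scheme, and so on) are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class SequentialMessageGenerator(nn.Module):
    """Toy sketch (not the authors' model): words are generated from a
    rehearsed sequence of semantic signals rather than a static message."""

    def __init__(self, signal_dim, hidden_dim, vocab_size):
        super().__init__()
        # Consumes one semantic signal per timestep.
        self.rnn = nn.GRU(signal_dim, hidden_dim, batch_first=True)
        # Scalar gate: how strongly the current signal is "selected".
        self.select = nn.Linear(hidden_dim, 1)
        # Word distribution produced from the selected signal's state.
        self.word_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, signals):
        # signals: (batch, n_signals, signal_dim) -- the rehearsed message sequence
        states, _ = self.rnn(signals)               # one hidden state per signal
        gates = torch.sigmoid(self.select(states))  # selection strength per signal
        logits = self.word_out(gates * states)      # words from selected signals
        return logits                               # (batch, n_signals, vocab_size)

# Toy usage: a 3-element message (e.g. agent, action, patient) in a 16-d signal space.
message = torch.randn(1, 3, 16)
model = SequentialMessageGenerator(signal_dim=16, hidden_dim=32, vocab_size=100)
print(model(message).shape)  # torch.Size([1, 3, 100])
```

For brevity the sketch ties the word sequence one-to-one to the signal sequence; a realistic model would let the network dwell on or skip signals so that the lengths of the two sequences can differ.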
