Abstract

The success of deep learning in natural language processing raises intriguing questions about the nature of linguistic meaning and the ways in which it can be processed by natural and artificial systems. One such question has to do with subword segmentation algorithms widely employed in language modeling, machine translation, and other tasks since 2016. These algorithms often cut words into semantically opaque pieces, such as ‘period’, ‘on’, ‘t’, and ‘ist’ in ‘period|on|t|ist’. The system then represents the resulting segments in a dense vector space, which is expected to model grammatical relations among them. This representation may in turn be used to map ‘period|on|t|ist’ (English) to ‘par|od|ont|iste’ (French). Thus, instead of being modeled at the lexical level, translation is reformulated more generally as the task of learning the best bilingual mapping between the sequences of subword segments of two languages; and sometimes even between pure character sequences: ‘p|e|r|i|o|d|o|n|t|i|s|t’ → ‘p|a|r|o|d|o|n|t|i|s|t|e’. Such segmentations and alignments are at work in highly efficient end-to-end machine translation systems, despite their allegedly opaque nature. But do they have linguistic or philosophical plausibility? I attempt to cast light on this question, in the spirit of making artificial intelligence more transparent and explainable.
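To make the segmentation step concrete, the following is a minimal, hypothetical Python sketch of greedy longest-match subword segmentation over hand-picked vocabularies. It is not the paper's method and not any particular production tokenizer (BPE, WordPiece, and SentencePiece learn their vocabularies from data rather than taking them as given); the toy vocabularies below are chosen only so that the output reproduces the segmentations quoted in the abstract.

```python
# Toy illustration of subword segmentation: greedy longest-match against a
# fixed vocabulary. The vocabularies are hypothetical, chosen to reproduce
# the 'period|on|t|ist' and 'par|od|ont|iste' examples from the abstract.

def segment(word: str, vocab: set[str]) -> list[str]:
    """Split `word` into the longest vocabulary pieces, left to right.
    Falls back to single characters when no vocabulary piece matches."""
    pieces = []
    i = 0
    while i < len(word):
        # Try the longest candidate substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character: emit as-is
            i += 1
    return pieces

en_vocab = {"period", "on", "t", "ist"}
fr_vocab = {"par", "od", "ont", "iste"}

print("|".join(segment("periodontist", en_vocab)))  # period|on|t|ist
print("|".join(segment("parodontiste", fr_vocab)))  # par|od|ont|iste
```

The character-level variant mentioned in the abstract corresponds to the degenerate case where the vocabulary contains only single characters, so every word is split into ‘p|e|r|i|o|d|o|n|t|i|s|t’-style sequences.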
