Abstract

We present a model for predicting inflected word forms based on morphological analogies. Previous work includes rule-based algorithms that determine and copy affixes from one word to another, with limited support for varying inflectional patterns. In related tasks such as morphological reinflection, the algorithm is provided with an explicit enumeration of morphological features, which may not be available in all cases. In contrast, our model is feature-free: instead of explicitly representing morphological features, the model is given a demo pair that implicitly specifies a morphological relation (such as write:writes specifying infinitive:present). Given this demo relation and a query word (e.g. watch), the model predicts the target word (e.g. watches). To address this task, we devise a character-based recurrent neural network architecture with three separate encoders and one decoder. Our experimental evaluation on five different languages shows that the exact form can be predicted with high accuracy, consistently beating the baseline methods. In particular, for English the prediction accuracy is 95.60%. The solution is not limited to copying affixes from the demo relation: it generalizes to words with varying inflectional patterns, and can abstract away from the orthographic level to the level of morphological forms.

Highlights

  • Our experimental evaluation on five different languages shows that the exact form can be predicted with high accuracy, consistently beating the baseline methods

  • A good solver for morphological analogies can be of practical help as a writing aid for authors, suggesting synonyms in a form specified by example rather than by explicitly specified morphological features

  • While morphological inflection tasks have previously been studied using rule-based systems (Koskenniemi 1984; Ritchie et al. 1991) and learned string transducers (Yarowsky and Wicentowski 2000; Nicolai et al. 2015a; Ahlberg et al. 2015; Durrett and DeNero 2013), they have more recently been dominated by character-level neural network models (Faruqui et al. 2016; Kann and Schütze 2016), as they address the inherent drawbacks of traditional models that represent words as atomic symbols


Summary

Morphological analogies

Lepage (1998) presented an algorithm to solve morphological analogies by analyzing three input words, determining changes in prefixes, infixes, and suffixes, and adding or removing them to or from the query word, transforming it into the target: reader:unreadable = doer:x → x = undoable. Stroppa and Yvon (2005) presented algebraic definitions of analogies and a solution for analogical learning as a two-step process: learning a mapping from a memorized situation to a new situation, and transferring knowledge from the known to the unknown situation. The solution takes inspiration from k-nearest neighbour (k-NN) search, where, given a query q, one looks for analogous objects A, B, C in the training data, and selects a suitable output based on a mapping of A, B, C from input space to output space. Our solution, in contrast, can learn very flexible relations and different inflectional patterns.
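The affix-manipulation idea can be sketched as a small function. This is a simplified illustration under our own assumptions, not Lepage's original algorithm: `solve_analogy` and its stem-substitution scheme are stand-ins that strip the longest common prefix and suffix of the first and third words, then transfer the stem change to the second word.

```python
def solve_analogy(a, b, c):
    """Solve the analogy a:b = c:x by affix substitution.

    A simplified sketch in the spirit of affix-copying analogy solvers
    (not Lepage's original algorithm): the differing "stems" of a and c
    are found by stripping their longest common prefix and suffix, and
    a's stem is replaced by c's stem inside b.
    """
    # Longest common prefix of a and c.
    i = 0
    while i < min(len(a), len(c)) and a[i] == c[i]:
        i += 1
    # Longest common suffix of the remaining characters.
    j = 0
    while j < min(len(a), len(c)) - i and a[len(a) - 1 - j] == c[len(c) - 1 - j]:
        j += 1
    stem_a, stem_c = a[i:len(a) - j], c[i:len(c) - j]
    # Transfer the stem change to b (first occurrence only).
    if stem_a and stem_a in b:
        return b.replace(stem_a, stem_c, 1)
    return None  # no analogy found under this simple scheme


print(solve_analogy("reader", "unreadable", "doer"))  # undoable
print(solve_analogy("walk", "walks", "read"))         # reads
print(solve_analogy("write", "writes", "watch"))      # watchs
```

The last call also illustrates the limitation such affix copying has with varying inflectional patterns: it produces *watchs* rather than *watches*, exactly the kind of case a learned character-level model is meant to handle.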

Character based modeling for morphology
Other morphological transformations
Morphological relations in word embedding models
Morphological relational reasoning with analogies
Recurrent neural networks
Model layout
FC relation
Evaluation
Test set
Data ambiguity
Baselines: AVG, Levenshtein, NMAS, Lepage
Model variants
Validation accuracy
Training trajectory
Mechanisms of word inflection
Relation embeddings
Word embedding analogies
Example outputs