Abstract

This paper constructs a legal text generation and assembly system in the domain of international investment law. We rely on a corpus of 1,600 bilateral investment treaties split into 22,600 articles to train a character-level recurrent neural network (char-RNN). Prior work has shown that while char-RNNs can produce legally meaningful texts, their output tends to be repetitive. In this contribution, we remedy this shortcoming by proposing a new framework for RNN-based text production. First, we elicit priors at the training stage to give more weight to under-represented treaty practice. Second, we use q-gram distance and GloVe word embeddings as filters on the generated texts to draw them closer to a target document. Third, we develop a validation routine that compares the distribution of pre-defined legal concepts in actual and generated texts. Our results indicate that the RNN produces texts that are not repetitive and that convey meaningful legal concepts. We conclude with a practical application of our framework: predicting provisions of the USA-China bilateral investment treaty currently under negotiation.
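To make the second step more concrete, the filtering of generated texts against a target document can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes the candidates and the target article are plain strings, that pre-trained GloVe vectors have been loaded into an ordinary dictionary, and that the function names (`qgram_distance`, `filter_candidates`) and thresholds are placeholders rather than values reported in the paper.

```python
# Illustrative sketch (not the paper's code): filter RNN-generated candidate
# texts by closeness to a target document, using (i) q-gram distance over
# character n-grams and (ii) cosine similarity of averaged GloVe vectors.
from collections import Counter
import numpy as np

def qgram_distance(a: str, b: str, q: int = 3) -> int:
    """Q-gram distance: sum of absolute differences of character q-gram counts."""
    ca = Counter(a[i:i + q] for i in range(len(a) - q + 1))
    cb = Counter(b[i:i + q] for i in range(len(b) - q + 1))
    return sum(abs(ca[g] - cb[g]) for g in set(ca) | set(cb))

def mean_glove(text: str, emb: dict) -> np.ndarray:
    """Average the pre-trained GloVe vectors of the words found in `emb`."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    dim = next(iter(emb.values())).shape
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def filter_candidates(candidates, target, emb, max_qgram=500, min_cos=0.8):
    """Keep generated texts close to the target under both filters.
    The thresholds here are hypothetical, not taken from the paper."""
    target_vec = mean_glove(target, emb)
    return [
        text for text in candidates
        if qgram_distance(text, target) <= max_qgram
        and cosine(mean_glove(text, emb), target_vec) >= min_cos
    ]
```

In this reading, the q-gram filter enforces surface-level similarity to the target treaty language, while the embedding filter enforces topical similarity, so a candidate must pass both to be retained.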
