Abstract

In recent years, large-scale Pre-trained Language Models (PLMs) such as BERT have achieved state-of-the-art results on many NLP tasks. We explore whether BERT understands deontic logic, which is important for the fields of legal AI and digital government. We measure BERT's understanding of deontic logic through the Deontic Modality Classification (DMC) task. Experiments show that, without fine-tuning or when fine-tuned with only a small amount of data, BERT cannot achieve good performance on the DMC task. We therefore propose a new method for BERT fine-tuning and prediction, called DeonticBERT. The method incorporates heuristic knowledge from deontic logic theory as an inductive bias into BERT, through a template function and a mapping between category labels and predicted words, to steer BERT toward understanding the DMC task. This also stimulates BERT to recall the deontic logic knowledge it learned during pre-training. We conduct experiments on a widely used English dataset as well as a Chinese dataset we constructed. Experimental results show that on the DMC task, DeonticBERT achieves 66.9% and 91% accuracy under zero-shot and few-shot conditions, respectively, far exceeding other baselines. This demonstrates that DeonticBERT does enable BERT to understand deontic logic and can handle related tasks without much fine-tuning data. Our research helps facilitate applying large-scale PLMs such as BERT to legal AI and digital government.
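
As a rough illustration of the template-and-verbalizer idea described above, the sketch below wires a cloze-style prompt and a label-word mapping around an off-the-shelf BERT masked language model. The template wording, the label words, and the helper names are our own assumptions for illustration only; they are not the paper's actual DeonticBERT template or verbalizer.

# Illustrative sketch of prompt-based deontic modality classification.
# The template text and label words below are hypothetical stand-ins,
# not the mapping used by DeonticBERT in the paper.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical mapping between deontic categories and predicted words.
label_words = {"obligation": "must", "permission": "may", "prohibition": "never"}
label_ids = {k: tokenizer.convert_tokens_to_ids(v) for k, v in label_words.items()}

def template(clause: str) -> str:
    # Wrap a legal clause in a cloze-style prompt containing one [MASK] slot.
    return f"{clause} In other words, one {tokenizer.mask_token} do this."

def classify(clause: str) -> str:
    # Score each label word at the [MASK] position and pick the best one.
    inputs = tokenizer(template(clause), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return max(label_ids, key=lambda lbl: logits[0, label_ids[lbl]].item())

print(classify("The tenant shall pay rent no later than the fifth day of each month."))

In this zero-shot form the masked language model head is used as-is; under few-shot conditions one would fine-tune the same cloze objective on the labeled DMC examples rather than training a separate classification head.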
