Abstract

Legal judgment prediction is one of the most typical applications of artificial intelligence, especially natural language processing, in the judicial field. In practical settings, algorithm performance is often constrained by the available computing resources, since device capabilities vary widely. Reducing a model's computational resource consumption and improving its inference speed can substantially ease the deployment of legal judgment prediction models. To improve prediction accuracy, accelerate inference, and reduce memory consumption, we propose KD-BERT, a legal judgment prediction model based on BERT knowledge distillation. To reduce resource consumption during inference, we adopt a BERT pre-trained model with lower memory requirements as the encoder. A knowledge distillation strategy then transfers its knowledge to a student model with a shallow Transformer structure. Experimental results show that the proposed KD-BERT achieves the highest F1-score compared with traditional BERT models, and its inference speed is also much faster than that of the other BERT models.
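The abstract does not spell out the distillation objective, so the following is only a minimal sketch assuming the common soft-target formulation: a temperature-scaled KL-divergence term between teacher (BERT encoder) and student (shallow Transformer) logits, blended with the hard-label cross-entropy on the judgment labels. The function and hyperparameter names are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch of a standard soft-target distillation loss; KD-BERT's
# exact objective is not given in the abstract.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL divergence with hard-label cross-entropy."""
    # Soften both distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradients stay comparable to the hard loss.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)
    # Standard cross-entropy against the ground-truth judgment labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term


if __name__ == "__main__":
    # Random tensors stand in for teacher/student outputs over judgment labels.
    batch, num_labels = 8, 10
    teacher_logits = torch.randn(batch, num_labels)
    student_logits = torch.randn(batch, num_labels, requires_grad=True)
    labels = torch.randint(0, num_labels, (batch,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(float(loss))
```

In this kind of setup, the temperature and the mixing weight alpha control how strongly the shallow student imitates the teacher's softened output distribution versus fitting the ground-truth labels directly.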
