Abstract
Legal judgment prediction is one of the most typical applications of artificial intelligence technology, especially natural language processing methods, in the judicial field. In practical settings, algorithm performance is often constrained by the available computing resources, since the computing capabilities of deployment devices vary widely. Reducing a model's computational resource consumption and improving its inference speed can therefore substantially ease the deployment of legal judgment prediction models. To improve prediction accuracy, increase inference speed, and reduce memory consumption, we propose a legal judgment prediction model based on BERT knowledge distillation, called KD-BERT. To reduce resource consumption during inference, we use a BERT pretrained model with lower memory requirements as the encoder. A knowledge distillation strategy then transfers knowledge to a student model with a shallow Transformer structure. Experimental results show that the proposed KD-BERT achieves the highest F1-score compared with traditional BERT models, and its inference speed is also much faster than that of the other BERT models.
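For readers unfamiliar with knowledge distillation, the sketch below illustrates the standard response-based distillation objective often used when compressing a BERT teacher into a shallow Transformer student: a temperature-softened KL term against the teacher's logits plus a hard-label cross-entropy term. The function name, temperature, and weighting factor are illustrative assumptions, not values taken from the paper, and KD-BERT's exact training objective may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Illustrative KD objective: soft-target KL + hard-label cross-entropy.

    temperature and alpha are hypothetical hyperparameters, not the paper's.
    """
    # Soften both distributions with the temperature before comparing them.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * (temperature ** 2)
    # Standard cross-entropy against the gold judgment labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```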