Abstract

In the era of big-data-enabled justice, predicting legal judgments and identifying similar cases hold significant practical value. This paper presents a legal judgment prediction algorithm that integrates the distinctive features of legal texts. Using the TextRank model, it extracts key textual features from legal provisions and case facts, enabling precise matching of applicable legal provisions based on detailed case analysis and legal knowledge. To address the scarcity of training data and the difficulty of distinguishing highly similar legal documents, we developed a similar-case matching model based on twin BERT encoders. Our empirical study shows that theft, intentional injury, and fraud are the most frequent crimes, with 335,745, 174,526, and 47,677 samples, respectively. These top offenses, which correlate with the most frequently cited statutes, account for 85.79% of the dataset. The analysis further shows that “RMB” is the most frequent term in theft and fraud cases, and “minor injury” in intentional injury cases. Notably, categories such as “misappropriation” are prone to misclassification as “embezzlement,” and “robbery” is often confused with “theft,” highlighting the difficulty of fine-grained legal classification.
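For illustration, the keyword-extraction step the abstract attributes to TextRank can be sketched in pure Python. This is a minimal, self-contained sketch of the general TextRank technique (co-occurrence graph plus damped power iteration, as in PageRank), not the paper's actual pipeline; the tokenized case description below is hypothetical.

```python
# Minimal TextRank-style keyword scoring (illustrative sketch only;
# the paper's actual feature-extraction pipeline is not shown here).
# Words co-occurring within a sliding window form an undirected graph;
# node scores are computed by damped power iteration, as in PageRank.
from collections import defaultdict

def textrank_keywords(tokens, window=2, damping=0.85, iters=50):
    # Build an undirected co-occurrence graph over a sliding window.
    neighbors = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[j] != w:
                neighbors[w].add(tokens[j])
                neighbors[tokens[j]].add(w)
    # Damped power iteration over the graph.
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        score = {
            w: (1 - damping)
            + damping * sum(score[u] / len(neighbors[u]) for u in neighbors[w])
            for w in neighbors
        }
    # Return words ranked by score, highest first.
    return sorted(score, key=score.get, reverse=True)

# Toy usage on a hypothetical tokenized case description:
tokens = ("theft RMB stolen property theft RMB suspect "
          "stolen property RMB").split()
print(textrank_keywords(tokens)[:3])
```

In a full system, the top-ranked keywords would serve as the extracted text features that are then matched against legal provisions.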
