The prediction of court rulings has gained increasing attention in recent years. Court rulings are among the most important documents in any legal system, and in cases of divorce or separation they profoundly affect children's lives. The literature shows that natural language processing (NLP) and machine learning (ML) are widely used to predict court rulings. Court decisions, however, typically span several pages, which makes extracting valuable information and predicting legal outcomes difficult; the complexity of the legal system and the sheer volume of litigation compound the problem. To address this, we propose a new neural network-based model for predicting court decisions on child custody. The proposed model efficiently searches a massive database of court decisions and accurately identifies those that deal specifically with custody claims. More specifically, it analyzes custody rulings and pinpoints the plaintiff's custody request, the court's ruling, and the pivotal arguments. The model operates in two phases. In the first phase, the pertinent sentences that encapsulate the essence of the proceedings are isolated from each court ruling. In the second phase, the documents, annotated independently by two legal professionals, are processed with NLP and transformer-based models; in total, 3,000 annotated court rulings were used to train and refine the model. The novelty of the proposed model lies in the integration of bidirectional encoder representations from transformers (BERT) and bidirectional long short-term memory (Bi-LSTM), whereas traditional methods rely primarily on support vector machines (SVM) and logistic regression. We compared the proposed model against state-of-the-art baselines. The results indicate that it navigates the complex terrain of legal language and court decision structures effectively: F1 scores range from 0.66 to 0.93 and Kappa indices from 0.57 to 0.80 across tasks, at times surpassing the inter-annotator agreement, which underscores the model's adeptness at extracting and understanding nuanced legal concepts. These results demonstrate the potential of neural network models, particularly transformer-based ones, to discern and categorize key elements within legal texts, even amidst the intricacies of judicial language and the layered complexity of appellate rulings.
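The abstract does not give implementation details, but a minimal sketch of the kind of BERT + Bi-LSTM sentence classifier it describes might look as follows. The pretrained checkpoint name, label set, pooling strategy, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch of a BERT + Bi-LSTM sentence classifier for court rulings.
# Checkpoint, label count, and hyperparameters below are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class BertBiLstmClassifier(nn.Module):
    def __init__(self, pretrained="bert-base-multilingual-cased",
                 lstm_hidden=256, num_labels=3):
        super().__init__()
        # BERT produces contextual token embeddings.
        self.bert = AutoModel.from_pretrained(pretrained)
        # A Bi-LSTM reads the token embeddings in both directions.
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Linear head maps pooled Bi-LSTM states to sentence labels
        # (e.g. custody request, court ruling, pivotal argument).
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        token_states = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(token_states)
        # Mean-pool over tokens, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (lstm_out * mask).sum(dim=1) / mask.sum(dim=1)
        return self.classifier(pooled)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = BertBiLstmClassifier()
    batch = tokenizer(["The plaintiff requests sole custody of the minor."],
                      return_tensors="pt", padding=True, truncation=True)
    logits = model(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)  # (1, num_labels)
```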