- Research Article
- 10.1007/s10506-025-09496-0
- Dec 4, 2025
- Artificial Intelligence and Law
- Pedro A Villa-García + 2 more
- Research Article
- 10.1007/s10506-025-09494-2
- Nov 26, 2025
- Artificial Intelligence and Law
- Daniel Fürst + 3 more
Abstract Legal exploration, analysis, and interpretation remain complex and demanding tasks, even for experienced legal scholars, due to the domain-specific language, tacit legal concepts, and intentional ambiguities embedded in legal texts. In related, text-based domains, Visual Analytics (VA) has become an indispensable tool for navigating documents, representing knowledge, and supporting analytical reasoning. However, legal scholarship presents distinct challenges: it requires managing formal legal structure, drawing on tacit domain knowledge, and documenting intricate and accurate reasoning processes, needs that current VA system designs for law fail to address adequately. We identify and describe key challenges and underexplored opportunities in applying VA to law, exploring how these technologies might better serve the legal domain. Interviews with nine legal experts reveal that current legal information retrieval interfaces do not adequately support the navigational complexity of law, often forcing users to rely on internalized legal expertise instead. To address this gap, we identify a three-phase workflow for legal experts that highlights opportunities for VA to support legal reasoning through knowledge externalization and provenance tracking, leveraging tree-, graph-, and hierarchy-based visualizations. Through this contribution, our work establishes a user-centered VA workflow for the legal domain, recognizes tacit legal knowledge as a critical element of sense-making and insight generation, and situates these contributions within a broader research agenda for VA in law and other text-based disciplines.
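The provenance-tracking idea in this abstract can be illustrated with a minimal sketch (the data structure, names, and example session below are invented for illustration and are not taken from the paper): each externalized reasoning step records which documents or prior steps it drew on, yielding a provenance graph that could later be rendered as a tree or DAG.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One externalized reasoning step in a hypothetical legal-analysis session."""
    note: str                                    # the analyst's recorded interpretation
    sources: list = field(default_factory=list)  # documents or prior Steps it relies on

def provenance(step, depth=0):
    """Yield (depth, note) pairs tracing the reasoning chain behind a step."""
    yield depth, step.note
    for src in step.sources:
        if isinstance(src, Step):
            yield from provenance(src, depth + 1)

# Hypothetical session: two readings feed one conclusion.
a = Step("Art. 5 read narrowly")
b = Step("Recital 12 supports narrow reading", sources=[a])
c = Step("Conclusion: narrow scope applies", sources=[a, b])
```

Walking `provenance(c)` recovers every step the conclusion depends on, which is the kind of traceable record the workflow's provenance-tracking phase calls for.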
- Research Article
- 10.1007/s10506-025-09491-5
- Nov 26, 2025
- Artificial Intelligence and Law
- Durairaj Thenmozhi + 2 more
- Research Article
- 10.1007/s10506-025-09492-4
- Nov 18, 2025
- Artificial Intelligence and Law
- Karolina Kiejnich-Kruk + 2 more
Abstract This study examines algorithmic support for punishment adjustment in judicial discretionary systems, based on the example of drunk-driving cases in Poland. Using an extensive case study, the research explores how algorithm-based sentencing models can address problems in judicial decision-making processes and mitigate inconsistencies arising from misapplied human discretion. The study's objectives are as follows: (1) identify factors influencing punishment severity in drunk-driving cases in Poland; (2) compare these findings to statutory rules governing punishment in Polish criminal proceedings to uncover inconsistencies; (3) evaluate the practical implications of discretion in this category of cases and propose remedial measures; (4) integrate algorithmic sentencing guidelines into the proposed solution; and (5) introduce a novel metric to quantify undesirable judicial decisions, enhancing automated judge-support systems in criminal proceedings.
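The abstract does not specify the proposed metric, so as a hedged illustration only (not the paper's definition), one simple way to quantify undesirable sentencing decisions is the mean normalized distance by which sentences fall outside a guideline interval:

```python
def outlier_score(sentences, low, high):
    """Illustrative metric (not the paper's): mean normalized distance of
    sentences outside the guideline interval [low, high]. A score of 0.0
    means every sentence falls within the guideline range; larger values
    indicate more, or more extreme, departures from the guideline."""
    width = high - low
    total = 0.0
    for s in sentences:
        if s < low:
            total += (low - s) / width
        elif s > high:
            total += (s - high) / width
    return total / len(sentences)

# Hypothetical sentence lengths against a hypothetical guideline range of 6-18.
score = outlier_score([6, 12, 18, 24], low=6, high=18)
```

Here only the sentence of 24 exceeds the guideline, by half the interval width, so the score is 0.5 / 4 = 0.125 across the four decisions.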
- Research Article
- 10.1007/s10506-025-09486-2
- Nov 17, 2025
- Artificial Intelligence and Law
- Minh Duc Nguyen
- Research Article
- 10.1007/s10506-025-09487-1
- Nov 15, 2025
- Artificial Intelligence and Law
- Jack G Conrad + 3 more
- Research Article
- 10.1007/s10506-025-09490-6
- Nov 11, 2025
- Artificial Intelligence and Law
- Lingyi Meng + 5 more
- Research Article
- 10.1007/s10506-025-09484-4
- Nov 11, 2025
- Artificial Intelligence and Law
- Eoin O’Connell + 7 more
Abstract Recent advances in Generative Language Models (GLMs) have renewed focus on promising results in zero-shot text classification. However, their off-the-shelf performance on unfamiliar and domain-specific tasks remains uncertain. For the task of legal clause classification, we evaluate a plug-and-play zero-shot prompting strategy for OpenAI’s GPT-4 GLM on a contract clause dataset. We introduce CUAD-SL, a new dataset refactored as a single-label classification problem to serve as a fairer and more robust legal classification benchmark. In a comparative study, we show that fine-tuning on legal domain data adapts smaller, less complex models to the task at hand, yielding significant classification accuracy improvements of up to 20.6% and a best overall performance of 87.8% for the DeBERTa Transformer model, compared to GPT-4's 67.2%. This study also takes the novel approach of assessing the business feasibility of deploying each of these machine learning models through a detailed cost–benefit analysis that measures the trade-off between performance metrics and running costs under low- and high-usage scenarios.
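A rough sketch of the plug-and-play zero-shot setup described here (the prompt wording and label subset below are invented for illustration and are not taken from CUAD-SL): the core is a prompt listing the candidate clause labels, plus a parser that maps the model's free-text reply back to exactly one label.

```python
# Illustrative subset of contract-clause categories (not the full CUAD-SL label set).
LABELS = ["Governing Law", "Termination", "Indemnification"]

def build_prompt(clause: str, labels=LABELS) -> str:
    """Zero-shot prompt: no worked examples, just the label inventory and the clause."""
    options = "\n".join(f"- {label}" for label in labels)
    return (
        "Classify the following contract clause into exactly one category:\n"
        f"{options}\n\nClause: {clause}\n"
        "Answer with the category name only."
    )

def parse_label(reply: str, labels=LABELS):
    """Map a free-text model reply to the first label it mentions, else None.

    Zero-shot replies are not guaranteed to echo a label verbatim, so a
    lenient substring match is one common way to recover the prediction.
    """
    reply_lower = reply.lower()
    for label in labels:
        if label.lower() in reply_lower:
            return label
    return None
```

The prompt string would be sent to the GLM's completion API; `parse_label` then turns the reply into a single-label prediction that can be scored against the gold label, matching the single-label framing of CUAD-SL.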
- Research Article
- 10.1007/s10506-025-09488-0
- Oct 17, 2025
- Artificial Intelligence and Law
- David Cevallos-Salas + 4 more
- Research Article
- 10.1007/s10506-025-09480-8
- Oct 14, 2025
- Artificial Intelligence and Law
- Yunhan Li + 5 more