  • Open Access
  • Research Article
  • 10.5204/lthj.3965
Challenging a ‘Hurt First, Fix Later’ Algorithmic System: Is the Tort of Negligence a Regulatory Solution?
  • Dec 8, 2025
  • Law, Technology and Humans
  • Jing Qian

The UK government has recently faced mounting criticism for what is widely perceived as a ‘hurt first, fix later’ approach to implementing algorithmic tools in the welfare system. While Australia has not yet moved in this direction in the wake of the Robodebt scandal, it could still do so. The question this article addresses is one of effective regulation. While various strategies have been explored in recent years to tackle challenges arising from the adoption of algorithmic systems in welfare fraud investigation, this article follows the approach adopted in the Robodebt class action in Australia in proposing the tort of negligence as a source of common law regulation in the era of algorithmic systems. It first critiques the effectiveness of alternative contemporary mechanisms and, second, shows how a duty to take reasonable care can be framed to regulate both the design and the subsequent operation of algorithmic systems. Finally, it considers two challenges to the proposal and argues that, despite these challenges, the tort offers a valuable opportunity to enhance fairness, legitimacy, and equity in both system design and regulatory practice, while also mitigating litigation risks.

  • Open Access
  • Research Article
  • 10.5204/lthj.4081
Automated Decision-Making and the Right to an Explanation Under POPIA in South Africa: A Legal Perspective
  • Nov 18, 2025
  • Law, Technology and Humans
  • Zinhle Novazi

The Protection of Personal Information Act 4 of 2013 (POPIA) establishes crucial safeguards against the risks posed by automated decision-making (ADM), particularly under section 71. This section restricts ADM that produces significant legal or personal effects unless specific exceptions apply. However, POPIA does not explicitly grant a right to an explanation, leaving uncertainties around how data subjects can meaningfully contest or understand ADM decisions. Using a doctrinal and comparative methodology, this article examines the legal implications of the provisions of section 71, focusing on its interpretation as either a prohibition against ADM or merely a right to object. The findings highlight the practical and theoretical challenges of defining ‘solely automated’ processes, revealing potential loopholes where nominal human oversight may undermine protections. Comparisons are drawn with international frameworks, such as the European Union’s General Data Protection Regulation (GDPR), to explore how a right to explanation might enhance transparency, accountability, and data subject rights under POPIA. The article further investigates the adequacy of POPIA’s ‘appropriate measures’ requirement, including the necessity of notification rights and clear standards for providing meaningful explanations. By distinguishing between ex-ante and ex-post explanations and between system functionality versus specific decision rationales, it identifies gaps in POPIA’s framework and proposes legal reforms. The article concludes that POPIA requires reform to strengthen algorithmic accountability and data subject protection. It recommends introducing an explicit right to explanation, clarifying the scope of ADM prohibitions, and implementing independent auditing mechanisms to strike a balance between innovation and accountability.

  • Open Access
  • Research Article
  • 10.5204/lthj.4037
Testing the Frontier: Generative AI in Legal Education and Beyond
  • Nov 18, 2025
  • Law, Technology and Humans
  • Cari Hyde-Vaamonde + 1 more

Unlike previous AI applications in law, which focused on search and prediction, generative AI (GenAI) has the capacity to produce, on some level, coherent legal writing. It is this capacity that is driving legal educators to fundamentally reconsider approaches to academic integrity and pedagogical practice. This study investigated how legal education can productively integrate GenAI into higher education settings while maintaining academic integrity, specifically examining: (1) how students critically evaluate AI-generated legal content; (2) what limitations they identify; and (3) how collaborative approaches can develop effective guidelines for responsible GenAI use in legal curricula. We employed a novel three-stage intervention, using metacognitive modelling and collaborative co-creation, involving over 125 law students from King’s College London. Data were collected through workshop observations, student evaluations of AI outputs, collaborative guideline development and follow-up interviews. Students consistently demonstrated sophisticated critical evaluation of AI-generated legal content, identifying significant limitations including superficial analysis, a lack of argumentative coherence, citation inadequacies and absence of nuanced understanding. Most notably, students strongly preferred content that demonstrated originality and critical thinking – precisely where AI systems under-performed. Exposure to AI limitations fostered responsible usage attitudes and enhanced students’ confidence in their own analytical capabilities. Our findings demonstrate that critical engagement with AI tools enhances rather than diminishes academic standards. The co-created guidelines offer a transferable model centred on fostering a ‘culture of trust’ rather than prohibition. This transferable approach prepares future legal professionals for an AI-augmented workplace while preserving core values of legal education: critical thinking, ethical reasoning and intellectual rigour.

  • Open Access
  • Research Article
  • 10.5204/lthj.4330
Legal Education in the Age of Generative Artificial Intelligence
  • Nov 18, 2025
  • Law, Technology and Humans
  • Zubair Abbasi

The rapid emergence of generative artificial intelligence (GenAI) has created both opportunities and challenges for the field of law. While it offers efficient access to legal information, it simultaneously raises questions about the nature of legal reasoning, professional competence, and academic integrity. Large language models (LLMs)—such as ChatGPT, DeepSeek, Gemini, Claude, and Copilot—promise unprecedented efficiency in tasks like research and drafting. Yet they struggle with the normative reasoning, ethical judgment, and contextual interpretation that are foundational to legal thought. This tension forms the central inquiry of this special volume: how can the legal profession and legal education responsibly harness GenAI’s capabilities while safeguarding the core values of authenticity, integrity, critical thinking, and professional accountability? The articles in this special volume on Legal Education in the Age of Generative Artificial Intelligence address this fundamental challenge across pedagogical, empirical, and regulatory dimensions. Together, they establish theoretical and practical frameworks for ‘responsible legal augmentation’ that transform GenAI’s known limitations into resources for developing advanced human judgment.

  • Open Access
  • Research Article
  • 10.5204/lthj.4223
Roger Brownsword (2024) The Future of Governance: A Radical Introduction to Law. Routledge
  • Nov 18, 2025
  • Law, Technology and Humans
  • Yeliz Figen Döker

Yeliz Figen Döker reviews The Future of Governance: A Radical Introduction to Law by Roger Brownsword.

  • Open Access
  • Research Article
  • 10.5204/lthj.4200
Responsible Legal Augmentation: Integrating Generative AI into Legal Practice
  • Nov 18, 2025
  • Law, Technology and Humans
  • Zubair Abbasi

This article examines Ayinde v London Borough of Haringey; Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), a landmark High Court judgment addressing the use of generative artificial intelligence (GenAI) in legal practice. The case arose when counsel submitted fictitious AI-generated authorities, prompting the court to consider not only individual lapses but also the broader professional obligations that must govern technological adoption in legal practice. Rejecting prohibition as well as uncritical endorsement, the court articulated a model of responsible augmentation: AI may assist lawyers, but only where outputs are independently verified and presented without misleading the judiciary. The judgment is significant in reaffirming lawyers’ professional duties of honesty, integrity and competence, while extending them to encompass technological literacy. It further underscores that legal practice cannot be reduced to linguistic plausibility alone, but must remain grounded in institutional practices of authority, authenticity and accountability. The decision also carries far-reaching implications for legal education as it highlights the urgency of embedding AI literacy into curricula, not merely as technical training but as critical engagement with law’s epistemic foundations. By reasserting that authenticity and accountability are core professional values, Ayinde signals a jurisprudential transition from tentative accommodation of technological change to its active governance. In doing so, it provides a framework through which courts, regulators and educators can collaborate to integrate GenAI into legal practice while sustaining public trust in the judicial system.

  • Open Access
  • Research Article
  • 10.5204/lthj.4053
GenAI as Whetstone: A Socratic Framework for Sharpening Critical Thinking in Legal Education
  • Nov 18, 2025
  • Law, Technology and Humans
  • Tamarakemiebi Koroye + 1 more

The integration of generative artificial intelligence (GenAI) into legal education presents a fundamental paradox: while GenAI efficiently parses legal databases and accelerates research, it struggles to model the normative reasoning and ethical contexts foundational to jurisprudential thought. This article employs a dialectical approach to resolve this tension through a ‘Socratic-GenAI’ framework that reconceptualises GenAI as a whetstone sharpening students’ analytical capacities rather than replacing their critical thinking. Through empirical evidence, including students completing tasks 4.7 times faster yet demonstrating 31 per cent lower performance on cross-doctrinal synthesis, this research shows how GenAI’s limitations become pedagogical resources when deliberately leveraged. The framework operationalises integration through structured contention juxtaposing GenAI and human reasoning, critical interrogation protocols and epistemological transparency. Rejecting binary narratives of adoption or resistance, the article offers a roadmap for interconnectedness between human and machine intelligence, providing a template for evaluating emerging technologies against core jurisprudential values while promoting innovation and sustainability in legal training.

  • Open Access
  • Research Article
  • 10.5204/lthj.4031
Early PLT Student Perceptions of the Integration of Generative Artificial Intelligence in Legal Education
  • Nov 3, 2025
  • Law, Technology and Humans
  • Nicole Landy

This article explores the reflections of Australian law students on the use and integration of Generative Artificial Intelligence (GenAI) in the practical legal training law curriculum. Participants were enrolled as students in the Graduate Diploma in Legal Practice at Queensland University of Technology (QUT) between April and November 2024 and engaged with several GenAI use cases embedded in their law subjects. Surveys were used to assess participants’ perceptions of the incorporation of GenAI into the subjects. The findings indicated that some participants had no prior GenAI experience, but the majority had at least limited experience. Participants reported that all GenAI use cases improved their GenAI literacy and that they were interested in engaging with different AI tools and applications and wanted to learn how to prompt more effectively. While students’ understanding of GenAI capabilities improved, they remained cautious about using GenAI in their future legal practice, particularly for tasks such as legal research, feedback on video recordings and written communication. Having engaged with GenAI in their studies, participants reported feeling better prepared for entry into a legal profession that is increasingly incorporating the use of GenAI. Implications from this study include an increased understanding of how best to embed GenAI in the legal curriculum and assessment to ensure law students are provided with opportunities to explore the appropriate and responsible use of GenAI and to develop their AI literacy skills.

  • Open Access
  • Research Article
  • 10.5204/lthj.3975
Governance-by-Design as an Enabler of AI in Digital Health in Sub-Saharan Africa
  • Oct 27, 2025
  • Law, Technology and Humans
  • Beverley A Townsend

To harness the benefits of artificial intelligence (AI)-enabled healthcare, access to data is a crucial component of AI in digital health technology development and adoption. This requires effective frameworks of digital and data governance. This paper highlights important digital, data, and data-related issues that present unique and pressing challenges to such adoption in sub-Saharan Africa (SSA). Specific non-exclusive challenges in SSA arise from issues around data integrity and quality, interoperability, and data provenance. Related emerging issues centre on surveillance capitalism, data commodification, and coloniality. Certain digital and data governance strategies and solutions in support of the public good are in place and include various legal rights, regulatory policies, and ethics frameworks. Building on these solutions, I advance an innovative and supplementary mechanism of grounding digital and data governance on the theoretical approach of human-centric design and on ideas of embedding ethics and law. As illustrated in India, this ‘third way’ of ‘governance-by-design’ practically embeds and operationalises rules as protocols within the infrastructure and architecture of the technology itself. Accordingly, an inclusive and augmented data and digital governance-by-design solution is offered as an enabler of AI in digital health in SSA.

  • Open Access
  • Research Article
  • 10.5204/lthj.4120
Becoming, Doing, Being: GenAI and the Promise of Professional Identity in Law
  • Oct 14, 2025
  • Law, Technology and Humans
  • Felicity Bell + 1 more

The legal profession offers its members a special identity in exchange for prolonged education and regulatory oversight. This article explores how the emergence of generative artificial intelligence (GenAI) challenges that promise – particularly for new entrants – and the profession’s meaning and value. A key contribution of the article is to bridge two bodies of scholarship: the literature on institutional change (professional versus/and other logics and modes) and the growing research on technology in the professions. By bringing these together and drawing on the existing empirical research, we analyse how GenAI interacts with the processes of ‘becoming’, ‘doing’ and ‘being’ a lawyer – encompassing socialisation, tasks, motivation and esteem. Rather than treating GenAI as a singular threat or solution, we conceptualise its impacts as dependent on its melding with and reshaping existing professional and other belief systems and in certain workplace contexts. We argue that GenAI will reshape the profession’s core promise – what it offers to its members, and by extension, to the state and wider society. In doing so, we raise critical questions: Will aspiring lawyers still be motivated to undertake extensive education and remain in the regulatory fold if the traditional professional payoff becomes more ambiguous? And is the profession capable of imagining new professional identities?