Predictive Policing Research Articles (Page 1)

Overview: 352 articles, published in the last 50 years.

Related Topics

  • Role Of Police
  • Police Technology
  • Problem-oriented Policing
  • Police Crime
  • Policing Strategies
  • Police Misconduct

Articles published on Predictive Policing

  • Research Article
  • 10.1108/dta-03-2025-0170
Optimization of the homicide predictive classification algorithm: a study applied to the military police of Minas Gerais state, Brazil
  • Oct 29, 2025
  • Data Technologies and Applications
  • Claudio Leles Nascimento + 2 more

Purpose: This study aimed to optimize the homicide predictive classification (HPC) algorithm used by the Military Police of Minas Gerais to increase its predictive accuracy, evaluate the effectiveness of the results over time, and consider its potential for future expansions.

Design/methodology/approach: Supervised learning techniques were applied, using data extracted from the police reports system of Minas Gerais state, Brazil, covering the period from January 2012 to June 2023. The data were divided into training and test sets, allowing the construction and validation of predictive models.

Findings: The study demonstrates that optimizing the HPC algorithm enhances predictive accuracy, particularly in identifying high-risk individuals. HPC Test 6 outperformed previous models, achieving 2,405 correct homicide predictions within a 66-month horizon.

Research limitations/implications: This study’s limitations include reliance on police reports, which may contain incomplete or underreported data, potentially affecting prediction accuracy. The analysis is also restricted to police incident data from Minas Gerais, Brazil, limiting generalizability. Future research should integrate additional datasets, such as the Integrated Prison Management System (SIGPRI) and Civil Registry, to enhance precision.

Practical implications: The optimized HPC algorithm provides law enforcement with a data-driven tool to enhance crime prevention strategies. By accurately identifying high-risk individuals, the model enables more efficient resource allocation, targeted interventions and proactive policing. Its application supports focused deterrence approaches, improving public safety efforts.

Social implications: The implementation of predictive policing through the optimized HPC algorithm has significant social implications. By enabling proactive crime prevention, it enhances public safety and fosters community trust in law enforcement. The model supports focused deterrence strategies, potentially reducing homicide rates and violent crime. Expanding predictive analytics to other crime types could further benefit society, promoting a more strategic and effective approach to public security while balancing technological advancements with civil rights protections.

Originality/value: This study contributes to predictive policing by optimizing the HPC algorithm, demonstrating its effectiveness in long-term crime forecasting. Unlike previous models, HPC Test 6 enhances predictive accuracy, enabling proactive interventions. The research integrates criminological theories with machine learning, offering a novel approach to homicide prevention. Its findings provide valuable insights for law enforcement, emphasizing data-driven decision-making. The study’s methodological advancements highlight the potential for expansion to other crime types, reinforcing its originality. By addressing predictive policing challenges, this research enhances public security strategies and contributes to the broader discourse on technology-driven crime prevention.
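The supervised-learning setup this abstract describes (historical police-report records split into training and test sets, with a classifier validated on the held-out portion) can be sketched in miniature. Everything below is a hypothetical stand-in: the field name, the toy data, and the trivial threshold "model" are illustrative, not the study's actual schema or algorithm.

```python
import random

# Hypothetical records: (features, label). Illustrative only; not the
# actual Minas Gerais police-report schema used by the HPC algorithm.
random.seed(42)
records = [
    ({"prior_incidents": random.randint(0, 10)}, random.random() < 0.3)
    for _ in range(100)
]

# Hold-out split: the earlier 80% trains, the later 20% tests,
# mirroring the train/validate design described in the abstract.
split = int(0.8 * len(records))
train_set, test_set = records[:split], records[split:]

# Trivial threshold "classifier" standing in for the supervised model.
threshold = sum(f["prior_incidents"] for f, _ in train_set) / len(train_set)

def predict(features):
    """Flag a record as high-risk when it exceeds the training mean."""
    return features["prior_incidents"] > threshold

# Validation on the held-out portion only.
accuracy = sum(predict(f) == label for f, label in test_set) / len(test_set)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the workflow, not the model: any real classifier would replace the threshold rule, but the fit-on-train, score-on-test discipline is the same.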

  • Research Article
  • 10.63539/isrn.2025022
The Algorithmic Citizen: How AI is Rewriting Rights, Representation and Responsibility
  • Oct 29, 2025
  • International Social Research Nexus (ISRN)
  • Ms Ritika Goswami

With fast-paced digitization, Artificial Intelligence (AI) is redefining the interaction between states and citizens. From predictive policing to biometric welfare systems, algorithmic technologies are no longer on the periphery but at the center of governance. This paper presents the idea of the algorithmic citizen—a subject whose civic status is not just mediated by laws and institutions but also by data profiles, predictive models, and unseen computational logics. Whereas AI holds out the promise of efficiency, neutrality, and scale, it tends to entrench inequality, ostracize the vulnerable, and erode democratic accountability. In moving governance away from rights-based standards to probabilistic judgment, access to welfare or justice comes to rely more on statistical inference than on deliberation. Conducted with reference to political theory, sociology, and digital ethics, this paper investigates three case studies: India's Aadhaar biometric identification system and its application in welfare access; the UK's algorithmic use in school allocation and policing; and facial recognition surveillance in democratic cities. These cases highlight that AI systems are far from being unproblematically administrative instruments; they are active forces determining inclusion, eligibility, and citizenship itself. The article poses urgent questions: What does accountability look like when decisions are outsourced to algorithms? How can citizens assert agency over systems they cannot perceive or oppose? It calls for a shift in AI governance in the direction of democratic values—transparency, explainability, consent, and participation. Institutional reforms like algorithmic audits, social impact assessments, and rights-based digital safeguards are necessary. Finally, the algorithmic citizen expresses the promise and danger of AI, pointing to the importance of ensuring that digital innovation reinforces rather than erodes democratic life.

  • Research Article
  • 10.1108/dprg-06-2025-0193
Modernising drug law enforcement in India through legal-tech: comparative insights from Australia and the UK
  • Oct 20, 2025
  • Digital Policy, Regulation and Governance
  • Mohit Nayal + 1 more

Purpose: This paper aims to discuss the rise of drug-impaired crime in India. It suggests a legal-tech enforcement framework inspired by Australia and the UK, which use advanced tech for narcotics control.

Design/methodology/approach: The study uses a hybrid doctrinal-empirical approach, analysing legal frameworks, roadside drug testing and technology in Australia and the UK. It examines how Artificial Intelligence, blockchain and geospatial tools improve law enforcement.

Findings: India’s enforcement faces issues like fragmented digital infrastructure, weak forensic standards and judicial scepticism about tech evidence. The new framework combines AI predictive policing, blockchain evidence management and geospatial analytics to boost efficiency and transparency.

Research limitations/implications: The study is limited to two jurisdictions and focuses on legal and policy frameworks rather than operational trials, warranting further empirical validation in the Indian context.

Practical implications: The framework offers a roadmap for Indian agencies to adopt digital tools for narcotics control, enhancing forensic processes and inter-agency collaboration.

Social implications: Adoption of this framework encourages privacy-respecting, transparent law enforcement consistent with constitutional safeguards, especially the Puttaswamy judgement.

Originality/value: To the best of the authors’ knowledge, this is among the first studies to propose an integrated legal-tech model for narcotics enforcement in India, bridging gaps between digital governance, forensic science and legal reform.

  • Research Article
  • 10.30958/ajl.11-4-5
Artificial Intelligence – a Vector for Crime and a Tool for Carrying Out Criminal Justice
  • Sep 30, 2025
  • Athens Journal of Law
  • Carmina-Elena Tolbaru

This study aims to provide a systematic and comprehensive analysis of the role of artificial intelligence (AI) in both criminal activity and the criminal justice system. It highlights the dual nature of AI’s technological advancement: on one hand, its negative aspect, as it enhances the capabilities and efficiency of criminal operations; and on the other hand, its positive potential in strengthening cybersecurity measures. AI can serve as a powerful instrument for criminal conduct, which in turn necessitates its deployment in predictive policing and the more effective attribution and investigation of crimes. Artificial intelligence technology plays an essential role in everyday life across all areas of economic and social activity, with clearly beneficial effects on culture as a whole; however, we especially need to be aware of the negative effects in the field of cybercrime, as AI can help criminals become more sophisticated and scalable, and consequently more difficult to detect. Therefore, the rapid progress of information and communication technology is likely to constitute a mechanism for criminal actors to facilitate the commission of crimes, not only by expanding and changing the inherent nature of existing threats, but also by creating new threats. Without well-enforced national and international legislation, cyber security is seriously affected and the management of criminal activities becomes increasingly difficult to achieve, which is a significant concern for law enforcement authorities. Using artificial intelligence in investigations is likely to improve and speed up criminal justice work; in fact, this tool is becoming increasingly useful for detecting crime and preventing illegal activities. On the other hand, given the ability of artificial intelligence to quickly analyse large amounts of data, the rights and freedoms of individuals may be flagrantly violated, which is a real challenge for criminal justice. In this respect, it is therefore important to develop a legal framework to ensure the responsible use of AI.

Keywords: Artificial intelligence; Crime; Criminal justice; Cyber security; Legal framework; Responsibility.

  • Research Article
  • 10.24144/2307-3322.2025.90.4.41
Algorithmic systems and the presumption of innocence: legal analysis of biases and protection mechanisms
  • Sep 29, 2025
  • Uzhhorod National University Herald. Series: Law
  • Y.I Kryknitskyi

The article examines the impact of modern artificial intelligence technologies on the fundamental right of an individual to be presumed innocent until a court judgment becomes final and legally binding. The analysis establishes how automated facial recognition, algorithmic risk-assessment tools for recidivism, and predictive policing systems can introduce bias due to flawed data sampling, opaque algorithmic assumptions, and imperfect modeling methodologies. It finds that deploying these technologies without adequate oversight threatens to shift the burden of proof and to violate the “in dubio pro reo” principle enshrined in Article 6 of the European Convention on Human Rights. The study analyzes Directive (EU) 2016/343 and Regulation (EU) 2024/1689 (the AI Act), which establish minimum standards for criminal proceedings and set requirements for the transparency and accountability of algorithmic systems, and it reviews European Parliament resolutions and recommendations from Fair Trials and Amnesty International concerning defense access to source code and algorithmic audit results. Based on the identified risks, the article argues for the introduction of explainability mechanisms, the creation of independent AI audit bodies, and legislative restrictions on the autonomous use of high-risk technologies without human involvement. It also emphasizes the need for specialized training of judges, prosecutors, and defense attorneys in artificial intelligence and algorithmic fairness, as well as for guaranteeing the accused’s right to review expert assessments of the algorithmic tools used as evidence. It is established that the defense should have a statutory right to access the technical documentation of an algorithm (including descriptions of the sources of training data, validation methods, and the results of independent audits), while duly taking into account regimes for protecting trade secrets. 
The importance of providing mechanisms for confidential in-court review and of state-funded expert examinations where an individual cannot secure such review independently is analyzed. The advisability of imposing a procedural prohibition on the use of fully autonomous decisions in matters that directly restrict an individual’s liberty (for example, grounds for arrest or for extending a preventive measure) is substantiated – in such cases, the decision must be made by a human decision-maker who is required to consider the explanation provided by the algorithm and to record the reasons for accepting or rejecting its conclusions. It is recommended to introduce, at the national level, supervisory and certification procedures for high-risk algorithms, including periodic independent audits and public reports on their effectiveness and on any biases detected. In the context of transnational electronic evidence, emphasis is placed on the need to take into account the practices of the SIRIUS and TREIO projects when harmonizing rules on data access and cooperation with foreign jurisdictions, in order to prevent algorithmic evidence obtained abroad from evading proper scrutiny.

  • Research Article
  • 10.18189/isicu.2025.32.2.359
Prediction and Prevention of Crime Using Artificial Intelligence for Public Safety
  • Aug 31, 2025
  • The Legal Studies Institute of Chosun University
  • Jong Goo Kim

Today, artificial intelligence (AI) is rapidly emerging as a core technology for ensuring public safety, fundamentally reshaping the traditional concept of policing centered on the prediction and prevention of crime. In particular, AI-based predictive policing systems and intelligent CCTV are being actively implemented in countries such as South Korea and the United States, shifting the paradigm of law enforcement from reactive investigations to real-time surveillance and proactive intervention. In Korea, AI policing technologies are being increasingly applied in areas such as protection of stalking victims, recidivism prediction for individuals under electronic monitoring, and crowd density analysis. However, the legal and ethical frameworks to regulate these technologies remain insufficient. This paper examines the global trends of AI-based policing technologies alongside Korea’s policy and legislative developments, while focusing on the potential for human rights violations and discriminatory outcomes as AI autonomy expands under the banner of public safety. Issues such as algorithmic bias, lack of transparency, and the ambiguity of accountability pose serious challenges, particularly in the field of law enforcement. Without securing fairness in targeting and clarity in the attribution of responsibility, the use of predictive models may violate constitutional rights and procedural due process. Ultimately, it is essential to develop legal and policy safeguards in step with the rapid advancement of technology. When AI policing technologies are applied, efforts must be made to strike a balance between the public interest of crime prevention and private interests such as privacy and equality, avoiding the sacrifice of one value for another.
In designing policing strategies in the era of artificial intelligence, it is necessary to fully utilize the benefits of technology while ensuring that such systems operate within the boundaries of the rule of law and the protection of fundamental rights. Accordingly, this paper emphasizes that future-oriented policing models employing AI for public safety must be built on the principle of checks and balances, allowing AI to enhance social security as a tool for humans without infringing upon individual freedoms and rights.

  • Research Article
  • 10.53022/oarjet.2025.9.1.0070
AI-Driven Law Enforcement in Hybrid/Multi-Cloud Environments: Balancing Innovation, Privacy, and Equity
  • Jul 30, 2025
  • Open Access Research Journal of Engineering and Technology
  • Praneeth Kamalaksha Patil

The integration of artificial intelligence with hybrid/multi-cloud architectures presents a transformative framework for law enforcement agencies grappling with explosive growth in digital evidence. This article examines how these technologies enable agencies to manage vast quantities of data across distributed environments while maintaining security and compliance. The Edmonton Police Service case demonstrates tangible benefits through dramatically improved access times and significant cost reductions. Technical components including secure connectivity through VPN gateways, direct cloud connections, and federated learning methodologies allow agencies to collaborate without exposing sensitive information. Advanced implementations support predictive policing with privacy safeguards, real-time video analysis at network edges, and robust disaster recovery capabilities. The discussion addresses critical challenges including algorithmic bias, surveillance ethics, and digital divides between well-resourced urban departments and rural agencies. Experimental validation confirms substantial performance advantages in latency reduction, predictive accuracy, and cost efficiency compared to traditional infrastructures. Future directions point toward enhanced edge computing, augmented reality interfaces for officers, and broader social applications including preventative interventions and environmental protection, illustrating how these technologies can extend beyond enforcement to support community wellbeing when implemented with appropriate ethical frameworks.
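The federated-learning idea this abstract invokes (agencies collaborating without exposing raw records) can be illustrated with a minimal FedAvg-style aggregation round. The agency names, the data, and the use of a simple mean as the shared "model parameter" are all hypothetical simplifications, not the systems the paper describes.

```python
# Hypothetical federated round: each agency computes a local statistic
# (standing in for model weights) on data that never leaves its site.
agency_data = {
    "agency_a": [2.0, 4.0, 6.0],
    "agency_b": [1.0, 3.0],
    "agency_c": [5.0, 5.0, 5.0, 5.0],
}

def local_update(samples):
    """Return only the local aggregate and sample count, never raw records."""
    return sum(samples) / len(samples), len(samples)

updates = [local_update(s) for s in agency_data.values()]

# Server-side aggregation: a sample-count-weighted average of the local
# updates, the core step of FedAvg-style federated learning.
total = sum(n for _, n in updates)
global_model = sum(mean * n for mean, n in updates) / total
print(f"aggregated global parameter: {global_model}")
```

The privacy property rests on what crosses the network boundary: only `(mean, n)` pairs leave each agency, while the raw sample lists stay local. Real deployments add secure aggregation and differential privacy on top of this basic pattern.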

  • Research Article
  • 10.31995/jgv.2025.v16isi7.004
Artificial Intelligence: Unmasking the Potential Harms and Ethical Dilemmas of a Technological Revolution
  • Jul 25, 2025
  • Journal Global Value
  • Preeti Khanna

Artificial Intelligence (AI) is a fixture of modern society, reshaping industries and transforming human relations, government, and the economy. Although its use in medicine, finance, and automation presents limitless potential for the common good, AI also poses very real dangers that are generally overlooked in the pursuit of innovation. Critically, the AI harms examined in this paper include ethical harms, algorithmic bias, data privacy violations, job loss, lethal autonomous weapons, the spread of disinformation, and the existential hazard posed by Artificial General Intelligence (AGI). Multidisciplinary in nature, the research draws on real-world cases, including the Cambridge Analytica incident, algorithmic injustice in predictive policing, discriminatory hiring mechanisms, and the threat posed by lethal autonomous weapon systems. The research also briefly touches on governance loopholes, ethics, and recommendations for the ethical deployment of AI. Concluding with a call for global cooperation, stringent regulation, and ethics-based design guidelines, this research highlights the importance of addressing the darker sides of AI in aligning it with humanity’s greater good.

  • Research Article
  • 10.32996/jbms.2025.7.4.10
AI, Legal Pluralism, and Property Governance: Comparative Insights on Rulemaking and Enforcement from the U.S., U.K., Ukraine, China, and India
  • Jul 24, 2025
  • Journal of Business and Management Studies
  • Elena Korol + 1 more

Artificial intelligence (AI) systems are set to assume a growing number of tasks that have been the traditional domain of human rulemaking and rule-enforcing agencies. However, the world into which AI will be deployed is one of legal pluralism and hybrid property governance. Building on legal pluralist scholarship and on parallel developments in the US, UK, Ukraine, China, and India, this article provides an exploratory analysis of how AI systems may fit with plural property rulemaking and enforcement regimes that encompass both formal law and informal social norms. We also use Elinor Ostrom’s Institutional Analysis and Development (IAD) framework to analyze how AI systems may operate in a plural legal environment where local, community-specific, informal “rules-in-use” may depart from stated, formal “rules in books” to produce a system of hybrid property governance. AI has the potential to bring higher levels of efficiency and consistency to such administrative tasks. At the same time, we find that if AI systems are designed to ignore the social and legal pluralism in which they are embedded, they may well erode public trust, legitimacy, and justice in highly socially complex contexts that are too variable or local to be treated as standardized or to have rule-of-law principles uniformly imposed on them. We therefore argue that the operational design details of AI systems and their use in hybrid governance arrangements matter, that rule enforcement algorithms that are context-blind or context-oblivious are likely to have distributive impacts that increase conflict and injustice, and that the context matters because local governance arrangements do. 
Cases of socially contextualized AI property governance systems, from automated traffic cameras in India to predictive policing in the UK to mortgage fraud detection in the US, illustrate a tension between a desire to automate the standardized enforcement of rules using AI and people’s desire for relational social norms. The article presents a framework and some concrete design considerations to help guide the participatory design of AI in plural property governance contexts that surfaces, engages, and accounts for stakeholders, local norms, and legitimacy criteria. In so doing, we aim to contribute to and expand the normative and institutional AI governance literature as well as the literatures on legal pluralism and institutional design.

  • Research Article
  • 10.3389/fpos.2025.1642328
The Great Terror on steroids: exploring the counterfactual scenario of artificial intelligence-driven purges under Stalin
  • Jul 23, 2025
  • Frontiers in Political Science
  • Henrique Varajidás

For self-evident reasons of historical synchrony, most research probing the frontiers between totalitarianism studies and artificial intelligence studies to date has centered on mass surveillance in Xi Jinping’s China. The Great Terror on Steroids, an exercise in experimental Political Science grounded on a version of the historical-contextual analysis method adapted to support counterfactual reasoning, takes an entirely different approach. Namely, the article explores the counterfactual hypothesis of what difference it could have made if the perpetrators of a key part of the Stalinist Soviet Union’s Great Terror—specifically, the campaign targeting “Trotskyists” in the Party—had had at their disposal an artificial intelligence tool modeled after the cutting-edge technology utilized in predictive policing today. We start by reviewing totalitarianism and artificial intelligence studies, with a focus on their potential intersections. Next, we describe our method, including its promise and limitations. Then, we introduce the Great Terror as a case study. Subsequently, we delve into our research question in detail, process-tracing the origins, background, setup, dynamics, and results of the aforementioned campaign and deducing the advantages and drawbacks that the use of the predictive policing artificial intelligence tool would likely have brought to its design and implementation. We conclude that, on the “positive” side, the selection of targets would have been more neutral in the sense that literally everyone could become one for reasons that would have been almost entirely out of the arbitrary hands of the perpetrators and that the brutal interrogation sessions and inter-related snowballing effects would have been substantially minimized. 
On the other side, nonetheless, we reckon that enhanced neutrality would in no way have equated with enhanced rationality since, owing to its inherent defects, the tool would not have been able to rid the process of the dark shadow of entirely irrational detentions and escalatory paranoia. Finally, we come to conjecture that the Stalinist leadership would probably have preferred the historical version of the purge due to the key human mobilization functions that the artificial intelligence-boosted version would have precluded.

  • Research Article
  • 10.1177/17438721251351015
Automated Justice and the Performance of Law: Kafka’s The Trial in the Age of Algorithmic Governance
  • Jul 7, 2025
  • Law, Culture and the Humanities
  • Chippy Abraham

Franz Kafka’s The Trial (1925) presents a haunting vision of a legal system that operates autonomously, detached from human will or moral considerations. This article examines Kafka’s portrayal of law as a performative and self-sustaining process rather than a human-centered institution. Drawing on J. L. Austin’s speech act theory and Judith Butler’s concept of performativity, the paper explores how law in The Trial functions through ritualized actions that sustain its authority irrespective of substantive justice. Additionally, using Max Weber’s theory of bureaucracy and contemporary discussions on algorithmic governance, the paper argues that Kafka anticipates modern concerns about automated justice, AI-driven legal decision-making, and predictive policing. The dehumanization of Joseph K. in the novel mirrors contemporary legal realities where individuals become mere data points in bureaucratic and computational legal frameworks. Case studies include AI sentencing systems, automated visa refusals, and predictive policing, all of which reinforce Kafka’s critique of law as an impersonal, inescapable process. The paper further examines the paradox of legal authority in the digital age, where algorithms increasingly mediate justice, often without transparency or accountability. By linking Kafka’s critique of legal performativity to contemporary debates on machine learning in law, bureaucratic statelessness, and predictive surveillance, this paper highlights The Trial’s continued relevance in an era of algorithmic governance and non-human legal actors.

  • Research Article
  • 10.63163/jpehss.v3i3.513
The Role of AI in Criminal Justice: Predictive Policing, Bias, and Due Process
  • Jul 6, 2025
  • Physical Education, Health and Social Sciences
  • Muhammad Ahsan Iqbal Hashmi + 3 more

The use of Artificial Intelligence (AI) in criminal justice systems across the world is transforming established practices in law enforcement, judicial proceedings, and crime prevention. More precisely, predictive policing technologies are designed to make policing as efficient as possible by predicting crime patterns and identifying possible perpetrators with the use of machine learning algorithms. Nonetheless, the speed at which such tools are being adopted is causing significant concerns about their legal and ethical aspects, especially those arising from algorithmic bias, lack of transparency, and violations of due process of law. This paper analyzes the use of AI in contemporary criminal justice systems, its role in predictive analytics, its tendency to reproduce systemic bias, and its effects on fundamental rights. The paper critically re-assesses global best practices from an interdisciplinary perspective, with a particular focus on the evolving legal framework in Pakistan as shaped by academic discourse and policy formation. It ends by recommending practical steps to ensure that the implementation of AI in criminal justice strengthens, rather than diminishes, fairness, accountability, and legality.

  • Research Article
  • 10.62345/jads.2025.14.2.120
Mechanisms of Algorithmic Governmentality, State-Controlled Consciousness, and Systemic Conditioning: A Techno Authoritarian Analysis of Brave New World
  • Jul 5, 2025
  • Journal of Asian Development Studies
  • Fareeha Zaheer + 2 more

This study examines the mechanisms of algorithmic governmentality, state-controlled consciousness, and systemic conditioning in Aldous Huxley's Brave New World, exploring how governance in the novel operates through predictive control, technological intervention, and psychological manipulation. Drawing on the theoretical framework of algorithmic governmentality (Rouvroy & Berns, 2013), the research analyzes the techno-authoritarian structures that regulate human behavior, suppress individual consciousness, and engineer societal compliance through systemic conditioning. In the novel, genetic standardization, hypnopaedic indoctrination, pharmacological pacification, and data-driven surveillance function as tools of state-controlled consciousness, systematically shaping perception, emotions, and decision making. This study situates Brave New World within contemporary discussions on algorithmic control, bio-political regulation, and digital authoritarianism, demonstrating its prescient relevance to modern-day concerns surrounding AI-driven governance, predictive policing, and data capitalism. Ultimately, this research highlights Huxley's work as a cautionary critique of a future where systemic conditioning and algorithmic governance converge to redefine human autonomy, free will, and ideological conformity.

  • Research Article
  • 10.55092/let20250005
Ethical challenges and innovations in AI-driven predictive policing: the case of China
  • Jul 3, 2025
  • Law, Ethics & Technology
  • Zhenkang Li + 1 more


  • Research Article
  • 10.17323/2713-2749.2025.2.183.212
The Artificial Intelligence Influence on Structure of Power: Long-Term Transformation
  • Jul 2, 2025
  • Legal Issues in the Digital Age
  • Vladimir Nizov

Integration of artificial intelligence (AI) into public administration marks a pivotal shift in the structure of political power, transcending mere automation to catalyze a long-term transformation of governance itself. The author argues AI’s deployment disrupts the classical foundations of liberal democratic constitutionalism — particularly the separation of powers, parliamentary sovereignty, and representative democracy — by enabling the emergence of algorithmic authority (algocracy), where decision-making is centralized in opaque, technocratic systems. Drawing on political theory, comparative case studies, and interdisciplinary analysis, the researcher traces how AI reconfigures power dynamics through three interconnected processes: the erosion of transparency and accountability due to algorithmic opacity; the marginalization of legislative bodies as expertise and data-driven rationality dominate policymaking; and the ideological divergence in AI governance, reflecting competing visions of legitimacy and social order. The article highlights AI’s influence extends beyond technical efficiency, fundamentally altering the balance of interests among social groups and institutions. While algorithmic governance promises procedural fairness and optimized resource allocation, it risks entrenching epistocratic rule — where authority is concentrated in knowledge elites or autonomous systems — thereby undermining democratic participation. Empirical examples like AI-driven predictive policing and legislative drafting tools, illustrate how power consolidates in executive agencies and technocratic networks, bypassing traditional checks and balances. The study examines paradox of trust in AI systems: while citizens in authoritarian regimes exhibit high acceptance of algorithmic governance, democracies grapple with legitimacy crises as public oversight diminishes. 
The author contends that the “new structure of power” will hinge on reconciling AI’s transformative potential with safeguards for human dignity, pluralism, and constitutionalism. The article proposes a reimagined framework for governance — one that decentralizes authority along lines of thematic expertise rather than institutional branches, while embedding ethical accountability into algorithmic design. The long-term implications demand interdisciplinary collaboration, adaptive legal frameworks, and a redefinition of democratic legitimacy in an era where power is increasingly exercised by code rather than by humans.

  • Research Article
  • 10.56345/ijrdv12n1s110
Regulating Artificial Intelligence in Democratic Societies: Legal Challenges and Ethical Imperatives for Peace, Development, and Integration
  • Jun 25, 2025
  • Interdisciplinary Journal of Research and Development
  • Ina Lushka

Artificial Intelligence (AI) is reshaping democratic institutions, offering significant opportunities for innovation while also raising serious legal and ethical concerns. Its use in areas like surveillance, predictive policing, hiring, and healthcare challenges core democratic principles such as transparency, accountability, and the protection of fundamental rights. This paper examines how democratic societies can govern AI effectively, ensuring that its development aligns with civil liberties and human dignity. Existing legal frameworks, often outdated, struggle to address the complexities of AI, including issues of bias, discrimination, and the lack of human oversight in automated decision-making. While regulations like the EU’s General Data Protection Regulation (GDPR) provide some safeguards, they fall short in addressing the full scope of AI’s impact. The proposed EU AI Act represents progress toward a harmonized, risk-based approach but raises questions about enforcement and adaptability. Ethical governance must go beyond voluntary guidelines. Binding legal standards are needed to enforce principles such as fairness, explainability, and human-centric design. Furthermore, international cooperation is essential to prevent regulatory gaps and ensure consistent protections across borders. Participatory oversight is also vital. Public trust depends on involving a broad range of stakeholders—citizens, experts, developers, and civil society—in shaping AI policy. Legal systems must anticipate AI’s broader effects, such as job displacement and social inequality, through proactive measures like retraining programs and social protections. Ultimately, AI governance must safeguard democratic values. Transparent, accountable, and inclusive legal frameworks are essential to ensure that AI strengthens—rather than undermines—freedom, justice, and human dignity. Received: 20 April 2025 / Accepted: 17 June 2025 / Published: 25 June 2025

  • Research Article
  • 10.24144/2788-6018.2025.03.1.12
Artificial Intelligence and Human Rights: Challenges for the European Convention on Human Rights
  • Jun 24, 2025
  • Analytical and Comparative Jurisprudence
  • I M Bernaziuk

It is indicated that in the modern world, artificial intelligence technologies have ceased to be a theoretical concept and have become an integral part of everyday life. This significantly transforms social processes, state mechanisms, and interpersonal relationships. Artificial intelligence undoubtedly creates powerful opportunities for improving management activities, streamlining state services, and stimulating innovative development. At the same time, significant threats arise, including potential privacy violations, discriminatory practices, opacity of administrative decisions, excessive surveillance, and limited opportunities to appeal decisions made with the help of such technologies. The article highlights key issues arising from the growing impact of artificial intelligence technologies on human rights and identifies possible directions for improving the provisions of the European Convention on Human Rights in order to strengthen the mechanism for protecting human rights in new socio-legal conditions. The author examines the challenges arising from the use of artificial intelligence technologies in the context of observing and protecting human rights guaranteed by the European Convention on Human Rights. The article analyzes how artificial intelligence technologies interact with the rights to privacy and non-discrimination, the right to a fair trial, and freedom of expression. Special attention is paid to technologies such as facial recognition, predictive policing, algorithmic content moderation, and automated decision-making. Based on this analysis, the author summarizes the relevant practice of the European Court of Human Rights, which is gradually adapting to digital realities while preserving the fundamental principles of the Convention.
It is substantiated that existing legal mechanisms, including international ones, are not sufficiently effective in the face of new risks arising from the autonomy, opacity, and potential bias of artificial intelligence technologies. The need to combine technical standards with ethical requirements and legal obligations is emphasized. As a result, the article substantiates the need to harmonize the regulatory and legal framework, taking into account the dynamics of the development of artificial intelligence technologies and the priority of protecting human dignity and rights. To this end, the author develops specific proposals for amendments to individual articles of the European Convention on Human Rights aimed at improving legal mechanisms for protecting human rights in the context of the widespread use of artificial intelligence technologies.

  • Research Article
  • 10.59613/whvhd326
Legal Challenges in Regulating Artificial Intelligence Use in Criminal Justice Systems
  • Jun 21, 2025
  • The Journal of Academic Science
  • Iwannudin Iwannudin + 2 more

This study explores the legal complexities and regulatory challenges associated with the deployment of artificial intelligence (AI) within criminal justice systems, employing a qualitative approach grounded in literature review and library research methodology. As AI technologies are increasingly integrated into predictive policing, risk assessments, facial recognition, and sentencing recommendations, concerns have emerged regarding transparency, accountability, bias, and the protection of fundamental rights. These concerns are particularly acute in criminal justice, where decisions directly impact personal liberty and due process. Through a systematic review of scholarly literature, judicial opinions, legal commentaries, and policy documents from 2015 to 2024, this paper identifies critical legal gaps and normative inconsistencies in how jurisdictions govern AI-based decision-making tools. The analysis reveals that existing legal frameworks often lack the precision and adaptability to address algorithmic opacity, data discrimination, and the shifting locus of accountability from human actors to automated systems. The research also finds significant variation in national approaches, with some countries adopting strict ethical guidelines and regulatory oversight, while others remain largely unregulated. This study contributes to the academic and policy discourse by highlighting the urgent need for a coherent and rights-based legal framework to govern AI in criminal justice. It recommends multi-level governance strategies that include international standards, national legislation, and judicial safeguards to ensure fairness, transparency, and accountability. The paper emphasizes the importance of embedding ethical design principles and human oversight into AI technologies used in criminal justice settings.

  • Research Article
  • 10.63960/sijmds-2025-2262
From Data to Discrimination: Gender, Privacy, and the Politics of Digital Surveillance
  • Jun 16, 2025
  • Synergy: International Journal of Multidisciplinary Studies
  • Mahera Imam + 2 more

The era of datafication has brought about greater surveillance of gendered populations, reinforcing existing structural forms of inequality. This article takes a critical look at how surveillance capitalism and algorithmic governance turn privacy into a contested domain, with a disproportionate impact on women and excluded communities. Drawing on feminist theories and reports issued by Amnesty International (2022) and UN Women (2023), the study analyses how entrenched patriarchal and racial biases contribute to the growth of digital vulnerabilities through technologies such as facial recognition and predictive policing. The digital panopticon extends offline oppression into digital domains, exposing a greater number of people to it, while data colonialism severely restricts autonomy, particularly in the Global South. The absence of gender-sensitive digital regulations remains a significant concern, especially in light of the expansion of cyberstalking, doxxing, and AI-driven bias. Engaging with earlier feminist criticisms, the study argues that surveillance is a political act of control and proposes the adoption of intersectional digital rights frameworks. It seeks to reimagine privacy as a social and feminist concern in the digital age, working toward systemic reforms in law, technology, and policy.

  • Research Article
  • 10.18524/2411-2054.2025.58.331012
BETWEEN ALGORITHM AND JUSTICE: DEHUMANIZATION RISKS OF CRIMINAL PROCEDURE AUTOMATION
  • Jun 15, 2025
  • Constitutional State
  • T V Rodionova

The article explores the critical issue of the dehumanization of criminal proceedings in the context of increasing automation of procedural actions and the growing integration of artificial intelligence technologies into legal practice. This phenomenon is not merely technical or procedural; it signals a profound shift in the nature of legal decision-making and raises fundamental questions about the future of justice in a digital age. A central focus of the study is the examination of how algorithmic systems are currently being applied within the criminal justice process. The paper identifies the primary forms of such implementation, ranging from risk assessment tools to predictive policing algorithms, and evaluates the extent to which these systems influence or even replace human judgment. The article further delves into the transformation of human interaction within criminal proceedings under the influence of technological mediation. It highlights how the interpersonal nature of judicial processes is being altered, and how this transformation introduces new ethical dilemmas regarding agency, accountability, and the emotional dimension of justice. An important concern raised in the study is the risk of reducing complex human circumstances to mere statistical data sets. When procedural decisions are made primarily on the basis of algorithmic outputs, the empathic, contextual, and individualized aspects of justice may be eroded, potentially leading to decisions that are technically efficient but morally and socially deficient. The article outlines strategic directions for preserving the balance between technological efficiency and the humanistic values that underpin justice. It emphasizes the need for cautious and deliberate integration of automation, with safeguards to ensure that technological advancements do not compromise the dignity and rights of individuals involved in criminal proceedings. 
Ultimately, the study underscores the imperative of adopting a “human-centered” approach in the deployment of automated systems within the criminal process. It insists that such systems must be designed and implemented in a way that upholds each participant’s right to a fair trial and recognizes the irreducible uniqueness of every legal situation.

Copyright 2025 Cactus Communications. All rights reserved.
