Generative Artificial Intelligence Systems in the Fight Against Corruption: Potential, Threats and Prospects for Ukraine

Abstract

Corruption remains one of Ukraine's most pressing challenges, undermining the rule of law, hindering economic development, and eroding public trust in state institutions. In the contemporary digital transformation era, generative Artificial Intelligence (AI) systems present new opportunities for combating corruption through automated solutions for financial flow analysis, anomaly detection, and corruption risk assessment. However, deploying such technological systems raises significant legal, ethical, and technical concerns. This article analyses the potential and challenges of applying generative AI systems in Ukraine's anti-corruption policy. Through comparative analysis of international experience, the study identifies effective methods for implementing AI in Ukraine's law enforcement and governance practices, considering the country's legislative framework and political context. The research examines risks associated with AI implementation, including algorithmic manipulation, cybersecurity threats, data protection concerns, and ethical challenges. The authors propose recommendations for adapting AI technologies to Ukraine's anti-corruption efforts, including developing regulatory frameworks, introducing algorithmic accountability, implementing ethical AI standards, and strengthening international cooperation. The findings demonstrate that, with proper regulation and oversight, generative AI can enhance government transparency and reinforce the rule of law in anti-corruption efforts.
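The abstract's reference to automated anomaly detection in financial flows can be illustrated with a minimal sketch. This is an illustrative assumption, not the authors' method: the payment figures, the threshold, and the median/MAD (modified z-score) screen are hypothetical stand-ins for the kind of outlier flagging the article discusses.

```python
# Illustrative sketch only: a robust outlier screen for payment amounts,
# standing in for the kind of "anomaly detection" the abstract mentions.
# The data, threshold, and method choice are hypothetical, not from the article.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return (index, amount) pairs whose modified z-score exceeds threshold.

    Uses the median and median absolute deviation (MAD), which, unlike
    the mean/stdev, are not dragged toward the very outliers being hunted.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    return [(i, a) for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Mostly routine payments, plus one extreme value worth a closer look.
payments = [9_800, 10_200, 9_950, 10_050, 10_100, 9_900, 10_000, 250_000]
print(flag_anomalies(payments))  # → [(7, 250000)]
```

The median/MAD screen is chosen over a plain z-score because a single large outlier inflates the mean and standard deviation enough to mask itself; in a real anti-corruption pipeline this simple univariate check would be only one signal among many.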

Research Article: Mykhailo Dumchikov + 1 more, "Generative Artificial Intelligence Systems in the Fight Against Corruption: Potential, Threats and Prospects for Ukraine," International Journal of Criminology and Sociology, Apr 25, 2025. DOI: 10.6000/1929-4409.2025.14.10
