AI-driven aging digital twins: A roadmap for clinical translation in precision geriatrics.
- Research Article
41
- 10.1016/j.fertnstert.2020.10.040
- Nov 1, 2020
- Fertility and Sterility
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
- Research Article
- 10.63802/grhas.v1.i4.74
- Oct 3, 2025
- Global Review of Humanities, Arts, and Society
This study presents a comprehensive examination of the evolving role of artificial intelligence (AI) in social governance through an integrated bibliometric, theoretical, and case-based methodology. Utilizing CiteSpace-based keyword co-occurrence and burst analyses of data from 2014 to 2024, it identifies ten thematic clusters that illustrate a dual developmental trajectory: a vertical deepening of governance theories and technological paradigms, and a horizontal expansion into domains such as smart cities, environmental governance, and algorithmic administration. A critical review of international theoretical frameworks—including the Technology Acceptance Model, Diffusion of Innovation Theory, Socio-Technical Systems Theory, Algorithmic Governance Theory, and Digital Governance Theory—reveals both their analytical value and limitations in the context of Chinese governance. Comparative case studies, encompassing domestic initiatives such as “City Brain,” smart communities, smart courts, and digital villages, alongside international examples from Singapore, Estonia, and the European Union, highlight China's rapid policy-driven advancements as well as enduring challenges in algorithmic transparency, cross-departmental data integration, and public participation. The study identifies four key governance challenges: data security and privacy, algorithmic ethics and transparency, the digital divide, and coordination inefficiencies. Looking forward, it outlines future trajectories shaped by multimodal AI, generative AI, and digital twin technologies. Policy recommendations call for standardized data governance protocols, robust AI ethics frameworks, targeted digital literacy programs, and enhanced multi-stakeholder collaboration. The findings advance a practice-oriented, interdisciplinary research agenda for intelligent social governance and provide actionable insights for aligning technological innovation with institutional transformation in the AI era.
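The bibliometric step this abstract leans on is keyword co-occurrence counting. Below is a minimal sketch of that operation on a few invented keyword records (not data from the study); CiteSpace performs the same counting, plus burst detection, at corpus scale.

```python
# Minimal illustration of keyword co-occurrence counting, the core operation
# behind bibliometric cluster maps such as those produced by CiteSpace.
# The records below are invented placeholders, not data from the study.
from itertools import combinations
from collections import Counter

records = [
    {"artificial intelligence", "social governance", "smart city"},
    {"artificial intelligence", "algorithmic governance", "transparency"},
    {"smart city", "digital twin", "social governance"},
    {"artificial intelligence", "digital twin", "smart city"},
]

cooccurrence = Counter()
for keywords in records:
    # Count each unordered keyword pair once per record.
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

# Pairs with the highest counts seed the thematic clusters.
for pair, count in cooccurrence.most_common(5):
    print(pair, count)
```

Burst analysis then looks for terms or pairs whose counts rise sharply within a short time window, which is what distinguishes emerging themes from persistent ones.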
- Research Article
1
- 10.13107/jcorth.2023.v08i02.578
- Jan 1, 2023
- Journal of Clinical Orthopaedics
ORTHO AI: The Dawn Of A New Era: Artificial Intelligence In Orthopaedics
- Research Article
6
- 10.1016/j.compbiomed.2025.110178
- Jun 1, 2025
- Computers in biology and medicine
Advancing the frontier of artificial intelligence on emerging technologies to redefine cancer diagnosis and care.
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
16
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2 Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3 Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the emergence recently of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding. While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention. Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5 While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators - now covering more than one hundred languages - facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6 As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7 Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations, mathematical, technical, and philosophical - with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI - the connectionist learning approach and the notion of neural nets - and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much." Putnam questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?" He concluded:This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence. This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10 The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society.
I am grateful to the authors for accepting my invitation to write these essays. Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI. This has not been helped by the current trend - among researchers in science and the humanities, startups, established companies, and even governments - to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI. Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually human, can do. For example, artificial intelligence is:The human abilities invoked in such definitions include visual perception, speech recognition, the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later). The second type of definitions try to be free of human-centricity and define an intelligent agent or system, whatever its origin, makeup, or method, as:This type of definition also suggests the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own. These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects. Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us? Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it and the use of the term is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17 In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieve artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19 A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was most espoused by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers. The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems. Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition - exemplified by Descartes, Boole, Frege, Russell, and Church, among others - that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others.
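To make the connectionist idea concrete, here is a minimal sketch of a single perceptron trained with a Rosenblatt-style update rule on a toy AND-gate dataset; the data, learning rate, and epoch count are illustrative choices, not anything taken from the essay.

```python
# A minimal perceptron in the spirit of Rosenblatt's learning rule:
# weights are nudged whenever the unit misclassifies an example.
# The toy AND-gate data below is illustrative only.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # target: logical AND

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred
        w += lr * error * xi   # adjust connection weights
        b += lr * error        # adjust threshold (bias)

print(w, b)                                        # learned parameters
print([(1 if xi @ w + b > 0 else 0) for xi in X])  # reproduces AND
```

A single unit like this can only separate classes with a straight line, which is exactly the limitation that kept the approach marginal until multi-layer networks and back-propagation revived it.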
The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach. Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come. The success of the machine learning approach has benefited from the boom in the availability of data to train the algorithms thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses. Several authors delve into progress in key subfields of AI.21 In their essay, "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets such as ImageNet that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models comprising hundreds of billions of parameters and that use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity." Some have started to refer to these large language models as foundational models in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24 In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations in the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems.
In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior. Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although so far more in one direction than the other.
- Research Article
- 10.37022/jis.v8i2.107
- Jul 14, 2025
- Journal of Integral Sciences
The fusion of Artificial Intelligence (AI) with modern drug delivery systems marks a pivotal shift in the way therapeutics are designed, administered, and monitored. Traditional drug delivery platforms have long struggled with issues like off-target effects, variable bioavailability, and poor patient adherence. Smart drug delivery systems aim to overcome these limitations by responding to internal or external physiological stimuli—offering precise, targeted, and often self-regulated release of medications. When integrated with AI, these systems gain further intelligence: enabling real-time decision-making, predicting release kinetics, optimizing formulations, and personalizing dosing strategies. This review explores the evolving landscape of AI-assisted smart drug delivery systems, highlighting how machine learning, deep learning, and predictive analytics are redefining the design and deployment of nanocarriers, wearable devices, and hybrid platforms. Special focus is given to AI’s role in material selection, pharmacogenomics, patient stratification, and theranostics. We also address critical challenges related to data privacy, regulatory ambiguity, algorithmic transparency, and ethical accountability. Moreover, emerging opportunities such as digital twins, closed-loop systems, and open-source AI platforms are discussed for their transformative potential. Together, AI and smart delivery platforms offer a promising vision of personalized, adaptive, and data-driven healthcare. As innovation continues to bridge computation with clinical application, the next generation of therapeutics may be as intelligent as they are effective—heralding a future where precision medicine is not just ideal, but inevitable.
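As a rough illustration of the "predicting release kinetics" idea mentioned above, the sketch below fits a surrogate model to synthetic formulation data; the features (particle size, polymer ratio, pH) and the release relationship are hypothetical stand-ins, not results from the review.

```python
# Hedged sketch: learning a release-kinetics surrogate from synthetic data.
# Feature names (particle size, polymer ratio, pH) are hypothetical stand-ins
# for the formulation descriptors such a system might use.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
particle_size = rng.uniform(50, 500, n)   # nm
polymer_ratio = rng.uniform(0.1, 0.9, n)
ph = rng.uniform(5.0, 7.4, n)

# Synthetic "ground truth": smaller particles and less polymer release faster.
release_24h = (1.0 / (1.0 + np.exp(0.01 * (particle_size - 250)))
               * (1.1 - polymer_ratio)
               + 0.05 * (ph - 6.0)
               + rng.normal(0, 0.02, n))

X = np.column_stack([particle_size, polymer_ratio, ph])
X_train, X_test, y_train, y_test = train_test_split(X, release_24h, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out formulations:", round(model.score(X_test, y_test), 3))
```

In a real pipeline the surrogate would be trained on measured dissolution profiles and then queried during formulation optimization rather than on simulated values like these.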
- Research Article
- 10.37391/ijbmr.130201
- Jun 30, 2025
- International Journal of Business and Management Research
Employee attrition is a critical challenge for organizations, impacting productivity, operational costs, and workforce stability. Traditional approaches to managing attrition rely on reactive strategies, often failing to provide predictive insights. The advent of Artificial Intelligence (AI) has transformed attrition management by enabling data-driven decision-making, predictive analytics, and proactive employee engagement. This research explores the role of AI in predicting and managing employee attrition through machine learning algorithms, natural language processing (NLP), and AI-driven sentiment analysis. AI models analyze vast datasets, including employee performance metrics, engagement surveys, and organizational culture indicators, to identify early warning signs of attrition. Predictive analytics empowers HR professionals to implement targeted retention strategies, enhance employee experience, and reduce voluntary turnover. Furthermore, AI-driven chatbots and virtual HR assistants contribute to employee satisfaction by offering personalized career development suggestions, real-time feedback, and mental well-being support. Explainable AI (XAI) frameworks ensure transparency in AI-driven decisions, fostering trust between employees and organizations. Despite AI’s potential, ethical concerns, data privacy, and algorithmic biases remain key challenges that require robust governance frameworks. This study provides a comprehensive analysis of AI applications in attrition management, highlighting case studies from multinational corporations that have successfully integrated AI for workforce retention. The findings underscore AI's transformative potential in HRM, enabling organizations to shift from reactive to proactive attrition management strategies. The paper concludes with future research directions on AI’s evolving role in predictive HR analytics and its integration with emerging technologies like blockchain and the metaverse for enhanced workforce planning.
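To ground the prediction step described above, here is a minimal sketch of an attrition classifier trained on synthetic HR records; the feature names and effect sizes are hypothetical and not drawn from the paper.

```python
# Hedged sketch of attrition-risk prediction on synthetic HR data.
# Feature names (tenure, engagement score, overtime) are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
tenure_years = rng.uniform(0, 15, n)
engagement_score = rng.uniform(1, 5, n)
overtime_hours = rng.uniform(0, 20, n)

# Synthetic label: low engagement and high overtime raise attrition risk.
logit = 1.5 - 0.8 * engagement_score + 0.12 * overtime_hours - 0.05 * tenure_years
attrition = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([tenure_years, engagement_score, overtime_hours])
X_tr, X_te, y_tr, y_te = train_test_split(X, attrition, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
risk = clf.predict_proba(X_te)[:, 1]          # per-employee attrition risk
print("AUC:", round(roc_auc_score(y_te, risk), 3))
```

The per-employee risk scores, not the binary predictions, are what an HR team would rank and act on with targeted retention measures.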
- Research Article
34
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI bring that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). 
Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of the generative AI are still emerging.
In May 2023, the Writers Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
- 10.55041/ijsrem27483
- Feb 6, 2025
- INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
The financial technology (FinTech) sector has experienced rapid growth, and with it, the increasing complexity and volume of financial transactions. Fraud detection and risk management are critical challenges for financial institutions, as cyber threats continue to evolve. The integration of Artificial Intelligence (AI) into financial systems offers promising solutions for automating these processes, improving their accuracy and efficiency. This paper explores the potential of AI-driven automation in transforming fraud detection and risk management practices within FinTech. The research examines existing literature, highlights key developments in AI technology, and evaluates the effectiveness of AI models in detecting fraudulent activities and managing financial risk. The study presents a comparative analysis of traditional versus AI-based fraud detection methods, providing evidence of the potential benefits and challenges of AI integration. The findings suggest that AI can significantly enhance fraud detection accuracy, reduce response times, and help institutions manage financial risks proactively. However, issues related to data privacy, algorithmic transparency, and regulatory compliance present challenges that require further exploration. The paper concludes by recommending future research directions and emphasizing the importance of a collaborative approach between AI developers, financial institutions, and regulatory bodies to address these challenges. Keywords: Artificial Intelligence (AI), Fraud Detection, Risk Management, Financial Technology (FinTech), Machine Learning (ML), Deep Learning (DL), Predictive Analytics, Automated Systems, Regulatory Compliance, Explainable AI (XAI), Operational Efficiency, Cybersecurity in Finance, Real-Time Data Analysis, Data Privacy, Financial Risk Mitigation, AI Transparency, Legacy Systems Integration, Fraud Prevention Strategies
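One way to picture the traditional-versus-AI comparison the paper describes is to contrast a fixed-amount rule with an unsupervised anomaly detector; the sketch below does this on synthetic transactions, with all amounts, rates, and thresholds invented for illustration.

```python
# Hedged sketch contrasting a fixed-threshold rule with an unsupervised
# anomaly detector on synthetic transactions; all numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal = np.column_stack([rng.lognormal(3.5, 0.6, 2000),   # amount
                          rng.uniform(0, 24, 2000)])       # hour of day
fraud = np.column_stack([rng.lognormal(5.5, 0.4, 40),      # larger amounts
                         rng.uniform(0, 5, 40)])           # late-night hours
X = np.vstack([normal, fraud])
labels = np.array([0] * 2000 + [1] * 40)

# Traditional rule: flag any transaction above a fixed amount.
rule_flags = X[:, 0] > 400

# AI-based: IsolationForest scores transactions by how isolated they are.
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
ai_flags = model.predict(X) == -1

for name, flags in [("rule", rule_flags), ("isolation forest", ai_flags)]:
    caught = (flags & (labels == 1)).sum()
    false_alarms = (flags & (labels == 0)).sum()
    print(f"{name}: caught {caught}/40 frauds, {false_alarms} false alarms")
```

The point of the contrast is that the learned detector uses the joint pattern of features (amount and time together) rather than a single hand-set threshold, which is where the accuracy gains described in the abstract typically come from.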
- Supplementary Content
3
- 10.1159/000546303
- May 9, 2025
- Acta Haematologica
Background: Artificial intelligence (AI) is reshaping healthcare, with its applications in transfusion medicine (TM) showing great promise to address longstanding challenges. Summary: This review explores the integration of AI-driven tools, including machine learning, deep learning, natural language processing, and predictive analytics, across various domains of TM. From enhancing donor management and optimizing blood product quality to predicting transfusion needs and assessing bleeding risks, AI has demonstrated its potential to improve operational efficiency, patient safety, and resource allocation. Additionally, AI-powered systems enable more accurate blood antigen phenotyping, automate hemovigilance workflows, and streamline inventory management through advanced forecasting models. While these advancements are largely exploratory, early studies highlight the growing importance of AI in improving patient outcomes and advancing precision medicine. However, challenges such as variability in clinical workflows, algorithmic transparency, equitable access, and ethical concerns around data privacy and bias must be addressed to ensure responsible integration. Key Messages: (i) AI-driven tools are being applied across multiple domains of TM. (ii) Early studies demonstrate the potential for AI to improve efficiency, safety, and personalization. (iii) Key implementation challenges include data privacy, workflow integration, and equitable access.
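As a toy illustration of the forecasting models mentioned for inventory management, the sketch below applies simple exponential smoothing to synthetic daily red-cell usage; the numbers and the service-level rule are assumptions, not figures from the review.

```python
# Hedged sketch: next-day demand forecast for blood-product inventory,
# using simple exponential smoothing on synthetic daily usage counts.
import numpy as np

rng = np.random.default_rng(1)
# 120 days of synthetic usage with a weekly cycle and noise.
daily_usage = 40 + 5 * np.sin(np.arange(120) * 2 * np.pi / 7) + rng.normal(0, 3, 120)

alpha = 0.3                      # smoothing factor
forecast = daily_usage[0]
for observed in daily_usage[1:]:
    forecast = alpha * observed + (1 - alpha) * forecast

safety_stock = 1.65 * daily_usage.std()   # ~95% service level under a normal assumption
print("Next-day forecast (units):", round(float(forecast), 1))
print("Suggested stock level:", round(float(forecast + safety_stock), 1))
```

Production systems would use richer models (seasonality, surgical schedules, expiry constraints), but the forecast-plus-safety-stock structure is the same.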
- Research Article
22
- 10.51594/farj.v6i4.1036
- Apr 17, 2024
- Finance & Accounting Research Journal
The rapid evolution of financial technology (fintech) platforms has exponentially increased the volume and sophistication of financial transactions, concurrently elevating the risk and complexity of fraudulent activities. This necessitates a paradigm shift in fraud detection methodologies towards more agile, accurate, and predictive solutions. This paper presents a comprehensive study on the transformative potential of advanced Artificial Intelligence (AI) algorithms in enhancing fintech fraud detection mechanisms. By leveraging cutting-edge AI techniques including deep learning, machine learning, and natural language processing, this research aims to develop a robust fraud detection framework capable of identifying, analyzing, and preventing fraudulent transactions in real-time.
 Our methodology encompasses the deployment of several AI algorithms on extensive datasets comprising genuine and fraudulent financial transactions. Through a comparative analysis, we identify the most effective algorithms in terms of accuracy, efficiency, and scalability. Key findings reveal that deep learning models, particularly those employing neural networks, outperform traditional machine learning models in detecting complex and nuanced fraudulent activities. Furthermore, the integration of natural language processing enables the extraction and analysis of unstructured data, significantly enhancing the detection capabilities.
 In conclusion, this paper underscores the critical role of advanced AI algorithms in revolutionizing fintech fraud detection. It highlights the superior performance of AI-based models over conventional methods, offering fintech platforms a more dynamic and predictive approach to fraud prevention. This research not only contributes to the academic discourse on financial security but also provides practical insights for fintech companies striving to safeguard their operations against fraud.
 Keywords: Artificial Intelligence, Fintech, Fraud Detection, Ethical Ai, Regulatory Compliance, Data Privacy, Algorithmic Bias, Predictive Analytics, Blockchain Technology, Quantum Computing, Interdisciplinary Collaboration, Innovation, Transparency, Accountability, Continuous Learning, Ethical Principles, Real-Time Processing, Financial Sector.
- Book Chapter
- 10.55938/wlp.v1i1.93
- Oct 28, 2024
The COVID-19 pandemic has accelerated the adoption of artificial intelligence (AI)-assisted healthcare and wearable technologies; however, ethical and regulatory issues remain prominent. Data privacy and algorithm transparency are crucial, and governance frameworks should be developed for the implementation of AI in the healthcare sector. This paper presents a comprehensive examination of the AI environment in diagnostics, highlighting its potential to improve diagnoses and assist healthcare delivery, as well as the challenges that must be worked out before it can be effectively implemented. AI contributes insights into mitigation, treatment, and patient satisfaction at various stages of medication, monitoring, and nursing. Advanced hospitals are incorporating AI technology to optimize precision and cost-effectiveness. Robotics supports people with disabilities, and predictive analytics and healthcare management tools participate in medical decision-making. Network connectivity facilitates cost-effective worldwide healthcare access. The rapid advancement of machine learning algorithms, particularly deep learning, has had an enormous impact on the healthcare sector, primarily due to an increase in digital data and processing capability made possible by advances in hardware technology. AI is increasingly utilized in healthcare to perform high-accuracy tasks. This study explores the machine learning algorithms and methodologies used in healthcare decision making. Given the enormous processing capacity offered by modern technology, neural network-based deep learning approaches have proven advantageous for computational biology and are frequently employed owing to their outstanding predictive accuracy and dependability. The study highlights how healthcare decision making grounded in computational biology and biomedicine relies on machine learning algorithms, which makes them crucial for AI applications.
- Research Article
- 10.63501/jj9ksr56
- Jun 11, 2025
- INNOVAPATH
Artificial Intelligence (AI), Artificial General Intelligence (AGI), and other emerging technologies are significantly reshaping modern healthcare systems. Their integration across clinical, operational, and public health settings has already produced measurable improvements in diagnostic accuracy, treatment personalization, operational efficiency, and epidemic response. These technologies leverage vast amounts of data, advanced algorithms, and computational power to augment clinical decision-making, optimize workflows, and expand access to care. This manuscript explores the real-world applications of these technologies, drawing on recent literature and case studies to illustrate both their potential and limitations. Specific examples include AI-driven diagnostic imaging, predictive analytics for hospital management, and AI-based models for pandemic surveillance. It also addresses the growing use of AI in personalized medicine and the increasing incorporation of robotics, deep learning, natural language processing, edge computing, quantum computing, health information and learning technologies (HILT), digital twin systems, and neural networks in everyday clinical practice (Topol, 2019; Rajkomar et al., 2019; Esteva et al., 2017). The findings indicate that while AI and related innovations hold promise for revolutionizing care delivery, challenges related to algorithmic bias, data privacy, ethical governance, and regulatory oversight remain critical considerations. The disparity in access to these tools, particularly in low-resource settings, underscores the need for inclusive and equitable frameworks. A multi-stakeholder, ethical, and interdisciplinary approach is required to ensure these tools fulfill their transformative potential while safeguarding patient rights and promoting equitable healthcare outcomes worldwide. As the healthcare landscape evolves, the thoughtful integration of AI, AGI, and complementary technologies will be pivotal in achieving scalable, efficient, and patient-centered care delivery.
- Research Article
- 10.54660/.ijfmr.2023.4.2.27-33
- Jan 1, 2023
- Journal of Frontiers in Multidisciplinary Research
The pharmaceutical industry faces unprecedented challenges including rising development costs, high clinical trial failure rates, and increasing pressure to deliver faster, safer, and more effective therapeutics. In response, the integration of generative artificial intelligence (AI) and big data analytics has emerged as a transformative approach to drug discovery. Generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and transformer-based architectures are revolutionizing the early phases of drug development by enabling de novo molecule generation, protein structure prediction, and optimization of pharmacokinetic properties. Meanwhile, predictive analytics powered by machine learning (ML) and deep learning (DL) techniques are enhancing compound screening, target identification, and clinical trial simulation. This review article explores the convergence of generative AI and big data in pharmaceutical research, detailing their synergistic role in expediting drug discovery pipelines. It provides a comprehensive overview of current methodologies, discusses case studies of AI-driven discoveries, and evaluates the technological infrastructure required to operationalize these advancements. The paper also addresses challenges such as data privacy, model explainability, and validation, while highlighting future trends including quantum AI, multimodal learning, and AI-driven personalized medicine. Ultimately, this review demonstrates how generative AI, when fused with robust data ecosystems, holds the potential to radically transform pharmaceutical innovation.
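To make the generative-model idea concrete, here is a deliberately minimal variational autoencoder (VAE) sketch in PyTorch: it learns a latent space from placeholder "descriptor" vectors and then samples that space to propose new candidates. Real de novo design systems operate on molecular representations such as SMILES strings or graphs and are far larger; everything here (dimensions, data, loss weighting) is an assumption made purely for illustration.

```python
# Hedged sketch of the VAE idea behind de novo generation: encode known
# compounds into a latent space, then sample that space to propose new ones.
# The 16-dimensional "descriptor" vectors are random placeholders, not real
# molecular data, and the architecture is deliberately minimal.
import torch
from torch import nn

class TinyVAE(nn.Module):
    def __init__(self, d_in=16, d_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, d_latent)
        self.to_logvar = nn.Linear(32, d_latent)
        self.decoder = nn.Sequential(nn.Linear(d_latent, 32), nn.ReLU(),
                                     nn.Linear(32, d_in))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(z), mu, logvar

model = TinyVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 16)          # placeholder "molecular descriptors"

for step in range(200):
    recon, mu, logvar = model(data)
    recon_loss = ((recon - data) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + 0.1 * kl     # weighted ELBO terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "Generation": decode samples drawn from the latent prior.
with torch.no_grad():
    candidates = model.decoder(torch.randn(5, 4))
print(candidates.shape)              # 5 proposed descriptor vectors
```

The same encode-sample-decode loop underlies the GAN and transformer variants the review discusses; what changes is the molecular representation and the objective used to shape the latent space.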