Reflections on Population Studies in the Age of AI

Abstract

In his editorial, former editor of Comparative Population Studies (CPoS) Frans Willekens reflects on the use of artificial intelligence (AI) in population studies. Effective and responsible use of any tool requires a basic understanding of how it works, when it may be used, and when its use should be avoided. When this fundamental principle is observed, AI tools can enrich learning and research and help advance the frontiers of knowledge. Epistemic integrity and accountability remain essential; the advent of AI does not diminish that core value. Although generative AI is currently dominated by machine learning and relies on statistical inference to make predictions and generate content, rule-based AI, which dominated AI in the early days, is making a comeback. Students of population should critically engage with the expanding landscape of AI systems and resist the tendency towards technological monoculture. They should cultivate substantive collaborations with computer scientists to develop domain-specific AI systems that fully prepare population studies, with demography at its core, for the era of AI. * This article belongs to a series celebrating the journal's 50th anniversary.

Similar Papers
  • Research Article
  • Citations: 37
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction
Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings, challenges that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w

  • Discussion
  • Citations: 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer

  • Research Article
  • Citations: 42
  • 10.1016/j.fertnstert.2020.10.040
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
  • Nov 1, 2020
  • Fertility and Sterility
  • Carol Lynn Curchoe + 18 more

  • Research Article
  • Citations: 30
  • 10.1016/j.ejmp.2021.03.015
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
  • Mar 1, 2021
  • Physica Medica
  • Nico Buls + 4 more

  • Research Article
  • Citations: 94
  • 10.1016/j.isci.2020.101515
Who Gets Credit for AI-Generated Art?
  • Aug 29, 2020
  • iScience
  • Ziv Epstein + 3 more

The recent sale of an artificial intelligence (AI)-generated portrait for $432,000 at Christie's art auction has raised questions about how credit and responsibility should be allocated to individuals involved and how the anthropomorphic perception of the AI system contributed to the artwork's success. Here, we identify natural heterogeneity in the extent to which different people perceive AI as anthropomorphic. We find that differences in the perception of AI anthropomorphicity are associated with different allocations of responsibility to the AI system and credit to different stakeholders involved in art production. We then show that perceptions of AI anthropomorphicity can be manipulated by changing the language used to talk about AI—as a tool versus agent—with consequences for artists and AI practitioners. Our findings shed light on what is at stake when we anthropomorphize AI systems and offer an empirical lens to reason about how to allocate credit and responsibility to human stakeholders.

  • Research Article
  • Citations: 1
  • 10.25313/2520-2294-2022-11-8425
The Impact of Artificial Intelligence Technologies on the Efficiency of Business Operations
  • Jan 1, 2022
  • International scientific journal "Internauka". Series: "Economic Sciences"
  • Nataliіa Skopenko + 2 more

Current challenges have accelerated the implementation of modern business concepts. Digitalization is among the many practices for continuous improvement of business processes. Attention is focused on the benefits of digitalization for companies: improved process quality, reduced cycle times, faster order fulfilment, and hence greater customer loyalty. The concept of artificial intelligence is analysed and its three main types are identified: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence. Artificial narrow intelligence is focused on solving a narrowly defined, structured task; artificial general intelligence is aimed at solving any problem and can respond to different environments and situations; artificial superintelligence would surpass people in absolutely everything, including creative tasks, decision-making, and maintaining emotional relationships. The advantages of using artificial intelligence (accuracy in data processing, the ability to quickly analyse large amounts of information to support timely decision-making) are revealed. The main threats of artificial intelligence (the disappearance of jobs, mass unemployment, and the loss of human control over artificial intelligence) are also indicated. The most common artificial intelligence technologies in enterprises (data science, machine learning, robotization) are considered. The experience of business entities in implementing various artificial intelligence tools in operational activities, in the medical, legal, space, banking, and educational spheres, is presented. In the educational field, annual growth in artificial intelligence is expected to reach 45% by 2030. It is also highlighted that artificial intelligence contributes to business development and global economic activity.
The world's key players in the artificial intelligence market are considered; the top 10 global IT corporations are presented; and the growth of their key performance indicators after introducing artificial intelligence technologies into goods and services is investigated.

  • Research Article
  • Citations: 2
  • 10.21900/j.alise.2024.1710
The AI-empowered Researcher: Using AI-based Tools for Success in Ph.D. Programs
  • Oct 16, 2024
  • Proceedings of the ALISE Annual Conference
  • Vanessa Kitzie + 5 more

Generative artificial intelligence (AI) is changing the landscape of graduate education by providing personalized learning, automated feedback, intelligent research assistants, and automated content creation (George, 2023). AI tools can support doctoral students in text generation, language translation, responding to academic queries, and data collection and analysis, and can encourage self-directed learning and the development of thinking (Rasul et al., 2023; Zou & Huang, 2023). They can also be helpful for doctoral students working as teaching assistants and assist with everyday problems (Can et al., 2023; Parker et al., 2024). However, the rise of AI tools also raises concerns about academic integrity, over-reliance on AI, misinformation, and the potential biases embedded in algorithms (George, 2023; Rasul et al., 2023). Echoing the opportunities and challenges of AI applications in research and learning, the ALISE Doctoral Students SIG wants to encourage a discussion on how doctoral students can use AI tools to empower themselves in the Ph.D. journey. The panel invites a diverse group of doctoral students and candidates to share how AI tools can facilitate data collection and analysis and their critical understanding of AI systems. Manar Alsaid will talk about using AI and machine learning to detect complex misinformation on social media. The talk aims to enhance our understanding of misinformation and reduce its negative impacts. This presentation will provide valuable insights for research on misinformation and information literacy. Adam Eric Berkowitz will introduce the black-box tinkering method, which experimentally discerns how AI systems operate. The method enhances the transparency of AI systems, challenging the technocratic paradigm. With three examples, Berkowitz encourages attendees to learn what black-box tinkering is, how to identify cases using it, and potential opportunities to incorporate it in research.
Anisah Herdiyanti will share insights from a study comparing transcripts generated by Otter.ai and Zoom Meetings. The presentation will highlight both the benefits and challenges of AI-based notes and transcription software, including technical concerns and the convenience of automated result delivery. The audience will enhance their understanding of AI tools in qualitative data transcribing and the ethical considerations in the process. Rebecca Bryant Penrose will showcase the use of HeyGen, an AI-based video generator and translation tool, in an international interview project between students at California State University Bakersfield and a Ukrainian artist/author. The presentation will increase awareness of the potential use of AI-based video and help researchers overcome language barriers in data collection. The panel will last 90 minutes, including a 5-minute introduction and a 5-minute wrap-up. Each panelist will have 10 minutes to present their topics, followed by 5-minute Q&As. A 25-minute moderated roundtable discussion will follow the panelists’ presentations to explore the potential use of different AI tools in research, including ChatGPT and AI-powered article summarizers. The panel’s learning outcomes include (1) identifying challenges and opportunities to incorporate AI tools in research and study and (2) explaining how to interact with AI tools to improve efficiency in research. It also provides a platform for doctoral students to share their knowledge of how AI changes research approaches and to network with each other.

  • Research Article
  • Citations: 1
  • 10.1007/s44163-025-00316-7
AI and Confidentiality protection in International Commercial Arbitration: Analysis of the existing legal framework
  • May 30, 2025
  • Discover Artificial Intelligence
  • Mark-Silas A Malekela

The use of Generative Artificial Intelligence (AI) tools in international commercial arbitration reveals a complex intersection with the potential risk of confidential data breaches. Adopting a doctrinal research approach, this article analyses the legal and regulatory framework applicable to ensuring responsible and ethical uses of AI so as to protect confidentiality in international arbitration. This article argues that AI has ushered in a new age of efficiency and accuracy in international arbitration, but it also raises concerns about confidentiality, as third-party-owned AI tools and systems store large volumes of data together and are prone to confidential data breaches and confidentiality violations. The guidelines and principles on the use of AI in international arbitration, as well as emerging AI regulations and laws, take varied approaches that are either discretionary or merely play a guiding role in the protection of confidential information in international arbitration. Ultimately, this article recommends that upcoming versions of institutional arbitration rules enhance the confidentiality obligations in arbitration proceedings with a focus on the integration of AI tools. Alternatively, through confidentiality orders, arbitration participants must ensure that appropriate safeguards are in place so that confidentiality is a core consideration from the initial stages of deploying AI tools. Confidentiality by design could also be applied to generative AI used by law firms, arbitral tribunals, or institutions.

  • Research Article
  • Citations: 11
  • 10.1057/s41599-024-03968-5
Artificial intelligence may affect diversity: architecture and cultural context reflected through ChatGPT, Midjourney, and Google Maps
  • Jan 6, 2025
  • Humanities and Social Sciences Communications
  • Ingrid Campo-Ruiz

This study aims to understand how widely used Artificial Intelligence (AI) tools reflect the cultural context through the built environment. This research explores how outputs obtained with ChatGPT-4o, Midjourney’s bot on Discord and Google Maps represent the cultural context of Stockholm, Sweden. Cultural context is important because it shapes people’s identity, behaviour, and power dynamics. AI-generated recommendations and images of Stockholm’s cultural context were compared with real photographs, GIS demographic data and socio-economic information about the city. Results show how outputs written with ChatGPT-4o mostly listed museums and other venues popular among visitors, while Midjourney’s bot mostly represented cafes, streets, and furniture, reflecting a cultural context heavily shaped by buildings, consumption and commercial interests. Google Maps shows commercial sites while also enabling users to directly add information about places, like opinions, photographs and the main features of a business. These AI perspectives on cultural context can broaden the understanding of the urban environment and facilitate a deeper insight into the prevailing ideas behind the data that train these algorithms. Results suggest that the generative AI systems analysed convey a narrow view of the cultural context, prioritising buildings and a sense of cultural context that is curated, exhibited and commercialised. Generative AI tools could jeopardise cultural diversity by prioritising some ideas and places as “cultural”, exacerbating power relationships and even aggravating segregation. Consequently, public institutions should promote further discussion and research on AI tools, and help users combine AI tools with other forms of knowledge. The providers of AI systems should ensure more inclusivity in AI training data, facilitate users’ writing of prompts and disclose the limitations of their data sources. 
Despite the current potential reduction of diversity of the cultural context, AI providers have a unique opportunity to produce more nuanced outputs, which promote more societal diversity and equality.

  • Research Article
  • Citations: 10
  • 10.1111/ajo.13661
Artificial intelligence: Friend or foe?
  • Apr 1, 2023
  • Australian and New Zealand Journal of Obstetrics and Gynaecology
  • Anusch Yazdani + 2 more

Artificial intelligence (AI) is the simulation of human intelligence in machines that are programmed to think and learn like humans. AI has the potential to revolutionise the way that healthcare professionals diagnose, treat, and manage conditions affecting the female reproductive system. Machine learning (ML) is a subset of AI which deals with the development of algorithms and statistical models that enable computers to learn from and make predictions or decisions without being explicitly programmed to do so. Deep learning (DL) is a subfield of ML that utilises neural networks with multiple layers, known as deep neural networks (DNNs), to learn from data. DNNs are inspired by the structure and function of the human brain and are capable of automatically learning high-level features from raw data, such as images, audio and text. DL has been very successful in various applications such as image and speech recognition, natural language processing and computer vision. ML algorithms can be divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning algorithms are trained on a labelled dataset, where the desired output (label) is already known. Unsupervised learning algorithms are trained on an unlabelled dataset and are used to discover patterns or relationships in the data. Reinforcement learning algorithms are trained using a trial-and-error approach, where the agent receives a reward or penalty for its actions. The goal of reinforcement learning is to learn a policy that maximises the expected reward over time. AI and ML are increasingly being applied in the field of obstetrics and gynaecology, with the potential to improve diagnostic accuracy, patient outcomes, and efficiency of care. AI has been applied to the field of medicine for several decades. 
One of the earliest examples of AI in medicine was the development of MYCIN in the 1970s, a computer program that could diagnose bacterial infections and recommend appropriate antibiotic treatments. MYCIN was developed by a team at Stanford University led by Edward Shortliffe, and its success demonstrated the potential of AI in medical decision making. In the 1980s, AI-based expert systems such as DXplain, developed at Massachusetts General Hospital, were used to assist in the diagnosis of diseases. These early AI systems were based on rule-based systems and were limited in their capabilities. One of the earliest examples of AI was the development of computer-aided diagnostic systems for ultrasound images in the 1970s and 1980s. These systems were designed to assist radiologists in identifying fetal anomalies and other conditions. In recent years, there has been a renewed interest in the use of AI in obstetrics and gynaecology, driven by advances in ML and the availability of large amounts of data. One of the primary areas in which AI and ML are being used in obstetrics and gynaecology is in the analysis of imaging data, such as ultrasound and magnetic resonance imaging. AI algorithms can be trained to automatically identify and classify different structures in the images, such as the placenta or fetal organs, with high accuracy. Another area of focus is the use of AI to predict preterm birth. Researchers have used ML algorithms to analyse data from electronic health records and identify patterns that are associated with preterm birth. By analysing large datasets of patient information and outcomes, AI algorithms can identify patterns and risk factors that may not be apparent to human analysts. This can help to improve the prediction of obstetric outcomes and guide clinical decision making. In recent years, AI has also been applied in obstetrics and gynaecology for real-time monitoring of high-risk pregnancies and identifying fetal distress. 
These systems use ML algorithms to analyse data from fetal heart rate monitors and identify patterns that are associated with fetal distress. AI and ML are also being used to develop new tools for the management of gynaecological conditions, such as endometriosis and fibroids. These tools can be used to predict the progression of the disease and guide treatment decisions. One example of the use of AI in benign gynaecology is the development of computer-aided diagnostic systems for endometriosis. These systems use ML algorithms to analyse images of the pelvic region and identify the presence of endometrial tissue, which can be a sign of endometriosis. Another area where AI and ML are being applied is in the management of fibroids. ML algorithms are being used to analyse imaging data and predict the growth and behaviour of fibroids, which can aid in the development of personalised treatment plans. In the field of oncology, AI is being used to improve the accuracy and speed of cancer diagnosis. AI algorithms can analyse images of tissue samples to identify the presence of cancer cells and predict the likelihood of a positive outcome following treatment. AI algorithms can be trained to analyse images from pelvic scans and identify signs of ovarian cancer with high accuracy. In addition to these specific applications, AI and ML are also being used to improve the efficiency and organisation of care in obstetrics and gynaecology. For example, by analysing large amounts of clinical data, AI algorithms can be used to identify patients at high risk of complications, prioritise them for care and ensure that they receive the appropriate level of care in a timely manner. AI and ML have the potential to revolutionise the field of fertility and in vitro fertilisation (IVF). By using data from large patient populations, AI and ML algorithms can help identify patterns and predict outcomes that would be difficult for human experts to discern. 
This can lead to improvements in diagnosis, treatment planning, and overall success rates for patients undergoing IVF. One area where AI and ML are being applied is in the selection of embryos for transfer during IVF. By analysing images of embryos, AI and ML algorithms can predict which embryos are most likely to result in a successful pregnancy. Another area where AI and ML have shown potential is in the optimisation of culture conditions for embryos. This has the potential to improve the survival and development of embryos, leading to higher pregnancy rates. AI and ML are also being used to improve the timing of embryo transfer during IVF. By analysing data from patient medical histories, AI and ML algorithms can predict the optimal time for transfer to increase the chances of successful pregnancies. In addition to these applications, AI and ML are being used in other areas of fertility and IVF to improve patient outcomes. For example, AI and ML are being used to predict the likelihood of ovarian reserve, predict ovulation timing, and improve the efficiency and cost-effectiveness of fertility clinics. AI and ML are rapidly evolving fields that have the potential to revolutionise the field of surgery. These technologies can be used to assist surgeons in a variety of ways, from pre-operative planning to real-time guidance during procedures. One of the key areas where AI and ML are being applied in surgery is in image analysis. For example, algorithms can be used to automatically segment and identify structures in medical images, such as tumours or blood vessels. This can help surgeons plan procedures more accurately and reduce the risk of complications. Another area where AI and ML are being used in surgery is in the development of robotic systems. These systems can be programmed to perform specific tasks, such as suturing or cutting tissue, with a high degree of precision and accuracy. 
In addition, robotic systems can be equipped with sensors that provide real-time feedback to the surgeon, which can help to improve the outcome of the procedure. These systems can be programmed with advanced algorithms that allow them to make precise incisions, control bleeding, and minimise tissue damage. AI and ML can also be used to improve the efficiency and safety of surgical procedures. For example, algorithms can be trained to analyse data from vital signs monitors, such as heart rate and blood pressure, and alert surgeons to potential complications in real time. AI and ML are also being used to assist with post-operative care. For example, algorithms can be used to analyse patient data and predict which patients are at risk of complications, such as infection or bleeding, allowing surgeons to take preventative measures.

Overall, AI and ML have the potential to significantly improve the field of surgery by increasing accuracy and precision, reducing the risk of complications, and improving patient outcomes. As the technology continues to advance, it is likely that we will see an increasing number of AI-assisted surgical systems and applications in clinical practice.

In gynaecology specifically, data are scarce and lack diversity. This can lead to AI models that are not generalisable to certain populations or that make incorrect predictions for certain groups of patients. Overall, AI has the potential to improve the diagnosis and management of obstetric and gynaecological conditions, and many studies have shown that AI systems can perform at least as well as human experts in several areas. However, it is important to note that AI and ML are still in the early stages of development in obstetrics and gynaecology, and more research is needed to fully understand their potential benefits and limitations.
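The vital-signs alerting described above can be illustrated with a deliberately minimal sketch. The thresholds below are hypothetical placeholders, not clinical reference ranges, and a real ML-based monitor would learn patterns from monitor data rather than apply fixed bounds:

```python
# Minimal sketch of rule-based vital-signs alerting.
# NOTE: thresholds are illustrative placeholders, not clinical limits.

NORMAL_RANGES = {
    "heart_rate": (50, 120),   # beats per minute
    "systolic_bp": (90, 160),  # mmHg
}

def check_vitals(reading):
    """Return the names of any vital signs outside their assumed normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

print(check_vitals({"heart_rate": 135, "systolic_bp": 110}))  # → ['heart_rate']
```

An ML-driven system would replace the fixed ranges with a model trained on historical monitor traces, but the surrounding alerting loop would look much the same.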
Some of the key challenges facing the field include developing AI systems that can explain their decisions, improving the robustness of AI systems to adversarial attacks, and developing AI systems that can operate in a wide range of environments. However, it is important to note that AI is a complementary tool for the obstetrics and gynaecology specialist and is not meant to replace human expertise.

The preceding text is entirely the product of an AI system. The preceding review, 'Artificial Intelligence in Gynaecology: An Overview', was composed and written by a generative AI system, ChatGPT (Chat Generative Pre-trained Transformer). ChatGPT is an AI chatbot underpinned by the GPT architecture, an autoregressive language model that uses deep learning (DL) to produce human-like text. The system was trained on a dataset of over 500 GB of text data derived from books, articles, and websites prior to 2021. The system can engage in responsive dialogue, generate computer code, and produce coherent and fluent text.1 ChatGPT was conceived by OpenAI, an AI laboratory based in San Francisco, California, co-founded by Elon Musk and Sam Altman in 2015. Since its public release on November 30, 2022, the potential for use and misuse has grown exponentially,2 ultimately leading multiple organisations, including schools and universities, to prohibit the use of AI systems.

Prompted by this interest in AI, the aim of this study was to assess the capacity of ChatGPT to generate a scientific review. In January 2023, a multidisciplinary study group was assembled to develop the study protocol, confirm the methodology and approve the topic. This research was exempt from ethics review under National Health and Medical Research Council guidelines.3 ChatGPT was instructed to generate a narrative review based on dialogue with the lead author, AY. The input was informed by collaborative meetings of the study group over the study period.
The study group nominated the topic, 'Artificial Intelligence in Gynaecology', but ChatGPT generated the title, structure and content for this paper. The study group defined the input parameters for ChatGPT, and each AI output was reviewed by the authors for consistency and context, informing the next input. The dialogue thus became increasingly specific and refined with each iteration, as the initial general outline was expanded to include specific subheadings, academic language and academic references. The review was finalised from the ChatGPT output through an explicit composition protocol, limiting assembly to cut and paste, deletion to whole sentences (but not individual words) and conversion to Australian English. No grammatical or syntax correction was performed. The AI output was cross-referenced and verified by the study group.

In this study, ChatGPT generated 7112 words in over 15 iterations, including 32 references. The output was restricted to the final review of 1809 words and nine unique references after removing duplicates (4) and incorrect references (19). The final paper was submitted for blinded peer review.

Thus, this study has demonstrated the capacity of an AI system, such as ChatGPT, to generate a scientific review through human academic instruction. AI is anticipated to expand the boundaries of evidence-based medicine through its potential for comprehensive analysis and summation of scientific publications. However, unlike systematic reviews or meta-analyses governed by explicit methodology, AI systems such as ChatGPT are the product of DL algorithms that depend on the quality of the input used to train the AI. Consequently, unlike systematic reviews, AI systems are bound by the bias, breadth, depth and quality of the training material. A dedicated medical AI would therefore be trained on an appropriate data set, such as the National Library of Medicine Medline/PubMed database.
However, the volume of data is challenging: in 2022 alone, there were over 33 million citations, equating to almost 200 GB even for the minimum dataset. In contrast, ChatGPT has no external reference capabilities, such as access to the internet, search engines or any other sources of information outside of its own model. If forced outside of this framework, ChatGPT may generate plausible-sounding but incorrect or nonsensical responses.4 Most notably, pushing the AI to include references leads the system to generate bizarre fabrications.5 Our paper demonstrated that only 28% (9/32) of the references were authentic, although this is better than the 11% reported in a recent paper.6 In contrast to human writing, AI-generated content is more likely to be of limited depth, contain factual errors and fabricated references, and repeat the instructions used to seed the output.7 The latter results in a formulaic language redundancy that all but identifies AI content.

The human authors thus echo the conclusion of ChatGPT that AI is a complementary tool for the specialist and not meant to replace human expertise. For the moment.

The authors report no conflicts of interest.
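The reference counts reported in the study are internally consistent, as a quick tally confirms (32 references generated, minus 4 duplicates and 19 incorrect, leaves 9 authentic, i.e. 9/32 ≈ 28%):

```python
# Tally of the reference counts reported in the study.
generated = 32   # references produced by ChatGPT
duplicates = 4   # removed as duplicates
incorrect = 19   # removed as incorrect or fabricated

authentic = generated - duplicates - incorrect
print(authentic)                           # → 9
print(round(100 * authentic / generated))  # → 28 (per cent authentic)
```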

  • Research Article
  • 10.34190/ecie.19.1.2468
Exploring the potential of AI to increase productivity in small marketing teams
  • Sep 20, 2024
  • European Conference on Innovation and Entrepreneurship
  • Aniko Szenftner + 2 more

Marketing scientists as well as practitioners believe that artificial intelligence (AI) holds the promise of productivity gains for organizations. However, there has been little scientific research into these claims. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools. This effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Across the six marketing disciplines analyzed, SEO/content and design in particular demonstrated significant productivity gains, both with generative AI (GAI) tools the team had already subscribed to, such as ChatGPT 4 and Canva, and with new AI solutions. While an AI tool's level of integration showed only a weak positive productivity impact, future studies are suggested to investigate this variable further by comparing the effects of less advanced but more accessible tools, like generative AI, versus highly advanced but less accessible business AI. From navigating the vast and dynamic landscape of AI tools, the insights further emphasize the importance of sharing AI experience and making informed decisions, which implies knowing one's own user rights and staying updated on AI advancements. Zooming out from the process level, the work's literature review further highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities.
Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.

  • Research Article
  • Cited by 17
  • 10.1097/corr.0000000000001679
CORR Synthesis: When Should the Orthopaedic Surgeon Use Artificial Intelligence, Machine Learning, and Deep Learning?
  • Feb 17, 2021
  • Clinical orthopaedics and related research
  • Michael P Murphy + 1 more


  • Discussion
  • Cited by 69
  • 10.1016/s2589-7500(21)00076-5
Continual learning in medical devices: FDA's action plan and beyond
  • Apr 28, 2021
  • The Lancet Digital Health
  • Kerstin N Vokinger + 2 more


  • Research Article
  • Cited by 4
  • 10.1007/s43681-025-00721-9
Systematic literature review on bias mitigation in generative AI
  • Aug 25, 2025
  • AI and Ethics
  • Juveria Afreen + 2 more

In the era of rapid technological advancement, Artificial Intelligence (AI) is a transformative force, permeating diverse facets of society. However, bias concerns have gained prominence as AI systems become integral to decision-making processes. Bias can exert significant and extensive consequences, influencing individuals, groups, and society. The presence of bias in generative AI or machine learning systems can produce content that exhibits discriminating tendencies, perpetuates stereotypes, and contributes to inequalities. AI systems have the potential to be employed in various contexts that involve sensitive settings, where they are tasked with making significant judgements that can have profound impacts on individuals' lives. Consequently, it is important to establish measures that prevent these decisions from exhibiting discriminating tendencies against specific groups or populations. This exploration embarks on a comprehensive journey through the nuanced landscape of bias in AI, unravelling its intricate layers to discern different types, pinpoint underlying causes, and illuminate innovative mitigation strategies. Delving deeper, we investigate the roots of bias in AI, revealing a complex interplay of historical legacies, societal imbalances, and algorithmic intricacies. Unravelling the causes involves exploring unintentional reinforcement of existing biases, reliance on incomplete or biased training data, and the potential amplification of disparities when AI systems are deployed in diverse real-world scenarios. Significant advancements in Generative Artificial Intelligence (GAI) were evidenced across various domains, such as text, image, audio and video. The study considers multiple perspectives in which challenges arise and biases proliferate. Against this backdrop, the exploration transitions to a proactive stance, offering a glimpse into cutting-edge mitigation strategies. 
Diverse and inclusive datasets emerge as a cornerstone, ensuring representative input for AI models. Ethical considerations throughout the development lifecycle and ongoing monitoring mechanisms prove pivotal in mitigating biases that may arise during training or deployment. Technical and non-technical strategies come to the forefront in the pursuit of fairness and equity in AI. The paper underscores the importance of interdisciplinary collaboration, emphasising that a collective effort spanning developers, ethicists, policymakers, and end-users is paramount for effective bias mitigation. As AI continues its ascent into various spheres of our lives, understanding, acknowledging, and addressing bias becomes an imperative. This exploration seeks to contribute to the discourse, fostering a deeper comprehension of the challenges posed by bias in AI and inspiring a collective commitment to building equitable, trustworthy AI systems for the future.

  • Discussion
  • Cited by 6
  • 10.1016/j.ejmp.2021.05.008
Focus issue: Artificial intelligence in medical physics.
  • Mar 1, 2021
  • Physica Medica
  • F Zanca + 11 more

