Integrating Artificial and Human Intelligence: A Partnership for Responsible Innovation in Biomedical Engineering and Medicine.
Historically, the term "artificial intelligence" dates to 1956, when it was first used at a conference at Dartmouth College in the US. Since then, the development of artificial intelligence has in part been shaped by the field of neuroscience. By understanding the human brain, scientists have attempted to build new intelligent machines capable of performing complex tasks akin to humans. Indeed, future research into artificial intelligence will continue to benefit from the study of the human brain. While the development of artificial intelligence algorithms has been fast paced, the actual use of most artificial intelligence (AI) algorithms in biomedical engineering and clinical practice remains markedly below its conceivable broader potential. This is partly because, for any algorithm to be incorporated into existing workflows, it has to stand the test of scientific validation, clinical and personal utility, and application context, and be equitable as well. In this context, there is much to be gained by combining AI and human intelligence (HI). Harnessing Big Data, computing power, and storage capacities, and addressing societal issues emergent from algorithm applications, demand deploying HI in tandem with AI. Very few countries, even economically developed states, have adequate and critical governance frames to best understand and steer AI innovation trajectories in health care. Drug discovery and translational pharmaceutical research stand to gain from AI technology, provided they are also informed by HI. In this expert review, we analyze the ways in which AI applications are likely to traverse the continuum of life from birth to death, encompassing not only humans but also all animal, plant, and other living organisms that are increasingly touched by AI.
Examples of AI applications include digital health, diagnosis of diseases in newborns, remote monitoring of health by smart devices, real-time Big Data analytics for prompt diagnosis of heart attacks, and facial analysis software with consequences on civil liberties. While we underscore the need for integration of AI and HI, we note that AI technology does not have to replace medical specialists or scientists and rather, is in need of such expert HI. Altogether, AI and HI offer synergy for responsible innovation and veritable prospects for improving health care from prevention to diagnosis to therapeutics while unintended consequences of automation emergent from AI and algorithms should be borne in mind on scientific cultures, work force, and society at large.
- Front Matter
- 10.1088/1742-6596/2078/1/011001
- Nov 1, 2021
- Journal of Physics: Conference Series
We are glad to announce that the 2021 3rd International Conference on Artificial Intelligence Technologies and Applications (ICAITA 2021) was successfully held on September 10-12, 2021. In light of worldwide travel restrictions and the impact of COVID-19, ICAITA 2021 was carried out as a virtual conference to avoid in-person gatherings. Because most participants remained highly enthusiastic about the conference, we chose to hold ICAITA 2021 on an online platform according to the original schedule rather than postponing it. ICAITA 2021 brings together innovative academics and industrial experts in the field of Artificial Intelligence Technologies and Applications in a common forum. The primary goal of the conference is to promote research and development activities in Artificial Intelligence Technologies and Applications; another goal is to promote the exchange of scientific information between researchers, developers, engineers, students, and practitioners working all around the world. The conference will be held every year, making it an ideal platform for people to share views and experiences in Artificial Intelligence Technologies and Applications and related areas. This scientific event brought together more than 100 national and international researchers in artificial intelligence technologies and applications. The conference was divided into three sessions: oral presentations, keynote speeches, and online Q&A discussion. In the first part, scholars whose submissions were selected as excellent papers were each given about 5-10 minutes for their oral presentations. In the second part, keynote speakers were each allocated 30-45 minutes for their speeches. We were pleased to invite three distinguished experts to present their insightful speeches. Our first keynote speaker was Prof. Yau Kok Lim, from Sunway University, Malaysia.
His research interests include applied artificial intelligence, 5G networks, cognitive radio networks, routing and clustering, trust and reputation, and intelligent transportation systems. Next was Prof. Peter Sincak, from the Technical University of Kosice, Slovakia, whose research includes Artificial Intelligence and Intelligent Systems. Lastly, we were glad to invite Chinthaka Premachandra, from the Shibaura Institute of Technology, Japan, whose research interests include artificial intelligence, image processing, and robotics. In the last part of the conference, all participants were invited to join a WeChat group to discuss and explore academic issues after the presentations. The online discussion lasted about 30-60 minutes. The first two parts were conducted via the online collaboration tool Zoom, while the online discussion was carried out through the instant communication tool WeChat. The online platform enabled all participants to join this grand academic event from their own homes. We are glad to share that we still received many submissions during this special period. We selected a set of high-quality papers and compiled them into the proceedings after rigorously reviewing them. These papers feature, but are not limited to, the following topics: Artificial Intelligence Applications & Technologies, Computing and the Mind, Foundations of Artificial Intelligence, and other related topics. All the papers went through a rigorous review process to meet the requirements of international publication standards. Lastly, we would like to express our sincere gratitude to the Chairman, the distinguished keynote speakers, and all the participants. We also want to thank the publisher for publishing the proceedings. May readers gain some valuable knowledge from the proceedings.
We look forward to more experts and scholars from all over the world joining this international event next year. The Committee of ICAITA 2021. Lists of the General Conference Chair, Technical Program Committee Chair, Academic Committee Chair, Technical Program Committee Members, and Academic Committee Members are available in this PDF.
- Discussion
8
- 10.1016/j.ejmp.2021.05.008
- Mar 1, 2021
- Physica Medica
Focus issue: Artificial intelligence in medical physics.
- Research Article
4
- 10.56315/pscf12-21peckham
- Dec 1, 2021
- Perspectives on Science and Christian Faith
Masters or Slaves? AI and the Future of Humanity
- Research Article
36
- 10.1016/j.ejmp.2021.03.015
- Mar 1, 2021
- Physica Medica
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
- Research Article
13
- 10.1111/ajo.13661
- Apr 1, 2023
- Australian and New Zealand Journal of Obstetrics and Gynaecology
Artificial intelligence (AI) is the simulation of human intelligence in machines that are programmed to think and learn like humans. AI has the potential to revolutionise the way that healthcare professionals diagnose, treat, and manage conditions affecting the female reproductive system. Machine learning (ML) is a subset of AI which deals with the development of algorithms and statistical models that enable computers to learn from and make predictions or decisions without being explicitly programmed to do so. Deep learning (DL) is a subfield of ML that utilises neural networks with multiple layers, known as deep neural networks (DNNs), to learn from data. DNNs are inspired by the structure and function of the human brain and are capable of automatically learning high-level features from raw data, such as images, audio and text. DL has been very successful in various applications such as image and speech recognition, natural language processing and computer vision. ML algorithms can be divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning algorithms are trained on a labelled dataset, where the desired output (label) is already known. Unsupervised learning algorithms are trained on an unlabelled dataset and are used to discover patterns or relationships in the data. Reinforcement learning algorithms are trained using a trial-and-error approach, where the agent receives a reward or penalty for its actions. The goal of reinforcement learning is to learn a policy that maximises the expected reward over time. AI and ML are increasingly being applied in the field of obstetrics and gynaecology, with the potential to improve diagnostic accuracy, patient outcomes, and efficiency of care. AI has been applied to the field of medicine for several decades. 
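The three ML categories described above can be illustrated with a minimal sketch. This toy code is not from the article; the data points, labels, and function names are invented for illustration, and the supervised part is a one-nearest-neighbour classifier while the unsupervised part is a tiny one-dimensional k-means with k=2.

```python
# Toy illustration of supervised vs. unsupervised learning (hypothetical data).

def nearest_neighbour_predict(train, query):
    """Supervised: labels are known at training time.
    train is a list of ((x, y), label) pairs; returns the label of the
    training point closest to the query."""
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda pair: sq_dist(pair[0], query))[1]

def two_means(points, iters=10):
    """Unsupervised: discover two groups in unlabelled 1-D data
    (k-means with k=2), returning the sorted cluster centres."""
    c0, c1 = min(points), max(points)
    for _ in range(iters):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return sorted([c0, c1])

labelled = [((1, 1), "benign"), ((8, 9), "malignant")]
print(nearest_neighbour_predict(labelled, (7, 8)))  # predicts from known labels
print(two_means([1.0, 1.2, 0.9, 7.8, 8.1, 8.3]))    # finds two groups, no labels
```

Reinforcement learning differs from both: instead of a fixed dataset, an agent interacts with an environment and updates its policy from reward signals over repeated trials.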
One of the earliest examples of AI in medicine was the development of MYCIN in the 1970s, a computer program that could diagnose bacterial infections and recommend appropriate antibiotic treatments. MYCIN was developed by a team at Stanford University led by Edward Shortliffe, and its success demonstrated the potential of AI in medical decision making. In the 1980s, AI-based expert systems such as DXplain, developed at Massachusetts General Hospital, were used to assist in the diagnosis of diseases. These early AI systems were rule-based and limited in their capabilities. One of the earliest examples in obstetrics and gynaecology was the development of computer-aided diagnostic systems for ultrasound images in the 1970s and 1980s. These systems were designed to assist radiologists in identifying fetal anomalies and other conditions. In recent years, there has been a renewed interest in the use of AI in obstetrics and gynaecology, driven by advances in ML and the availability of large amounts of data. One of the primary areas in which AI and ML are being used in obstetrics and gynaecology is in the analysis of imaging data, such as ultrasound and magnetic resonance imaging. AI algorithms can be trained to automatically identify and classify different structures in the images, such as the placenta or fetal organs, with high accuracy. Another area of focus is the use of AI to predict preterm birth. Researchers have used ML algorithms to analyse data from electronic health records and identify patterns that are associated with preterm birth. By analysing large datasets of patient information and outcomes, AI algorithms can identify patterns and risk factors that may not be apparent to human analysts. This can help to improve the prediction of obstetric outcomes and guide clinical decision making. In recent years, AI has also been applied in obstetrics and gynaecology for real-time monitoring of high-risk pregnancies and identifying fetal distress.
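The rule-based design of those early expert systems can be sketched in a few lines. This is a hypothetical illustration, not MYCIN's actual rule base: the findings, rules, and conclusions below are invented and are not medical advice.

```python
# Toy rule-based expert system in the spirit of 1970s-80s systems like MYCIN:
# knowledge is encoded as explicit IF-conditions-THEN-conclusion rules,
# and inference simply fires every rule whose conditions are satisfied.
# All rules here are invented for illustration.

RULES = [
    ({"fever", "stiff_neck"}, "suspect meningitis"),
    ({"fever", "cough"}, "suspect respiratory infection"),
]

def infer(findings):
    """Return the conclusion of every rule whose conditions all appear
    in the observed findings (a set of strings)."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= findings]

print(infer({"fever", "cough", "fatigue"}))  # fires the respiratory rule only
```

The limitation noted above follows directly from this design: the system knows only what its hand-written rules cover, which is why such systems were brittle compared with today's data-driven models.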
These systems use ML algorithms to analyse data from fetal heart rate monitors and identify patterns that are associated with fetal distress. AI and ML are also being used to develop new tools for the management of gynaecological conditions, such as endometriosis and fibroids. These tools can be used to predict the progression of the disease and guide treatment decisions. One example of the use of AI in benign gynaecology is the development of computer-aided diagnostic systems for endometriosis. These systems use ML algorithms to analyse images of the pelvic region and identify the presence of endometrial tissue, which can be a sign of endometriosis. Another area where AI and ML are being applied is in the management of fibroids. ML algorithms are being used to analyse imaging data and predict the growth and behaviour of fibroids, which can aid in the development of personalised treatment plans. In the field of oncology, AI is being used to improve the accuracy and speed of cancer diagnosis. AI algorithms can analyse images of tissue samples to identify the presence of cancer cells and predict the likelihood of a positive outcome following treatment. AI algorithms can be trained to analyse images from pelvic scans and identify signs of ovarian cancer with high accuracy. In addition to these specific applications, AI and ML are also being used to improve the efficiency and organisation of care in obstetrics and gynaecology. For example, by analysing large amounts of clinical data, AI algorithms can be used to identify patients at high risk of complications, prioritise them for care and ensure that they receive the appropriate level of care in a timely manner. AI and ML have the potential to revolutionise the field of fertility and in vitro fertilisation (IVF). By using data from large patient populations, AI and ML algorithms can help identify patterns and predict outcomes that would be difficult for human experts to discern. 
This can lead to improvements in diagnosis, treatment planning, and overall success rates for patients undergoing IVF. One area where AI and ML are being applied is in the selection of embryos for transfer during IVF. By analysing images of embryos, AI and ML algorithms can predict which embryos are most likely to result in a successful pregnancy. Another area where AI and ML have shown potential is in the optimisation of culture conditions for embryos. This has the potential to improve the survival and development of embryos, leading to higher pregnancy rates. AI and ML are also being used to improve the timing of embryo transfer during IVF. By analysing data from patient medical histories, AI and ML algorithms can predict the optimal time for transfer to increase the chances of successful pregnancies. In addition to these applications, AI and ML are being used in other areas of fertility and IVF to improve patient outcomes. For example, AI and ML are being used to predict the likelihood of ovarian reserve, predict ovulation timing, and improve the efficiency and cost-effectiveness of fertility clinics. AI and ML are rapidly evolving fields that have the potential to revolutionise the field of surgery. These technologies can be used to assist surgeons in a variety of ways, from pre-operative planning to real-time guidance during procedures. One of the key areas where AI and ML are being applied in surgery is in image analysis. For example, algorithms can be used to automatically segment and identify structures in medical images, such as tumours or blood vessels. This can help surgeons plan procedures more accurately and reduce the risk of complications. Another area where AI and ML are being used in surgery is in the development of robotic systems. These systems can be programmed to perform specific tasks, such as suturing or cutting tissue, with a high degree of precision and accuracy. 
In addition, robotic systems can be equipped with sensors that provide real-time feedback to the surgeon, which can help to improve the outcome of the procedure. These systems can be programmed with advanced algorithms that allow them to make precise incisions, control bleeding, and minimise tissue damage. AI and ML can also be used to improve the efficiency and safety of surgical procedures. For example, algorithms can be trained to analyse data from vital signs monitors, such as heart rate and blood pressure, and alert surgeons to potential complications in real-time. AI and ML are also being used to assist with post-operative care. For example, algorithms can be used to analyse patient data and predict which patients are at risk of complications, such as infection or bleeding, allowing surgeons to take preventative measures. Overall, AI and ML have the potential to significantly improve the field of surgery by increasing accuracy and precision, reducing the risk of complications, and improving patient outcomes. As the technology continues to advance, it is likely that we will see an increasing number of AI-assisted surgical systems and applications in clinical practice. In gynaecology specifically, there is a scarcity of data and diversity in the data. This can lead to AI models that are not generalisable to certain populations or that make incorrect predictions for certain groups of patients. Overall, AI has the potential to improve the diagnosis and management of obstetrics and gynaecology conditions, and many studies have shown that AI systems can perform at least as well as human experts in several areas. However, it is important to note that AI and ML are still in the early stages of development in obstetrics and gynaecology and more research is needed to fully understand their potential benefits and limitations. 
Some of the key challenges facing the field include developing AI systems that can explain their decisions, improving the robustness of AI systems to adversarial attacks, and developing AI systems that can operate in a wide range of environments. However, it is important to note that AI is a complementary tool for the obstetrics and gynaecology specialist and is not meant to replace human expertise. The preceding text is entirely the product of an AI system. The preceding review, 'Artificial Intelligence in Gynaecology: An Overview', was composed and written by an evolutionary AI system, ChatGPT (Chat Generative Pre-trained Transformer). ChatGPT is an AI chatbot underpinned by the GPT architecture, an autoregressive language model that uses DL to produce human-like text. The system was trained on a dataset of over 500 GB of text data derived from books, articles, and websites prior to 2021. The system can engage in responsive dialogue, generate computer code, and produce coherent and fluent text.1 ChatGPT was conceived by OpenAI, an AI laboratory based in San Francisco, California, founded by Elon Musk and Sam Altman in 2015. Since its public release on November 30, 2022, the potential for use and misuse has exponentially grown,2 ultimately leading to the prohibition of the utilisation of AI systems by multiple organisations, including schools and universities. Prompted by this interest in AI, the aim of this study was to assess the capacity of ChatGPT to generate a scientific review. In January 2023, a multidisciplinary study group was assembled to develop the study protocol, confirm the methodology and approve the topic. This research was exempt from ethics review under National Health and Medical Research Council guidelines.3 ChatGPT was instructed to generate a narrative review based on dialogue with the lead author, AY. The input was informed by collaborative meetings of the study group over the study period.
The study group nominated the topic, 'Artificial Intelligence in Gynaecology', but ChatGPT generated the title, structure and content for this paper. The study group defined the input parameters for ChatGPT and each AI output was reviewed by the authors for consistency and context, informing the next input. The dialogue thus became increasingly specific and refined in each iteration, as the initial general outline was expanded to include specific subheadings, academic language and academic references. The review was finalised from the ChatGPT output through an explicit composition protocol, limiting assembly to cut and paste, deletion to whole sentences (but not words) and conversion to Australian English. No grammatical or syntax correction was performed. The AI output was cross-referenced and verified by the study group. In this study, ChatGPT generated 7112 words in over 15 iterations, including 32 references. The output was restricted to the final review of 1809 words and nine unique references after removing duplicates (4) and incorrect references (19). The final paper was submitted for blinded peer review. Thus, this study has demonstrated the capacity of an AI system, such as ChatGPT, to generate a scientific review through human academic instruction. AI is anticipated to expand the boundaries of evidence-based medicine through the potential of comprehensive analysis and summation of scientific publications. However, unlike systematic reviews or meta-analyses governed by explicit methodology, AI systems such as ChatGPT are the product of DL algorithms that are dependent upon the quality of the input used to train the AI. Consequently, unlike systematic reviews, AI systems are bound by the bias, breadth, depth and quality of the training material. A dedicated medical AI would therefore be trained on an appropriate data set, such as the National Library of Medicine Medline/PubMed database.
However, the volume of data is challenging: in 2022 alone, there were over 33 million citations equating to a dataset of almost 200 GB for the minimum dataset. In contrast, ChatGPT has no external reference capabilities, such as access to the internet, search engines or any other sources of information outside of its own model. If forced outside of this framework, ChatGPT may generate plausible-sounding but incorrect or nonsensical responses.4 Most notably, pushing the AI to include references leads the system to generate bizarre fabrications.5 Our paper demonstrated that only 28% (9/32) of the references were authentic, although better than the 11% reported in a recent paper.6 In contrast to human writing, AI-generated content is more likely to be of limited depth, contain factual errors, fabricated references and repeat the instructions used to seed the output.7 The latter results in a formulaic language redundancy that all but identifies AI content. The human authors thus echo the conclusion of ChatGPT that AI is a complementary tool to the specialist and not meant to replace human expertise. For the moment. The authors report no conflicts of interest.
- Research Article
12
- 10.1097/sla.0000000000005319
- Nov 23, 2021
- Annals of Surgery
Artificial Intelligence for Computer Vision in Surgery: A Call for Developing Reporting Guidelines.
- Research Article
23
- 10.2139/ssrn.3222566
- Aug 14, 2018
- SSRN Electronic Journal
Outline for a German Strategy for Artificial Intelligence
- Research Article
55
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction
Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
1
- 10.56536/jbahs.v5i1.111
- Feb 28, 2025
- Journal of Biological and Allied Health Sciences
Artificial Intelligence (AI) is revolutionizing the field of health sciences, reshaping how we teach, learn, and practice medicine. As AI technologies become increasingly integrated into healthcare systems, their impact on health sciences education cannot be overstated. From personalized learning experiences to advanced diagnostic training, AI is poised to enhance the quality and accessibility of education for future healthcare professionals. However, this transformation also raises critical questions about ethics, equity, and the future role of educators in an AI-driven world. The transformative role of Artificial Intelligence (AI) in health sciences education is increasingly recognized as a pivotal factor in shaping the future of medical training and practice. As AI technologies continue to evolve, their integration into educational curricula presents both opportunities and challenges that must be carefully navigated to enhance the learning experience for future healthcare professionals. One of the most significant contributions of AI to health sciences education is its ability to personalize learning. Traditional teaching methods often follow a one-size-fits-all approach, which can leave some students struggling to keep up while others are not sufficiently challenged. AI-powered platforms, such as adaptive learning systems, analyze individual student performance and tailor content to meet their unique needs. For example, tools like Osmosis and AMBOSS use AI to provide customized study plans, ensuring that students focus on areas where they need the most improvement (Topol, 2019). This personalized approach not only improves learning outcomes but also fosters a more inclusive educational environment. AI is also transforming clinical training by simulating real-world scenarios. Virtual patient simulations, powered by AI, allow students to practice diagnosing and treating conditions in a risk-free environment. 
These simulations can replicate rare or complex cases that students might not encounter during their clinical rotations. For instance, platforms like Touch Surgery and SimX use AI to create immersive surgical and emergency care simulations, providing students with hands-on experience before they enter the operating room (McGaghie et al., 2011). Such tools bridge the gap between theory and practice, preparing students for the complexities of modern healthcare. Moreover, AI is enhancing the role of educators by automating administrative tasks and providing data-driven insights into student performance. Grading, attendance tracking, and even curriculum design can be streamlined using AI, allowing educators to focus on mentoring and engaging with students. AI-driven analytics can also identify at-risk students early, enabling timely interventions to support their academic success (Wartman & Combs, 2018). By augmenting the capabilities of educators, AI empowers them to deliver more impactful and student-centered teaching. AI's potential to revolutionize health sciences education lies in its ability to personalize learning experiences and improve educational outcomes. For instance, AI-driven tools can facilitate realistic simulations and automated assessments, allowing students to engage in practical scenarios that mimic real-world clinical situations (Santos & Lopes, 2024). This capability not only enhances the learning process but also prepares students for the complexities of patient care in a technology-driven environment (Grunhut et al., 2022). Furthermore, the incorporation of AI into curricula can foster critical thinking and decision-making skills, essential for navigating the ethical dilemmas that arise in medical practice (Grunhut et al., 2022). Despite the promising applications of AI in education, the integration of these technologies into medical curricula has been slow. 
A scoping review highlighted that many medical schools have yet to adopt AI training, primarily due to a lack of systematic evidence supporting its implementation (Lee et al., 2021). Additionally, concerns regarding data protection and the ethical implications of AI use in healthcare education have been raised, indicating a need for comprehensive AI education that addresses these issues (Veras et al., 2023; Frehywot & Vovides, 2023). Students have expressed a desire for more robust training in AI, emphasizing the importance of understanding its role in healthcare delivery and decision-making processes (Ahmad et al., 2023; Derakhshanian et al., 2024). Moreover, the rapid advancement of AI technologies necessitates continuous curriculum updates to keep pace with emerging trends. As noted in recent literature, the integration of AI into biomedical science curricula should include subjects related to informatics, data sciences, and digital health (Sharma et al., 2024). This approach not only equips students with the necessary skills to utilize AI effectively but also prepares them for the evolving landscape of healthcare, where AI will play an integral role in diagnostics, treatment personalization, and patient management (Santos & Lopes, 2024; Secinaro et al., 2021). However, the implementation of AI in health sciences education is not without challenges. Ethical considerations surrounding AI's impact on healthcare equity and the potential for bias in AI algorithms must be addressed (Frehywot & Vovides, 2023; Han et al., 2019). Ensuring that AI technologies are used responsibly and equitably in education and practice is crucial to avoid exacerbating existing disparities in healthcare access and outcomes (Rigby, 2019). Furthermore, the lack of faculty expertise in AI poses a significant barrier to its integration into medical education, highlighting the need for targeted training and resources for educators (Derakhshanian et al., 2024). 
However, the integration of AI into health sciences education is not without challenges. Ethical concerns, such as data privacy and algorithmic bias, must be addressed to ensure that AI tools are used responsibly. Additionally, there is a risk of over-reliance on AI, potentially undermining the development of critical thinking and clinical judgment skills. Educators must strike a balance between leveraging AI’s capabilities and preserving the human elements of teaching and learning. Equity is another pressing issue. While AI has the potential to democratize education, access to these technologies remains uneven. Institutions in low-resource settings may struggle to adopt AI-driven tools, exacerbating existing disparities in global health education. Policymakers and educators must work together to ensure that the benefits of AI are accessible to all, regardless of geographic or socioeconomic barriers. In conclusion, AI is a powerful tool that holds immense promise for transforming health sciences education. By personalizing learning, enhancing clinical training, and supporting educators, AI can help prepare the next generation of healthcare professionals to meet the demands of an increasingly complex healthcare landscape. However, its integration must be guided by ethical principles and a commitment to equity. The successful integration of AI into educational curricula requires a concerted effort to address ethical concerns, update training programs, and equip both students and faculty with the necessary knowledge and skills. As the healthcare landscape continues to evolve, embracing AI in education will be essential for fostering a new generation of healthcare providers who are adept at leveraging technology to improve patient care. As we embrace this technological revolution, we must remember that AI is not a replacement for human expertise but a complement to it.
The future of health sciences education lies in the synergy between human ingenuity and artificial intelligence.
- Research Article
4
- 10.1016/j.igie.2023.01.008
- Feb 28, 2023
- iGIE : innovation, investigation and insights
The brave new world of artificial intelligence: dawn of a new era
- Research Article
30
- 10.1016/j.compind.2023.103946
- May 15, 2023
- Computers in Industry
Hybrid intelligence in procurement: Disillusionment with AI’s superiority?
- Book Chapter
8
- 10.1007/978-3-030-92054-8_3
- Dec 15, 2021
Autonomous transportation systems will become mainstream in the future, so it is particularly important to design effective automatic control modules. This calls for algorithms with a highly adaptive ability of perception and computation. By actively interacting with the surroundings, optimal control decision strategies can be calculated automatically. Although research efforts have been devoted to this domain for many years, general mathematical optimization methods can find only approximate solutions, which makes it difficult for transportation systems to achieve perfectly optimal performance. In this context, the introduction of intelligent technologies represented by artificial intelligence has been regarded as a promising direction. Artificial intelligence (AI), a branch of computer science, attempts to understand the essence of intelligence and to build novel intelligent machines that can respond in ways similar to human intelligence. Since the birth of AI, its theory and technology have become increasingly mature, with expanding applications in many fields such as robotics, language recognition, image recognition, natural language processing, and expert systems. It can be imagined that the scientific and technological products brought by artificial intelligence in the future will be the “container” of human intelligence. Applied to transportation systems, AI abstracts complex system processes as black boxes and then uses the ideas of statistical learning to model them. By discovering latent, unobservable patterns, AI provides more opportunities to solve the uncertainty problems hidden in transportation systems. Predictably, AI technology will be the key breakthrough in the evolution from conventional transportation systems to autonomous transportation systems.
Therefore, this chapter is organized around three topics: (1) Overview of AI; (2) Need and Evolution of AI; and (3) AI for Transportation Systems. In this chapter, the broad conception of AI is described by dividing it into three branches: cognitive AI, machine learning AI, and deep learning AI. Further, the need for and evolution of AI are surveyed in three parts: the first describes the origin and development of AI, the second describes common AI approaches and technologies, and the third describes successive cross-field applications of AI. Finally, the application of AI to transportation systems is introduced via three parts: motivation, application status, and challenges.

Keywords: Autonomous transportation system; Machine learning; Artificial intelligence; Expert systems; Human intelligence; Natural language processing; Image recognition; Language recognition; Transportation systems; Cognitive AI; Machine learning AI; Supervised learning; Unsupervised learning; Semi-supervised learning; Reinforcement learning; Deep learning AI; Artificial neural networks; Computer vision; Smart manufacturing; Smart city; Smart care; Smart education; Smart workflow
- Research Article
2
- 10.51702/esoguifd.1583408
- May 15, 2025
- Eskişehir Osmangazi Üniversitesi İlahiyat Fakültesi Dergisi
Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may eventually surpass it over time. The rapid development of these technologies has raised various ethical debates such as moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions in detail, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of various disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thoughts on artificial intelligence have raised deep philosophical questions about human nature, consciousness, and responsibility. The algorithms developed by Alan Turing contributed to the modern shaping of artificial intelligence and put forward the first models, such as the “Turing Test”, for assessing whether machines possess human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence comprises systems that are designed to perform specific tasks and do not exhibit general intelligence outside those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most of the widely used artificial intelligence applications today fall into the category of weak artificial intelligence.
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Autonomous weapons and artificial intelligence-managed justice systems, in particular, raise concerns about human rights and individual freedoms. In this context, the ethical framework of artificial intelligence has deep impacts on the future of humanity and human-machine interaction that extend beyond merely technological boundaries. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness and whether such conscious systems could have a spiritual status have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles that must be considered in the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for developing artificial intelligence in harmony with social values and for the benefit of humanity.
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.
- Discussion
7
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Research Article
20
- 10.1108/lhtn-03-2024-0048
- Apr 26, 2024
- Library Hi Tech News
Purpose: This paper aims to explore the intricate relationship between artificial intelligence (AI) and health information literacy (HIL), examining the rise of AI in health care, the intersection of AI and HIL and the imperative for promoting AI literacy and integrating it with HIL. By fostering collaboration, education and innovation, stakeholders can navigate the evolving health-care ecosystem with confidence and agency, ultimately improving health-care delivery and outcomes for all.
Design/methodology/approach: This paper adopts a conceptual approach to explore the intricate relationship between AI and HIL, aiming to provide guidance for health-care professionals navigating the evolving landscape of AI-driven health-care delivery. The methodology used in this paper involves a synthesis of existing literature, theoretical analysis and conceptual modeling to develop insights and recommendations regarding the integration of AI literacy with HIL.
Findings: Impact of AI on health-care delivery: The integration of AI technologies in health care is reshaping the industry, offering unparalleled opportunities for improving patient care, optimizing clinical workflows and advancing medical research. Significance of HIL: HIL, encompassing the ability to access, understand and critically evaluate health information, is crucial in the context of AI-driven health-care delivery. It empowers health-care professionals, patients and the broader community to make informed decisions about their health and well-being. Intersection of AI and HIL: The convergence of AI and HIL represents a critical juncture, where technological innovation intersects with human cognition. AI technologies have the potential to revolutionize how health information is generated, disseminated and interpreted, necessitating a deeper understanding of their implications for HIL.
Challenges and opportunities: While AI holds tremendous promise for enhancing health-care outcomes, it also introduces new challenges and complexities for individuals navigating the vast landscape of health information. Issues such as algorithmic bias, transparency and accountability pose ethical dilemmas that impact individuals’ ability to critically evaluate and interpret AI-generated health information. Recommendations for health-care professionals: Health-care professionals are encouraged to adopt strategies such as staying informed about developments in AI, continuous education and training in AI literacy, fostering interdisciplinary collaboration and advocating for policies that promote ethical AI practices.
Practical implications: To enhance AI literacy and integrate it with HIL, health-care professionals are encouraged to adopt several key strategies. First, staying abreast of developments in AI technologies and their applications in health care is essential. This entails actively engaging with conferences, workshops and publications focused on AI in health care and participating in professional networks dedicated to AI and health-care innovation. Second, continuous education and training are paramount for developing critical thinking skills and ethical awareness in evaluating AI-driven health information (Alowais et al., 2023). Health-care organizations should provide opportunities for ongoing professional development in AI literacy, including workshops, online courses and simulation exercises focused on AI applications in clinical practice and research.
Originality/value: The value of this paper lies in its exploration of the intersection between AI and HIL, offering insights into the evolving health-care landscape. It innovatively synthesizes existing literature, proposes strategies for integrating AI literacy with HIL and provides guidance for health-care professionals to navigate the complexities of AI-driven health-care delivery.
By addressing the transformative potential of AI while emphasizing the importance of promoting critical thinking skills and ethical awareness, this paper contributes to advancing understanding in the field and promoting informed decision-making in an increasingly digital health-care environment.