Artificial Intelligence for Noninvasive Health Diagnostics
Noninvasive diagnostic approaches are essential for early detection, patient compliance, and reduction of healthcare burden, yet they often face limitations in sensitivity, specificity, and timely interpretation. Artificial intelligence (AI) and machine learning (ML) address these gaps by uncovering complex patterns in diverse data streams and, in some instances, transforming diagnostics from isolated, ad hoc assessments into continuous, real-time monitoring. This review explores the integration of AI/ML across key noninvasive platforms, including medical imaging, wearable sensors, breath analysis, biofluid-based diagnostics (saliva, sweat, urine), and optical sensing methods. It synthesizes the current state of these technologies while highlighting emerging directions such as federated learning, explainable AI, digital twins, and the incorporation of nanosensors. Alongside technological advances, this review critically discusses barriers to adoption, including data privacy, algorithmic fairness, regulatory hurdles, and system integration challenges. By providing a comprehensive, modality-wise perspective, this article aims to guide researchers, clinicians, healthcare professionals, and policymakers in understanding both the promise and the practical limitations of AI-assisted noninvasive diagnostics. Ultimately, it offers a roadmap for translating innovation into scalable, cost-effective, and patient-centered solutions that can broaden healthcare access and improve outcomes globally.
- Research Article
67
- 10.1016/j.engappai.2023.107620
- Dec 8, 2023
- Engineering Applications of Artificial Intelligence
Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life
- Front Matter
5
- 10.1016/j.clon.2019.09.053
- Nov 1, 2019
- Clinical Oncology
Maximising the Opportunities of Artificial Intelligence for People Living With Cancer
- Single Book
- 10.62311/nesx/97891
- Mar 14, 2025
Abstract: As Artificial Intelligence (AI) advances, so do the risks associated with deepfakes, misinformation, and algorithmic bias, posing significant threats to security, privacy, democracy, and societal trust. "Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems" provides a comprehensive analysis of AI security vulnerabilities, adversarial machine learning, AI-driven misinformation, and bias in automated decision-making. The book explores how AI-generated synthetic media, data poisoning attacks, and biased algorithms are being weaponized for cyber fraud, political manipulation, and unethical automation. It delves into defensive strategies, AI forensic tools, cryptographic AI verification, and fairness-aware machine learning techniques to combat these emerging threats. Additionally, the book examines global AI regulations, governance frameworks, and ethical deployment standards that ensure transparency, accountability, and security in AI-driven ecosystems. Through real-world case studies, technical insights, and policy recommendations, this book serves as an essential resource for AI researchers, cybersecurity professionals, policymakers, and technology leaders aiming to develop trustworthy AI systems that resist adversarial manipulation, misinformation campaigns, and algorithmic bias while fostering fair, transparent, and secure AI adoption. 
Keywords: AI security, adversarial machine learning, deepfake detection, AI-generated misinformation, synthetic media, bias mitigation, AI ethics, AI governance, trustworthy AI, explainable AI (XAI), fairness-aware machine learning, cryptographic AI, federated learning security, digital forensics, algorithmic bias, data poisoning attacks, model robustness, cybersecurity in AI, misinformation detection, deep learning security, AI regulatory policies, zero-trust AI, blockchain-based content verification, ethical AI deployment, secure AI frameworks, AI transparency, AI-driven cyber threats, fake news detection, AI fraud prevention.
- Research Article
42
- 10.1016/j.icte.2024.05.007
- May 21, 2024
- ICT Express
Digital twins (DTs) are an emerging digitalization technology with a huge impact on today’s innovations in both industry and research. DTs can significantly enhance our society and quality of life through the virtualization of a real-world physical system, providing greater insights about their operations and assets, as well as enhancing their resilience through real-time monitoring and proactive maintenance. However, DTs also pose significant security risks, both because intellectual property is encoded and made more accessible and because of their continuous synchronization with their physical counterparts. The rapid proliferation and dynamism of cyber threats in today’s digital environments motivate the development of automated and intelligent cyber solutions. Today’s industrial transformation relies heavily on artificial intelligence (AI), including machine learning (ML) and data-driven technologies that allow machines to perform tasks such as self-monitoring, investigation, diagnosis, future prediction, and decision-making intelligently. However, to effectively employ AI-based models in the context of cybersecurity, human-understandable explanations and their trustworthiness are significant factors when making decisions in real-world scenarios. This article provides an extensive study of explainable AI (XAI) based cybersecurity modeling through a taxonomy of AI and XAI methods that can assist security analysts and professionals in comprehending system functions, identifying potential threats and anomalies, and ultimately addressing them in DT environments in an intelligent manner. We discuss how these methods can play a key role in solving contemporary cybersecurity issues in various real-world applications. We conclude this paper by identifying crucial challenges and avenues for further research, as well as directions on how professionals and researchers might approach and model future-generation cybersecurity in this emerging field.
- Book Chapter
- 10.1108/s1548-643520230000020017
- Mar 13, 2023
Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation, implied or otherwise, as to the chapters' suitability and application, and disclaims any warranties, express or implied, as to their use.
- Research Article
20
- 10.1186/s12911-022-01772-2
- Feb 11, 2022
- BMC Medical Informatics and Decision Making
Background: In the last decade, much attention has been given to developing artificial intelligence (AI) solutions for mental health using machine learning. To build trust in AI applications, it is crucial for AI systems to provide practitioners and patients with the reasons behind the AI decisions. This is referred to as explainable AI. While there has been significant progress in developing stress prediction models, little work has been done to develop explainable AI for mental health. Methods: In this work, we address this gap by designing an explanatory AI report for stress prediction from wearable sensors. Because medical practitioners and patients are likely to be familiar with blood test reports, we modeled the look and feel of the explanatory AI report on those of a standard blood test report. The report includes the stress prediction and the physiological signals related to stressful episodes. In addition to the new design for explaining AI in mental health, the work includes the following contributions: methods to automatically generate the different components of the report, an approach for evaluating and validating the accuracy of the explanations, and a collection of ground-truth relationships between physiological measurements and stress prediction. Results: Test results showed that the explanations were consistent with the ground truth. The reference intervals for stress versus non-stress were quite distinctive, with little variation. In addition to the quantitative evaluations, a qualitative survey conducted with three expert psychiatrists confirmed the usefulness of the explanation report in understanding the different aspects of the AI system. Conclusion: In this work, we have provided a new design for explainable AI used in stress prediction based on physiological measurements. Based on the report, users and medical practitioners can determine which physiological features have the most impact on the prediction of stress, in addition to any health-related abnormalities.
The effectiveness of the explainable AI report was evaluated using both a quantitative and a qualitative assessment. The stress prediction accuracy was shown to be comparable to the state of the art. The contributions of each physiological signal to the stress prediction were shown to correlate with the ground truth. In addition to these quantitative evaluations, a qualitative survey with psychiatrists confirmed the confidence in, and effectiveness of, the explanation report for the stress predictions made by the AI system. Future work includes the addition of more explanatory features related to other emotional states of the patient, such as sadness, relaxation, anxiousness, or happiness.
- Research Article
3
- 10.1002/ksa.12627
- Feb 24, 2025
- Knee surgery, sports traumatology, arthroscopy : official journal of the ESSKA
Digital twin (DT) systems, which involve creating virtual replicas of physical objects or systems, have the potential to transform healthcare by offering personalised and predictive models that grant deeper insight into a patient's condition. This review explores current concepts in DT systems for musculoskeletal (MSK) applications through an overview of the key components, technologies, clinical uses, challenges, and future directions that define this rapidly growing field. DT systems leverage computational models such as multibody dynamics and finite element analysis to simulate the mechanical behaviour of MSK structures, while integration with wearable technologies allows real-time monitoring and feedback, facilitating preventive measures, and adaptive care strategies. Early applications of DT systems to MSK include optimising the monitoring of exercise and rehabilitation, analysing joint mechanics for personalised surgical techniques, and predicting post-operative outcomes. While still under development, these advancements promise to revolutionise MSK care by improving surgical planning, reducing complications, and personalising patient rehabilitation strategies. Integrating advanced machine learning algorithms can enhance the predictive abilities of DTs and provide a better understanding of disease processes through explainable artificial intelligence (AI). Despite their potential, DT systems face significant challenges. These include integrating multi-modal data, modelling ageing and damage, efficiently using computational resources and developing clinically accurate and impactful models. Addressing these challenges will require multidisciplinary collaboration. Furthermore, guaranteeing patient privacy and protection against bias is extremely important, as is navigating regulatory requirements for clinical adoption. 
DT systems present a significant opportunity to improve patient care, made possible by recent technological advancements in several fields, including wearable sensors, computational modelling of biological structures, and AI. As these technologies continue to mature and their integration is streamlined, DT systems may fast-track medical innovation, ushering in a new era of rapid improvement of treatment outcomes and broadening the scope of preventive medicine. Level of Evidence: Level V.
- Discussion
14
- 10.1016/s2589-7500(19)30124-4
- Sep 24, 2019
- The Lancet Digital Health
Human versus machine in medicine: can scientific literature answer the question?
- Research Article
41
- 10.1016/j.fertnstert.2020.10.040
- Nov 1, 2020
- Fertility and Sterility
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
- Supplementary Content
- 10.3390/s25196207
- Oct 7, 2025
- Sensors (Basel, Switzerland)
Highlights
What are the main findings?
- IoT- and AI-integrated healthcare systems enable continuous health monitoring, personalized treatments, and proactive medical interventions for older adults.
- The paper identifies key challenges in privacy, security, ethics, interoperability, and user adoption and proposes multi-level defense mechanisms to enhance system reliability and trust.
What is the implication of the main finding?
- Integrating IoT with AI can transform ageing care, improving disease management and promoting healthy, active ageing.
- Future healthcare systems can become more adaptive, patient-centered, and ethically accountable through advancements such as explainable AI, digital twins, and multimodal sensor fusion.
Recent advancements in the Internet of Things (IoT) and artificial intelligence (AI) are unlocking transformative opportunities across society. One of the most critical challenges addressed by these technologies is the ageing population, which presents mounting concerns for healthcare systems and quality of life worldwide. By supporting continuous monitoring, personal care, and data-driven decision-making, IoT and AI are shifting healthcare delivery from a reactive approach to a proactive one. This paper presents a comprehensive overview of IoT-based systems with a particular focus on the Internet of Healthcare Things (IoHT) and their integration with AI, referred to as the Artificial Intelligence of Things (AIoT). We illustrate the operating procedures of IoHT systems in detail. We highlight their applications in disease management, health promotion, and active ageing. Key enabling technologies, including cloud computing, edge computing architectures, machine learning, and smart sensors, are examined in relation to continuous health monitoring, personalized interventions, and predictive decision support.
This paper also indicates potential challenges that IoHT systems face, including data privacy, ethical concerns, and technology transition and aversion, and it reviews corresponding defense mechanisms from perception, policy, and technology levels. Future research directions are discussed, including explainable AI, digital twins, metaverse applications, and multimodal sensor fusion. By integrating IoT and AI, these systems offer the potential to support more adaptive and human-centered healthcare delivery, ultimately improving treatment outcomes and supporting healthy ageing.
- Research Article
4
- 10.52783/jes.3052
- May 1, 2024
- Journal of Electrical Systems
The burgeoning evolution of smart cities, characterized by the integration of the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning (ML), heralds a transformative era in urban management and citizen engagement. These technological advancements promise enhanced efficiency in city operations, improved public services, and a sustainable urban environment. However, the complexity and interconnectedness inherent in these systems introduce significant cybersecurity challenges, necessitating innovative approaches to safeguard the digital infrastructure of smart cities. This paper aims to explore the cybersecurity landscape of smart cities from the perspective of integrating IoT, AI, and ML for the creation of digital twins, offering a comprehensive analysis of the opportunities and threats within this domain. Smart cities leverage IoT to connect various components of the urban infrastructure, including transportation systems, utilities, and public services, creating an integrated network of devices that communicate and share data. The incorporation of AI and ML into this framework facilitates intelligent decision-making, enabling the automation of services and the optimization of resources. This synergy enhances the quality of life for residents, promotes economic development, and supports sustainable environmental practices. However, the dependence on digital technologies also exposes smart cities to a range of cybersecurity risks, from data breaches and privacy violations to the disruption of critical infrastructure. The integration of IoT, AI, and ML in smart cities, while offering unprecedented opportunities for urban innovation, also amplifies the complexity of the cybersecurity landscape. IoT devices, often designed with minimal security features, become potential entry points for cyber attacks. The vast amount of data generated and processed by these devices, if compromised, could lead to significant privacy and security breaches. 
AI and ML models, for their part, are susceptible to manipulation and bias, which can undermine the integrity of decision-making processes. The interconnectivity of systems means that a breach in one sector could have cascading effects throughout the city's infrastructure. Against this backdrop, the paper investigates the role of digital twins in mitigating cybersecurity risks in smart cities. Digital twins, digital replicas of physical entities or systems, offer a powerful tool for simulating and analyzing smart city operations, including cybersecurity scenarios. By mirroring the city's infrastructure in a virtual environment, digital twins allow for the identification of vulnerabilities, the simulation of cyber attacks, and the evaluation of potential impacts. This proactive approach to cybersecurity enables city administrators to anticipate threats and implement protective measures before real-world systems are compromised. The research questions guiding this inquiry include: How can the integration of IoT, AI, and ML enhance the resilience of smart cities against cyber threats? What are the specific cybersecurity challenges presented by these technologies, and how can they be addressed? And, most crucially, what role can digital twins play in fortifying the cybersecurity defenses of smart cities? To address these questions, the paper begins with a review of the current state of smart city technology, focusing on the integration of IoT, AI, and ML. It then delves into the cybersecurity challenges unique to this technological landscape, drawing on recent examples of cyber incidents in smart cities. The analysis highlights the vulnerabilities introduced by the widespread use of IoT devices and the complexities of securing AI and ML systems. Following this, the discussion turns to the potential of digital twins as a cybersecurity tool, examining how they can be employed to detect vulnerabilities, simulate attacks, and plan responses. 
The paper argues that while the integration of IoT, AI, and ML in smart cities presents significant cybersecurity challenges, it also offers opportunities for innovative solutions. Digital twins emerge as a promising approach to enhancing the cybersecurity posture of smart cities, enabling a dynamic and proactive defense mechanism. By facilitating the simulation of cyber threats in a controlled environment, digital twins allow city administrators to identify weaknesses, test the efficacy of protective measures, and develop more resilient urban infrastructures. In conclusion, the integration of IoT, AI, and ML in smart cities represents a double-edged sword, offering both remarkable opportunities for urban innovation and formidable cybersecurity challenges. This paper underscores the critical importance of adopting a cybersecurity perspective in the development and management of smart cities, highlighting the potential of digital twins as a strategic tool in mitigating these risks. As smart cities continue to evolve, embracing these technologies in a secure and responsible manner will be paramount in realizing their full potential while safeguarding the digital and physical well-being of urban populations.
- Supplementary Content
- 10.3390/bioengineering12090928
- Aug 29, 2025
- Bioengineering
Background/Objectives: Artificial intelligence (AI) is improving dentistry through increased accuracy in diagnostics, planning, and workflow automation. AI tools, including machine learning (ML) and deep learning (DL), are being adopted in oral medicine to improve patient care and efficiency and to lessen clinicians’ workloads. Despite its growing use, AI in dentistry faces acceptance hurdles, with obstacles that are ethical, legal, and technological. This article reviews current AI use in oral medicine, emerging technology developments, and barriers to integration. Methods: A narrative review of peer-reviewed articles in databases such as PubMed, Scopus, Web of Science, and Google Scholar was conducted. Peer-reviewed articles from the last decade covering AI applications in diagnostic imaging, predictive analysis, real-time documentation, and workflow automation were examined. In addition, improvements in AI models and critical impediments such as ethical concerns and integration barriers were addressed. Results: AI has exhibited strong performance in radiographic diagnostics, with high accuracy in reading cone-beam computed tomography (CBCT) scans, intraoral photographs, and radiographs. AI-facilitated predictive analysis has enhanced personalized care planning and disease prevention, and AI-facilitated workflow automation has streamlined administrative processes and patient record management. U-Net-based segmentation models exhibit sensitivities and specificities of approximately 93.0% and 88.0%, respectively, in identifying periapical lesions on 2D CBCT slices. TensorFlow-based workflow modules, integrated into vendor platforms such as Planmeca Romexis, can reduce the processing time of patient records by at least 30 percent in standard practice. Privacy-preserving federated learning architectures have attained cross-site model consistency exceeding 90% accuracy, enabling collaborative training among diverse dental clinics.
Explainable AI (XAI) and federated learning have enhanced AI transparency and security, but barriers remain, including concerns about data privacy, AI bias, gaps in AI regulation, and clinician training. Conclusions: AI is transforming dentistry with enhanced diagnostic accuracy, predictive planning, and efficient administrative automation. As AI software grows more capable, ethics and legislation must keep pace to allow responsible AI integration. For AI in dental care to work at its best, future research will have to prioritize AI interpretability, the development of uniform protocols, and collaboration between specialties in order to realize AI’s full potential in dentistry.
- Research Article
16
- 10.58440/ihr-29-a04
- May 1, 2023
- The International Hydrographic Review
While the field of hydrography is crucial for maritime navigation and other maritime applications, oceanography is the field that provides the relevant data and knowledge for predicting climate change, monitoring marine resources, and exploring marine life. Digital ocean twins unite these two fields, combining ocean observations and ocean models to establish virtual representations of a real-world system, in this case the ocean or an ocean area, as well as assets in the ocean and processes within ocean industries or the natural environment. They have the potential to play a critical role in optimising and supporting sustainable ocean development. Digital twins are synchronised with their real-world counterparts at a specific frequency and fidelity. They can provide valuable insights into the ocean's state and its evolution over time, which can be used to support decision-making in ocean governance and various ocean-related industries. Digital ocean twins can transform human ocean interactions by accelerating holistic understanding, optimal decision-making, and effective interventions. Digital twins of the ocean use ocean observations and historical and forecast data to represent the past and present and to simulate possible future scenarios. They are motivated by outcomes, tailored to use cases, powered by integration, built on data, guided by domain knowledge, and implemented in IT systems. In this article, we explore the benefits of digital twins for the ocean, the challenges in developing them, and the current state of the art in ocean digital twin technology. One of the main benefits of digital ocean twins is their ability to provide accurate predictions of ocean conditions under expected interventions. Their information can be used to support decision-making in various applications including ocean-related industries, such as fishing, shipping, and offshore energy production.
Additionally, digital twins can help to improve our understanding of the ocean's complex processes and their interactions with human activities, such as climate change, pollution, resource extraction and overfishing. Researchers and IT companies are combining various technologies and data sources, such as the Internet of Things for ocean observations, state of the art data science, artificial intelligence and machine learning, data spaces and vocabularies into digital ocean twins to contextualise data, improve the accuracy of ocean models and make ocean knowledge more accessible to a wide range of users.
- Discussion
3
- 10.1093/neuros/nyab349
- Nov 18, 2021
- Neurosurgery
To the Editor: We are grateful for Dr Lim's observations.1,2 We are similarly encouraged by the increasing number of academic studies involving artificial intelligence (AI) and machine learning (ML), as well as by the increasing attention these fields have received in the popular press. As in many of the interactions between medicine and emerging technologies, the uptake of these methods in the field of neurosurgery has been slow. Despite the impatience this sluggish pace cannot help but engender, the need to develop and test safe practices and to conduct careful review cannot be hastened. The question of bias in AI and related algorithms has parallels in the simpler model-based analyses commonly used in the medical literature. Such models also require development on a training set, careful validation on unseen data, and then ongoing review in clinical practice. The many iterations of the CHADS2 score (most recently, CHA2DS2-VASc) were initially developed to estimate stroke risk in nonvalvular atrial fibrillation, but are used in practice to decide when to begin anticoagulation.3,4 The CHADS2 score and similar model-based tools serve as a testament to the capacity of the medical literature to process highly quantitative data and translate them into dynamic clinical practice. To take their rightful place in the field of neurosurgery, AI and related algorithms must pass through a similar process, which relies on openness of both the method and the underlying data. Proprietary models and algorithms such as the sepsis model unfortunately seem to circumvent this process and, for this reason, are subject to significant and inescapable limitations. Although we opted to omit Explainable AI in the review, it has the potential to become an essential part of the openness and transparency required by the medical literature.
This nascent field aims to put deep learning and similar “black box” algorithms in the same category as traditional models, in which the significance and influence of every variable are made immediately apparent. In addition to facilitating ongoing external review and improvement of published models, Explainable AI could significantly mitigate the potential for bias during model development. Nevertheless, those models that make it to clinical practice will still be subject to subsequent, meaningful, postmarketing surveillance. One of the complicating factors in this process is that, given the complexity of these systems, the tools built to explain them will similarly need to be open and explainable. We wholeheartedly agree with the paradigm of expert scientists collaborating with clinicians to produce the next generation of clinical AI and ML algorithms. Similar to the collaborations that exist now between physician scientists and statisticians, these collaborations should symbiotically form the basis for creatively building new software and hardware tools. Currently, an unfortunate distinction is often made between data scientists and traditional statisticians. It is essential that traditional statistical rigor be applied to AI and ML approaches even as these newer technologies begin to take their rightful place in clinical practice. Funding JG is supported by NIH K08 CA230172. Disclosures The authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.
- Book Chapter
3
- 10.1016/b978-0-443-19096-4.00006-7
- Aug 25, 2023
- Emotional AI and Human-AI Interactions in Social Networking
Chapter Twelve - Human AI: Explainable and responsible models in computer vision