Abstract

Background: The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the underlying data. Accordingly, building training sets and developing machine-learning models from these studies is a complex problem. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models, which raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach for promoting credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, combining machine learning with knowledge graphs that encode “common sense” knowledge, semantic reasoning, and causality models is a potential solution to this problem.

Objective: In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve mental health surveillance.

Methods: We used concepts from an ontology that we have developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology.

Results: To showcase the framework’s functionalities, we present the prototype design and demonstrate its main features through four use case scenarios motivated by an initiative currently implemented at a children’s hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and usefulness of the implementation.

Conclusions: This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners’ ability to provide explanations for the decisions they make.
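To make the knowledge-graph-backed recommendation idea concrete, the sketch below builds a tiny RDF graph and queries it for resources that address a reported social or adverse-experience factor, returning the associated risk so the suggestion can be explained. This is a minimal illustration only: the namespace, class, and predicate names (e.g., ace:addresses, ace:associatedRisk) are hypothetical placeholders and do not reflect the actual SPACES ontology or the third-party graph technology used in the prototype.

```python
# Minimal sketch of a knowledge-graph-backed recommendation lookup.
# All ontology terms below are hypothetical placeholders, not the SPACES ontology.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

ACE = Namespace("http://example.org/spaces#")  # hypothetical namespace

g = Graph()
g.bind("ace", ACE)

# Toy facts: a social determinant, a risk it is associated with,
# and a community resource that addresses it.
g.add((ACE.FoodInsecurity, RDF.type, ACE.SocialDeterminant))
g.add((ACE.FoodInsecurity, ACE.associatedRisk, ACE.DevelopmentalDelay))
g.add((ACE.FoodAssistanceProgram, ACE.addresses, ACE.FoodInsecurity))
g.add((ACE.FoodAssistanceProgram, RDFS.label, Literal("Local food assistance referral")))

# Recommendation query: find resources that address the reported factor,
# and also return the associated risk so the suggestion can be explained.
QUERY = """
PREFIX ace: <http://example.org/spaces#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label ?risk WHERE {
    ?resource ace:addresses ace:FoodInsecurity ;
              rdfs:label ?label .
    ace:FoodInsecurity ace:associatedRisk ?risk .
}
"""

for row in g.query(QUERY):
    print(f"Recommend: {row.label} (addresses a factor linked to {row.risk})")
```

The point of the sketch is that the returned path (reported factor, associated risk, recommended resource) is itself the explanation, which is what distinguishes this kind of recommendation from an opaque model score.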

Highlights

  • The concept of adverse childhood experiences (ACEs) has been recognized for quite some time but was first formally studied in the CDC-Kaiser landmark study [1], which uncovered the strong connection between ACEs and the development of risk factors for different negative health outcomes that threaten the well-being of populations throughout their life course

  • There is an entire body of research focused on studying the links between ACEs, social determinants of health (SDoH), and health outcomes, but few intelligent tools are available to assist in the real-time screening of patients and to assess the connection between ACEs and SDoH, which could help to guide patients and families to available resources

  • We describe the main features provided by the Semantic Platform for Adverse Childhood Experiences Surveillance (SPACES) through a proof-of-concept prototype that renders the information collected by the question-answering (QA) agent and the recommendation service on a user-friendly interface

Introduction

Background

The concept of adverse childhood experiences (ACEs) has been recognized for quite some time but was first formally studied in the CDC-Kaiser landmark study [1], which uncovered the strong connection between ACEs and the development of risk factors for different negative health outcomes that threaten the well-being of populations throughout their life course. Recommendation systems and digital assistants often require machine learning (ML), artificial intelligence (AI), and natural language processing capabilities to effectively connect and harvest the vast amounts of generated data; they need to store, retrieve, and learn from past interactions and experiences with users. Classic ML and AI techniques, however, cannot provide a full scientific understanding of the inner workings of the underlying models, which raises credibility issues due to the lack of transparency and generalizability.
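Since the prototype described above builds its question-answering agent on the Google DialogFlow engine, one plausible integration point between the conversational front end and the recommendation component is a fulfillment webhook that receives the matched intent and extracted parameters. The sketch below assumes the Dialogflow ES webhook request/response format; the intent name ("ReportRiskFactor"), the parameter name ("risk_factor"), and the Flask routing are invented for illustration and are not taken from the SPACES implementation.

```python
# Hypothetical sketch of a Dialogflow ES fulfillment webhook for a QA agent.
# Intent and parameter names are illustrative, not the actual SPACES configuration.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def fulfill():
    body = request.get_json(force=True)
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"].get("parameters", {})

    if intent == "ReportRiskFactor":
        factor = params.get("risk_factor", "an unspecified factor")
        # In a full system, this is where the knowledge graph would be queried
        # for resources that address the reported factor (see the earlier sketch).
        reply = f"Thanks, I noted {factor}. Let me look up resources that may help."
    else:
        reply = "Could you tell me a bit more about the family's situation?"

    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```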
