Abstract
The rapid rise of artificial intelligence (AI) in medicine in recent years highlights the importance of developing bigger and better systems for data and model sharing. However, the presence of Protected Health Information (PHI) in medical data poses a challenge for sharing. One potential solution to mitigate the risk of PHI breaches is to share only pre-trained models developed on private datasets. Despite the availability of such pre-trained networks, there remains a need for an adaptable environment in which to test and fine-tune models tailored to specific clinical tasks. This environment should be open to peer testing, feedback, and continuous model refinement, allowing dynamic model updates that are especially important in the medical field, where diseases and scanning techniques evolve rapidly. In this context, the Discovery Viewer (DV) platform was developed in-house at the Biomedical Engineering and Imaging Institute at Mount Sinai (BMEII) to facilitate the creation and distribution of cutting-edge medical AI models that remain accessible after their development. The all-in-one platform offers a unique environment for non-AI experts to learn, develop, and share their own deep learning (DL) concepts. This paper presents various use cases of the platform, with the primary goal of demonstrating how DV can empower individuals without AI expertise to create high-performing DL models. We tasked three non-AI experts with developing different musculoskeletal AI projects encompassing segmentation, regression, and classification tasks. In each project, 80% of the samples were provided, with a subset of these samples annotated to help the volunteers understand the expected annotation task. The volunteers were then responsible for annotating the remaining samples and training their models through the platform's "Training Module". The resulting models were then tested on the separate 20% hold-out dataset to assess their performance. The classification model achieved an accuracy of 0.94, a sensitivity of 0.92, and a specificity of 1.00; the regression model yielded a mean absolute error of 14.27 pixels; and the segmentation model attained a Dice score of 0.93, with a sensitivity of 0.90 and a specificity of 0.99. This initiative seeks to broaden the community of medical AI model developers and democratize access to this technology for all stakeholders. The ultimate goal is to facilitate the transition of medical AI models from research to clinical settings.
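For reference, the sketch below illustrates how the reported evaluation metrics (Dice score, sensitivity, specificity, and mean absolute error) are conventionally computed from binary masks or predicted coordinates. The function and array names are illustrative assumptions for exposition only and do not reflect the platform's actual evaluation code.

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks (segmentation)."""
    intersection = np.sum(pred * target)
    return 2.0 * intersection / (np.sum(pred) + np.sum(target))

def sensitivity(pred, target):
    """True positive rate: fraction of positive labels/voxels correctly predicted."""
    tp = np.sum((pred == 1) & (target == 1))
    fn = np.sum((pred == 0) & (target == 1))
    return tp / (tp + fn)

def specificity(pred, target):
    """True negative rate: fraction of negative labels/voxels correctly predicted."""
    tn = np.sum((pred == 0) & (target == 0))
    fp = np.sum((pred == 1) & (target == 0))
    return tn / (tn + fp)

def mean_absolute_error(pred_coords, target_coords):
    """Mean absolute error (e.g., in pixels) for a regression task."""
    return np.mean(np.abs(np.asarray(pred_coords) - np.asarray(target_coords)))

# Example usage with toy binary masks (values in {0, 1}):
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 0]])
print(dice_score(pred, target), sensitivity(pred, target), specificity(pred, target))
```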