Abstract

Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better understand these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization (‘neural recording’). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among the human graders alone. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between the graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
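One of the three components is the calculation of Hamming distances between segmentation masks. The paper's own code is not reproduced here; the following is a minimal Python sketch of how such a pixel-wise disagreement could be computed, assuming each annotation is an integer class-label array of identical shape (all names and the simulated data are illustrative, not from the paper):

```python
import numpy as np

def hamming_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of pixels whose class labels disagree between two
    segmentation masks of identical shape (0.0 = identical, 1.0 = fully disjoint)."""
    if mask_a.shape != mask_b.shape:
        raise ValueError("masks must have the same shape")
    return float(np.mean(mask_a != mask_b))

# Illustrative comparison of two graders' annotations of one OCT B-scan:
# four hypothetical compartment labels, with simulated disagreement in the top rows.
grader_1 = np.random.randint(0, 4, size=(496, 512))
grader_2 = grader_1.copy()
grader_2[:10, :] = 0
print(f"disagreement: {hamming_distance(grader_1, grader_2):.2%}")
```

On this reading, the 2.02% average variability among graders would correspond to a mean normalized Hamming distance of about 0.02 over the compared masks.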

Highlights

  • Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque

  • In the past few years, optical coherence tomography (OCT) has been rapidly adopted for the diagnosis[7,13,66] and monitoring of retinal diseases[67,68]. Such measurements are widely used in humans[69,70,71] but not routinely employed in animals (Fig. 6). This is the first report of an automated deep learning (DL) segmentation of vitreo-retinal and choroidal compartments in healthy cynomolgus monkeys, a species commonly used as an animal model of human disease as well as for safety assessment in preclinical trials

  • The translation to animals of a reproducible machine learning (ML) framework previously developed in humans[24] was successful. This suggests that the basic DL framework was applicable to animals once the ML had been adjusted and trained on animal data (see the sketch after this list)

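The last highlight describes adjusting and retraining a human-developed DL framework on animal data. As a minimal sketch of such a transfer step, assuming a PyTorch segmentation network that outputs per-pixel class logits (the paper's actual training code, model, and data pipeline are not shown here; `fine_tune` and `animal_loader` are hypothetical names):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def fine_tune(model: nn.Module, animal_loader: DataLoader,
              epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    """Continue training a human-pretrained segmentation network on
    labeled animal OCT scans with per-pixel class labels."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for scans, labels in animal_loader:
            # scans: (N, 1, H, W) float tensors; labels: (N, H, W) long tensors
            optimizer.zero_grad()
            loss = criterion(model(scans), labels)
            loss.backward()
            optimizer.step()
    return model
```

The same loop would apply whether the whole network is retrained or only its final layers; the paper does not specify which, so no layer freezing is assumed here.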

Introduction

Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better understand these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The discrepancy between how a computer works and how humans think is known as the “black box problem”[46]: in communication technology and engineering, a system is usually considered a “black box” when it features an input and an output path and shows a particular, or at least statistically definable, mode of operation. Such a system either is not specified in all details or cannot be visualized, so that its mode of working remains unidentified, hidden, or not (yet) comprehensible to humans. This can cause a major issue because knowledge of an algorithm’s internal workings, and the interpretability thereof, is frequently incomplete, in particular for DL models[6,47].

