Abstract

The popularity of deep learning (DL) in the machine learning community has been increasing dramatically since 2012. The theoretical foundations of DL are well rooted in the classical neural network (NN). Rule extraction is not a new concept; it was originally devised for shallow NNs. For about the past 30 years, many researchers have made extensive efforts to resolve the “black box” problem of trained shallow NNs using rule extraction technology. A rule extraction technology that is well balanced between accuracy and interpretability has recently been proposed for shallow NNs as a promising means to address this black box problem. Recently, we have been confronting a “new black box” problem caused by the highly complex deep NNs (DNNs) generated by DL. In this paper, we first review four rule extraction approaches for resolving the black box problem of DNNs trained by DL in computer vision. Next, we discuss the fundamental limitations and criticisms of current DL approaches in radiology, pathology, and ophthalmology from the black box point of view. We also review methods for converting DNNs into decision trees and point out their limitations. Furthermore, we describe a transparent approach for resolving the black box problem of DNNs trained by a deep belief network. Finally, we briefly describe how to realize the transparency of DNNs generated by a convolutional NN and discuss a practical way to realize the transparency of DL in radiology, pathology, and ophthalmology.

Highlights

  • Deep learning (DL) has become an increasingly popular trend in the machine learning community

  • We have provided a review regarding the right direction needed to develop “white box” deep learning (DL) in radiology, pathology, and ophthalmology

  • We believe that the most important point in realizing the transparency of DL in radiology, pathology, and ophthalmology is not to rely on data-driven features derived from filter responses learned from large amounts of training data, which lack direct human interpretability; rather, we should utilize the high-level abstraction of attributes associated with medical images, together with prior knowledge graded and/or rated by radiologists, pathologists, and ophthalmologists (a minimal sketch follows below)
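
As a minimal sketch of this last point (with hypothetical data and attribute names; the BI-RADS-style descriptors below are illustrative and are not fields taken from the actual DDSM), the example trains a shallow decision tree directly on expert-graded attributes rather than on pixel-driven filter responses, so that the extracted rules read in the same terms a radiologist would use:

    # Minimal sketch (hypothetical data): a transparent classifier trained on
    # expert-graded attributes rather than on raw pixel features.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Illustrative BI-RADS-style ordinal gradings (1 = low, 5 = high),
    # standing in for ratings assigned by a radiologist.
    feature_names = ["mass_shape", "mass_margin", "breast_density", "calcification"]
    X = rng.integers(1, 6, size=(200, len(feature_names)))
    # Toy labelling rule standing in for biopsy-confirmed outcomes.
    y = ((X[:, 0] + X[:, 1]) >= 7).astype(int)  # 1 = malignant, 0 = benign

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Every split in the printed rules is a threshold on a human-graded
    # attribute; nothing refers to filter responses.
    print(export_text(clf, feature_names=feature_names))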


Summary

INTRODUCTION

Deep learning (DL) has become an increasingly popular trend in the machine learning community. At present, various “black box” problems remain for DNNs. At the same time, as machine learning-based predictions become increasingly ubiquitous and affect numerous aspects of our daily lives, the focus of current research has moved beyond model performance (e.g., accuracy) to other factors, such as interpretability and transparency (Yang et al, 2018). To interpret and apply DL to medical images effectively, sufficient expertise in computer science is required in the clinical setting. This is because of the “black box” nature of DL, in which results are generated with high accuracy but with no specific medical-based reason. We can extract rules using pedagogical approaches (Andrews et al, 1995) such as C4.5, J48graft, the Re-RX family, Trepan (Craven and Shavlik, 1996), and ALPA (Fortuny and Martens, 2015), regardless of the input and output layers in any type of DL, for images described by high-level abstraction attributes with prior knowledge. The digital database for screening mammography (DDSM) (Michael et al, 1998, 2001) consists of mammographic image assessment categories for the breast imaging reporting and data system (BI-RADS) (Obenauer et al, 2005) and the
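
As a rough illustration of the pedagogical idea, and assuming a generic scikit-learn setup rather than any particular implementation of C4.5, J48graft, Trepan, ALPA, or the Re-RX family, the sketch below treats a trained network purely as an input-output oracle and fits a decision tree to its predictions; the fidelity score measures how closely the extracted rules mimic the network:

    # Minimal pedagogical rule-extraction sketch: the network is queried only
    # through predict() and is never inspected internally.
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-in for a table of high-level image attributes.
    X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1. Train the opaque model.
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                        random_state=0).fit(X_train, y_train)

    # 2. Query the network as an oracle and fit a surrogate tree to its answers.
    surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
    surrogate.fit(X_train, net.predict(X_train))

    # 3. Fidelity: how closely the extracted rules reproduce the network's
    #    behaviour on unseen inputs.
    fidelity = accuracy_score(net.predict(X_test), surrogate.predict(X_test))
    print(f"fidelity to the network: {fidelity:.3f}")
    print(export_text(surrogate, feature_names=[f"attr_{i}" for i in range(8)]))

Trepan and ALPA refine this basic oracle-plus-surrogate scheme, for example by generating additional oracle queries where the training data are sparse, but the pedagogical structure is the same.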
