Abstract
Although non-interpretable (black-box) deep learning models are well known for their accuracy, interpretable deep learning models should be used for high-stakes decisions, such as those in healthcare. In this paper, we present a novel technique that combines existing state-of-the-art models and uses them as the base of an interpretable deep learning model, Comb-ProtoPNet. In contrast to the usual technique of combining the logits of two (or more) algorithms to form an ensemble, we combine the algorithms themselves. Our proposed interpretable model applies a prototype layer on top of the convolutional layers of an ensemble base model. We trained and tested our algorithm on a dataset of chest CT-scan images of COVID-19 patients, pneumonia patients, and normal people. Using a certain combination of blocks from two different state-of-the-art models yielded a statistically significant improvement in accuracy over using either state-of-the-art model alone as the base model, and this is where the phrase "One and one make eleven" in the title comes from.
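To make the architecture described above concrete, the following is a minimal NumPy sketch of a prototype layer applied to the feature map of a combined backbone. It is illustrative only: the shapes, the channel-wise concatenation of the two backbones' block outputs, and all variable names are assumptions for this sketch, not the paper's exact combination scheme. The similarity function follows the standard ProtoPNet formulation, log((d + 1) / (d + eps)), where d is the squared distance from a prototype to its closest spatial patch.

```python
import numpy as np

def prototype_activations(feature_map, prototypes, eps=1e-4):
    """Compute one similarity score per prototype (ProtoPNet-style)."""
    # feature_map: (H, W, D) convolutional output of the combined base model
    # prototypes:  (P, D) learned prototype vectors
    H, W, D = feature_map.shape
    patches = feature_map.reshape(-1, D)                      # (H*W, D)
    # squared L2 distance from every spatial patch to every prototype
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    min_d2 = d2.min(axis=0)                                   # closest patch per prototype
    # similarity is large when the minimum distance is small
    return np.log((min_d2 + 1.0) / (min_d2 + eps))

# Toy example: feature maps from two hypothetical backbone blocks, combined
# by channel-wise concatenation (one possible way to "combine the algorithms").
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(7, 7, 128))   # assumed output of block from model A
feats_b = rng.normal(size=(7, 7, 128))   # assumed output of block from model B
combined = np.concatenate([feats_a, feats_b], axis=-1)        # (7, 7, 256)
protos = rng.normal(size=(10, 256))      # 10 hypothetical prototypes
acts = prototype_activations(combined, protos)
print(acts.shape)   # (10,)
```

In a full model, the activation vector `acts` would feed a final fully connected layer that produces class logits, so each prediction can be traced back to the prototypes (and hence image patches) that most influenced it.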