Abstract

Over the past few years, the COVID-19 pandemic has had a significant impact on the physical and mental health of people around the world. To distinguish COVID-19 patients effectively, many deep learning efforts have used chest medical images to detect COVID-19. In work related to human health, interpretability is as important as model accuracy. This work introduces an interpretable vision transformer that uses a prototype method to detect patients who test positive for COVID-19. The model learns prototype features for each category based on the structural characteristics of ViT, and its predictions are obtained by comparing input features against all prototype features in the designed prototype block. The proposed model was applied to two chest X-ray datasets and one chest CT dataset, achieving classification accuracies of 99.3%, 96.8%, and 98.5%, respectively. Moreover, the prototype method significantly improves the interpretability of the model: its decisions can be explained in terms of prototype parts. Within the prototype block, the entire inference process can be shown, and the visualization of prototype features demonstrates that the model's predictions are meaningful.
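The abstract does not specify the exact architecture, but the prototype mechanism it describes can be illustrated with a minimal PyTorch sketch: class logits are produced by comparing ViT patch features against learned per-class prototype vectors, so each decision can be traced back to the patches that best match each prototype. The module name, dimensions, and prototype count below are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a prototype block for a ViT (assumed design,
    # not the paper's code). Predictions come from cosine similarity
    # between encoder patch tokens and learned prototype vectors.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrototypeBlock(nn.Module):
        def __init__(self, embed_dim=768, num_classes=2, protos_per_class=5):
            super().__init__()
            self.num_classes = num_classes
            self.protos_per_class = protos_per_class
            # Learned prototype vectors, a fixed number per class (assumption).
            self.prototypes = nn.Parameter(
                torch.randn(num_classes * protos_per_class, embed_dim))

        def forward(self, tokens):
            # tokens: (batch, num_patches, embed_dim) from the ViT encoder.
            t = F.normalize(tokens, dim=-1)
            p = F.normalize(self.prototypes, dim=-1)
            # Cosine similarity of every patch token to every prototype.
            sim = torch.einsum("bnd,kd->bnk", t, p)
            # Each prototype's activation is its best-matching patch; that
            # patch can be visualized as the evidence for the prototype,
            # which is what makes the inference process inspectable.
            proto_act, best_patch = sim.max(dim=1)  # (batch, num_protos)
            # Class score = summed activation of that class's prototypes.
            logits = proto_act.view(-1, self.num_classes,
                                    self.protos_per_class).sum(dim=-1)
            return logits, proto_act, best_patch

In a setup like this, visualizing the image patch at `best_patch` for each highly activated prototype yields the kind of prototype-part explanation the abstract describes.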
