Abstract

As a branch of machine learning, deep learning combined with artificial intelligence is widely used in computer vision, and image recognition applications represented by medical image analysis are developing rapidly. Its advantage is that it does not rely on human annotation: during training, the model can capture feature information that human observers overlook, allowing it to match or even exceed the accuracy of human analysis. Because the internal data processing of deep models is opaque, they generally lack explainability; existing solutions mainly include building intrinsically interpretable models, attention-mechanism-based interpretation of specific models, and model-agnostic interpretation methods represented by LIME. How to quantitatively assess interpretability is still being explored; for models involved in medical decision-making in particular, several scales have been proposed to assess interpretability from the perspectives of both doctors and patients. Current research on applying deep learning models to medical imaging generally emphasizes accuracy over explainability, and the resulting lack of explainability hinders the practical clinical deployment of these models. Therefore, analyzing the development of medical image analysis within artificial intelligence and computer vision, and balancing accuracy with interpretability to build deep learning models that both doctors and patients can trust, will become a research focus of the field in the future.
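For readers unfamiliar with the model-agnostic approach mentioned above, the sketch below illustrates how LIME is typically applied to an image classifier. It is a minimal illustration, not a method from this article: the `predict_fn` wrapper and the placeholder `image` are hypothetical stand-ins for a trained medical imaging model and its input, and only the `lime_image` calls reflect the LIME library's public API.

```python
# Minimal sketch of model-agnostic interpretation with LIME (lime.lime_image).
# predict_fn and image are placeholders, not taken from any specific model.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Stand-in for the trained classifier: takes a batch of RGB images
    (N, H, W, 3) and returns class probabilities (N, num_classes).
    Replace with the real model's inference call."""
    probs = rng.random((len(images), 2))
    return probs / probs.sum(axis=1, keepdims=True)

# Placeholder input; in practice this would be a preprocessed medical image.
image = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,       # black-box prediction function; LIME never inspects the model
    top_labels=1,     # explain only the top predicted class
    hide_color=0,     # value used to mask superpixels in perturbed samples
    num_samples=100,  # perturbed samples used to fit the local linear surrogate
)

# Superpixels that most strongly support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)  # image with explanation boundaries drawn
```

The surrogate model fitted on the perturbed samples is what makes the explanation model-agnostic: the classifier is only ever queried through `predict_fn`, so the same procedure applies to any deep model regardless of its architecture.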
