Abstract

Although traditional deep learning (DL) approaches achieve promising accuracy and efficiency in medical ultrasound image analysis, they cannot replace physicians in diagnosis because DL models are suited only to static application scenarios. Most current DL-based models cannot learn new tasks in dynamic clinical environments owing to catastrophic forgetting of old tasks. To address this problem, we propose an incremental classifier for medical ultrasound images that is sequentially trained on evolving tasks via counterfactual thinking. Specifically, the proposed model consists of a feature extractor and a classifier to which new classes can be added at any time during training. To make the model more discriminative in the continual learning setting, a contrastive strategy is designed to leverage fine-grained information by generating a series of counterfactual regions. For model optimization, we design a multi-task loss composed of a knowledge distillation loss, a cross-entropy loss, and a contrastive loss; this objective jointly provides less forgetting, better accuracy, and fine-grained information utilization. A newly collected dataset with 52 medical ultrasound classification tasks is used to demonstrate the effectiveness of our method. The proposed approach achieves 76.59%, 11.67%, and 7.93% in terms of average incremental accuracy, forgetting rate, and feature retention, respectively.
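As an illustrative sketch of the multi-task objective described above (the weighting coefficients $\lambda_{1}$ and $\lambda_{2}$ are assumptions introduced here and are not specified in the abstract), the combined loss can be written as
\[
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{CE}} + \lambda_{1}\,\mathcal{L}_{\text{KD}} + \lambda_{2}\,\mathcal{L}_{\text{CL}},
\]
where $\mathcal{L}_{\text{CE}}$ is the cross-entropy loss on the current task's classes, $\mathcal{L}_{\text{KD}}$ is the knowledge distillation loss that constrains the updated model to match the previous model's outputs and thereby reduces forgetting, and $\mathcal{L}_{\text{CL}}$ is the contrastive loss computed over the generated counterfactual regions to exploit fine-grained information.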
