Abstract

Generalisation across multiple tasks is a major challenge for deep learning (DL) in medical imaging, as learning new tasks can cause catastrophic forgetting of previously learned ones. A commonly adopted remedy is to retrain the model from scratch on the combined old and new data, classes, and tasks. However, this solution has clear downsides: it is time-consuming, demands high computational resources, is susceptible to bias, and lacks flexibility. To address these issues, this paper introduces a generalisable DL framework with three key components: self-supervised learning, feature fusion within a single task, and feature fusion across new classes or tasks. Using the proposed framework, DL models with an SVM classifier accurately detect abnormalities in X-ray tasks, achieving accuracies of 92.71% on humerus and 90.74% on wrist images. These results were obtained with a single classifier and minimal additional training when new tasks were introduced. A further experiment was performed on chest X-rays, where new classes were added to the pre-existing ones. Without retraining on both old and new classes, the framework achieved a combined-class accuracy of 98.18%, demonstrating that the model had not forgotten the old data. The proposed framework thus improves performance while bringing flexibility and efficiency to the training process, saving time and computational resources.
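The feature-fusion idea described above can be illustrated with a minimal sketch: embeddings from two backbones (e.g. self-supervised feature extractors) are concatenated into one vector per image, and a single SVM is trained on the fused features. The array shapes, random features, and concatenation-based fusion here are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch of feature fusion feeding a single SVM classifier.
# All data below is synthetic; the fusion-by-concatenation step is an assumption.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples = 200
feats_a = rng.normal(size=(n_samples, 64))   # stand-in for backbone A embeddings
feats_b = rng.normal(size=(n_samples, 32))   # stand-in for backbone B embeddings
labels = rng.integers(0, 2, size=n_samples)  # binary: normal vs abnormal

# Feature fusion: concatenate the two embeddings into one vector per image.
fused = np.concatenate([feats_a, feats_b], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# One SVM classifier over the fused representation.
clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(fused.shape)  # (200, 96)
```

When a new task or class arrives, only the lightweight classifier over the fused features needs updating, which is what lets the framework avoid full retraining.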
