Abstract

Purpose

Histopathology biopsy imaging is currently the gold standard for the diagnosis of breast cancer in clinical practice. Pathologists examine the images at various magnifications to identify the type of tumor, because a decision based on a single magnification may not be accurate. This study explores the performance of transfer learning and late fusion for constructing multi-scale ensembles that combine magnification-specific deep learning models for the binary classification of breast tumor slides.

Design/methodology/approach

Three pretrained deep learning models (DenseNet 201, MobileNet v2 and Inception v3) were used to classify breast tumor images over the four magnification factors of the Breast Cancer Histopathological Image Classification dataset (40×, 100×, 200× and 400×). To fuse the predictions of the models trained on different magnification factors, several aggregators were used, including weighted voting and seven meta-classifiers trained on slide predictions using class labels and the probabilities assigned to each class. The best cluster of outperforming models was selected using the Scott–Knott statistical test, and the top models were ranked using the Borda count voting system.

Findings

This study recommends the use of transfer learning and late fusion for histopathological breast cancer image classification by constructing multi-magnification ensembles, because they perform better than models trained on each magnification separately.

Originality/value

The best multi-scale ensembles outperformed state-of-the-art integrated models and achieved a mean accuracy of 98.82 per cent, precision of 98.46 per cent, recall of 100 per cent and F1-score of 99.20 per cent.
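The design paragraph above describes late fusion of magnification-specific predictions through weighted voting and meta-classifiers. The following is a minimal Python sketch of those two aggregation strategies, using synthetic per-magnification probabilities and a scikit-learn logistic regression as a stand-in meta-classifier; all names, weights and data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of late fusion across magnification-specific models.
# All names, weights and data below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
magnifications = ["40x", "100x", "200x", "400x"]

# Assume each magnification-specific model outputs P(malignant) per slide;
# here those probabilities are faked for 100 slides.
n_slides = 100
probs = {m: rng.random(n_slides) for m in magnifications}
y = rng.integers(0, 2, n_slides)  # hypothetical ground-truth slide labels

# --- Aggregator 1: weighted voting --------------------------------------
# Weights could reflect each model's validation accuracy (assumed values).
weights = {"40x": 0.24, "100x": 0.26, "200x": 0.27, "400x": 0.23}
fused = sum(weights[m] * probs[m] for m in magnifications)
vote_pred = (fused >= 0.5).astype(int)

# --- Aggregator 2: meta-classifier (stacking) ----------------------------
# Train a meta-classifier on the stacked per-magnification probabilities.
X_meta = np.column_stack([probs[m] for m in magnifications])
meta = LogisticRegression().fit(X_meta, y)
stack_pred = meta.predict(X_meta)

print("weighted-vote accuracy:", (vote_pred == y).mean())
print("meta-classifier accuracy:", (stack_pred == y).mean())
```

In practice the meta-classifier would be one of the seven aggregators evaluated in the paper and would be trained on held-out slide predictions rather than the same data used for fitting.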
