Abstract

Breast cancer is one of the most common types of cancer and a leading cause of death among women. If it is diagnosed early enough, the patient's chances of a cure increase. Deep neural network techniques have recently become more common as aids to pathologists in their prognosis, but pathologists still do not fully trust them because they lack interpretability. In light of that, this work investigates whether previously training the models as encoders can enhance their accuracy in both classification and interpretability. Three models were applied to the BreakHis and BreCaHAD datasets: NASNet Mobile, DenseNet201, and MobileNetV2. The experiments show that all three models increased their classification performance and two models improved their interpretability using the proposed strategy. The DenseNet201 encoder performed almost 23% better than its vanilla version in classifying a tumor, and the NASNet Mobile encoder improved its tumor interpretation by 28.5%.
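
The sketch below illustrates the general idea of encoder pretraining described above, assuming a TensorFlow/Keras pipeline with MobileNetV2 as the backbone: the network is first trained as the encoder of a convolutional autoencoder on histopathology patches, and the pretrained backbone is then reused for benign/malignant classification. The decoder layout, input size, and training calls are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumptions: TensorFlow/Keras, 224x224 RGB patches,
# binary benign/malignant labels). The decoder and hyperparameters are
# hypothetical placeholders, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (224, 224, 3)

# 1) Use MobileNetV2 as the encoder of a convolutional autoencoder.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=IMG_SHAPE)

# Illustrative decoder: upsamples the 7x7x1280 feature map back to the
# input resolution so the model can be trained on image reconstruction.
decoder = models.Sequential([
    layers.Conv2DTranspose(256, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid"),
], name="decoder")

inputs = layers.Input(IMG_SHAPE)
autoencoder = models.Model(inputs, decoder(backbone(inputs)))
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(patches, patches, epochs=...)  # pretraining as an encoder

# 2) Reuse the pretrained backbone as a feature extractor for classification.
features = layers.GlobalAveragePooling2D()(backbone(inputs))
outputs = layers.Dense(1, activation="sigmoid")(features)  # benign vs. malignant
classifier = models.Model(inputs, outputs)
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(train_images, train_labels, epochs=...)  # fine-tuning on BreakHis/BreCaHAD
```

The same pattern applies to the other two backbones (NASNet Mobile, DenseNet201) by swapping the `tf.keras.applications` constructor.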
