Abstract

Deep neural network (DNN) based models are highly acclaimed in medical image classification, and existing DNN architectures are regarded as the state of the art for image classification. However, these models require very large datasets to classify images with high accuracy, and they fail to perform well when trained on small datasets. Low accuracy and overfitting are the problems observed when small medical datasets are used to train a classifier with deep learning models such as Convolutional Neural Networks (CNNs). Existing methods and models either overfit when trained on these small datasets or yield classification accuracy that tends towards randomness. This issue persists even when using Transfer Learning (TL), the current standard approach for this scenario. In this paper, we test several models, including ResNet and VGG variants along with more modern models such as MobileNets, on different medical datasets, both with and without transfer learning. We propose explanations for why a more novel approach to this problem is needed and show how current methodologies fail when applied to the aforementioned datasets. Larger, more complex models are unable to converge on smaller datasets, whereas smaller, less complex models perform better on the same datasets than their larger counterparts.
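The comparison described above, training a given backbone either from ImageNet-pretrained weights (transfer learning) or from random initialization on a small medical dataset, can be set up as in the following minimal sketch. It assumes a Keras/TensorFlow workflow; the function name build_classifier, the choice of MobileNetV2, and all hyperparameters are illustrative assumptions, not the authors' actual experimental code.

```python
import tensorflow as tf
from tensorflow import keras

def build_classifier(num_classes, use_transfer_learning=True):
    """Build an image classifier, either initialised with ImageNet
    weights (transfer learning) or trained entirely from scratch."""
    base = keras.applications.MobileNetV2(
        input_shape=(224, 224, 3),
        include_top=False,
        weights="imagenet" if use_transfer_learning else None,
    )
    # Freeze the pretrained backbone when transferring;
    # train all layers when starting from random weights.
    base.trainable = not use_transfer_learning

    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dropout(0.3),  # mild regularisation for small datasets
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(1e-4),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Hypothetical usage: train both regimes on the same small medical dataset
# and compare validation accuracy and overfitting behaviour.
# model_tl = build_classifier(num_classes=2, use_transfer_learning=True)
# model_scratch = build_classifier(num_classes=2, use_transfer_learning=False)
```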
