Abstract

Adapting deep convolutional neural network models to large-image classification can yield architectures with a very large number of learnable parameters, and tuning those parameters considerably increases the complexity of the model. To address this problem, a convolutional Deep-Net Model based on the extraction of random patches and depth-wise convolutions is proposed for training and classifying widely known benchmark breast cancer histopathology images. The classification results of these patches are aggregated by majority voting to decide the final image class. When compared with classification based on features learned by VGG Net (16 layers), the proposed Deep-Net model achieves higher accuracy on breast tumor histopathology images. The objective of this work is to examine and comprehensively analyze the sub-class classification performance of the proposed model across all optical magnification levels.
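The abstract describes the core mechanics of the approach: a compact patch-level network built on depth-wise convolutions, with per-patch predictions aggregated into an image-level decision by majority voting. The following is a minimal Python/Keras sketch of that idea, not the authors' exact architecture; the patch size, filter counts, number of classes, and the use of Keras' SeparableConv2D (a depth-wise convolution followed by a 1x1 pointwise convolution) are illustrative assumptions.

    import numpy as np
    from tensorflow.keras import layers, models

    NUM_CLASSES = 8     # assumed number of histopathology sub-classes
    PATCH_SIZE = 64     # assumed square patch side, in pixels

    def build_patch_classifier():
        """Small patch-level CNN built around depth-wise (separable) convolutions."""
        return models.Sequential([
            layers.Input(shape=(PATCH_SIZE, PATCH_SIZE, 3)),
            layers.SeparableConv2D(32, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.SeparableConv2D(64, 3, padding="same", activation="relu"),
            layers.MaxPooling2D(),
            layers.BatchNormalization(),
            layers.GlobalAveragePooling2D(),
            layers.Dropout(0.5),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def classify_image(model, patches):
        """Aggregate per-patch predictions into one image label by majority vote."""
        probs = model.predict(np.stack(patches), verbose=0)  # (n_patches, NUM_CLASSES)
        patch_labels = probs.argmax(axis=1)
        return int(np.bincount(patch_labels, minlength=NUM_CLASSES).argmax())

Majority voting makes the image-level decision robust to individual patches that are poor representatives of the whole slide, which is the stated motivation for aggregating patch-level results.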

Highlights

  • A rapid increase has been observed in the occurrences of breast cancer, especially in Asian nations like China, India, and Malaysia [1][2]

  • Classification accuracy can be compared and improved by training several classifiers, including a linear Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Multilayer Perceptron (MLP), on the features extracted by the convolutional base, and using k-fold cross-validation to estimate each classifier's error (a minimal sketch follows this list)

  • The work presented here proposes a Deep-Net Model based on the extraction of random patches and depth-wise convolutions, which is an enhancement over the traditional way of using Deep-Net models for image classification
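As referenced in the second highlight, classical classifiers can be trained on CNN-extracted features and compared with k-fold cross-validation. Below is a minimal scikit-learn sketch of that comparison; the feature matrix X and labels y are random placeholders standing in for features produced by the convolutional base (e.g. VGG16), and the classifier hyperparameters are assumptions rather than values from the paper.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512))    # placeholder for CNN-extracted features
    y = rng.integers(0, 2, size=200)   # placeholder benign/malignant labels

    classifiers = {
        "Linear SVM": SVC(kernel="linear"),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "MLP": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
    }

    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
        print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")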


Summary

Introduction

A rapid increase has been observed in the occurrences of breast cancer, especially in Asian nations like China, India, and Malaysia [1][2]. The paper compiles research accomplished with unsupervised methods, supervised methods, ensemble techniques, and deep methods. To ease the task of training deep networks with very large images as input, the literature [15][16] represents each image with one randomly cropped patch and labels the patch with the same label as the original image. This approach leads to ambiguity in the training examples, as one patch may not be a good representative of the entire image. Related approaches surveyed include:

  • Global feature extraction methods (LBP, GLCM, etc.) with shallow features
  • Multiple feature vector (MFV) and transfer learning
  • Graph-manifold and BI-LSTM models
  • Convolutional neural network (CNN) model with a fusion rule (FR)
  • ConvNet-based Fisher vector (CFV) and Gaussian mixture model (GMM)
  • Deep CNN
  • Incremental boosting convolution networks
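As a concrete illustration of the patch-based scheme referred to in [15][16], the sketch below crops random patches from a large image and assigns each one the parent image's label; cropping several patches per image and voting over them (as in the proposed model) reduces the ambiguity of relying on a single patch. The patch size and helper name are assumptions, not code from the paper.

    import numpy as np

    def random_patches(image, label, patch_size=64, n_patches=1, rng=None):
        """Crop n_patches random patches from image; each inherits the image label."""
        if rng is None:
            rng = np.random.default_rng()
        h, w = image.shape[:2]
        patches = []
        for _ in range(n_patches):
            top = int(rng.integers(0, h - patch_size + 1))
            left = int(rng.integers(0, w - patch_size + 1))
            crop = image[top:top + patch_size, left:left + patch_size]
            patches.append((crop, label))
        return patches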

Dataset Used
Experiment 1 - Global Feature Extraction using Transfer Learning
Soft voting
Proposed Deep-Net Model
Batch size and epochs
Dropout
Batch normalization
Comparison with State-of-the-Art
Conclusion
Findings

