Abstract

Many existing intelligent recognition technologies require large datasets for model training. However, rectal cancer images are difficult to collect, so performance is usually low with limited training samples. In addition, traditional rectal cancer staging is time-consuming, error-prone, and susceptible to physicians' subjective judgment and level of expertise. To address these deficiencies, we propose a novel deep-learning model to classify rectal cancer stages T2 and T3. First, a deep learning model (RectalNet) is constructed based on residual learning, combining squeeze-and-excitation blocks with an asymptotic output layer and new cross-convolution layer links within the residual block groups. Furthermore, a two-stage data augmentation scheme is designed to increase the number of images and reduce the model's dependence on data volume. Experimental results demonstrate that the proposed method outperforms many existing ones, with an overall accuracy of 0.8583; in contrast, VGG16, DenseNet121, EL, and DERNet achieve average accuracies of 0.6981, 0.7032, 0.7500, and 0.7685, respectively.
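The abstract does not give implementation details of RectalNet, but the squeeze-and-excitation recalibration it builds into the residual blocks follows a standard pattern: squeeze each channel to a scalar by global average pooling, pass the result through a two-layer bottleneck, and rescale the channels by the resulting sigmoid weights. A minimal NumPy sketch of that one operation (channel count, reduction ratio, and weight shapes are illustrative assumptions, not values from the paper) might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excitation(feature_map, w1, w2):
    """Channel-wise recalibration of a (C, H, W) feature map.

    Squeeze: global average pooling reduces each channel to one scalar.
    Excitation: a two-layer bottleneck (ReLU, then sigmoid) maps those
    scalars to one weight in (0, 1) per channel, which rescales the map.
    """
    squeezed = feature_map.mean(axis=(1, 2))   # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)    # (C // r,) bottleneck
    scale = sigmoid(w2 @ hidden)               # (C,) weights in (0, 1)
    return feature_map * scale[:, None, None]  # broadcast over H, W

# Hypothetical shapes: 8 channels, reduction ratio r = 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = squeeze_excitation(x, w1, w2)
```

In a residual block, `out` would then be added to the block's identity shortcut; because each scale lies strictly in (0, 1), the operation can only attenuate channels, never amplify them.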
