Abstract

The diagnosis of blood-related diseases involves the identification and characterization of a patient's blood sample. As such, automated methods for detecting and classifying blood cell types have important medical applications in this field. Although deep convolutional neural networks (CNNs) and traditional machine learning methods have shown good results in blood cell image classification, they cannot fully exploit the long-term dependencies between key image features and image labels. To address this problem, we introduce recurrent neural networks (RNNs). Specifically, we combine a CNN and an RNN into a CNN–RNN framework that deepens the understanding of image content, learns structured image features, and enables end-to-end training on large-scale medical image data. In particular, we apply transfer learning to initialize the CNN section with weights pre-trained on the ImageNet dataset, and we adopt a custom loss function so that the network trains and converges faster and reaches more accurate weight parameters. Experimental results show that, compared with other CNN models such as ResNet and Inception V3, the proposed network is more accurate and efficient at classifying blood cell images.
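
The sketch below illustrates, under stated assumptions, the general idea of such a CNN–RNN combination with transfer learning: a backbone pre-trained on ImageNet extracts spatial features, and an RNN reads those features as a sequence before classification. The abstract does not specify the backbone, the RNN variant, the number of blood cell classes, or the custom loss, so the choices here (ResNet-18, an LSTM, four classes) are purely illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNRNNClassifier(nn.Module):
    """Illustrative CNN-RNN classifier (not the paper's exact architecture):
    a pretrained CNN extracts spatial features, an LSTM reads them as a
    sequence, and a linear layer produces class scores."""

    def __init__(self, num_classes=4, hidden_size=256):
        super().__init__()
        # CNN section: ResNet-18 with ImageNet weights (transfer learning),
        # truncated before global pooling so spatial features are kept.
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, 7, 7)
        # RNN section: treat the 7x7 feature grid as a sequence of 49 vectors.
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        f = self.cnn(x)                       # (B, 512, 7, 7)
        seq = f.flatten(2).permute(0, 2, 1)   # (B, 49, 512)
        out, _ = self.rnn(seq)
        return self.fc(out[:, -1, :])         # classify from the last hidden state

# Usage example with a dummy batch of 224x224 RGB images.
model = CNNRNNClassifier(num_classes=4)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```

In this sketch the pre-trained weights would typically be fine-tuned together with the RNN and classifier during end-to-end training; the custom loss described in the abstract would replace the standard cross-entropy criterion at that stage.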
