Abstract

Automated nuclei recognition and detection is a critical step in many image-processing-based computer-assisted pathology systems. It remains quite challenging, however, because cancer nuclei exhibit heterogeneous characteristics, with large variability in the size, shape, appearance, and texture of different nuclei. Deep learning approaches, most popularly the deep Convolutional Neural Network (CNN), have been shown to provide encouraging results on a variety of computer vision tasks, and many CNN models already trained on large-scale image datasets such as ImageNet have been publicly released. How to effectively adapt these existing CNN models to other domains such as medical image analysis, thereby transferring the knowledge obtained from general image sets to a specific domain task, has attracted considerable attention; this is referred to as transfer learning. Since released CNN models usually require a fixed input image size, conventional transfer learning forcibly resizes the available images in the target domain to the size required by the CNN model, which may distort the inherent structure of the target images and affect the final performance. This study proposes an adaptable transfer learning strategy that flexibly accepts input images of any size by removing the size-dependent operation components while retaining the learned knowledge in the existing CNN models. We modify released CNN models (AlexNet, VGGNet, and ResNet) previously trained on the ImageNet dataset to handle small image patches for nuclei recognition. Experimental results show that our proposed adaptable transfer learning strategy achieves promising performance for nuclei recognition compared with a CNN architecture constructed specifically for small images.
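The sketch below illustrates the general idea of reusing pretrained convolutional weights while discarding the components that fix the input size; it is a minimal example assuming PyTorch/torchvision, and the patch size, class count, and choice of AlexNet as the backbone are illustrative rather than the authors' exact configuration.

```python
# Minimal sketch (assumptions: PyTorch/torchvision available; 64x64 patches;
# a two-class nucleus vs. background head). Not the paper's exact method.
import torch
import torch.nn as nn
from torchvision import models

# Load AlexNet pretrained on ImageNet and keep only its convolutional "features"
# block, discarding the fully connected classifier whose Linear layers hard-code
# the 224x224 input size expected by the released model.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).features

adapted = nn.Sequential(
    backbone,                    # transferred convolutional knowledge
    nn.AdaptiveAvgPool2d(1),     # collapses whatever spatial size the patch yields
    nn.Flatten(),
    nn.Linear(256, 2),           # new head: nucleus vs. background (illustrative)
)

# 64x64 patches pass through AlexNet's strided convolutions and pooling without
# the feature map collapsing to zero; much smaller patches would also require
# removing some pooling layers, in the spirit of the strategy described above.
patches = torch.randn(8, 3, 64, 64)
logits = adapted(patches)        # shape: (8, 2)
print(logits.shape)
```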
