Abstract

In recent years, deep learning has been widely used to segment medical images and to assist physicians in the diagnosis and treatment of diseases. Anthrax is a serious infectious disease with a worldwide distribution. One of the most important ways to diagnose this disease is microscopic examination of slides containing tissue samples from patients. The state-of-the-art models for segmenting such slide images are deep neural networks with encoder-decoder architectures, such as the fully convolutional network, UNet, and their variants. Skip connections play a key role in these models. However, in many of them, the skip connections only aggregate features at matching scales of the encoder and decoder, which degrades segmentation quality. We propose an improved UNet-based architecture to segment microscopic images of patient tissue samples. The proposed model, called IRUNet, takes advantage of inception and residual blocks in the skip connections and combines multi-scale features in order to extract better features for segmentation. In addition, several convolutional networks have been used as the encoder backbone to extract powerful representations, and their effect on the segmentation results has been investigated. The experimental results show that, despite the many challenges of microscopic image analysis, such as high image resolution, varying contrast, image artifacts, object crowding, and overlapping objects, IRUNet outperforms state-of-the-art models on medical image segmentation, achieving a precision of 92.8%, a recall of 93%, and a Dice score of 92.9%.
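The abstract describes skip connections enriched with inception and residual blocks. Since the paper's exact layer configuration is not given here, the following is only a minimal sketch in PyTorch of what such a block could look like: parallel convolutions with different kernel sizes (the inception idea) produce multi-scale features, and an identity shortcut (the residual idea) is added back. All class and branch names are hypothetical.

```python
# Hypothetical sketch of an inception-residual skip-connection block
# in the spirit of IRUNet; the actual architecture may differ.
import torch
import torch.nn as nn


class InceptionResidualSkip(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Parallel branches with different receptive fields (multi-scale).
        # Output widths are chosen so the concatenation matches the input width.
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 4, kernel_size=5, padding=2)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the multi-scale features along the channel dimension...
        out = torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        # ...then add the identity shortcut (residual connection).
        return self.act(self.bn(out) + x)


# An encoder feature map passed through the enriched skip connection;
# spatial size and channel count are preserved, so the result can be
# concatenated with the matching decoder stage as in a standard UNet.
skip = InceptionResidualSkip(channels=16)
features = torch.randn(1, 16, 32, 32)
out = skip(features)
print(tuple(out.shape))
```

Because the block preserves the feature-map shape, it can replace a plain identity skip connection in any UNet-style model without changing the decoder.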
