Abstract

In medical image analysis, semantic segmentation is a crucial yet difficult problem. Automatic labeling of anatomical structures can aid in disease diagnosis, treatment planning, and the evaluation of disease progression. However, because anatomical form and appearance vary widely across subjects, segmentation is hard to automate, and manual labeling is time-consuming. In this article we therefore use a multi-class deep convolutional network that can explore image models at multimodal blocks. A selective attention technique helps obtain consistent segmentation results, and spatial relationships among images are exploited together with data augmentation. The proposed multi-class segmentation model provides accurate and reliable results, and the framework supports multiple modalities, in contrast to existing models.
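The abstract does not include code, so the following is only a rough NumPy sketch of the ideas it names: channel-wise selective attention over feature maps, per-pixel multi-class prediction, and augmentation applied consistently to an image and its label mask. All function names and the specific attention form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(features):
    """Weight each feature channel by a softmax over its global mean.

    features: array of shape (C, H, W). This is one simple stand-in for a
    'selective attention' mechanism; the paper's actual design may differ.
    """
    scores = features.mean(axis=(1, 2))          # one score per channel
    w = np.exp(scores - scores.max())            # numerically stable softmax
    w /= w.sum()
    return features * w[:, None, None]           # re-weighted feature maps

def segment(logits):
    """Per-pixel multi-class labeling: argmax over K class score maps.

    logits: array of shape (K, H, W) -> label map of shape (H, W).
    """
    return logits.argmax(axis=0)

def flip_pair(image, mask):
    """Horizontal-flip augmentation applied identically to image and mask,
    so the spatial relationship between them is preserved."""
    return image[..., ::-1], mask[..., ::-1]
```

Applying the same geometric transform to both the image and its label mask, as in `flip_pair`, is what keeps augmented training pairs spatially consistent.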
