Abstract

Medical image analysis is one of the research fields that has benefited greatly from deep learning in recent years. To achieve good performance, a learning model requires large-scale, fully annotated data. However, collecting a sufficient number of labeled samples for training is a heavy burden. Since unlabeled data far outnumber labeled data in most medical applications, self-supervised learning has been utilized to improve performance. However, most current self-supervised learning methods try to capture only the semantic features of the data and have not fully exploited properties inherent in medical images. Specifically, in CT or MR images, the spatial and structural information contained in the dataset has not been fully considered. In this paper, we propose a novel self-supervised learning method for medical image analysis that can exploit both semantic and spatial features at the same time. The proposed method is evaluated on organ segmentation and intracranial hemorrhage detection, and the results show its effectiveness.
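To give a concrete (though hypothetical) sense of what exploiting spatial structure can look like in practice, the sketch below shows one common pretext task for CT/MR volumes: an encoder is trained to regress the normalized axial position of a randomly sampled slice, which forces it to learn anatomical structure without labels. This is only an illustrative example under assumed names (SlicePositionPretext, pretext_batch, a generic 2D encoder producing feat_dim-dimensional features); it is not the method proposed in the paper.

```python
# Illustrative sketch of a spatial pretext task (NOT the paper's exact method):
# predict the normalized axial position of a CT/MR slice within its volume.
import torch
import torch.nn as nn

class SlicePositionPretext(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder              # assumed: any 2D CNN backbone returning (B, feat_dim)
        self.head = nn.Linear(feat_dim, 1)  # regresses the slice's relative axial position

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(slices)                        # (B, feat_dim)
        return torch.sigmoid(self.head(feats)).squeeze(1)   # position in [0, 1]

def pretext_batch(volume: torch.Tensor, batch_size: int):
    """Sample random slices from a (D, H, W) volume together with their positions."""
    depth = volume.shape[0]
    idx = torch.randint(0, depth, (batch_size,))
    slices = volume[idx].unsqueeze(1)        # (B, 1, H, W) single-channel slices
    targets = idx.float() / (depth - 1)      # normalized axial position as the label
    return slices, targets

# One training step (assumes `encoder` and an unlabeled `ct_volume` tensor exist):
# model = SlicePositionPretext(encoder, feat_dim=512)
# slices, targets = pretext_batch(ct_volume, batch_size=32)
# loss = nn.functional.mse_loss(model(slices), targets)
```

After such pretraining on unlabeled volumes, the encoder weights would typically be reused to initialize the downstream segmentation or detection network.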

Highlights

  • Over the past decades, deep learning and artificial intelligence have been rising as hot topics in computer vision and image processing

  • The StructSeg dataset from the StructSeg2019 challenge was used to evaluate the proposed method on organ-at-risk segmentation


Summary

Introduction

Deep learning and artificial intelligence have been rising as hot topics in computer vision and image processing. Traditional methods require manual feature extraction followed by classification algorithms, while deep learning methods provide the compelling ability to automatically learn multi-level visual features from raw or minimally processed images, and they have successfully accomplished a variety of tasks such as image classification [1]–[3], object detection [4]–[6], and semantic segmentation [7]–[9]. Not only are these deep neural networks successful in general tasks, but they have also proved their dominance over traditional methods in medical imaging.
