Abstract

Notice of Violation of IEEE Publication Principles: "Learning Assisted Image Framework for Brain Image Segmentation," by Y. Han and Z. Zhang, in IEEE Access, vol. 8, June 2020, pp. 117028-117035. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains significant portions of text from the papers cited below that were paraphrased without attribution: "DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation," by Guotai Wang, Maria A. Zuluaga, Wenqi Li, Rosalind Pratt, Premal A. Patel, Michael Aertsen, Tom Doel, Anna L. David, Jan Deprest, Sebastien Ourselin, and Tom Vercauteren, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 7, July 2019, pp. 1559-1572; and "Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning," by Guotai Wang, Wenqi Li, Maria A. Zuluaga, Rosalind Pratt, Premal A. Patel, Michael Aertsen, Tom Doel, Anna L. David, Jan Deprest, Sebastien Ourselin, and Tom Vercauteren, in IEEE Transactions on Medical Imaging, vol. 37, no. 7, July 2018, pp. 1562-1573.

Accurate brain image segmentation is important for medical imaging, surgical planning, and many other applications. Convolutional neural networks (CNNs) have enabled efficient automatic segmentation; in clinical practice, however, the results are not always sufficiently accurate and detailed, and such models lack robustness and generalize poorly to previously unseen object classes. In this paper, Deep Learning assisted Interactive Medical Image Segmentation (DL-IIMIS) is proposed to tackle these difficulties by incorporating CNNs into a bounding-box and scribble-based interactive segmentation pipeline. To adapt the CNN model to each test image, image-specific fine-tuning, which can be either unsupervised or supervised, is proposed, together with geodesic transformations that encode user interactions.
In this framework, two applications are evaluated: 2-D multi-organ magnetic resonance (MR) segmentation, where only two organ types are annotated for training, and 3-D segmentation of the brain tumor core and the whole brain tumor across different MR sequences, where only one MR sequence is used for training. Compared with other algorithms, the proposed framework delivers better performance in brain image segmentation.
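The geodesic transformation mentioned in the abstract can be illustrated with a minimal sketch: the geodesic distance from a set of seed pixels (e.g. user clicks or scribbles), computed with Dijkstra's algorithm over the 4-connected pixel grid, where each step's cost includes an intensity-difference penalty so that distances grow quickly across image boundaries. This is an illustrative stand-in only; the function name, the unit-step-plus-|ΔI| cost, and the pure-Python grid representation are assumptions, not the paper's actual implementation.

```python
import heapq

def geodesic_distance(image, seeds):
    """Geodesic distance from seed pixels on a 2-D intensity grid.

    image: list of lists of float intensities.
    seeds: iterable of (row, col) seed positions (distance 0 there).
    Edge cost between 4-neighbours = 1 (spatial step) + |intensity
    difference|, so paths that cross strong edges accumulate cost fast.
    """
    rows, cols = len(image), len(image[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    heap = []
    for (r, c) in seeds:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                cost = 1.0 + abs(image[nr][nc] - image[r][c])
                if d + cost < dist[nr][nc]:
                    dist[nr][nc] = d + cost
                    heapq.heappush(heap, (d + cost, nr, nc))
    return dist
```

The resulting distance map can then be supplied to the CNN as an extra input channel, which is the general idea behind encoding user interactions geodesically.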

Highlights

  • Medical image segmentation, which separates organs or tumors from the surrounding context in images such as CT or MRI scans [1], is one of the most difficult tasks in medical image analysis, providing important information about the type and volume of a given organ [2]

  • CASE 1: BOUNDING BOX AND IMAGE-SPECIFIC FINE-TUNING (BIFSeg). Figure 3 illustrates the proposed bounding-box and image-specific fine-tuning (BIFSeg) method, in which a Convolutional Neural Network (CNN) takes as input the contents of a bounding box around a specific instance and performs binary segmentation, so that different objects are handled in a consistent context

  • The contribution of BIFSeg is its focus on segmenting unseen object classes by adapting the CNN model to each test image on the fly, guided by user interactions
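As a rough illustration of "adapting the model to each test image on the fly," the sketch below fine-tunes a per-pixel logistic segmenter, a deliberately tiny stand-in for a CNN, using only the user's scribbles on the single test image as supervision. The function names, the feature representation, and the plain gradient-descent loop are assumptions made for illustration, not the paper's actual method.

```python
import math

def fine_tune_on_scribbles(weights, features, scribbles, lr=0.5, steps=200):
    """Adapt a per-pixel logistic segmenter to ONE test image.

    features: list of per-pixel feature vectors for the test image.
    scribbles: dict pixel_index -> label (1 foreground, 0 background).
    Only scribbled pixels contribute to the loss, mirroring the
    interaction-guided, image-specific fine-tuning idea.
    """
    w = list(weights)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for idx, label in scribbles.items():
            z = sum(wi * xi for wi, xi in zip(w, features[idx]))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability
            err = p - label                 # cross-entropy gradient factor
            for j, xj in enumerate(features[idx]):
                grad[j] += err * xj
        # averaged gradient-descent step over the scribbled pixels
        w = [wi - lr * g / len(scribbles) for wi, g in zip(w, grad)]
    return w

def predict(w, x):
    """Binary label for one pixel's feature vector."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z > 0 else 0
```

After fine-tuning on a couple of scribbled pixels, unscribbled pixels of the same image are relabeled with the adapted weights; in BIFSeg the analogous step updates (part of) the CNN's weights rather than a linear model.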


Summary

Introduction

Medical image segmentation, which separates organs or tumors from the surrounding context in images such as CT or MRI scans [1], is one of the most difficult tasks in medical image analysis, providing important information about the type and volume of a given organ [2]. Earlier systems were built using conventional methods such as edge-detection filters and related computational techniques. Thereafter, machine learning [5] methods based on handcrafted features became the dominant approach for a long time. The design and extraction of these features has always been the main concern in developing such systems [6], and the complexity of these approaches was regarded as a considerable obstacle to their application [7]. In the 2000s, deep learning techniques were adopted, and their tremendous capabilities across image processing tasks began to

