Abstract

Pancreatic volume and fat fraction are important prognostic indicators for metabolic diseases such as type 2 diabetes (T2D). Magnetic resonance imaging (MRI) is the preferred non-invasive method for quantifying pancreatic fat fraction. Rapid advances in deep learning have enabled automatic measurement from MR images. We therefore aimed to develop a deep convolutional neural network (DCNN) that accurately segments the pancreas and measures pancreatic volume and fat fraction from MRI. This retrospective study involved abdominal MR images from 148 diabetic patients and 246 healthy normoglycemic participants, randomly split into training and testing sets at a ratio of 80:20. A total of 2364 recognizable pancreas images were labeled and pre-processed with an improved superpixel algorithm to delineate the pancreatic boundary. These images were used to train a novel DCNN model that mimics the latest and most accurate manual pancreatic segmentation workflow. Fat-phantom and erosion algorithms were employed to improve accuracy. Results were evaluated with the Dice similarity coefficient (DSC). An external validation dataset comprised 240 MR images from 10 additional patients. We measured pancreas and pancreatic fat volume with the DCNN and compared the results with those of specialists. The DCNN, which incorporates the state-of-the-art approach to manual pancreas segmentation, achieved the highest DSC (91.2%) among reported models, and it is the first framework to measure intra-pancreatic fat volume and fat deposition. Regression between manual and trained DCNN measurements yielded R² values of 0.9764 for pancreas volume and 0.9675 for pancreatic fat volume. The proposed DCNN thus enables accurate pancreas segmentation and measurement of pancreatic volume, fat volume, and fat fraction, matching expert-level segmentation. With further training, it may well surpass individual experts and provide accurate measurements of significant clinical relevance.
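For reference, the Dice similarity coefficient (DSC) used above to evaluate segmentation overlap is defined as DSC = 2·|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal sketch of this computation on flattened binary masks (the function and variable names are illustrative, not from the paper's code):

```python
def dice_similarity(pred, truth):
    """Dice similarity coefficient between two flattened binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect agreement by convention)."""
    # Count voxels where both masks are positive (the intersection)
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    # Sum of positive voxels in each mask
    total = sum(1 for p in pred if p) + sum(1 for t in truth if t)
    return 2.0 * intersection / total if total else 1.0


# Illustrative usage: two small binary masks
predicted = [1, 1, 0, 0, 1, 0]
ground_truth = [1, 0, 0, 0, 1, 1]
print(dice_similarity(predicted, ground_truth))  # 2*2 / (3+3) ≈ 0.667
```

A DSC of 91.2%, as reported for the model, would mean the predicted and manual pancreas masks overlap in roughly this proportion of their combined voxels.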
