Abstract
Background
The automatic segmentation of pulmonary vessels from CT images is of great clinical significance. However, accurately annotating pulmonary vessels directly in non-contrast CT (NCCT) images is complex and time-consuming.

Methods
This study aims to draw annotations in contrast-enhanced CT (CECT) images and to train a deep-learning model for segmenting pulmonary vessels from NCCT images. Two datasets with 63 CT scans were collected. Dataset D1 included 17 cases annotated in CECT images, 10 cases annotated in NCCT images, and 12 NCCT scans. Dataset D2 consisted of 12 CECT and 12 NCCT scans with annotations. First, annotations drawn in CECT images (Dataset D1) are transferred to NCCT images via spatial registration. Second, a CE-NC-VesselSegNet is proposed and trained using the transferred annotations to segment pulmonary vessels from NCCT images. Finally, the CE-NC-VesselSegNet is evaluated and compared with its counterparts.

Results
After registration, the maximum error and root mean square error between CECT and NCCT images decrease, while the structural similarity and peak signal-to-noise ratio increase. CE-NC-VesselSegNet can accurately segment pulmonary vessels from NCCT images with a Dice of 0.856. In the external validation using Dataset D2, CE-NC-VesselSegNet achieves a Dice of 0.738, which is higher than that of an NC-VesselSegNet trained on D2. Visual inspection shows that CE-NC-VesselSegNet produces more accurate and continuous segmentations than its counterpart.

Conclusions
Annotations of pulmonary vessels drawn in CECT images can be transferred to NCCT images via spatial registration. Using these high-quality transferred annotations, a CE-NC-VesselSegNet can be trained to segment pulmonary vessels from NCCT images.
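The segmentation results above are reported as Dice scores, which measure voxel-wise overlap between a predicted vessel mask and a reference annotation. A minimal NumPy sketch of this standard metric (the function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D example: two partially overlapping square "vessel" masks
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 voxels, 9 shared
print(round(dice_coefficient(a, b), 4))  # 2*9 / (16+16) = 0.5625
```

In practice the same formula is applied to 3D CT volumes; a Dice of 1.0 indicates perfect overlap, so the reported 0.856 on NCCT images reflects close agreement with the transferred annotations.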