Abstract
Vessel segmentation in fundus images is a key procedure in the diagnosis of ophthalmic diseases and can assist doctors in diagnosis. Although current deep learning-based methods achieve high accuracy in segmenting fundus vessels, the results remain unsatisfactory for thin vessels whose appearance is close to the background region. The reason is that thin vessels carry very little information, and as the convolution operation of each layer in a deep network is applied, this information is progressively lost. To improve segmentation of small-vessel regions, we propose a multi-input network (MINet) that segments vascular regions more accurately. We design a multi-input fusion (MIF) module in the encoder to acquire multi-scale features during the encoding stage while preserving microvessel feature information. In addition, to further aggregate multi-scale information from adjacent regions, we propose a multi-scale atrous spatial pyramid (MASP) module, which enhances the extraction of vascular information without loss of resolution. To better recover fine details in the segmentation results, we design a refinement module that acts on the last layer of the network output to produce more accurate segmentation. We validate the fundus vessel segmentation performance of MINet on the public HRF and CHASE_DB1 datasets. We also merge these two public datasets with our collected ultra-widefield fundus (UWF) images into a single dataset to test the generalization ability of the model. Experimental results show that MINet achieves an F1 score of 0.8324 on the microvessel segmentation task, a high accuracy compared with current mainstream models.
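The abstract's MASP module builds on the atrous (dilated) spatial pyramid idea: parallel dilated convolutions enlarge the receptive field to capture multi-scale context without downsampling, so resolution is preserved. The paper's exact MASP design is not given here; the following is a minimal PyTorch sketch of the general pattern, with illustrative dilation rates and channel counts that are assumptions, not values from the paper.

```python
# Hypothetical sketch of an atrous spatial pyramid block.
# Rates, channel counts, and the class name are illustrative only,
# not the paper's actual MASP configuration.
import torch
import torch.nn as nn

class AtrousSpatialPyramid(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        # Parallel dilated 3x3 convolutions: padding = dilation keeps
        # the spatial size unchanged while widening the receptive field.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)
y = AtrousSpatialPyramid(64, 32)(x)
print(y.shape)  # spatial resolution is preserved: torch.Size([1, 32, 32, 32])
```

Because every branch keeps the input's height and width, such a block can aggregate context from adjacent regions without the resolution loss that pooling-based multi-scale schemes incur, which matters for thin vessels that occupy only a few pixels.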