Abstract

Automatic retinal vessel segmentation on fundus images quickly yields a clear view of the retinal vessel structure, helping doctors improve the efficiency and reliability of diagnosis. However, fundus images contain many small vessels, regions of low contrast, and possibly pathological areas, so high-performance automatic retinal vessel segmentation remains challenging. Retinal vessels form a connected topological structure, so the distribution of vessel pixels in each pixel row (or column) should be related to that of other rows (or columns). Motivated by this observation, we propose the Pixel Rows and Columns Relationship Modeling Network (PRCRM-Net) for high-performance retinal vessel segmentation. PRCRM-Net separately models the relationships among the pixel rows and among the pixel columns of a fundus image, and performs segmentation by classifying pixels row by row and column by column. Taking a feature map extracted by U-Net as input, PRCRM-Net first converts the feature map into a row feature sequence and a column feature sequence. It then uses a Transformer to model the relationships among the elements of each sequence. Finally, the updated row and column feature sequences are used to produce a row-based and a column-based segmentation result, and the final segmentation is the combination of the two. To evaluate PRCRM-Net, we conduct comprehensive experiments on three representative datasets: DRIVE, STARE, and CHASE_DB1. The results show that the proposed PRCRM-Net achieves state-of-the-art performance.
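To make the row/column relationship modeling described above concrete, the following is a minimal PyTorch-style sketch of the idea. All module names, tensor shapes, the choice of a standard Transformer encoder, and the simple averaging fusion are our own assumptions for illustration, not the authors' exact PRCRM-Net implementation.

```python
import torch
import torch.nn as nn


class RowColRelationSketch(nn.Module):
    """Hypothetical sketch of row/column relationship modeling.

    Assumes a U-Net backbone has already produced a feature map of
    shape (B, C, H, W); the real PRCRM-Net design may differ.
    """

    def __init__(self, channels: int, height: int, width: int, num_layers: int = 2):
        super().__init__()
        # Each pixel row is flattened to a C*W token; each column to a C*H token.
        self.row_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=channels * width, nhead=4, batch_first=True),
            num_layers=num_layers,
        )
        self.col_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=channels * height, nhead=4, batch_first=True),
            num_layers=num_layers,
        )
        # Per-row / per-column heads predict a vessel score for every pixel.
        self.row_head = nn.Linear(channels * width, width)
        self.col_head = nn.Linear(channels * height, height)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Row feature sequence: H tokens, one per pixel row.
        rows = feat.permute(0, 2, 1, 3).reshape(b, h, c * w)
        # Column feature sequence: W tokens, one per pixel column.
        cols = feat.permute(0, 3, 1, 2).reshape(b, w, c * h)
        rows = self.row_encoder(rows)  # model row-to-row relationships
        cols = self.col_encoder(cols)  # model column-to-column relationships
        row_map = self.row_head(rows)                   # (B, H, W) row-based result
        col_map = self.col_head(cols).transpose(1, 2)   # (B, H, W) column-based result
        # Combine the two results (simple average here; an assumption).
        return torch.sigmoid((row_map + col_map) / 2)


# Usage: a feature map from a U-Net decoder, e.g. 32 channels on a 64x64 grid.
feat = torch.randn(1, 32, 64, 64)
model = RowColRelationSketch(channels=32, height=64, width=64)
print(model(feat).shape)  # torch.Size([1, 64, 64])
```

The sketch keeps the two branches symmetric: the same feature map is read once as a sequence of rows and once as a sequence of columns, each sequence is updated by its own Transformer encoder, and the two per-pixel predictions are fused at the end.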
