Abstract

Ocular surface diseases affect a significant portion of the population worldwide. Accurate segmentation and quantification of different ocular surface structures are crucial for understanding these diseases and for clinical decision-making. However, automated segmentation of ocular surface structures remains relatively unexplored and faces several challenges. Structure boundaries are often inconspicuous and obscured by glare from reflections. In addition, segmenting different ocular structures typically requires training multiple individual models, so a one-model-fits-all segmentation approach is desirable. In this paper, we introduce a randomness-restricted diffusion model for segmenting multiple ocular surface structures. First, we propose a time-controlled fusion-attention module (TFM) that dynamically adjusts the information flow within the diffusion model based on the temporal relationship between the network's input and the denoising timestep. TFM enables the network to effectively exploit image features to constrain the randomness of the generation process. We further propose a low-frequency consistency filter and a new loss to alleviate the model uncertainty and error accumulation caused by the multi-step denoising process. Extensive experiments show that our approach can segment seven different ocular surface structures and outperforms both dedicated ocular surface segmentation methods and general medical image segmentation methods. We further validated the proposed method on two clinical datasets, and the results demonstrate its benefit to clinical applications such as meibomian gland dysfunction grading and aqueous-deficient dry eye diagnosis.
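The abstract does not specify how the TFM weights image features against denoising features over time. As a rough, hypothetical illustration only (the function names, the sigmoid gate, and the parameter `k` are our assumptions, not the paper's design), a time-conditioned fusion could be sketched as a convex blend whose weight depends on the normalized timestep:

```python
import math

def time_gate(t, T, k=8.0):
    """Sigmoid gate on the normalized timestep t/T (hypothetical sketch).
    Near the start of denoising (t close to T, very noisy input) the gate
    approaches 1 and the conditioning-image features dominate; near the end
    it approaches 0 and the current denoised features dominate.
    k is an assumed sharpness parameter."""
    return 1.0 / (1.0 + math.exp(-k * (t / T - 0.5)))

def fuse(image_feat, denoise_feat, t, T):
    """Blend conditioning-image features with the denoising branch's
    features, weighted by the time gate (illustrative only)."""
    g = time_gate(t, T)
    return [g * a + (1.0 - g) * b for a, b in zip(image_feat, denoise_feat)]

img = [1.0, 0.5, -0.2]   # toy conditioning-image features
den = [0.0, 0.0, 0.0]    # toy denoising-branch features
early = fuse(img, den, t=900, T=1000)  # gate near 1: image features dominate
late = fuse(img, den, t=50, T=1000)    # gate near 0: denoised features dominate
```

This captures only the stated intent, that image features constrain the generative randomness more strongly when the diffusion state is uninformative; the actual TFM is an attention module, not a scalar gate.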
