Abstract

Optical coherence tomography (OCT) is a promising non-invasive imaging technique with many biomedical applications. In this paper, a deep neural network is proposed for enhancing the spatial resolution of OCT en face images. Unlike previous approaches, the proposed network can recover high-resolution en face images from low-resolution en face images at arbitrary imaging depth. This depth-adaptive resolution enhancement is achieved through an external attention mechanism, which exploits the morphological similarity between the arbitrary-depth and full-depth en face images. First, deep feature maps are extracted from the arbitrary-depth and full-depth en face images by a feature extraction network. Second, the external attention network extracts the morphological similarity between the deep feature maps and uses it to emphasize the features strongly correlated with vessel structures. Finally, the super-resolution (SR) image is recovered from the enhanced feature maps through an up-sampling network. The proposed network is tested on a clinical skin OCT data set and an open-access retinal OCT data set. The results show that the proposed external attention mechanism suppresses invalid features and enhances significant features in our tasks. In all tests, the proposed SR network outperformed traditional image interpolation (e.g., bicubic interpolation) and state-of-the-art image super-resolution networks, e.g., the enhanced deep super-resolution network, the residual channel attention network, and the second-order attention network. The proposed method may improve the quantitative clinical assessment of micro-vascular diseases, which is currently limited by the resolution of OCT imaging devices.
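The attention step described above (similarity between arbitrary-depth and full-depth feature maps used to re-weight features) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, the flattened feature shapes, and the scaled dot-product similarity are all assumptions made for clarity.

```python
import numpy as np

def external_attention_step(f_depth, f_full):
    """Hypothetical sketch of external-attention feature fusion.

    f_depth: (N, C) flattened deep features of the arbitrary-depth en face image
    f_full:  (N, C) flattened deep features of the full-depth en face image
    Returns f_depth re-weighted toward features similar to the full-depth map.
    """
    # Pairwise similarity between the two feature maps (N x N scores),
    # scaled by sqrt(C) as in standard dot-product attention.
    scores = f_depth @ f_full.T / np.sqrt(f_depth.shape[1])
    # Row-wise softmax turns similarities into attention weights.
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    # Aggregate full-depth features, emphasizing strongly correlated ones.
    return attn @ f_full

# Toy usage with random features (8 spatial positions, 16 channels).
rng = np.random.default_rng(0)
out = external_attention_step(rng.normal(size=(8, 16)),
                              rng.normal(size=(8, 16)))
print(out.shape)
```

In a real network these features would come from the convolutional feature extraction stage, and the fused map would feed the up-sampling network that produces the SR image.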
