Abstract

High-fidelity segmentation of blood vessels plays a pivotal role in numerous biomedical applications, such as injection assistance, cancer detection, various surgeries, and vein authentication. Near-infrared (NIR) transillumination imaging is an effective and safe method to visualize the subcutaneous blood vessel network. However, such images are severely blurred because of light scattering in body tissues. Inspired by the Vision Transformer model, this paper proposes a novel deep learning network, transformer connection (TRC)-Unet, that captures both global blurred and local clear correlations using multi-layer attention. Our method consists mainly of two blocks, which remap the skip-connection information flow and fuse features from different domains. Specifically, the TRC extracts global blurred information from multiple layers and suppresses scattering to increase the clarity of vessel features. Transformer feature fusion eliminates the domain gap between the highly semantic feature maps of the convolutional neural network backbone and the adaptive self-attention maps of the TRCs. Benefiting from the long-range dependencies of transformers, we achieved competitive results against various competing methods on different data sets, including retinal vessel segmentation, simulated blur image segmentation, and real NIR blood vessel image segmentation. Moreover, our method markedly improved the segmentation results on the simulated blur image data sets and a real NIR vessel image data set. Quantitative results of ablation studies and visualizations are also reported to demonstrate the superiority of the TRC-Unet design. © 2024 The Author(s). IEEJ Transactions on Electrical and Electronic Engineering published by Institute of Electrical Engineers of Japan and Wiley Periodicals LLC.
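
The architectural idea summarized above can be made concrete with a rough sketch. The PyTorch code below is an illustrative approximation only, assuming a plain self-attention transformer encoder over flattened feature maps for the TRC and a 1x1-convolution fusion for the transformer feature fusion block; all module names, layer counts, and hyperparameters are placeholders rather than the authors' actual design.

```python
# Hypothetical sketch of the TRC-Unet idea from the abstract (not the authors'
# released code): a U-Net whose skip connections pass through a transformer
# block (TRC) and are then fused with the CNN skip features (TFF).
import torch
import torch.nn as nn

class TRC(nn.Module):
    """Transformer connection: global self-attention over a flattened feature map."""
    def __init__(self, channels, num_heads=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)          # long-range attention across pixels
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class TFF(nn.Module):
    """Transformer feature fusion: merge CNN skip features with TRC output."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, cnn_feat, trc_feat):
        return self.fuse(torch.cat([cnn_feat, trc_feat], dim=1))

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TRCUnetSketch(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(2 * base, 4 * base)
        self.trc1, self.fuse1 = TRC(base), TFF(base)
        self.trc2, self.fuse2 = TRC(2 * base), TFF(2 * base)
        self.up2 = nn.ConvTranspose2d(4 * base, 2 * base, 2, stride=2)
        self.dec2 = conv_block(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
        self.dec1 = conv_block(2 * base, base)
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections are remapped by TRCs before fusion with decoder features.
        d2 = self.dec2(torch.cat([self.up2(b), self.fuse2(e2, self.trc2(e2))], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), self.fuse1(e1, self.trc1(e1))], dim=1))
        return torch.sigmoid(self.head(d1))

# Example: a 64x64 NIR patch yields a per-pixel vessel probability map.
# probs = TRCUnetSketch()(torch.randn(1, 1, 64, 64))   # shape (1, 1, 64, 64)
```

In this reading, the TRC supplies the long-range context needed to disambiguate scatter-blurred vessel boundaries, while the TFF block reconciles the attention-derived features with the convolutional skip features before decoding.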
