Abstract

Accurate nerve identification is critical during surgical procedures to prevent damage to nerve tissue. Nerve injury can cause long-term adverse effects for patients as well as a substantial financial burden. Birefringence imaging is a noninvasive technique, derived from polarized images, that has been used successfully to identify nerves intraoperatively. Furthermore, birefringence images can be processed in under 20 ms with a GPGPU implementation, making the modality viable for real-time processing. In this study, we first comprehensively investigate the use of birefringence images combined with deep learning, which can automatically detect nerves with gains of up to 14% in F2 score over color image-based (RGB) counterparts. Additionally, we develop a deep learning framework based on the U-Net architecture with a Transformer-based fusion module at the bottleneck that leverages both the birefringence and RGB modalities. The dual-modality framework achieves an F2 score of 76.12, a gain of 19.6% over single-modality networks using only RGB images. By extracting the feature maps of each modality independently and using each modality's information for cross-modal interactions, we aim to provide a solution that further increases the effectiveness of imaging systems for noninvasive intraoperative nerve identification.
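
To make the architecture described above concrete, the sketch below shows one plausible way to wire a dual-encoder U-Net with Transformer fusion at the bottleneck, written in PyTorch. It is not the authors' implementation: the module names, channel widths, token layout, and the choice to keep the RGB-position tokens after fusion are illustrative assumptions; only the overall structure (separate per-modality encoders, cross-modal Transformer fusion at the bottleneck, a shared decoder producing a per-pixel nerve mask) follows the abstract.

    # Minimal sketch (not the authors' code) of a dual-modality U-Net with a
    # Transformer fusion module at the bottleneck. All sizes are illustrative.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        """Two 3x3 conv + ReLU layers, the standard U-Net building block."""
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class DualModalityUNet(nn.Module):
        """Hypothetical dual-encoder U-Net with Transformer bottleneck fusion."""

        def __init__(self, base_ch=32, num_classes=1):
            super().__init__()
            # Independent encoders so each modality's features are extracted separately.
            self.rgb_enc1, self.rgb_enc2 = conv_block(3, base_ch), conv_block(base_ch, base_ch * 2)
            self.bir_enc1, self.bir_enc2 = conv_block(1, base_ch), conv_block(base_ch, base_ch * 2)
            self.pool = nn.MaxPool2d(2)

            # A Transformer encoder over the concatenated bottleneck tokens of both
            # modalities models the cross-modal interactions (a stand-in for the
            # paper's fusion module).
            d_model = base_ch * 2
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.fusion = nn.TransformerEncoder(layer, num_layers=2)

            # Single decoder operating on the fused features, with an RGB skip connection.
            self.up = nn.ConvTranspose2d(d_model, base_ch, 2, stride=2)
            self.dec = conv_block(base_ch * 2, base_ch)
            self.head = nn.Conv2d(base_ch, num_classes, 1)

        def forward(self, rgb, biref):
            r1 = self.rgb_enc1(rgb)                # (B, C, H, W)
            r2 = self.rgb_enc2(self.pool(r1))      # (B, 2C, H/2, W/2)
            b1 = self.bir_enc1(biref)
            b2 = self.bir_enc2(self.pool(b1))

            # Flatten both bottlenecks to token sequences and fuse them jointly.
            B, C, H, W = r2.shape
            tokens = torch.cat([r2.flatten(2), b2.flatten(2)], dim=2).transpose(1, 2)
            fused = self.fusion(tokens)
            # Keep the RGB-position tokens after fusion (one of many possible choices).
            fused = fused[:, : H * W, :].transpose(1, 2).reshape(B, C, H, W)

            x = self.up(fused)
            x = self.dec(torch.cat([x, r1], dim=1))
            return self.head(x)                    # per-pixel nerve logits

    if __name__ == "__main__":
        net = DualModalityUNet()
        rgb = torch.randn(1, 3, 64, 64)
        biref = torch.randn(1, 1, 64, 64)
        print(net(rgb, biref).shape)               # torch.Size([1, 1, 64, 64])

As a side note on the metric, the F2 score is the F-beta measure with beta = 2, which weights recall more heavily than precision; this is a natural choice when missing a nerve is costlier than a false detection.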
